
Learn how to seamlessly install Apache Spark on Rocky Linux 9 with our step-by-step guide. Unlock the power of distributed computing and data processing with this comprehensive tutorial, tailored for smooth integration and optimal performance on your Linux environment. #centlinux #linux #ApacheSpark

What is Apache Spark?

Apache Spark is an open-source unified analytics engine for large-scale data processing. Spark provides an interface for programming clusters with implicit data parallelism and fault tolerance. Originally developed at the University of California, Berkeley’s AMPLab, the Spark codebase was later donated to the Apache Software Foundation, which has maintained it since.

Apache Spark has its architectural foundation in the resilient distributed dataset (RDD), a read-only multiset of data items distributed over a cluster of machines, that is maintained in a fault-tolerant way. The Dataframe API was released as an abstraction on top of the RDD, followed by the Dataset API. In Spark 1.x, the RDD was the primary application programming interface (API), but as of Spark 2.x use of the Dataset API is encouraged even though the RDD API is not deprecated. The RDD technology still underlies the Dataset API.

Spark and its RDDs were developed in 2012 in response to limitations in the MapReduce cluster computing paradigm, which forces a particular linear dataflow structure on distributed programs: MapReduce programs read input data from disk, map a function across the data, reduce the results of the map, and store reduction results on disk. Spark’s RDDs function as a working set for distributed programs that offers a (deliberately) restricted form of distributed shared memory.

Inside Apache Spark the workflow is managed as a directed acyclic graph (DAG). Nodes represent RDDs while edges represent the operations on the RDDs. (Source: Wikipedia)
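
As a rough illustration of these APIs, the short Scala sketch below expresses the same word count with the RDD API and with the DataFrame API. It assumes a working spark-shell session (which the steps in this guide set up), where the SparkSession spark and the SparkContext sc are predefined. In both cases the transformations only build the DAG; nothing executes until an action such as collect() or show() is called.

// Runs inside spark-shell, where `spark` (SparkSession) and `sc` (SparkContext) are predefined.
import org.apache.spark.sql.functions.{explode, split}
import spark.implicits._

val lines = sc.parallelize(Seq("spark runs on rocky linux", "spark builds a dag of rdds"))

// RDD API: flatMap/map/reduceByKey are lazy transformations that only describe the DAG.
val rddCounts = lines.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
rddCounts.collect().foreach(println)          // collect() is the action that triggers execution

// DataFrame API: the same computation through the higher-level, optimizer-friendly interface.
val dfCounts = lines.toDF("line")
  .select(explode(split($"line", " ")).as("word"))
  .groupBy("word").count()
dfCounts.show()                               // show() is the action here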

Apache Spark Alternatives

While Apache Spark is a powerful and widely used distributed computing framework, there are several alternatives available, each with its own features and strengths. Here are some notable ones:

  1. Hadoop MapReduce: MapReduce is the classic distributed processing framework that inspired Apache Spark. It’s part of the Apache Hadoop project and is known for its reliability and scalability. While not as feature-rich or flexible as Spark, MapReduce is still widely used in certain contexts.
  2. Apache Flink: Apache Flink is another open-source stream processing framework that offers both batch and stream processing capabilities. It provides low-latency processing, fault tolerance, and native support for event time processing, making it suitable for real-time analytics and event-driven applications.
  3. Apache Storm: Apache Storm is a real-time stream processing system designed for high-throughput, fault-tolerant processing of large volumes of data. It’s particularly well-suited for use cases requiring low latency and event-driven processing, such as real-time analytics and stream processing pipelines.
  4. Databricks Delta: Databricks Delta is a unified data management system built on top of Apache Spark. It provides ACID transactions, data versioning, and schema enforcement, making it suitable for building reliable data pipelines and data lakes.
  5. Google Cloud Dataflow: Google Cloud Dataflow is a fully managed stream and batch processing service offered by Google Cloud Platform. It provides a unified programming model for both batch and stream processing, along with features like auto-scaling and serverless execution.
  6. Apache Beam: Apache Beam is an open-source unified programming model for batch and stream processing. It supports multiple execution engines, including Apache Spark, Apache Flink, and Google Cloud Dataflow, allowing users to write portable data processing pipelines.

These are just a few examples of alternatives to Apache Spark, each offering unique features and capabilities suited to different use cases and requirements. When choosing a distributed computing framework, consider factors such as scalability, fault tolerance, real-time processing capabilities, and integration with existing systems and tools.

Apache Spark vs Kafka

Apache Spark and Apache Kafka are both popular distributed computing platforms, but they serve different purposes and excel in different areas. Here’s a comparison between the two:

  • Purpose:
      • Apache Spark is primarily a distributed data processing framework. It is designed for complex data processing tasks such as batch processing, real-time stream processing, machine learning, and graph processing.
      • Apache Kafka is a distributed event streaming platform. It is designed for building real-time data pipelines and streaming applications, enabling high-throughput, fault-tolerant messaging and storage of large volumes of data streams.
  • Use Cases:
      • Apache Spark is commonly used for data analytics, ETL (Extract, Transform, Load) processes, machine learning, and interactive querying. It is suitable for scenarios where complex data transformations and analytics are required.
      • Apache Kafka is used for building real-time data pipelines, event-driven architectures, log aggregation, and stream processing. It is ideal for scenarios where data needs to be ingested, processed, and distributed in real time.
  • Architecture:
      • Apache Spark follows a distributed computing model with a master-slave architecture. It utilizes in-memory processing and data parallelism to perform computations efficiently.
      • Apache Kafka is designed as a distributed messaging system with a distributed commit log architecture. It uses partitions and replication to achieve high scalability and fault tolerance.
  • Data Processing Model:
      • Apache Spark supports both batch processing and stream processing. It provides high-level APIs for batch processing (e.g., Spark SQL, DataFrame API) and stream processing (e.g., Spark Streaming, Structured Streaming).
      • Apache Kafka is primarily focused on stream processing. It provides APIs and libraries for building stream processing applications that consume and process data in real time.
  • Integration:
      • Apache Spark can integrate with Apache Kafka for stream processing tasks. It provides connectors and libraries for reading data from Kafka topics and processing it using Spark Streaming or Structured Streaming.
      • Apache Kafka can be used as a data source or sink for Apache Spark applications. Spark can consume data from Kafka topics, process it, and write the results back to Kafka or other storage systems.

In summary, while Apache Spark and Apache Kafka are both powerful distributed computing platforms, they serve different purposes and are often used together in complementary ways. Spark is focused on data processing and analytics, while Kafka is focused on real-time data streaming and event-driven architectures. Depending on your requirements, you may choose to use one or both of these platforms in your data processing pipelines.
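
As a concrete illustration of that integration, below is a minimal Structured Streaming sketch in Scala that reads messages from a Kafka topic and prints them to the console. The broker address kafka-01:9092 and the topic name events are assumptions for illustration, and the job needs the Kafka connector on its classpath when submitted (for example via spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.3.2).

import org.apache.spark.sql.SparkSession

object KafkaToConsole {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-to-console")
      .getOrCreate()

    // Subscribe to a Kafka topic; broker address and topic name are placeholders.
    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "kafka-01:9092")
      .option("subscribe", "events")
      .load()

    // Kafka delivers keys and values as binary, so cast them to strings for display.
    val messages = stream.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

    // Print each micro-batch to the console; a real pipeline would instead write to a sink
    // such as files, a database, or another Kafka topic.
    val query = messages.writeStream
      .format("console")
      .outputMode("append")
      .start()

    query.awaitTermination()
  }
}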

Recommended Online Training: Apache Spark with Scala By Example


Environment Specification:

We are using a minimal Rocky Linux 9 virtual machine with the following specifications.

  • CPU – 3.4 GHz (2 cores)
  • Memory – 2 GB
  • Storage – 20 GB
  • Operating System – Rocky Linux release 9.1 (Blue Onyx)
  • Hostname –
  • IP Address –

Update your Rocky Linux Server:

Using an SSH client, log in to your Rocky Linux server as the root user.

Set a fully qualified domain name (FQDN) and local name resolution for your Linux machine.

# hostnamectl set-hostname
# echo " spark-01" >> /etc/hosts

Execute the following commands to refresh your yum cache and update the software packages on your Rocky Linux server.

# dnf makecache
# dnf update -y

If the above commands update your Linux kernel, you should reboot your operating system into the new kernel before setting up the Apache Spark software.

# reboot

After the reboot, check the Linux kernel and operating system versions.

# uname -r

# cat /etc/rocky-release
Rocky Linux release 9.1 (Blue Onyx)

Setup Apache Spark Prerequisites

Apache Spark is written in the Scala programming language, so it requires Scala support for deployment, and Scala in turn requires Java.

There are a few other software packages that you need to download and install the Apache Spark software.

You can install all of these packages with a single dnf command.

# dnf install -y wget gzip tar java-17-openjdk

After installation, verify the active Java version.

# java --version
openjdk 17.0.6 2023-01-17 LTS
OpenJDK Runtime Environment (Red_Hat- (build 17.0.6+10-LTS)
OpenJDK 64-Bit Server VM (Red_Hat- (build 17.0.6+10-LTS, mixed mode, sharing)

Install Scala Programming Language

Although we have already written a complete tutorial on installing Scala on Rocky Linux 9, we are repeating the most necessary steps here for the sake of completeness.

Download the Coursier setup by executing the following wget command.

# wget
--2023-03-07 21:38:25--
Resolving (
Connecting to (||:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: [following]
--2023-03-07 21:38:26--
Resolving (,,, ...
Connecting to (||:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 20759374 (20M) [application/octet-stream]
Saving to: ‘cs-x86_64-pc-linux.gz’

cs-x86_64-pc-linux. 100%[===================>]  19.80M   508KB/s    in 40s

2023-03-07 21:39:08 (502 KB/s) - ‘cs-x86_64-pc-linux.gz’ saved [20759374/20759374]

Unzip the downloaded Coursier setup file by using the gunzip command.

# gunzip cs-x86_64-pc-linux.gz

Rename the extracted file to cs for convenience and grant execute permission on it.

# mv cs-x86_64-pc-linux cs
# chmod +x cs

Execute the Coursier setup file to initiate the installation of the Scala programming language.

# ./cs setup
Checking if a JVM is installed
Found a JVM installed under /usr/lib/jvm/java-17-openjdk-

Checking if ~/.local/share/coursier/bin is in PATH
  Should we add ~/.local/share/coursier/bin to your PATH via ~/.profile, ~/.bash_profile? [Y/n] Y

Checking if the standard Scala applications are installed
  Installed ammonite
  Installed cs
  Installed coursier
  Installed scala
  Installed scalac
  Installed scala-cli
  Installed sbt
  Installed sbtn
  Installed scalafmt

Source ~/.bash_profile once to set up the environment for your current session.

# source ~/.bash_profile

Check the installed Scala version.

# scala -version
Scala code runner version 3.2.2 -- Copyright 2002-2023, LAMP/EPFL
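
Optionally, you can confirm that the toolchain compiles and runs code rather than just reporting a version. Below is a minimal sketch; the file name hello.scala is arbitrary, and it can be run with the scala-cli launcher that Coursier installed, e.g. scala-cli run hello.scala.

// hello.scala - minimal program to confirm the Scala toolchain works.
// Run with: scala-cli run hello.scala
object Hello {
  def main(args: Array[String]): Unit =
    println("Scala toolchain is working on " + java.net.InetAddress.getLocalHost.getHostName)
}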

Install Apache Spark Software on Rocky Linux

Apache Spark is free software, so it is available to download from its official website.

You can copy the download link of the Apache Spark software and then use it with the wget command to download this open-source analytics engine.

# wget
--2023-03-07 22:00:35--
Resolving (, 2a04:4e42::644
Connecting to (||:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 299360284 (285M) [application/x-gzip]
Saving to: ‘spark-3.3.2-bin-hadoop3.tgz’

spark-3.3.2-bin-had 100%[===================>] 285.49M   898KB/s    in 4m 28s

2023-03-07 22:05:04 (1.06 MB/s) - ‘spark-3.3.2-bin-hadoop3.tgz’ saved [299360284/299360284]

Use the tar command to extract the Apache Spark software and then use the mv command to move it to the /opt directory.

# tar xf spark-3.3.2-bin-hadoop3.tgz
# mv spark-3.3.2-bin-hadoop3 /opt/spark

Create a file in the /etc/profile.d directory to set up the environment for Apache Spark at session startup.

# echo "export SPARK_HOME=/opt/spark" >> /etc/profile.d/
# echo "export PATH=$PATH:/opt/spark/bin:/opt/spark/sbin" >> /etc/profile.d/

Create a spark user and grant it ownership of the Apache Spark software.

# useradd spark
# chown -R spark:spark /opt/spark

Configure Linux Firewall

Apache Spark uses a master-slave architecture. The Spark master distributes tasks among Spark Slave services, which can exist on the same or other Apache Spark nodes.

Allow the default service ports of the Apache Spark Master and Apache Spark Worker nodes in the Linux firewall.

# firewall-cmd --permanent --add-port=6066/tcp
# firewall-cmd --permanent --add-port=7077/tcp
# firewall-cmd --permanent --add-port=8080-8081/tcp
# firewall-cmd --reload

Create Systemd Services

Create a systemd service for the Spark Master by using the vim text editor.

# vi /etc/systemd/system/spark-master.service

Add the following directives to this file, adjusting the paths and the user to match your environment.

[Unit]
Description=Apache Spark Master

[Service]
Type=forking
User=spark
ExecStart=/bin/bash /opt/spark/sbin/

[Install]
WantedBy=multi-user.target

Enable and start the Spark Master service.

# systemctl enable --now spark-master.service
Created symlink /etc/systemd/system/ → /etc/systemd/system/spark-master.service.

Check the status of the Spark Master service.

# systemctl status spark-master.service
● spark-master.service - Apache Spark Master
Loaded: loaded (/etc/systemd/system/spark-master.service; enabled; vendor >
Active: active (running) since Tue 2023-03-07 22:13:13 PKT; 4min 13s ago
Main PID: 6856 (java)
Tasks: 29 (limit: 10904)
Memory: 175.8M
CPU: 5.490s
CGroup: /system.slice/spark-master.service
└─6856 /usr/lib/jvm/java-17-openjdk->

Mar 07 22:13:11 systemd[1]: Starting Apache Spark Master>
Mar 07 22:13:11 bash[6850]: starting>
Mar 07 22:13:13 systemd[1]: Started Apache Spark Master.

Create a systemd service for the Spark Slave by using the vim text editor.

# vi /etc/systemd/system/spark-slave.service

Add the following directives to this file, again adjusting the paths and the user to match your environment.

[Unit]
Description=Apache Spark Slave

[Service]
Type=forking
User=spark
ExecStart=/bin/bash /opt/spark/sbin/ spark://
ExecStop=/bin/bash /opt/spark/sbin/

[Install]
WantedBy=multi-user.target

Enable and start the Spark Slave service.

# systemctl enable --now spark-slave.service
Created symlink /etc/systemd/system/ → /etc/systemd/system/spark-slave.service.

Check the status of the Apache Spark Slave service.

# systemctl status spark-slave.service
● spark-slave.service - Apache Spark Slave
Loaded: loaded (/etc/systemd/system/spark-slave.service; enabled; vendor p>
Active: active (running) since Tue 2023-03-07 22:16:33 PKT; 34s ago
Process: 6937 ExecStart=/bin/bash /opt/spark/sbin/ spark://19>
Main PID: 6950 (java)
Tasks: 33 (limit: 10904)
Memory: 121.0M
CPU: 5.022s
CGroup: /system.slice/spark-slave.service
└─6950 /usr/lib/jvm/java-17-openjdk->

Mar 07 22:16:30 systemd[1]: Starting Apache Spark Slave.>
Mar 07 22:16:30 bash[6937]: This script is deprecated, u>
Mar 07 22:16:31 bash[6944]: starting>
Mar 07 22:16:33 systemd[1]: Started Apache Spark Slave.

Access Apache Spark Server:

To access the Apache Spark Master Dashboard, open the master web UI URL (your server's hostname on port 8080) in a web browser.

Apache Spark Master Dashboard

Similarly, you can access the Apache Spark Slave Dashboard by opening the worker web UI URL (your server's hostname on port 8081) in a web browser.

Apache Spark Slave Dashboard
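
With both services running, you can also submit a small job against the standalone master to confirm that the cluster accepts work. Below is a minimal sketch to paste into spark-shell started against the master, for example /opt/spark/bin/spark-shell --master spark://spark-01:7077 (the hostname spark-01 is an assumption; use your own FQDN). After the job finishes, it should also appear in the Master Dashboard under completed applications.

// Paste into spark-shell connected to the standalone master.
// A simple distributed computation: sum the numbers 1 to 1,000,000 across 4 partitions.
val numbers = spark.sparkContext.parallelize(1L to 1000000L, 4)
val total = numbers.sum()
println(s"Sum computed by the cluster: $total")   // expected: 5.000005E11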

Video to install Apache Spark on Linux:


Final Thoughts

Equip yourself with the knowledge to effortlessly install Apache Spark on Rocky Linux 9 and embark on a journey of distributed computing excellence. Harness the full potential of your data processing tasks with this powerful framework, paving the way for scalable and efficient solutions tailored to your needs.
