
Hadoop vs Spark

In this article, we will look at the key differences between Hadoop and Spark. Before that, let us understand what gave rise to them. As we all know, the growth of the Internet has resulted in huge volumes of continuously generated data, commonly called Big Data. This data, in both structured and unstructured form, comes from sources such as social networks, the Internet of Things, and traditional transactional business systems. Distributed computing, a new generation of data management for Big Data, emerged as a revolutionary innovation in hardware and software technology. Distributed data processing facilitates the storage, processing, and access of this high-velocity, high-volume, and highly varied data.

With distributed computing taking the front seat in the Big Data ecosystem, two powerful Apache products, Hadoop and Spark, gained significant importance. Life became a lot easier for Big Data management professionals with Hadoop and Spark. Now, let us delve into the nitty-gritty.

The Hadoop Framework

The Apache Hadoop software library is a framework for distributed processing of large data sets (Big Data) across clusters of computers using simple programming models. It is designed to scale up from a single server to thousands of machines, each offering local computation and storage. Rather than relying on hardware to deliver high availability, the framework itself is designed to detect and handle failures at the application layer. Hadoop thus delivers a highly available distributed processing service on top of a cluster of computers.

The framework comprises the following modules:

  • Hadoop Common: The common utilities that support the other Hadoop modules.
  • Hadoop Distributed File System (HDFS™): The primary data storage used by Hadoop applications; a distributed file system that provides high-throughput access to application data.
  • Hadoop YARN: A framework for job scheduling and cluster resource management.
  • Hadoop MapReduce: A YARN-based system for parallel processing of large data sets (a word count sketch follows this list).
  • Hadoop Ozone: An object store for Hadoop.
  • Hadoop Submarine: A machine learning engine for Hadoop.
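
To make the MapReduce model concrete, here is a minimal word count sketch using Hadoop Streaming with Python. It is illustrative only, not Hadoop's native Java API; the file names and paths are placeholders.

    # --- mapper.py: Hadoop Streaming map step, emits one (word, 1) pair per word ---
    import sys

    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

    # --- reducer.py: reduce step; input arrives sorted by key, so counts
    # for each word can be summed in a single pass ---
    import sys

    current_word, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t", 1)
        if word != current_word:
            if current_word is not None:
                print(f"{current_word}\t{count}")
            current_word, count = word, 0
        count += int(value)
    if current_word is not None:
        print(f"{current_word}\t{count}")

The job would be submitted with the hadoop-streaming JAR, roughly: hadoop jar hadoop-streaming.jar -input /in -output /out -mapper mapper.py -reducer reducer.py (the exact JAR path depends on the installation).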

Spark Framework

Apache Spark is a general-purpose distributed data processing framework whose core engine is suitable for use in a wide range of computing circumstances. On top of the Spark core sit libraries for SQL, machine learning, graph computation, and stream processing, which can be used together in an application. The Resilient Distributed Dataset (RDD) is Spark's fundamental data structure.

Programming languages supported by Spark include Java, Python, Scala, and R. Application developers and data scientists incorporate Spark into their applications to rapidly query, analyze, and transform data at scale. Tasks most frequently associated with Spark include ETL and SQL batch jobs across large data sets; processing of streaming data from sensors, IoT devices, or financial systems; and machine learning tasks.
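
Here is a minimal PySpark sketch of the RDD model, assuming pyspark is installed locally; "local[*]" simply runs Spark on all local cores:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[*]").appName("rdd-demo").getOrCreate()
    sc = spark.sparkContext

    rdd = sc.parallelize(range(1, 11))            # distribute a small dataset
    squares = rdd.map(lambda x: x * x)            # transformation: recorded lazily
    even_squares = squares.filter(lambda x: x % 2 == 0)
    print(even_squares.collect())                 # action: triggers execution
    # -> [4, 16, 36, 64, 100]

    spark.stop()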

Both are from the Apache stable, so what paved the way for Apache Spark? Apache Hadoop MapReduce is an extremely popular and widely used execution engine, but users have consistently complained about its high latency: the batch-mode response is painful for real-time applications that need to process and analyze data quickly.

This leads us to Spark, a successor system that is more powerful and flexible than Hadoop MapReduce.

Spark runs on Hadoop, Apache Mesos, Kubernetes, standalone, or in the cloud, and it can access diverse data sources: HDFS, Alluxio, Apache Cassandra, Apache HBase, Apache Hive, and hundreds of others.
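
To illustrate that flexibility, here is a hedged sketch of reading from a few different sources; the URIs, table name, and options are placeholders, not real endpoints:

    from pyspark.sql import SparkSession

    # enableHiveSupport() is needed for the Hive query below.
    spark = (SparkSession.builder
             .appName("sources-demo")
             .enableHiveSupport()
             .getOrCreate())

    hdfs_df = spark.read.parquet("hdfs://namenode:8020/data/events")  # HDFS
    local_df = spark.read.csv("file:///tmp/input.csv", header=True)   # local files
    hive_df = spark.sql("SELECT * FROM sales")                        # Hive table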

You can learn a lot more about the two from https://hackr.io/tutorials/learn-hadoop-big-data and https://hackr.io/tutorials/learn-apache-spark.

Now that we have understood the fundamentals of Hadoop and Spark, let us understand how they are similar and how they differ.

  1. Open Source: Both Hadoop and Spark are Apache products and are open-source software for reliable, scalable distributed computing.
  2. Fault Tolerance: Both Hadoop and Spark are fault-tolerant. A Hadoop cluster keeps functioning even after a node fails, with fault tolerance achieved mainly through data replication and heartbeat messages. In Spark, fault tolerance comes from RDDs, its basic building blocks, which can be recomputed from their lineage after a failure.
  3. Data Integration: Data produced by different systems across a business is rarely clean or consistent enough to be combined simply and easily for reporting or analysis. Extract, transform, and load (ETL) processes are often used to pull data from different systems, clean and standardize it, and then load it into a separate system for analysis. Both Spark and Hadoop are used to reduce the cost and time this ETL process requires (a PySpark sketch of such a pipeline appears a little further below).
  4. Speed: Spark runs workloads up to 100 times faster than Hadoop. Apache Spark achieves high performance for both batch and streaming data using a state-of-the-art DAG scheduler, a query optimizer, and a physical execution engine, and it is designed for speed, operating both in memory and on disk. A classic speed comparison: the Databricks team processed 100 terabytes of data stored on solid-state drives in just 23 minutes on one-tenth of the machines, while the previous winner took 72 minutes using Hadoop and a different cluster configuration.

However, if Spark runs on YARN alongside other shared services, performance might degrade due to RAM contention and memory overhead. For this reason, Hadoop has been found to be the more efficient system for purely batch-processing use cases.
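
As the hedged ETL illustration promised above, this PySpark sketch extracts raw CSV data, cleans it, and loads it as Parquet; the paths and column names are invented:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl-demo").getOrCreate()

    # Extract: read the raw data.
    raw = spark.read.csv("hdfs:///raw/orders.csv", header=True, inferSchema=True)

    # Transform: dedupe, normalize types, drop bad rows.
    clean = (raw
             .dropDuplicates(["order_id"])
             .withColumn("order_date", F.to_date("order_date"))
             .filter(F.col("amount") > 0))

    # Load: write the curated data for analysis.
    clean.write.mode("overwrite").parquet("hdfs:///curated/orders")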

  5. Ease of Use: Hadoop MapReduce code is verbose and lengthy. In Spark, you can write applications quickly in Java, Scala, Python, R, and SQL, and Spark offers over 80 high-level operators that make it easy to build parallel applications. You can also use it interactively from the Scala, Python, R, and SQL shells. Spark’s capabilities are accessible via a set of rich APIs, all designed specifically for interacting quickly and easily with data at scale. These APIs are well documented and structured in a way that makes it straightforward for data scientists and application developers to put Spark to work quickly.
  6. General Usage: With Hadoop MapReduce, you can only process a batch of stored data, but with Spark it is also possible to work on data in real time through Spark Streaming.

With Spark Streaming, it is possible to pass data through various software functions, for instance performing analytics on data as it is collected (see the sketch below). Developers can also use Apache Spark for graph processing, which maps the relationships in data among entities such as people and objects. Organizations can use Apache Spark with its predefined machine learning libraries to perform machine learning on data stored in Hadoop clusters.
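
As a hedged sketch of this idea using Spark's Structured Streaming API, the following counts words arriving on a local socket (for example, one opened with "nc -lk 9999"); the host and port are assumptions:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.master("local[*]").appName("stream-demo").getOrCreate()

    lines = (spark.readStream.format("socket")
             .option("host", "localhost").option("port", 9999).load())

    # Split each line into words and keep a running count per word.
    counts = (lines
              .select(F.explode(F.split(lines.value, " ")).alias("word"))
              .groupBy("word").count())

    query = counts.writeStream.outputMode("complete").format("console").start()
    query.awaitTermination()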

Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application.
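
As a small, hedged example of that composition, the sketch below feeds a DataFrame straight into MLlib; the tiny inline dataset and column names are invented for illustration:

    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.regression import LinearRegression

    spark = SparkSession.builder.master("local[*]").appName("stack-demo").getOrCreate()

    # DataFrames hold the data...
    df = spark.createDataFrame(
        [(1.0, 2.0, 5.1), (2.0, 3.0, 8.9), (3.0, 5.0, 14.2)],
        ["x1", "x2", "label"])

    # ...and MLlib consumes them directly in the same application.
    features = VectorAssembler(inputCols=["x1", "x2"], outputCol="features").transform(df)
    model = LinearRegression(featuresCol="features", labelCol="label").fit(features)
    print(model.coefficients)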

  7. Latency: Hadoop is a high-latency computing framework with no interactive mode, whereas Spark is a low-latency framework that can process data interactively.
  8. Support: Both Hadoop and Spark, being open source, have plenty of support for applications. The Apache Spark community is large, active, and international.

A growing set of commercial providers, including Databricks, IBM, and all of the main Hadoop vendors, deliver comprehensive support for Spark-based solutions.

Top six vendors offering Big Data Hadoop solutions are:

  • Cloudera
  • Hortonworks
  • Amazon Web Services Elastic MapReduce Hadoop Distribution
  • Microsoft
  • MapR
  • IBM InfoSphere Insights
  9. Costs: Hadoop and Spark are both Apache open-source projects, so there is no cost for the software; cost is associated only with the infrastructure. Both products are designed to run on commodity hardware with a low total cost of ownership (TCO).
  10. Memory Usage: Storage and processing in Hadoop are disk-based, and Hadoop uses standard amounts of memory. So, with Hadoop, we need a lot of disk space as well as faster disks. Hadoop also requires multiple systems to distribute the disk I/O.

Due to Apache Spark’s in-memory processing, it requires a lot of RAM. Since disk space is a relatively inexpensive commodity while memory is not, and Spark does not rely on disk I/O for processing, a Spark system incurs higher hardware costs.

One important thing to keep in mind is that Spark significantly reduces the need for hardware systems and hence lowers the TCO.
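
Because executor RAM is the main resource in a Spark deployment, memory is typically tuned per job. Here is a hedged configuration sketch; the values are placeholders, not recommendations:

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("memory-demo")
             .config("spark.executor.memory", "8g")    # heap per executor
             .config("spark.memory.fraction", "0.6")   # share for execution/storage
             .getOrCreate())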

Hadoop vs Spark: Head to Head Comparison

Here’s an easy-to-understand table that captures the differences and similarities of Hadoop and Spark:

Feature          | Hadoop                                      | Spark
-----------------|---------------------------------------------|-------------------------------------
Open Source      | Yes                                         | Yes
Fault Tolerance  | Yes                                         | Yes
Data Integration | Yes                                         | Yes
Speed            | Low performance                             | High performance (up to 100x faster)
Ease of Use      | Verbose and lengthy; slow development cycle | Fast development cycle
General Usage    | Batch data processing                       | Real-time and batch data processing
Latency          | High                                        | Low
Support          | Yes                                         | Yes
Cost             | Low TCO                                     | Low TCO
Memory Usage     | Disk-based                                  | RAM-based

Is Hadoop required for Spark?

As per the Spark documentation, Spark can run without Hadoop: you can run it in standalone mode without any resource manager. But if you want a multi-node setup, you need a resource manager such as YARN or Mesos, and a distributed file system such as HDFS or S3.
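
Here is a minimal sketch of Spark with no Hadoop components at all, running in local mode against the local file system (any readable local file would do):

    from pyspark.sql import SparkSession

    # local[*] needs no resource manager; file:// needs no HDFS.
    spark = SparkSession.builder.master("local[*]").appName("no-hadoop").getOrCreate()

    df = spark.read.text("file:///etc/hosts")
    print(df.count())
    spark.stop()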

What to choose, Hadoop or Spark?

Hadoop and Spark are not mutually exclusive and can work together. Real-time, faster data processing in Hadoop is not possible without Spark, while Spark does not need a Hadoop cluster to work. On the other hand, Spark has no file system of its own for distributed storage: it can read and process data from other file systems, and HDFS is just one of the file systems Spark supports. Still, there are many advantages to running Spark on top of Hadoop, with HDFS for storage and YARN as the resource manager (as sketched below), though it is not a mandatory requirement.
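
For contrast with the standalone example above, here is a hedged sketch of Spark on top of Hadoop, with YARN scheduling the executors and the data living in HDFS; it assumes HADOOP_CONF_DIR points at the cluster configuration, and the path and column name are invented:

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .master("yarn")                    # let YARN manage executors
             .appName("spark-on-hadoop")
             .getOrCreate())

    df = spark.read.parquet("hdfs:///warehouse/events")  # data stored in HDFS
    df.groupBy("event_type").count().show()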

Current trends and user feedback favor the in-memory technique, which means Apache Spark is the preferred choice.


Sindhuja Hari

Sindhuja Hari, a technology geek, content producer, and writer, brings over 20 years of experience. She builds and delivers best-in-class content for global audiences. Her favorite domains/genres are IT, Manufacturing, Logistics & Supply Chain, and Travel & Leisure.
