Introduction to Apache Spark
Thomas Ropars
[Link]@[Link]
[Link]
2018
1
References
The content of this lecture is inspired by:
• The lecture notes of Yann Vernaz.
• The lecture notes of Vincent Leroy.
• The lecture notes of Renaud Lachaize.
• The lecture notes of Henggang Cui.
2
Goals of the lecture
• Present the main challenges associated with distributed
computing
• Review the MapReduce programming model for distributed
computing
– Discuss the limitations of Hadoop MapReduce
• Learn about Apache Spark and its internals
• Start programming with PySpark
3
Agenda
Computing at large scale
Programming distributed systems
MapReduce
Introduction to Apache Spark
Spark internals
Programming with PySpark
4
Agenda
Computing at large scale
Programming distributed systems
MapReduce
Introduction to Apache Spark
Spark internals
Programming with PySpark
5
Distributed computing: Definition
A distributed computing system is a system including several
computational entities where:
• Each entity has its own local memory
• All entities communicate by message passing over a network
Each entity of the system is called a node.
6
Distributed computing: Motivation
There are several reasons why one may want to distribute data and
processing:
• Scalability
– The data do not fit in the memory/storage of one node
– The processing power of more processors can reduce the time to solution
• Fault tolerance / availability
– Continue delivering a service despite node crashes
• Latency
– Put computing resources close to the users to decrease latency
7
Increasing the processing power
Goals
• Increasing the amount of data that can be processed (weak
scaling)
• Decreasing the time needed to process a given amount of data
(strong scaling)
Two solutions
• Scaling up
• Scaling out
8
Vertical scaling (scaling up)
Idea
Increase the processing power by adding resources to existing
nodes:
• Upgrade the processor (more cores, higher frequency)
• Increase memory capacity
• Increase storage capacity
Pros and Cons
(+) Performance improvement without modifying the application
(−) Limited scalability (capabilities of the hardware)
(−) Expensive (non-linear costs)
9
Horizontal scaling (scaling out)
Idea
Increase the processing power by adding more nodes to the system
• Cluster of commodity servers
Pros and Cons
(−) Often requires modifying applications
(+) Less expensive (nodes can be turned off when not needed)
(+) (Near-)infinite scalability
Main focus of this lecture
10
Large scale infrastructures
Figure: Google Data-center
Figure: Barcelona Supercomputing Center
Figure: Amazon Data-center
11
Programming for large-scale infrastructures
Challenges
• Performance
– How to take full advantage of the available resources?
– Moving data is costly
• How to maximize the ratio between computation and communication?
• Scalability
– How to take advantage of a large number of distributed resources?
• Fault tolerance
– The more resources, the higher the probability of failure
– MTBF (Mean Time Between Failures)
• MTBF of one server = 3 years
• MTBF of 1000 servers ≈ 19 hours (beware: over-simplified computation)
12
Programming in the Clouds
Cloud computing
• A service provider gives access to computing resources
through an internet connection.
Pros and Cons
(+) Pay only for the resources you use
(+) Get access to large amounts of resources
– Amazon Web Services features millions of servers
(−) Volatility
– Limited control over the resources
– Example: Access to resources based on bidding
– See ”The Netflix Simian Army”
(−) Performance variability
– Physical resources shared with other users
13
Architecture of a data center
Figure: Simplified view of a data center: a switch connects nodes that each provide a processor, memory, and storage.
14
Architecture of a data center
A shared-nothing architecture
• Horizontal scaling
• No specific hardware
A hierarchical infrastructure
• Resources clustered in racks
• Communication inside a rack is more efficient than between
racks
• Resources can even be geographically distributed over several
datacenters
15
A warning about distributed computing
You can have a second computer once you’ve shown you
know how to use the first one. (P. Barham)
Horizontal scaling is very popular.
• But not always the most efficient solution (both in time and
cost)
Examples
• Processing a few tens of GB of data is often more efficient on a single machine than on a cluster of machines
• Sometimes a single-threaded program outperforms a cluster of machines (F. McSherry et al. “Scalability! But at what COST?”. 2015.)
16
Agenda
Computing at large scale
Programming distributed systems
MapReduce
Introduction to Apache Spark
Spark internals
Programming with PySpark
17
Summary of the challenges
Context of execution
• Large number of resources
• Resources can crash (or disappear)
– Failure is the norm rather than the exception.
• Resources can be slow
Objectives
• Run until completion
– And obtain a correct result :-)
• Run fast
18
Shared memory and message passing
Two paradigms for communicating between computing entities:
• Shared memory
• Message passing
19
Shared memory
• Entities share a global memory
• Communication by reading and writing to the globally shared
memory
• Examples: Pthreads, OpenMP, etc
20
Message passing
• Entities have their own private memory
• Communication by sending/receiving messages over a network
• Example: MPI
21
Dealing with failures: Checkpointing
Checkpointing
Figure: An application (App) taking periodic checkpoints (ckpt 1, ckpt 2, ckpt 3, ckpt 4).
• Saving the complete state of the application periodically
• Restart from the most recent checkpoint in the event of a
failure.
22
About checkpointing
Main solution when processes can apply fine-grained modifications
to the data (Pthreads or MPI)
• A process can modify any single byte independently
• Impossible to log all modifications
Limits
• Performance cost
• Difficult to implement
• The alternatives (passive or active replication) are even more
costly and difficult to implement in most cases
23
About slow resources (stragglers)
Performance variations
• Both for the nodes and the network
• Resources shared with other users
Impact on classical message-passing systems (MPI)
• Tightly-coupled processes
– Process A waits for a message from process B before continuing its computation
Do some computation
new_data = Recv(from B) /*blocking*/
Resume computing with new_data
Figure: Code of process A. If B is slow, A becomes idle.
24
The Big Data approach
Provide a distributed computing execution framework
• Simplify parallelization
– Define a programming model
– Handle the distribution of the data and the computation
• Fault tolerance
– Detect failures
– Automatically take corrective actions
• Code once (by an expert), benefit to all
Limit the operations that a user can run on data
• Inspired by functional programming (e.g., MapReduce)
• Examples of frameworks:
– Hadoop MapReduce, Apache Spark, Apache Flink, etc.
25
Agenda
Computing at large scale
Programming distributed systems
MapReduce
Introduction to Apache Spark
Spark internals
Programming with PySpark
26
MapReduce at Google
References
• The Google file system, S. Ghemawat et al. SOSP 2003.
• MapReduce: simplified data processing on large clusters, J. Dean and S. Ghemawat. OSDI 2004.
Main ideas
• Data represented as key-value pairs
• Two main operations on data: Map and Reduce
• A distributed file system
– Compute where the data are located
Use at Google
• Compute the index of the World Wide Web.
• Google has moved on to other technologies
27
Apache Hadoop
28
Apache Hadoop
In a few words
• Built on top of the ideas of Google
• A full data processing stack
• The core elements
– A distributed file system: HDFS (Hadoop Distributed File System)
– A programming model and execution framework: Hadoop MapReduce
MapReduce
• Allows many parallel/distributed algorithms to be expressed simply
29
MapReduce
The Map operation
• Transformation operation
• map(f)[x0, ..., xn] = [f(x0), ..., f(xn)]
• map(*2)[2, 3, 6] = [4, 6, 12]
The Reduce operation
• Aggregation operation (fold)
• reduce(f)[x0, ..., xn] = f(x0, f(x1, ..., f(xn−1, xn)...))
• reduce(+)[2, 3, 6] = (2 + (3 + 6)) = 11 (see the sketch below)
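As an aside, both operations exist in plain Python; a minimal sketch (standard library only, not Spark) reproducing the two examples above:

from functools import reduce

print(list(map(lambda x: x * 2, [2, 3, 6])))   # [4, 6, 12]
# functools.reduce folds from the left, i.e. ((2 + 3) + 6);
# the result is the same here because + is associative
print(reduce(lambda a, b: a + b, [2, 3, 6]))   # 11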
30
Hadoop MapReduce
Key/Value pairs
• MapReduce manipulates sets of Key/Value pairs
• Keys and values can be of any types
Functions to apply
• The user defines the functions to apply
• In Map, the function is applied independently to each pair
• In Reduce, the function is applied to all values with the same
key
31
Hadoop MapReduce
About the Map operation
• A given input pair may map to zero or many output pairs
• Output pairs need not be of the same type as input pairs
About the Reduce operation
• Applies operation to all pairs with the same key
• 3 steps:
– Shuffle: The output of the mappers is redistributed to the reducers according to the keys
– Sort: The values associated with each key are grouped and merged
– Reduce: The reduce operation is applied to the new key/value pairs
32
A first MapReduce program
Word Count
Description
• Input: A set of lines including words
– Pairs < line number, line content >
– The initial keys are ignored in this example
• Output: A set of pairs < word, nb of occurrences >
Input:
• < 1, ”aaa bb ccc” >
• < 2, ”aaa bb” >
Output:
• < ”aaa”, 2 >
• < ”bb”, 2 >
• < ”ccc”, 1 >
33
A first MapReduce program
Word Count
map(key, value):   /* pairs of {line num, line content} */
    foreach word in value.split():
        emit(word, 1)

reduce(key, values):   /* {word, list of nb of occurrences} */
    result = 0
    for value in values:
        result += value
    emit(key, result)   /* -> {word, nb of occurrences} */
34
A first MapReduce program
Word Count
Figure: The map phase turns each input pair into (word, 1) pairs; the reduce phase then sums the counts per word. For the inputs < 1, ”aaa bb ccc” >, < 2, ”bb bb d” >, < 3, ”d aaa bb” >, < 4, ”d” >, the output is < ”aaa”, 2 >, < ”bb”, 4 >, < ”ccc”, 1 >, < ”d”, 3 >.
Logical representation (no notion of distribution)
35
Distributed execution of Word Count
Figure: Distributed execution over several nodes. Node A holds < 1, ”aa bb” > and < 2, ”aa aa” >; node B holds < 1, ”bb bb” > and < 2, ”bb” >. Each node runs map and then a local combiner (comb); the combined pairs are shuffled to the reduce tasks, which output < ”aa”, 3 > (on node A) and < ”bb”, 4 > (on node C).
36
Example: Web index
Description
Construct an index of the pages in which a word appears.
• Input: A set of web pages
– Pairs < URL, content of the page >
• Output: A set of pairs < word, set of URLs >
37
Example: Web index
map(key, value):   /* pairs of {URL, page_content} */
    foreach word in value.split():
        emit(word, key)

reduce(key, values):   /* {word, URLs} */
    list = []
    for value in values:
        list.append(value)
    emit(key, list)   /* {word, list of URLs} */
38
Running at scale
How to distribute data?
• Partitioning
• Replication
Partitioning
• Splitting the data into partitions
• Partitions are assigned to different nodes
• Main goal: Performance
– Partitions can be processed in parallel
Replication
• Several nodes host a copy of the data
• Main goal: Fault tolerance
– No data is lost if one node crashes
39
Hadoop Distributed File System (HDFS)
Main ideas
• Running on a cluster of commodity servers
– Each node has a local disk
– A node may fail at any time
• The content of files is stored on the disks of the nodes
– Partitioning: Files are partitioned into blocks that can be stored on different Datanodes
– Replication: Each block is replicated on multiple Datanodes (see the sketch below)
• Default replication degree: 3
– A Namenode regulates access to files by clients
• Master-worker architecture
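To make the partitioning/replication idea concrete, here is a minimal sketch (illustrative only, not the actual HDFS placement policy): a file is cut into fixed-size blocks and each block is assigned to three Datanodes. The 128 MB block size, the node names, and the round-robin placement are assumptions.

BLOCK_SIZE = 128 * 1024 * 1024   # assumed block size (128 MB)
REPLICATION = 3                  # default replication degree
datanodes = ["node1", "node2", "node3", "node4", "node5"]   # hypothetical cluster

def place_blocks(file_size):
    # number of blocks needed for the file (ceiling division)
    nb_blocks = -(-file_size // BLOCK_SIZE)
    placement = {}
    for b in range(nb_blocks):
        # simplistic round-robin placement of the 3 replicas of each block
        placement[b] = [datanodes[(b + r) % len(datanodes)] for r in range(REPLICATION)]
    return placement

print(place_blocks(400 * 1024 * 1024))   # a 400 MB file is split into 4 blocks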
40
HDFS architecture
Figure from [Link]
41
Hadoop data workflow
Figure from [Link]
42
Hadoop workflow: a few comments
Data movements
• Map tasks are executed on the nodes where the data blocks are hosted
– Or on nearby nodes
– It is less expensive to move computation than to move data
• Load balancing between the reducers
– The output of the mappers is partitioned according to the number of reducers (modulo on a hash of the key), as sketched below
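A minimal sketch of this partitioning scheme, assuming Python's built-in hash() stands in for the hash function used by Hadoop's partitioner:

NB_REDUCERS = 4   # assumed number of reducers

def choose_reducer(key, nb_reducers=NB_REDUCERS):
    # every occurrence of a key goes to the same reducer
    return hash(key) % nb_reducers

for word in ["aaa", "bb", "ccc", "d"]:
    print(word, "-> reducer", choose_reducer(word))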
43
Hadoop workflow: a few comments
I/O operations
• Map tasks read data from disk
• The output of the mappers is stored in memory if possible
– Otherwise it is flushed to disk
• The result of the reduce tasks is written into HDFS
Fault tolerance
• Execution of tasks is monitored by the master node
– Tasks are launched again on other nodes if they crash or are too slow
44
Agenda
Computing at large scale
Programming distributed systems
MapReduce
Introduction to Apache Spark
Spark internals
Programming with PySpark
45
Apache Spark
• Originally developed at the University of California, Berkeley
• Resilient distributed datasets: A fault-tolerant abstraction for
in-memory cluster computing, M. Zaharia et al. NSDI, 2012.
• One of the most popular Big Data project today.
46
Spark vs Hadoop
Spark added value
• Performance
– Especially for iterative algorithms
• Interactive queries
• Supports more operations on data
• A full ecosystem (High level libraries)
• Running on your machine or at scale
Main novelties
• Computing in memory
• A new computing abstraction: Resilient Distributed Datasets
(RDD)
47
Programming with Spark
Spark Core API
• Scala
• Java
• Python
Integration with Hadoop
Works with any storage source supported by Hadoop
• Local file systems
• HDFS
• Cassandra
• Amazon S3
48
Many resources to get started
• [Link]
• [Link]
• Many courses, tutorials, and examples available online
49
Starting with Spark
Running in local mode
• Spark runs in a JVM
– Spark is coded in Scala
• Read data from your local file system
Use interactive shell
• Scala (spark-shell)
• Python (pyspark)
• Run locally or distributed at scale
50
A very first example with pyspark
Counting lines
51
The Spark Web UI
52
The Spark built-in libraries
• Spark SQL: For structured data (Dataframes)
• Spark Streaming: Stream processing (micro-batching)
• MLlib: Machine learning
• GraphX: Graph processing
53
Agenda
Computing at large scale
Programming distributed systems
MapReduce
Introduction to Apache Spark
Spark internals
Programming with PySpark
54
In-memory computing: Insights
See Latency Numbers Every Programmer Should Know
Memory is way faster than disks
Read latency
• HDD: a few milliseconds
• SSD: tens of microseconds (100X faster than HDD)
• DRAM: 100 nanoseconds (100X faster than SSD)
55
In-memory computing: Insights
Graph by P. Johnson
The cost of memory decreases ⇒ more memory per server
56
Efficient iterative computation
Hadoop: At each step, data go through the disks
Spark: Data remain in memory (if possible)
57
Main challenge
Fault Tolerance
Failure is the norm rather than the exception
On a node failure, all data in memory is lost
58
Resilient Distributed Datasets
Restricted form of distributed shared memory
• Read-only partitioned collection of records
• Creation of an RDD through deterministic operations
(transformations) on either:
– Data stored on disk
– An existing RDD
59
Transformations and actions
Programming with RDDs
• An RDD is represented as an object
• The programmer defines RDDs using transformations
– Applied to data on disk or to existing RDDs
– Examples of transformations: map, filter, join
• The programmer uses RDDs in actions
– Operations that return a value or export data to the file system
– Examples of actions: count, reduce (see the sketch below)
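A minimal PySpark sketch of this distinction, assuming a SparkContext sc as created later in the lecture; the transformations only define new RDDs, and the action triggers the computation:

nums = sc.parallelize(range(10))                  # RDD built from a Python collection
doubled = nums.map(lambda x: 2 * x)               # transformation: defines a new RDD
evens = doubled.filter(lambda x: x % 4 == 0)      # transformation: defines another RDD
print(evens.count())                              # action: triggers the computation -> 5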
60
Fault tolerance with Lineage
Lineage = a description of an RDD
• The data source on disk
• The sequence of applied transformations
– The same transformation is applied to all elements
– Low footprint for storing a lineage
Fault tolerance
• RDD partition lost
– Replay all transformations on the subset of input data or on the most recent RDD available
• Dealing with stragglers
– Generate a new copy of the partition on another node
61
Spark runtime
Figure by M. Zaharia et al
• Driver
– Executes the user program
– Defines RDDs and invokes actions
– Tracks the RDDs’ lineage
• Workers
– Store RDD partitions
– Perform transformations and actions
– Run tasks
62
Persistence and partitioning
See https://spark.apache.org/docs/latest/rdd-programming-guide.html#rdd-persistence
Different options of persistence for RDDs
• Options:
– Storage: memory / disk / both
– Replication: yes/no
– Serialization: yes/no
Partitions
• RDDs are automatically partitioned based on:
– The configuration of the target platform (nodes, CPUs)
– The size of the RDD
– The user can also specify a custom partitioning
• A task is created for each partition (see the sketch below)
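A short PySpark sketch of these two knobs, assuming a SparkContext sc; the storage level and the number of partitions are chosen for illustration:

from pyspark import StorageLevel

rdd = sc.parallelize(range(1000), 8)          # explicitly request 8 partitions
print(rdd.getNumPartitions())                 # 8
rdd.persist(StorageLevel.MEMORY_AND_DISK)     # keep in memory, spill to disk if needed
print(rdd.count())                            # 1000 (the action materializes the RDD)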
63
RDD dependencies
Transformations create dependencies between RDDs.
2 kinds of dependencies
• Narrow dependencies
– Each partition in the parent is used by at most one partition in the child
• Wide (shuffle) dependencies
– Each partition in the parent is used by multiple partitions in the child
Impact of dependencies
• Scheduling: Which tasks can be run independently
• Fault tolerance: Which partitions are needed to recreate a lost partition
• Communication: Shuffling implies large amounts of data exchange (see the sketch below)
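A small PySpark illustration, assuming a SparkContext sc: mapValues only creates a narrow dependency, while groupByKey introduces a wide dependency and therefore a shuffle.

pairs = sc.parallelize([("a", 1), ("b", 2), ("a", 3)], 4)
scaled = pairs.mapValues(lambda v: v * 10)        # narrow: no data moves between partitions
grouped = pairs.groupByKey()                      # wide: values with the same key are shuffled
print(sorted(grouped.mapValues(list).collect()))
# e.g. [('a', [1, 3]), ('b', [2])] (order of grouped values not guaranteed)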
64
RDD dependencies
Figure by M. Zaharia et al
65
Executing transformations and actions
Lazy evaluation
• Transformations are executed only when an action is called on
the corresponding RDD
• Examples of optimizations allowed by lazy evaluation
– Read file from disk + action first(): no need to read the whole file
– Read file from disk + transformation filter(): no need to create an intermediate object that contains all the lines (see the sketch below)
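A minimal sketch of the first case, assuming a SparkContext sc; the file name is a placeholder:

log = sc.textFile("access.log")               # hypothetical file: nothing is read here
errors = log.filter(lambda l: "ERROR" in l)   # transformation: still nothing is read
print(errors.first())                         # the action triggers evaluation; reading
                                              # can stop at the first matching line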
66
Persist an RDD
• By default, an RDD is recomputed for each action run on it.
• An RDD can be cached in memory by calling persist() or cache()
– Useful if multiple actions are to be run on the same RDD (iterative algorithms)
– Can lead to a 10X speedup
– Note that a call to persist() does not trigger the evaluation of transformations
67
Agenda
Computing at large scale
Programming distributed systems
MapReduce
Introduction to Apache Spark
Spark internals
Programming with PySpark
68
The SparkContext
What is it?
• Object representing a connection to an execution cluster
• We need a SparkContext to build RDDs
Creation
• Automatically created when running in shell (variable sc)
• To be initialized when writing a standalone application
Initialization
• Run in local mode with nb threads = nb cores: local[*]
• Run in local mode with 2 threads: local[2]
• Run on a spark cluster: spark://HOST:PORT
69
The SparkContext
Python shell
$ pyspark --master local[*]
Python program
import pyspark
sc = pyspark.SparkContext("local[*]")
70
The first RDDs
Create RDD from existing iterator
• Use of sc.parallelize()
• Optional second argument to define the number of partitions
data = [1, 2, 3, 4, 5]
distData = sc.parallelize(data)
Create RDD from a file
• Use of sc.textFile()
data = sc.textFile("[Link]")
hdfsData = sc.textFile("hdfs://[Link]")
71
Some transformations
see https:
//[Link]/docs/latest/[Link]#transformations
• map(f): Applies f to all elements of the RDD. f generates a single
item
• flatMap(f): Same as map but f can generate 0 or several items
• filter(f): New RDD with the elements for which f returns true
• union(other)/intersection(other): New RDD being the union/intersection of the initial RDD and other.
• cartesian(other): When called on datasets of types T and U, returns a dataset of (T, U) pairs (all pairs of elements)
• distinct(): New RDD with the distinct elements
• repartition(n): Reshuffle the data in the RDD randomly to create either more or fewer partitions and balance it across them
A few of these transformations are illustrated in the sketch below.
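A minimal sketch, assuming a SparkContext sc; the printed values are what local execution would produce (ordering of distinct() is not guaranteed, hence the sorted()):

nums = sc.parallelize([1, 2, 2, 3, 4])
print(nums.map(lambda x: x * 2).collect())            # [2, 4, 4, 6, 8]
print(nums.flatMap(lambda x: [x] * x).take(5))        # [1, 2, 2, 2, 2]
print(nums.filter(lambda x: x % 2 == 0).collect())    # [2, 2, 4]
print(sorted(nums.distinct().collect()))              # [1, 2, 3, 4]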
72
Some transformations with <K,V> pairs
• groupByKey(): When called on a dataset of (K, V) pairs, returns a
dataset of (K, Iterable<V>) pairs.
• reduceByKey(f): When called on a dataset of (K, V) pairs, merges the values for each key using an associative and commutative reduce function
• aggregateByKey(): see documentation
• join(other): Called on datasets of type (K, V) and (K, W), returns a dataset of (K, (V, W)) pairs with all pairs of elements for each key (see the sketch below)
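A minimal sketch of reduceByKey and join, assuming a SparkContext sc; the data is made up for illustration:

sales = sc.parallelize([("a", 2), ("b", 1), ("a", 3)])
names = sc.parallelize([("a", "apples"), ("b", "bananas")])
print(sorted(sales.reduceByKey(lambda x, y: x + y).collect()))
# [('a', 5), ('b', 1)]
print(sorted(sales.join(names).collect()))
# [('a', (2, 'apples')), ('a', (3, 'apples')), ('b', (1, 'bananas'))]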
73
Some actions
see
[Link]
• reduce(f): Aggregate the elements of the dataset using f (takes two
arguments and returns one).
• collect(): Return all the elements of the dataset as an array.
• count(): Return the number of elements in the dataset.
• take(n): Return an array with the first n elements of the dataset.
• takeSample(): Return an array with a random sample of num
elements of the dataset.
• countByKey(): Only available on RDDs of type (K, V). Returns a hashmap of (K, Int) pairs with the count of each key.
A few of these actions are illustrated in the sketch below.
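A minimal sketch, assuming a SparkContext sc:

rdd = sc.parallelize([1, 2, 3, 4, 5])
print(rdd.reduce(lambda a, b: a + b))        # 15
print(rdd.count())                           # 5
print(rdd.take(3))                           # [1, 2, 3]
pairs = sc.parallelize([("a", 1), ("b", 1), ("a", 1)])
print(dict(pairs.countByKey()))              # {'a': 2, 'b': 1}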
74
An example
from pyspark import SparkContext
sc = SparkContext("local")

# define a first RDD
lines = sc.textFile("[Link]")
# define a second RDD
lineLengths = lines.map(lambda s: len(s))
# Make the RDD persist in memory
lineLengths.persist()
# At this point no transformation has been run
# Launch the evaluation of all transformations
totalLength = lineLengths.reduce(lambda a, b: a + b)
75
An example with key-value pairs
lines = sc.textFile("[Link]")
words = lines.flatMap(lambda s: s.split(' '))
pairs = words.map(lambda s: (s, 1))
counts = pairs.reduceByKey(lambda a, b: a + b)
# Warning: sortByKey implies a shuffle
result = counts.sortByKey().collect()
76
Another example with key-value pairs
rdd = sc.parallelize([("a", 1), ("b", 1), ("a", 1)])
# mapValues applies f to each value
# without changing the key
sorted(rdd.groupByKey().mapValues(len).collect())
# [('a', 2), ('b', 1)]
sorted(rdd.groupByKey().mapValues(list).collect())
# [('a', [1, 1]), ('b', [1])]
77
Shared Variables
see https://spark.apache.org/docs/latest/rdd-programming-guide.html#shared-variables
Broadcast variables
• Use case: A large read-only variable should be made available to all tasks (e.g., used in a map function)
• Shipping it with each task would be costly
• Declare a broadcast variable
– Spark will make the variable available to all tasks in an efficient way
78
Example with a Broadcast variable
b = sc.broadcast([1, 2, 3, 4, 5])
print(b.value)
# [1, 2, 3, 4, 5]
print(sc.parallelize([0, 0])
      .flatMap(lambda x: b.value).collect())
# [1, 2, 3, 4, 5, 1, 2, 3, 4, 5]
b.unpersist()
79
Shared Variables
Accumulator
• Use case: Accumulate values over all tasks
• Declare an Accumulator on the driver
– Updates by the tasks are automatically propagated to the driver
• Default accumulators: operator '+=' on int and float
– Users can define custom accumulator functions
80
Example with an Accumulator
file = sc.textFile(inputFile)
# Create an Accumulator[Int] initialized to 0
blankLines = sc.accumulator(0)

def splitLine(line):
    # Make the global variable accessible
    global blankLines
    if not line:
        blankLines += 1
    return line.split(" ")

words = file.flatMap(splitLine)
print(blankLines.value)
81
additional slides
82
Job scheduling
Main ideas
• Tasks are run when the user calls an action
• A Directed Acyclic Graph (DAG) of transformations is built
based on the RDD’s lineage
• The DAG is divided into stages. The boundaries of a stage are defined by:
– Wide dependencies
– Already-computed RDDs
• Tasks are launched to compute the missing partitions of each stage until the target RDD is computed (see the sketch below)
– Data locality is taken into account when assigning tasks to workers
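One way to peek at this in PySpark, assuming a SparkContext sc: toDebugString() prints an RDD's recursive dependencies, and the shuffle introduced by reduceByKey corresponds to a stage boundary.

pairs = sc.parallelize([("a", 1), ("b", 1), ("a", 2)], 2)
counts = pairs.reduceByKey(lambda a, b: a + b)   # wide dependency -> stage boundary
print(counts.toDebugString())                    # lineage of counts, including the shuffle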
83
Stages in a RDD’s DAG
Figure by M. Zaharia et al
84