Parallel Programming Concepts
OpenHPI Course
Week 1 : Terminology and fundamental concepts
Unit 1.1: Welcome !
Dr. Peter Tröger + Teaching Team
Course Content
■  Overview of theoretical and practical concepts
■  This course is for you if …
□  … you have skills in software development,
regardless of the programming language.
□  … you want to get an overview of parallelization concepts.
□  … you want to assess the feasibility of parallel hardware,
software and libraries for your parallelization problem.
■  This course is not for you if …
□  … you have no practical experience with software
development at all.
□  … you want a solution for a specific parallelization problem.
□  … you want to learn one specific parallel programming tool
or language in detail.
Parallel Programming Concepts
Course Organization
■  Six lecture weeks, final exam in week 7
■  Several lecture units per week, per unit:
□  Video, slides, non-graded self-test
□  Sometimes mandatory and optional readings
□  Sometimes optional programming tasks
□  Week finished with a graded assignment
■  Six graded assignments sum up to max. 90 points
■  Graded final exam with max. 90 points
■  OpenHPI certificate awarded for getting ≥90 points in total
■  Forum can be used to discuss with other participants
■  FAQ is constantly updated
Course Organization
■  Week 1: Terminology and fundamental concepts
□  Moore’s law, power wall, memory wall, ILP wall,
speedup vs. scaleup, Amdahl’s law, Flynn’s taxonomy, …
■  Week 2: Shared memory parallelism – The basics
□  Concurrency, race condition, semaphore, mutex,
deadlock, monitor, …
■  Week 3: Shared memory parallelism – Programming
□  Threads, OpenMP, Intel TBB, Cilk, Scala, …
■  Week 4: Accelerators
□  Hardware today, CUDA, GPU Computing, OpenCL, …
■  Week 5: Distributed memory parallelism
□  CSP, Actor model, clusters, HPC, MPI, MapReduce, …
■  Week 6: Patterns, best practices and examples
Why Parallel?
Computer Markets
■  Embedded and Mobile Computing
□  Cars, smartphones, entertainment industry, medical devices, …
□  Power/performance and price as relevant issues
■  Desktop Computing
□  Price/performance ratio and extensibility as relevant issues
■  Server Computing
□  Business service provisioning as typical goal
□  Web servers, banking back-end, order processing, ...
□  Performance and availability as relevant issues
■  Most software benefits from having better performance
■  The computer hardware industry is constantly delivering
Running Applications
[Figure: an application and its stream of instructions]
Three Ways Of Doing Anything Faster
[Pfister]
■  Work harder
(clock speed)
□  Hardware solution
□  No longer feasible
■  Work smarter
(optimization, caching)
□  Hardware solution
□  No longer feasible
as only solution
■  Get help
(parallelization)
□  Hardware + Software
in cooperation
Parallel Programming Concepts
OpenHPI Course
Week 1 : Terminology and fundamental concepts
Unit 1.2: Moore’s Law and the Power Wall
Dr. Peter Tröger + Teaching Team
Processor Hardware
■  First computers had fixed programs (e.g. electronic calculator)
■  Von Neumann architecture (1945)
□  Instructions for central processing unit (CPU) in memory
□  Program is treated as data
□  Loading of code during runtime, self-modification
■  Multiple such processors: Symmetric multiprocessing (SMP)
[Figure: von Neumann architecture - a CPU with control unit and arithmetic logic unit, connected to memory, input, and output via a bus]
Moore’s Law
■  “...the number of transistors that can be inexpensively placed on
an integrated circuit is increasing exponentially, doubling
approximately every two years. ...” (Gordon Moore, 1965)
□  CPUs contain different hardware parts, such as logic gates
□  Parts are built from transistors
□  Rule of exponential growth for the number
of transistors on one CPU chip
□  By now a self-fulfilling prophecy
□  Applied not only in processor industry,
but also in other areas
□  Sometimes misinterpreted as
performance indication
□  May still hold for the next 10-20 years
[Wikipedia]
Moore’s Law
[Wikimedia]
Moore’s Law vs. Software
■  Nathan P. Myhrvold, “The Next Fifty Years of Software”, 1997
□  “Software is a gas. It expands to fit the container it is in.”
◊  Constant increase in the amount of code
□  “Software grows until it becomes limited by Moore’s law.”
◊  Software often grows faster than hardware capabilities
□  “Software growth makes Moore’s Law possible.”
◊  Software and hardware market stimulate each other
□  “Software is only limited by human ambition & expectation.”
◊  People will always find ways for exploiting performance
■  Jevons’ paradox:
□  “Technological progress that increases the efficiency with
which a resource is used tends to increase (rather than
decrease) the rate of consumption of that resource.”
Processor Performance Development
[Figure: transistor count (#), clock speed (MHz), power (W), and perf/clock (ILP) over time, annotated with “work harder” (clock speed) and “work smarter” (ILP)]
[Herb Sutter, 2009]
A Physics Problem
■  Power: Energy needed to run the processor
■  Static power (SP): Leakage in transistors while being inactive
■  Dynamic power (DP): Energy needed to switch a transistor
■  Moore’s law: N goes up exponentially, C goes down with size
■  Power dissipation demands cooling
□  Power density: Watt/cm2
■  Make dynamic power increase less dramatic:
□  Bringing down V reduces energy consumption, quadratically!
□  Don’t use N only for logic gates
■  Industry was able to increase the frequency (F) for decades
DP ≈ Number of Transistors (N) × Capacitance (C) × Voltage² (V²) × Frequency (F)
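A quick numeric sketch of the relation above (a minimal illustration with made-up values, not from the slides): lowering the supply voltage cuts dynamic power quadratically, lowering the frequency only linearly.

#include <stdio.h>

/* Relative dynamic power: DP ≈ N * C * V^2 * F (illustrative constants only). */
static double dynamic_power(double n, double c, double v, double f) {
    return n * c * v * v * f;
}

int main(void) {
    double base = dynamic_power(1e9, 1.0, 1.2, 3.0e9); /* hypothetical baseline chip */
    double lowv = dynamic_power(1e9, 1.0, 0.9, 3.0e9); /* voltage reduced 1.2 V -> 0.9 V */
    double lowf = dynamic_power(1e9, 1.0, 1.2, 1.5e9); /* frequency halved */
    printf("lower voltage:   %.0f%% of baseline dynamic power\n", 100.0 * lowv / base);
    printf("lower frequency: %.0f%% of baseline dynamic power\n", 100.0 * lowf / base);
    return 0;
}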
Processor Supply Voltage
[Figure: processor supply voltage (V, log scale) from 1970 to 2010]
[Moore, ISSCC]
Power Density
■  Growth of watts per square centimeter in microprocessors
■  Higher temperatures: Increased leakage, slower transistors
[Figure: power density growth in microprocessors from 1992 to 2005, relative to the “hot plate” level and the air cooling limit]
Power Density
[Kevin Skadron, 2007]
“Cooking-Aware” Computing?
Second Problem: Leakage Increase
[Figure: processor power (W, log scale) from 1960 to 2010, split into active and leakage power]
[www.ieeeghn.org]
■  Static leakage today: Up to 40% of CPU power consumption
The Power Wall
■  Air cooling capabilities are limited
□  Maximum temperature of 100-125 °C, hot spot problem
□  Static and dynamic power consumption must be limited
■  Power consumption increases with Moore’s law,
but growth of hardware performance is still expected
■  Further reducing voltage as compensation
□  We can’t do that endlessly, lower limit around 0.7V
□  Strange physical effects
■  Next-generation processors need to use even less power
□  Lower the frequencies, scale them dynamically
□  Use only parts of the processor at a time (‘dark silicon’)
□  Build energy-efficient special purpose hardware
■  No chance for faster processors through frequency increase
The Free Lunch Is Over
■  Clock speed curve
flattened in 2003
□  Heat, power,
leakage
■  Speeding up the serial
instruction execution
through clock speed
improvements no
longer works
■  Additional issues
□  ILP wall
□  Memory wall
[HerbSutter,2009]
Parallel Programming Concepts
OpenHPI Course
Week 1 : Terminology and fundamental concepts
Unit 1.3: ILP Wall and Memory Wall
Dr. Peter Tröger + Teaching Team
Three Ways Of Doing Anything Faster
[Pfister]
■  Work harder
(clock speed)
□  Hardware solution
!  Power wall problem
■  Work smarter
(optimization, caching)
□  Hardware solution
■  Get help
(parallelization)
□  Hardware + Software
Instruction Level Parallelism
■  Increasing the frequency is no longer an option
■  Provide smarter instruction processing for better performance
■  Instruction level parallelism (ILP)
□  Processor hardware optimizes low-level instruction execution
□  Instruction pipelining
◊  Overlapped execution of serial instructions
□  Superscalar execution
◊  Multiple units of one processor are used in parallel
□  Out-of-order execution
◊  Reorder instructions that do not have data dependencies
□  Speculative execution
◊  Control flow speculation and branch prediction
■  Today’s processors are packed with such ILP logic
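As a small illustration (my own sketch, not part of the original slides) of why data dependencies matter for ILP: in the first loop every addition depends on the previous result, so pipelining and superscalar units cannot overlap them; in the second loop the four partial sums are independent and can be processed in an overlapped fashion by the same core.

#include <stddef.h>

/* Dependency chain: each addition needs the result of the previous one. */
double sum_chain(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Four independent accumulators: the additions within one iteration have
 * no data dependencies, so out-of-order and superscalar hardware can
 * execute them in parallel. */
double sum_independent(const double *a, size_t n) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)   /* remainder */
        s0 += a[i];
    return (s0 + s1) + (s2 + s3);
}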
The ILP Wall
■  No longer cost-effective to dedicate
new transistors to ILP mechanisms
■  Deeper pipelines make the
power problem worse
■  High ILP complexity effectively
reduces the processing
speed for a given frequency
(e.g. misprediction)
■  More aggressive ILP
technologies too risky due to
unknown real-world workloads
■  No ground-breaking new ideas
■  " “ILP wall”
■  Ok, let’s use the transistors for better caching
[Wikipedia]
Caching
■  von Neumann architecture
□  Instructions are stored in main memory
□  Program is treated as data
□  For each instruction execution, data must be fetched
■  When the frequency increases, main memory becomes a
performance bottleneck
■  Caching: Keep data copy in very fast, small memory on the CPU
[Figure: the von Neumann architecture as before, now extended with a cache inside the CPU]
Memory Hardware Hierarchy
[Figure: the memory hierarchy from small, fast, and expensive at the top to large, slow, and cheap at the bottom - registers, processor caches, and random access memory (RAM) as volatile levels; flash / SSD memory, hard drives, and tapes as non-volatile levels]
Memory Hardware Hierarchy
[Figure: four CPU cores, each with its own L1 cache, pairs of cores sharing an L2 cache, and one L3 cache shared by all cores via buses; L = Level]
Caching for Performance
■  Well established optimization technique for performance
■  Caching relies on data locality
□  Some instructions are often used (e.g. loops)
□  Some data is often used (e.g. local variables)
□  Hardware keeps a copy of the data in the faster cache
□  On read attempts, data is taken directly from the cache
□  On write, data is cached and eventually written to memory
■  Similar to ILP, the potential is limited
□  Larger caches do not help automatically
□  At some point, all data locality in the
code is already exploited
□  Manual vs. compiler-driven optimization
[arstechnica.com]
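A minimal sketch of data locality (my own example, not from the slides): both functions sum the same matrix, but the first walks memory in the order it is laid out (row-major in C) and reuses cached lines, while the second jumps across rows and misses the cache once the matrix no longer fits into it.

#include <stddef.h>

#define N 1024

/* Row-major traversal: consecutive accesses touch neighboring addresses,
 * so most of them are served from the cache. */
double sum_rows(const double m[N][N]) {
    double s = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += m[i][j];
    return s;
}

/* Column-major traversal: every access jumps a whole row ahead in memory,
 * which defeats the cache for large matrices. */
double sum_cols(const double m[N][N]) {
    double s = 0.0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += m[i][j];
    return s;
}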
Memory Wall
■  If caching is limited, we simply need faster memory
■  The problem: Shared memory is ‘shared’
□  Interconnect contention
□  Memory bandwidth
◊  Memory transfer speed is limited by the power wall
◊  Memory transfer size is limited by the power wall
■  Transfer technology cannot
keep up with GHz processors
■  Memory is too slow, effects
cannot be hidden through
caching completely
" “Memory wall”
[dell.com]
Problem Summary
■  Hardware perspective
□  Number of transistors N is still increasing
□  Building larger caches no longer helps (memory wall)
□  ILP is out of options (ILP wall)
□  Voltage / power / frequency is at the limit (power wall)
◊  Some help with dynamic scaling approaches
□  Remaining option: Use N for more cores per processor chip
■  Software perspective
□  Performance must come from the utilization of this increasing
core count per chip, since F is now fixed
□  Software must tackle the memory wall
Three Ways Of Doing Anything Faster
[Pfister]
■  Work harder
(clock speed)
!  Power wall problem
!  Memory wall problem
■  Work smarter
(optimization, caching)
!  ILP wall problem
!  Memory wall problem
■  Get help
(parallelization)
□  More cores per single CPU
□  Software needs to exploit
them in the right way
!  Memory wall problem
[Figure: a problem mapped onto a CPU with multiple cores]
Parallel Programming Concepts
OpenHPI Course
Week 1 : Terminology and fundamental concepts
Unit 1.4: Parallel Hardware Classification
Dr. Peter Tröger + Teaching Team
Parallelism [Mattson et al.]
■  Task
□  Parallel program breaks a problem into tasks
■  Execution unit
□  Representation of a concurrently running task (e.g. thread)
□  Tasks are mapped to execution units
■  Processing element (PE)
□  Hardware element running one execution unit
□  Depends on scenario - logical processor vs. core vs. machine
□  Execution units run simultaneously on processing elements,
controlled by some scheduler
■  Synchronization - Mechanism to order activities of parallel tasks
■  Race condition - Program result depends on the scheduling order
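A minimal illustration of the last two terms (ahead of week 2, and my own sketch rather than course material): two execution units, mapped to POSIX threads, increment a shared counter without any synchronization. The final value depends on how the scheduler interleaves them - a race condition.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;              /* shared state, no synchronization */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                    /* read-modify-write, not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* expected 2000000, usually less */
    return 0;
}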
Faster Processing through Parallelization
[Figure: a program split into tasks that can be processed in parallel]
Flynn‘s Taxonomy (1966)
■  Classify parallel hardware architectures according to their
capabilities in the instruction and data processing dimension
[Figure: the four classes, each shown as a processing step with its inputs and its output -
□  Single Instruction, Single Data (SISD): one instruction, one data item
□  Single Instruction, Multiple Data (SIMD): one instruction, multiple data items
□  Multiple Instruction, Single Data (MISD): multiple instructions, one data item
□  Multiple Instruction, Multiple Data (MIMD): multiple instructions, multiple data items]
Flynn‘s Taxonomy (1966)
■  Single Instruction, Single Data (SISD)
□  No parallelism in the execution
□  Old single processor architectures
■  Single Instruction, Multiple Data (SIMD)
□  Multiple data streams processed with one instruction stream
at the same time
□  Typical in graphics hardware and GPU accelerators
□  Special SIMD machines in high-performance computing
■  Multiple Instructions, Single Data (MISD)
□  Multiple instructions applied to the same data in parallel
□  Rarely used in practice, only for fault tolerance
■  Multiple Instructions, Multiple Data (MIMD)
□  Every modern processor, compute clusters
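A hedged sketch of the SISD vs. SIMD distinction in C with SSE intrinsics (x86 only; the intrinsics are real, the example itself is mine): the scalar loop issues one add instruction per data item, while the SIMD loop applies one add instruction to four floats at once.

#include <immintrin.h>   /* SSE intrinsics (x86) */
#include <stddef.h>

/* SISD style: one instruction per data item. */
void add_scalar(const float *a, const float *b, float *c, size_t n) {
    for (size_t i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}

/* SIMD style: one instruction processes four data items. */
void add_simd(const float *a, const float *b, float *c, size_t n) {
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);
        __m128 vb = _mm_loadu_ps(&b[i]);
        _mm_storeu_ps(&c[i], _mm_add_ps(va, vb));
    }
    for (; i < n; i++)   /* remainder handled in scalar fashion */
        c[i] = a[i] + b[i];
}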
Parallelism on Different Levels
[Figure: programs consist of tasks; tasks are mapped to processing elements (PE); several PEs share the memory of one node; multiple nodes are connected by a network]
Parallelism on Different Levels
■  A processor chip (socket)
□  Chip multi-processing (CMP)
◊  Multiple CPUs per chip, called cores
◊  Multi-core / many-core
□  Simultaneous multi-threading (SMT)
◊  Interleaved execution of tasks on one core
◊  Example: Intel Hyperthreading
□  Chip multi-threading (CMT) = CMP + SMT
□  Instruction-level parallelism (ILP)
◊  Parallel processing of single instructions per core
■  Multiple processor chips in one machine (multi-processing)
□  Symmetric multi-processing (SMP)
■  Multiple processor chips in many machines (multi-computer)
Parallelism on Different Levels
[arstechnica.com]
[Figure: die shot of a CMP architecture - every core provides ILP and SMT]
Parallel Programming Concepts
OpenHPI Course
Week 1 : Terminology and fundamental concepts
Unit 1.5: Memory Architectures
Dr. Peter Tröger + Teaching Team
Parallelism on Different Levels
[Figure, repeated from unit 1.4: tasks map to processing elements (PE); PEs share memory within a node; nodes are connected by a network]
Shared Memory vs. Shared Nothing
■  Organization of parallel processing hardware as …
□  Shared memory system
◊  Tasks can directly access a common address space
◊  Implemented as memory hierarchy with different cache levels
□  Shared nothing system
◊  Tasks can only access local memory
◊  Global coordination of parallel execution by explicit
communication (e.g. messaging) between tasks
□  Hybrid architectures possible in practice
◊  Cluster of shared memory systems
◊  Accelerator hardware in a shared memory system
●  Dedicated local memory on the accelerator
●  Example: SIMD GPU hardware in SMP computer system
Shared Memory vs. Shared Nothing
■  Pfister: “shared memory” vs. “distributed memory”
■  Foster: “multiprocessor” vs. “multicomputer”
■  Tanenbaum: “shared memory” vs. “private memory”
[Figure: left - tasks on processing elements read and write data in a shared memory; right - tasks on processing elements hold their data locally and coordinate by exchanging messages]
Shared Memory
■  Processing elements act independently
■  Use the same global address space
■  Changes are visible for all processing elements
■  Uniform memory access (UMA) system
□  Equal access time for all PEs to all memory locations
□  Default approach for SMP systems of the past
■  Non-uniform memory access (NUMA) system
□  Delay on memory access according to the accessed region
□  Typically due to core / processor interconnect technology
■  Cache-coherent NUMA (CC-NUMA) system
◊  NUMA system that keeps all caches consistent
◊  Transparent hardware mechanisms
◊  Became standard approach with recent X86 chips
UMA Example
■  Two dual-core processor chips in an SMP system
■  Level 1 cache (fast, small), Level 2 cache (slower, larger)
■  Hardware manages cache coherency among all cores
[Figure: two sockets, each with two cores; every core has its own L1 cache, the two cores of a socket share an L2 cache; both sockets reach the RAM through a system bus and a chipset / memory controller]
NUMA Example
■  Eight cores on 2 sockets in an SMP system
■  Memory controllers + chip interconnect realize a single memory
address space for the software
[Figure: two sockets, each with four cores (private L1 and L2 caches), a shared L3 cache, and a memory controller with locally attached RAM; the sockets are coupled by a chip interconnect]
NUMA Example: 4-way Intel Nehalem SMP
[Figure: four processors, each with four cores, an L3 cache, a memory controller with local memory, and I/O, connected to each other via QPI links]
Shared Nothing
■  Processing elements no longer share a common global memory
■  Easy scale-out by adding machines to the messaging network
■  Cluster computing: Combine machines with cheap interconnect
□  Compute cluster: Speedup for an application
◊  Batch processing, data parallelism
□  Load-balancing cluster: Better throughput for some service
□  High Availability (HA) cluster: Fault tolerance
■  Cluster to the extreme
□  High Performance Computing (HPC)
□  Massively Parallel Processing (MPP) hardware
□  TOP500 list of the fastest supercomputers
Clusters
[Figure: two machines, each running a task on a processing element with its own data, coordinating by exchanging messages]
Shared Nothing Example
[Figure: several machines, each with one socket (cores with private L1 / L2 caches, a shared L3 cache, a memory controller with local RAM) and a network interface, connected by an interconnection network]
Hybrid Example
[Figure: several machines, each with two sockets coupled by a chip interconnect; every socket has cores with private L1 / L2 caches, a shared L3 cache, and a memory controller with local RAM; each machine connects to an interconnection network through a network interface]
Example: Cluster of Nehalem SMPs
[Figure: several Nehalem SMP machines connected by a network]
The Parallel Programming Problem
■  Execution environment has a particular type
(SIMD, MIMD, UMA, NUMA, …)
■  Execution environment may be configurable (number of resources)
■  Parallel application must be mapped to available resources
[Figure: a flexible parallel application must be matched to an execution environment of a given type and configuration]
Parallel Programming Concepts
OpenHPI Course
Week 1 : Terminology and fundamental concepts
Unit 1.6: Speedup and Scaleup
Dr. Peter Tröger + Teaching Team
Which One Is Faster ?
■  Usage scenario
□  Transporting a fridge
■  Usage environment
□  Driving through a forest
■  Perception of performance
□  Maximum speed
□  Average speed
□  Acceleration
■  We need some kind of
application-specific benchmark
Parallelism for …
■  Speedup – compute faster
■  Throughput – compute more in the same time
■  Scalability – compute faster / more with additional resources
■  …
[Figure: processing elements A1-A3 with main memory and processing elements B1-B3 with main memory; adding processing elements to one machine is scaling up, adding machines is scaling out]
Metrics
■  Parallelization metrics are application-dependent,
but follow a common set of concepts
□  Speedup: Adding more resources leads to less time for
solving the same problem.
□  Linear speedup: n times more resources → n times speedup
□  Scaleup: Adding more resources solves a larger version of the
same problem in the same time.
□  Linear scaleup: n times more resources → n times larger problem solvable
■  The most important goal depends on the application
□  Throughput demands scalability of the software
□  Response time demands speedup of the processing
Speedup
■  Idealized assumptions
□  All tasks are equal sized
□  All code parts can run in parallel
[Figure: an application with v=12 tasks; with N=1 processing element the time needed is T1=12, with N=3 processing elements it is T3=4, giving a (linear) speedup of T1/T3 = 12/4 = 3]
Speedup with Load Imbalance
■  Assumptions
□  Tasks have different size, best-possible speedup depends
on optimized resource usage
□  All code parts can run in parallel
[Figure: an application with v=12 tasks of different size; with N=1 processing element the time needed is T1=16, with N=3 processing elements it is T3=6, giving a speedup of T1/T3 = 16/6 ≈ 2.67]
Speedup with Serial Parts
■  Each application has inherently non-parallelizable serial parts
□  Algorithmic limitations
□  Shared resources acting as bottleneck
□  Overhead for program start
□  Communication overhead in shared-nothing systems
[Figure: execution alternates between serial phases (tSER1, tSER2, tSER3) and parallel phases (tPAR1, tPAR2) in which tasks run on several processing elements]
Amdahl’s Law
■  Gene Amdahl. “Validity of the single processor approach to achieving
large scale computing capabilities”. AFIPS 1967
□  Serial parts TSER = tSER1 + tSER2 + tSER3 + …
□  Parallelizable parts TPAR = tPAR1 + tPAR2 + tPAR3 + …
□  Execution time with one processing element:
T1 = TSER+TPAR
□  Execution time with N parallel processing elements:
TN >= TSER + TPAR / N
◊  Equal only on perfect parallelization,
e.g. no load imbalance
□  Amdahl’s Law for maximum speedup with N processing elements
S = T1 / TN = (TSER + TPAR) / (TSER + TPAR / N)
Amdahl’s Law
■  Speedup through parallelism is hard to achieve
■  For unlimited resources, speedup is bound by the serial parts:
□  Assume T1=1
■  Parallelization problem relates to all system layers
□  Hardware offers some degree of parallel execution
□  Speedup gained is bound by serial parts:
◊  Limitations of hardware components
◊  Necessary serial activities in the operating system,
virtual runtime system, middleware and the application
◊  Overhead for the parallelization itself
S(N→∞) = T1 / T(N→∞) = 1 / TSER   (with T1 = 1)
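A minimal C sketch (my own, not part of the course material) that evaluates Amdahl’s law for a program with a 10% serial part; the printed speedups approach, but never exceed, 1 / TSER = 10, which is the bound discussed on the next slide.

#include <stdio.h>

/* Amdahl's law: S = (t_ser + t_par) / (t_ser + t_par / n) */
static double amdahl_speedup(double t_ser, double t_par, double n) {
    return (t_ser + t_par) / (t_ser + t_par / n);
}

int main(void) {
    const double t_ser = 0.1, t_par = 0.9;   /* T1 = 1, 90% parallelizable */
    const int n_values[] = { 1, 2, 4, 16, 256, 65536 };

    for (size_t i = 0; i < sizeof(n_values) / sizeof(n_values[0]); i++)
        printf("N = %6d -> speedup %.2f\n",
               n_values[i], amdahl_speedup(t_ser, t_par, n_values[i]));
    return 0;
}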
Amdahl’s Law
■  “Everyone knows Amdahl’s law, but quickly forgets it.”
[Thomas Puzak, IBM]
■  90% parallelizable code leads to not more than 10x speedup
□  Regardless of the number of processing elements
■  Parallelism is only useful …
□  … for small number of processing elements
□  … for highly parallelizable code
■  What’s the sense in big parallel / distributed hardware setups?
■  Relevant assumptions
□  Put the same problem on different hardware
□  Assumption of fixed problem size
□  Only consideration of execution time for one problem
Gustafson-Barsis’ Law (1988)
■  Gustafson and Barsis: People are typically not interested in the
shortest execution time
□  Rather solve a bigger problem in reasonable time
■  Problem size could then scale with the number of processors
□  Typical in simulation and farmer / worker problems
□  Leads to larger parallel fraction with increasing N
□  Serial part is usually fixed or grows slower
■  Maximum scaled speedup by N processors:
■  Linear speedup now becomes possible
■  Software needs to ensure that serial parts remain constant
■  Other models exist (e.g. Work-Span model, Karp-Flatt metric)
S = (TSER + N · TPAR) / (TSER + TPAR)
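A small companion sketch to the Amdahl example above (again my own illustration): for the same fixed serial part, the scaled speedup of Gustafson-Barsis grows roughly linearly with N, because the parallel portion of the problem grows with the machine.

#include <stdio.h>

/* Gustafson-Barsis: S = (t_ser + n * t_par) / (t_ser + t_par) */
static double scaled_speedup(double t_ser, double t_par, double n) {
    return (t_ser + n * t_par) / (t_ser + t_par);
}

int main(void) {
    const double t_ser = 0.1, t_par = 0.9;   /* serial part stays constant */
    for (int n = 1; n <= 1024; n *= 4)
        printf("N = %4d -> scaled speedup %.1f\n",
               n, scaled_speedup(t_ser, t_par, n));
    return 0;
}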
Summary: Week 1
■  Moore’s Law and the Power Wall
□  Processing element speed no longer increases
■  ILP Wall and Memory Wall
□  Memory access is not fast enough for modern hardware
■  Parallel Hardware Classification
□  From ILP to SMP, SIMD vs. MIMD
■  Memory Architectures
□  UMA vs. NUMA
■  Speedup and Scaleup
□  Amdahl’s Law and Gustafson’s Law
Since we need parallelism for speedup,
how can we express it in software?

Mais conteúdo relacionado

Semelhante a OpenHPI - Parallel Programming Concepts - Week 1

OpenHPI - Parallel Programming Concepts - Week 4
OpenHPI - Parallel Programming Concepts - Week 4OpenHPI - Parallel Programming Concepts - Week 4
OpenHPI - Parallel Programming Concepts - Week 4Peter Tröger
 
Intro to open source observability with grafana, prometheus, loki, and tempo(...
Intro to open source observability with grafana, prometheus, loki, and tempo(...Intro to open source observability with grafana, prometheus, loki, and tempo(...
Intro to open source observability with grafana, prometheus, loki, and tempo(...LibbySchulze
 
CK: from ad hoc computer engineering to collaborative and reproducible data s...
CK: from ad hoc computer engineering to collaborative and reproducible data s...CK: from ad hoc computer engineering to collaborative and reproducible data s...
CK: from ad hoc computer engineering to collaborative and reproducible data s...Grigori Fursin
 
unit-1-181211045120.pdf
unit-1-181211045120.pdfunit-1-181211045120.pdf
unit-1-181211045120.pdfVhhvf
 
Data Science in Production: Technologies That Drive Adoption of Data Science ...
Data Science in Production: Technologies That Drive Adoption of Data Science ...Data Science in Production: Technologies That Drive Adoption of Data Science ...
Data Science in Production: Technologies That Drive Adoption of Data Science ...Nir Yungster
 
OpenCL & the Future of Desktop High Performance Computing in CAD
OpenCL & the Future of Desktop High Performance Computing in CADOpenCL & the Future of Desktop High Performance Computing in CAD
OpenCL & the Future of Desktop High Performance Computing in CADDesign World
 
Assignment 1-mtat
Assignment 1-mtatAssignment 1-mtat
Assignment 1-mtatzafargilani
 
Introduction to data science
Introduction to data scienceIntroduction to data science
Introduction to data scienceSampath Kumar
 
AI LAB using IBM Power 9 Processor
AI LAB using IBM Power 9 ProcessorAI LAB using IBM Power 9 Processor
AI LAB using IBM Power 9 ProcessorGanesan Narayanasamy
 
CS101- Introduction to Computing- Lecture 45
CS101- Introduction to Computing- Lecture 45CS101- Introduction to Computing- Lecture 45
CS101- Introduction to Computing- Lecture 45Bilal Ahmed
 
DARPA ERI Summit 2018: The End of Moore’s Law & Faster General Purpose Comput...
DARPA ERI Summit 2018: The End of Moore’s Law & Faster General Purpose Comput...DARPA ERI Summit 2018: The End of Moore’s Law & Faster General Purpose Comput...
DARPA ERI Summit 2018: The End of Moore’s Law & Faster General Purpose Comput...zionsaint
 
lec01.pdf
lec01.pdflec01.pdf
lec01.pdfBeiYu6
 

Semelhante a OpenHPI - Parallel Programming Concepts - Week 1 (20)

Data Science as Scale
Data Science as ScaleData Science as Scale
Data Science as Scale
 
OpenHPI - Parallel Programming Concepts - Week 4
OpenHPI - Parallel Programming Concepts - Week 4OpenHPI - Parallel Programming Concepts - Week 4
OpenHPI - Parallel Programming Concepts - Week 4
 
Os Lamothe
Os LamotheOs Lamothe
Os Lamothe
 
Intro to open source observability with grafana, prometheus, loki, and tempo(...
Intro to open source observability with grafana, prometheus, loki, and tempo(...Intro to open source observability with grafana, prometheus, loki, and tempo(...
Intro to open source observability with grafana, prometheus, loki, and tempo(...
 
CK: from ad hoc computer engineering to collaborative and reproducible data s...
CK: from ad hoc computer engineering to collaborative and reproducible data s...CK: from ad hoc computer engineering to collaborative and reproducible data s...
CK: from ad hoc computer engineering to collaborative and reproducible data s...
 
Chap10.pdf
Chap10.pdfChap10.pdf
Chap10.pdf
 
unit-1-181211045120.pdf
unit-1-181211045120.pdfunit-1-181211045120.pdf
unit-1-181211045120.pdf
 
Data Science in Production: Technologies That Drive Adoption of Data Science ...
Data Science in Production: Technologies That Drive Adoption of Data Science ...Data Science in Production: Technologies That Drive Adoption of Data Science ...
Data Science in Production: Technologies That Drive Adoption of Data Science ...
 
OpenCL & the Future of Desktop High Performance Computing in CAD
OpenCL & the Future of Desktop High Performance Computing in CADOpenCL & the Future of Desktop High Performance Computing in CAD
OpenCL & the Future of Desktop High Performance Computing in CAD
 
slides.pdf
slides.pdfslides.pdf
slides.pdf
 
Available HPC Resources at CSUC
Available HPC Resources at CSUCAvailable HPC Resources at CSUC
Available HPC Resources at CSUC
 
Assignment 1-mtat
Assignment 1-mtatAssignment 1-mtat
Assignment 1-mtat
 
Introduction to data science
Introduction to data scienceIntroduction to data science
Introduction to data science
 
Parallel Algorithms
Parallel AlgorithmsParallel Algorithms
Parallel Algorithms
 
Data Science
Data ScienceData Science
Data Science
 
Lecture1
Lecture1Lecture1
Lecture1
 
AI LAB using IBM Power 9 Processor
AI LAB using IBM Power 9 ProcessorAI LAB using IBM Power 9 Processor
AI LAB using IBM Power 9 Processor
 
CS101- Introduction to Computing- Lecture 45
CS101- Introduction to Computing- Lecture 45CS101- Introduction to Computing- Lecture 45
CS101- Introduction to Computing- Lecture 45
 
DARPA ERI Summit 2018: The End of Moore’s Law & Faster General Purpose Comput...
DARPA ERI Summit 2018: The End of Moore’s Law & Faster General Purpose Comput...DARPA ERI Summit 2018: The End of Moore’s Law & Faster General Purpose Comput...
DARPA ERI Summit 2018: The End of Moore’s Law & Faster General Purpose Comput...
 
lec01.pdf
lec01.pdflec01.pdf
lec01.pdf
 

Mais de Peter Tröger

WannaCry - An OS course perspective
WannaCry - An OS course perspectiveWannaCry - An OS course perspective
WannaCry - An OS course perspectivePeter Tröger
 
Cloud Standards and Virtualization
Cloud Standards and VirtualizationCloud Standards and Virtualization
Cloud Standards and VirtualizationPeter Tröger
 
Distributed Resource Management Application API (DRMAA) Version 2
Distributed Resource Management Application API (DRMAA) Version 2Distributed Resource Management Application API (DRMAA) Version 2
Distributed Resource Management Application API (DRMAA) Version 2Peter Tröger
 
OpenSubmit - How to grade 1200 code submissions
OpenSubmit - How to grade 1200 code submissionsOpenSubmit - How to grade 1200 code submissions
OpenSubmit - How to grade 1200 code submissionsPeter Tröger
 
Design of Software for Embedded Systems
Design of Software for Embedded SystemsDesign of Software for Embedded Systems
Design of Software for Embedded SystemsPeter Tröger
 
Humans should not write XML.
Humans should not write XML.Humans should not write XML.
Humans should not write XML.Peter Tröger
 
What activates a bug? A refinement of the Laprie terminology model.
What activates a bug? A refinement of the Laprie terminology model.What activates a bug? A refinement of the Laprie terminology model.
What activates a bug? A refinement of the Laprie terminology model.Peter Tröger
 
Dependable Systems - Summary (16/16)
Dependable Systems - Summary (16/16)Dependable Systems - Summary (16/16)
Dependable Systems - Summary (16/16)Peter Tröger
 
Dependable Systems - Hardware Dependability with Redundancy (14/16)
Dependable Systems - Hardware Dependability with Redundancy (14/16)Dependable Systems - Hardware Dependability with Redundancy (14/16)
Dependable Systems - Hardware Dependability with Redundancy (14/16)Peter Tröger
 
Dependable Systems - System Dependability Evaluation (8/16)
Dependable Systems - System Dependability Evaluation (8/16)Dependable Systems - System Dependability Evaluation (8/16)
Dependable Systems - System Dependability Evaluation (8/16)Peter Tröger
 
Dependable Systems - Structure-Based Dependabiilty Modeling (6/16)
Dependable Systems - Structure-Based Dependabiilty Modeling (6/16)Dependable Systems - Structure-Based Dependabiilty Modeling (6/16)
Dependable Systems - Structure-Based Dependabiilty Modeling (6/16)Peter Tröger
 
Dependable Systems -Software Dependability (15/16)
Dependable Systems -Software Dependability (15/16)Dependable Systems -Software Dependability (15/16)
Dependable Systems -Software Dependability (15/16)Peter Tröger
 
Dependable Systems -Reliability Prediction (9/16)
Dependable Systems -Reliability Prediction (9/16)Dependable Systems -Reliability Prediction (9/16)
Dependable Systems -Reliability Prediction (9/16)Peter Tröger
 
Dependable Systems -Fault Tolerance Patterns (4/16)
Dependable Systems -Fault Tolerance Patterns (4/16)Dependable Systems -Fault Tolerance Patterns (4/16)
Dependable Systems -Fault Tolerance Patterns (4/16)Peter Tröger
 
Dependable Systems - Introduction (1/16)
Dependable Systems - Introduction (1/16)Dependable Systems - Introduction (1/16)
Dependable Systems - Introduction (1/16)Peter Tröger
 
Dependable Systems -Dependability Means (3/16)
Dependable Systems -Dependability Means (3/16)Dependable Systems -Dependability Means (3/16)
Dependable Systems -Dependability Means (3/16)Peter Tröger
 
Dependable Systems - Hardware Dependability with Diagnosis (13/16)
Dependable Systems - Hardware Dependability with Diagnosis (13/16)Dependable Systems - Hardware Dependability with Diagnosis (13/16)
Dependable Systems - Hardware Dependability with Diagnosis (13/16)Peter Tröger
 
Dependable Systems -Dependability Attributes (5/16)
Dependable Systems -Dependability Attributes (5/16)Dependable Systems -Dependability Attributes (5/16)
Dependable Systems -Dependability Attributes (5/16)Peter Tröger
 
Dependable Systems -Dependability Threats (2/16)
Dependable Systems -Dependability Threats (2/16)Dependable Systems -Dependability Threats (2/16)
Dependable Systems -Dependability Threats (2/16)Peter Tröger
 
Verteilte Software-Systeme im Kontext von Industrie 4.0
Verteilte Software-Systeme im Kontext von Industrie 4.0Verteilte Software-Systeme im Kontext von Industrie 4.0
Verteilte Software-Systeme im Kontext von Industrie 4.0Peter Tröger
 

Mais de Peter Tröger (20)

WannaCry - An OS course perspective
WannaCry - An OS course perspectiveWannaCry - An OS course perspective
WannaCry - An OS course perspective
 
Cloud Standards and Virtualization
Cloud Standards and VirtualizationCloud Standards and Virtualization
Cloud Standards and Virtualization
 
Distributed Resource Management Application API (DRMAA) Version 2
Distributed Resource Management Application API (DRMAA) Version 2Distributed Resource Management Application API (DRMAA) Version 2
Distributed Resource Management Application API (DRMAA) Version 2
 
OpenSubmit - How to grade 1200 code submissions
OpenSubmit - How to grade 1200 code submissionsOpenSubmit - How to grade 1200 code submissions
OpenSubmit - How to grade 1200 code submissions
 
Design of Software for Embedded Systems
Design of Software for Embedded SystemsDesign of Software for Embedded Systems
Design of Software for Embedded Systems
 
Humans should not write XML.
Humans should not write XML.Humans should not write XML.
Humans should not write XML.
 
What activates a bug? A refinement of the Laprie terminology model.
What activates a bug? A refinement of the Laprie terminology model.What activates a bug? A refinement of the Laprie terminology model.
What activates a bug? A refinement of the Laprie terminology model.
 
Dependable Systems - Summary (16/16)
Dependable Systems - Summary (16/16)Dependable Systems - Summary (16/16)
Dependable Systems - Summary (16/16)
 
Dependable Systems - Hardware Dependability with Redundancy (14/16)
Dependable Systems - Hardware Dependability with Redundancy (14/16)Dependable Systems - Hardware Dependability with Redundancy (14/16)
Dependable Systems - Hardware Dependability with Redundancy (14/16)
 
Dependable Systems - System Dependability Evaluation (8/16)
Dependable Systems - System Dependability Evaluation (8/16)Dependable Systems - System Dependability Evaluation (8/16)
Dependable Systems - System Dependability Evaluation (8/16)
 
Dependable Systems - Structure-Based Dependabiilty Modeling (6/16)
Dependable Systems - Structure-Based Dependabiilty Modeling (6/16)Dependable Systems - Structure-Based Dependabiilty Modeling (6/16)
Dependable Systems - Structure-Based Dependabiilty Modeling (6/16)
 
Dependable Systems -Software Dependability (15/16)
Dependable Systems -Software Dependability (15/16)Dependable Systems -Software Dependability (15/16)
Dependable Systems -Software Dependability (15/16)
 
Dependable Systems -Reliability Prediction (9/16)
Dependable Systems -Reliability Prediction (9/16)Dependable Systems -Reliability Prediction (9/16)
Dependable Systems -Reliability Prediction (9/16)
 
Dependable Systems -Fault Tolerance Patterns (4/16)
Dependable Systems -Fault Tolerance Patterns (4/16)Dependable Systems -Fault Tolerance Patterns (4/16)
Dependable Systems -Fault Tolerance Patterns (4/16)
 
Dependable Systems - Introduction (1/16)
Dependable Systems - Introduction (1/16)Dependable Systems - Introduction (1/16)
Dependable Systems - Introduction (1/16)
 
Dependable Systems -Dependability Means (3/16)
Dependable Systems -Dependability Means (3/16)Dependable Systems -Dependability Means (3/16)
Dependable Systems -Dependability Means (3/16)
 
Dependable Systems - Hardware Dependability with Diagnosis (13/16)
Dependable Systems - Hardware Dependability with Diagnosis (13/16)Dependable Systems - Hardware Dependability with Diagnosis (13/16)
Dependable Systems - Hardware Dependability with Diagnosis (13/16)
 
Dependable Systems -Dependability Attributes (5/16)
Dependable Systems -Dependability Attributes (5/16)Dependable Systems -Dependability Attributes (5/16)
Dependable Systems -Dependability Attributes (5/16)
 
Dependable Systems -Dependability Threats (2/16)
Dependable Systems -Dependability Threats (2/16)Dependable Systems -Dependability Threats (2/16)
Dependable Systems -Dependability Threats (2/16)
 
Verteilte Software-Systeme im Kontext von Industrie 4.0
Verteilte Software-Systeme im Kontext von Industrie 4.0Verteilte Software-Systeme im Kontext von Industrie 4.0
Verteilte Software-Systeme im Kontext von Industrie 4.0
 

Último

Student Profile Sample - We help schools to connect the data they have, with ...
Student Profile Sample - We help schools to connect the data they have, with ...Student Profile Sample - We help schools to connect the data they have, with ...
Student Profile Sample - We help schools to connect the data they have, with ...Seán Kennedy
 
Judging the Relevance and worth of ideas part 2.pptx
Judging the Relevance  and worth of ideas part 2.pptxJudging the Relevance  and worth of ideas part 2.pptx
Judging the Relevance and worth of ideas part 2.pptxSherlyMaeNeri
 
THEORIES OF ORGANIZATION-PUBLIC ADMINISTRATION
THEORIES OF ORGANIZATION-PUBLIC ADMINISTRATIONTHEORIES OF ORGANIZATION-PUBLIC ADMINISTRATION
THEORIES OF ORGANIZATION-PUBLIC ADMINISTRATIONHumphrey A Beña
 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfMr Bounab Samir
 
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17Celine George
 
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptxECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptxiammrhaywood
 
Keynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designKeynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designMIPLM
 
Grade 9 Quarter 4 Dll Grade 9 Quarter 4 DLL.pdf
Grade 9 Quarter 4 Dll Grade 9 Quarter 4 DLL.pdfGrade 9 Quarter 4 Dll Grade 9 Quarter 4 DLL.pdf
Grade 9 Quarter 4 Dll Grade 9 Quarter 4 DLL.pdfJemuel Francisco
 
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)lakshayb543
 
Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Celine George
 
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptxINTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptxHumphrey A Beña
 
Transaction Management in Database Management System
Transaction Management in Database Management SystemTransaction Management in Database Management System
Transaction Management in Database Management SystemChristalin Nelson
 
Choosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for ParentsChoosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for Parentsnavabharathschool99
 
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...JhezDiaz1
 
Global Lehigh Strategic Initiatives (without descriptions)
Global Lehigh Strategic Initiatives (without descriptions)Global Lehigh Strategic Initiatives (without descriptions)
Global Lehigh Strategic Initiatives (without descriptions)cama23
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Mark Reed
 
4.16.24 21st Century Movements for Black Lives.pptx
4.16.24 21st Century Movements for Black Lives.pptx4.16.24 21st Century Movements for Black Lives.pptx
4.16.24 21st Century Movements for Black Lives.pptxmary850239
 

Último (20)

Student Profile Sample - We help schools to connect the data they have, with ...
Student Profile Sample - We help schools to connect the data they have, with ...Student Profile Sample - We help schools to connect the data they have, with ...
Student Profile Sample - We help schools to connect the data they have, with ...
 
Judging the Relevance and worth of ideas part 2.pptx
Judging the Relevance  and worth of ideas part 2.pptxJudging the Relevance  and worth of ideas part 2.pptx
Judging the Relevance and worth of ideas part 2.pptx
 
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
 
THEORIES OF ORGANIZATION-PUBLIC ADMINISTRATION
THEORIES OF ORGANIZATION-PUBLIC ADMINISTRATIONTHEORIES OF ORGANIZATION-PUBLIC ADMINISTRATION
THEORIES OF ORGANIZATION-PUBLIC ADMINISTRATION
 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
 
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
Incoming and Outgoing Shipments in 3 STEPS Using Odoo 17
 
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptxECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
 
Keynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designKeynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-design
 
Grade 9 Quarter 4 Dll Grade 9 Quarter 4 DLL.pdf
Grade 9 Quarter 4 Dll Grade 9 Quarter 4 DLL.pdfGrade 9 Quarter 4 Dll Grade 9 Quarter 4 DLL.pdf
Grade 9 Quarter 4 Dll Grade 9 Quarter 4 DLL.pdf
 
YOUVE GOT EMAIL_FINALS_EL_DORADO_2024.pptx
YOUVE GOT EMAIL_FINALS_EL_DORADO_2024.pptxYOUVE GOT EMAIL_FINALS_EL_DORADO_2024.pptx
YOUVE GOT EMAIL_FINALS_EL_DORADO_2024.pptx
 
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
 
Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17
 
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptxINTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
 
Transaction Management in Database Management System
Transaction Management in Database Management SystemTransaction Management in Database Management System
Transaction Management in Database Management System
 
Choosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for ParentsChoosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for Parents
 
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
 
Global Lehigh Strategic Initiatives (without descriptions)
Global Lehigh Strategic Initiatives (without descriptions)Global Lehigh Strategic Initiatives (without descriptions)
Global Lehigh Strategic Initiatives (without descriptions)
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
 
Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)
 
4.16.24 21st Century Movements for Black Lives.pptx
4.16.24 21st Century Movements for Black Lives.pptx4.16.24 21st Century Movements for Black Lives.pptx
4.16.24 21st Century Movements for Black Lives.pptx
 

OpenHPI - Parallel Programming Concepts - Week 1

  • 1. Parallel Programming Concepts OpenHPI Course Week 1 : Terminology and fundamental concepts Unit 1.1: Welcome ! Dr. Peter Tröger + Teaching Team
  • 2. Course Content ■  Overview of theoretical and practical concepts ■  This course is for you if … □  … you have skills in software development, regardless of the programming language. □  … you want to get an overview of parallelization concepts. □  … you want to assess the feasibility of parallel hardware, software and libraries for your parallelization problem. ■  This course is not for you if … □  … you have no practical experience with software development at all. □  … you want a solution for a specific parallelization problem. □  … you want to learn one specific parallel programming tool or language in detail. 2 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 3. Parallel Programming Concepts 3 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 4. Course Organization ■  Six lecture weeks, final exam in week 7 ■  Several lecture units per week, per unit: □  Video, slides, non-graded self-test □  Sometimes mandatory and optional readings □  Sometimes optional programming tasks □  Week finished with a graded assignment ■  Six graded assignments sum up to max. 90 points ■  Graded final exam with max. 90 points ■  OpenHPI certificate awarded for getting ≥90 points in total ■  Forum can be used to discuss with other participants ■  FAQ is constantly updated 4 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 5. Course Organization ■  Week 1: Terminology and fundamental concepts □  Moore’s law, power wall, memory wall, ILP wall, speedup vs. scaleup, Amdahl’s law, Flynn’s taxonomy, … ■  Week 2: Shared memory parallelism – The basics □  Concurrency, race condition, semaphore, mutex, deadlock, monitor, … ■  Week 3: Shared memory parallelism – Programming □  Threads, OpenMP, Intel TBB, Cilk, Scala, … ■  Week 4: Accelerators □  Hardware today, CUDA, GPU Computing, OpenCL, … ■  Week 5: Distributed memory parallelism □  CSP, Actor model, clusters, HPC, MPI, MapReduce, … ■  Week 6: Patterns, best practices and examples 5 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 6. Why Parallel? 6 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 7. Computer Markets ■  Embedded and Mobile Computing □  Cars, smartphones, entertainment industry, medical devices, … □  Power/performance and price as relevant issues ■  Desktop Computing □  Price/performance ratio and extensibility as relevant issues ■  Server Computing □  Business service provisioning as typical goal □  Web servers, banking back-end, order processing, ... □  Performance and availability as relevant issues ■  Most software benefits from having better performance ■  The computer hardware industry is constantly delivering 7 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 8. Running Applications Application Instructions 8 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 9. Three Ways Of Doing Anything Faster [Pfister] ■  Work harder (clock speed) □  Hardware solution □  No longer feasible ■  Work smarter (optimization, caching) □  Hardware solution □  No longer feasible as only solution ■  Get help (parallelization) □  Hardware + Software in cooperation Application Instructions t 9 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 10. Parallel Programming Concepts OpenHPI Course Week 1 : Terminology and fundamental concepts Unit 1.2: Moore’s Law and the Power Wall Dr. Peter Tröger + Teaching Team
  • 11. Processor Hardware ■  First computers had fixed programs (e.g. electronic calculator) ■  Von Neumann architecture (1945) □  Instructions for central processing unit (CPU) in memory □  Program is treated as data □  Loading of code during runtime, self-modification ■  Multiple such processors: Symmetric multiprocessing (SMP) CPU Memory Control Unit Arithmetic Logic UnitInput Output Bus 11 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 12. Moore’s Law ■  “...the number of transistors that can be inexpensively placed on an integrated circuit is increasing exponentially, doubling approximately every two years. ...” (Gordon Moore, 1965) □  CPUs contain different hardware parts, such as logic gates □  Parts are built from transistors □  Rule of exponential growth for the number of transistors on one CPU chip □  Meanwhile a self-fulfilling prophecy □  Applied not only in processor industry, but also in other areas □  Sometimes misinterpreted as performance indication □  May still hold for the next 10-20 years [Wikipedia] 12 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 13. Moore’s Law [Wikimedia] 13 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 14. Moore’s Law vs. Software ■  Nathan P. Myhrvold, “The Next Fifty Years of Software”, 1997 □  “Software is a gas. It expands to fit the container it is in.” ◊  Constant increase in the amount of code □  “Software grows until it becomes limited by Moore’s law.” ◊  Software often grows faster than hardware capabilities □  “Software growth makes Moore’s Law possible.” ◊  Software and hardware market stimulate each other □  “Software is only limited by human ambition & expectation.” ◊  People will always find ways for exploiting performance ■  Jevon’s paradox: □  “Technological progress that increases the efficiency with which a resource is used tends to increase (rather than decrease) the rate of consumption of that resource.” 14 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 15. Processor Performance Development Transistors)#) Clock)Speed)(MHz)) Power)(W)) Perf/Clock)(ILP)) “Work harder” “Work smarter” [HerbSutter,2009] 15 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 16. A Physics Problem ■  Power: Energy needed to run the processor ■  Static power (SP): Leakage in transistors while being inactive ■  Dynamic power (DP): Energy needed to switch a transistor ■  Moore’s law: N goes up exponentially, C goes down with size ■  Power dissipation demands cooling □  Power density: Watt/cm2 ■  Make dynamic power increase less dramatic: □  Bringing down V reduces energy consumption, quadratically! □  Don’t use N only for logic gates ■  Industry was able to increase the frequency (F) for decades DP (approx.) = Number of Transistors (N) x Capacitance (C) x Voltage2 (V2) x Frequency (F) 16 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 17. Processor Supply Voltage (chart: processor supply voltage in volts, logarithmic scale, 1970–2010) [Moore, ISSCC] 17 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 18. Power Density ■  Growth of watts per square centimeter in microprocessors ■  Higher temperatures: Increased leakage, slower transistors (chart: power density, 0–140 W/cm², 1992–2005, with “Hot Plate” and “Air Cooling Limit” reference lines) 18 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 19. Power Density [Kevin Skadron, 2007] “Cooking-Aware” Computing? 19 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 20. Second Problem: Leakage Increase (chart: processor power in watts, active vs. leakage power, 1960–2010, logarithmic scale) [www.ieeeghn.org] ■  Static leakage today: Up to 40% of CPU power consumption 20 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 21. The Power Wall ■  Air cooling capabilities are limited □  Maximum temperature of 100-125 °C, hot spot problem □  Static and dynamic power consumption must be limited ■  Power consumption increases with Moore‘s law, but growth of hardware performance is still expected ■  Further reducing voltage as compensation □  We can’t do that endlessly, lower limit around 0.7 V □  Unwanted physical effects appear below that ■  Next-generation processors need to use even less power □  Lower the frequencies, scale them dynamically □  Use only parts of the processor at a time (‘dark silicon’) □  Build energy-efficient special purpose hardware ■  No chance for faster processors through frequency increase 21 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 22. The Free Lunch Is Over ■  Clock speed curve flattened in 2003 □  Heat, power, leakage ■  Speeding up the serial instruction execution through clock speed improvements no longer works ■  Additional issues □  ILP wall □  Memory wall [Herb Sutter, 2009] 22 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 23. Parallel Programming Concepts OpenHPI Course Week 1 : Terminology and fundamental concepts Unit 1.3: ILP Wall and Memory Wall Dr. Peter Tröger + Teaching Team
  • 24. Three Ways Of Doing Anything Faster [Pfister] ■  Work harder (clock speed) □  Hardware solution → Power wall problem ■  Work smarter (optimization, caching) □  Hardware solution ■  Get help (parallelization) □  Hardware + Software Application Instructions 24 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 25. Instruction Level Parallelism ■  Increasing the frequency is no longer an option ■  Provide smarter instruction processing for better performance ■  Instruction level parallelism (ILP) □  Processor hardware optimizes low-level instruction execution □  Instruction pipelining ◊  Overlapped execution of serial instructions □  Superscalar execution ◊  Multiple units of one processor are used in parallel □  Out-of-order execution ◊  Reorder instructions that do not have data dependencies □  Speculative execution ◊  Control flow speculation and branch prediction ■  Today’s processors are packed with such ILP logic 25 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
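The effect of instruction-level parallelism can be made visible even from plain C. The two loops below compute the same sum, but the first forms one long dependency chain, while the second uses four independent accumulators and therefore gives superscalar, out-of-order hardware more instructions to overlap. This is an illustrative sketch, not part of the course material; any measured gain depends entirely on the processor and compiler.

#include <stdio.h>

#define LEN 1024

/* One accumulator: every addition depends on the previous one,
 * so the hardware can hardly overlap the additions. */
static double sum_serial_chain(const double *a, int n) {
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Four independent accumulators: the additions within one iteration
 * do not depend on each other, exposing instruction-level parallelism. */
static double sum_ilp(const double *a, int n) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    int i;
    for (i = 0; i + 3 < n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)      /* remainder elements */
        s0 += a[i];
    return s0 + s1 + s2 + s3;
}

int main(void) {
    double data[LEN];
    for (int i = 0; i < LEN; i++)
        data[i] = 1.0;
    printf("%f %f\n", sum_serial_chain(data, LEN), sum_ilp(data, LEN));
    return 0;
}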
  • 26. The ILP Wall ■  No longer cost-effective to dedicate new transistors to ILP mechanisms ■  Deeper pipelines make the power problem worse ■  High ILP complexity effectively reduces the processing speed for a given frequency (e.g. misprediction) ■  More aggressive ILP technologies too risky due to unknown real-world workloads ■  No ground-breaking new ideas ■  ⇒ “ILP wall” ■  Ok, let’s use the transistors for better caching [Wikipedia] 26 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 27. Caching ■  von Neumann architecture □  Instructions are stored in main memory □  Program is treated as data □  For each instruction execution, data must be fetched ■  When the frequency increases, main memory becomes a performance bottleneck ■  Caching: Keep data copy in very fast, small memory on the CPU (diagram: CPU with control unit, arithmetic logic unit and cache, memory, input and output, connected by a bus) 27 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 28. Memory Hardware Hierarchy (diagram: from small/fast/expensive to large/slow/cheap — Registers, Processor Caches, Random Access Memory (RAM) as volatile levels; Flash / SSD Memory, Hard Drives, Tapes as non-volatile levels) 28 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 29. Memory Hardware Hierarchy (diagram: four CPU cores, each with a private L1 cache, two L2 caches, and one shared L3 cache, connected by buses; L = Level) 29 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 30. Caching for Performance ■  Well established optimization technique for performance ■  Caching relies on data locality □  Some instructions are often used (e.g. loops) □  Some data is often used (e.g. local variables) □  Hardware keeps a copy of the data in the faster cache □  On read attempts, data is taken directly from the cache □  On write, data is cached and eventually written to memory ■  Similar to ILP, the potential is limited □  Larger caches do not help automatically □  At some point, all data locality in the code is already exploited □  Manual vs. compiler-driven optimization [arstechnica.com] 30 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
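Data locality, as discussed on this slide, can be demonstrated with a classic matrix traversal sketch in C. Both functions below touch the same elements, but the first walks the matrix row by row, matching the row-major memory layout and therefore reusing cache lines, while the second jumps between rows on every access. The matrix size and the actual slowdown are illustrative assumptions; the real difference depends on cache sizes and the hardware.

#include <stdio.h>

#define N 1024

static double m[N][N];   /* stored row by row (row-major) in C */

/* Row-wise traversal: consecutive accesses hit the same cache line. */
static double sum_row_major(void) {
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += m[i][j];
    return s;
}

/* Column-wise traversal: every access jumps N * sizeof(double) bytes ahead,
 * so hardly any loaded cache line is reused before it is evicted. */
static double sum_col_major(void) {
    double s = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += m[i][j];
    return s;
}

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            m[i][j] = 1.0;
    printf("%f %f\n", sum_row_major(), sum_col_major());
    return 0;
}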
  • 31. Memory Wall ■  If caching is limited, we simply need faster memory ■  The problem: Shared memory is ‘shared’ □  Interconnect contention □  Memory bandwidth ◊  Memory transfer speed is limited by the power wall ◊  Memory transfer size is limited by the power wall ■  Transfer technology cannot keep up with GHz processors ■  Memory is too slow, effects cannot be hidden through caching completely ⇒ “Memory wall” [dell.com] 31 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 32. Problem Summary ■  Hardware perspective □  Number of transistors N is still increasing □  Building larger caches no longer helps (memory wall) □  ILP is out of options (ILP wall) □  Voltage / power / frequency is at the limit (power wall) ◊  Some help with dynamic scaling approaches □  Remaining option: Use N for more cores per processor chip ■  Software perspective □  Performance must come from the utilization of this increasing core count per chip, since F is now fixed □  Software must tackle the memory wall 32 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 33. Three Ways Of Doing Anything Faster [Pfister] ■  Work harder (clock speed) → Power wall problem → Memory wall problem ■  Work smarter (optimization, caching) → ILP wall problem → Memory wall problem ■  Get help (parallelization) □  More cores per single CPU □  Software needs to exploit them in the right way → Memory wall problem (diagram: one problem distributed over the cores of a multi-core CPU) 33 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 34. Parallel Programming Concepts OpenHPI Course Week 1 : Terminology and fundamental concepts Unit 1.4: Parallel Hardware Classification Dr. Peter Tröger + Teaching Team
  • 35. Parallelism [Mattson et al.] ■  Task □  Parallel program breaks a problem into tasks ■  Execution unit □  Representation of a concurrently running task (e.g. thread) □  Tasks are mapped to execution units ■  Processing element (PE) □  Hardware element running one execution unit □  Depends on scenario - logical processor vs. core vs. machine □  Execution units run simultaneously on processing elements, controlled by some scheduler ■  Synchronization - Mechanism to order activities of parallel tasks ■  Race condition - Program result depends on the scheduling order 35 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
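A race condition, as defined in the terminology above, can be reproduced with a few lines of pthread code: two execution units increment a shared counter without synchronization, so the final value depends on how their read-modify-write sequences interleave. This sketch anticipates the shared-memory material of week 2 and is only meant to make the terms task, execution unit and race condition concrete (build with -pthread).

#include <pthread.h>
#include <stdio.h>

static long counter = 0;        /* shared state, intentionally unprotected */

static void *work(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;              /* read-modify-write, not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, work, NULL);
    pthread_create(&t2, NULL, work, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but the result typically varies between runs,
     * because the increments of both execution units interleave. */
    printf("counter = %ld\n", counter);
    return 0;
}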
  • 36. Faster Processing through Parallelization (diagram: one program decomposed into several tasks) 36 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 37. Flynn‘s Taxonomy (1966) ■  Classify parallel hardware architectures according to their capabilities in the instruction and data processing dimension (diagram: Single Instruction, Single Data (SISD); Single Instruction, Multiple Data (SIMD); Multiple Instruction, Single Data (MISD); Multiple Instruction, Multiple Data (MIMD) — each quadrant shows a processing step with its instruction stream(s), data item(s) and output) 37 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 38. Flynn‘s Taxonomy (1966) ■  Single Instruction, Single Data (SISD) □  No parallelism in the execution □  Old single processor architectures ■  Single Instruction, Multiple Data (SIMD) □  Multiple data streams processed with one instruction stream at the same time □  Typical in graphics hardware and GPU accelerators □  Special SIMD machines in high-performance computing ■  Multiple Instructions, Single Data (MISD) □  Multiple instructions applied to the same data in parallel □  Rarely used in practice, only for fault tolerance ■  Multiple Instructions, Multiple Data (MIMD) □  Every modern processor, compute clusters 38 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
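The SIMD category can be illustrated without any special hardware knowledge: an element-wise array operation applies the same instruction to many data items, which is exactly what vector units and GPU accelerators exploit. The loop below is a typical candidate that compilers can auto-vectorize (for example with gcc -O3); whether they actually do so depends on compiler and target, so treat this as an illustrative sketch rather than guaranteed SIMD execution.

#include <stdio.h>

#define LEN 1024

/* Same operation applied to many independent data items:
 * a typical SIMD pattern that vectorizing compilers can map to vector registers. */
static void saxpy(float a, const float *x, float *y, int n) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    float x[LEN], y[LEN];
    for (int i = 0; i < LEN; i++) { x[i] = 1.0f; y[i] = 2.0f; }
    saxpy(3.0f, x, y, LEN);
    printf("y[0] = %f\n", y[0]);   /* 5.0 */
    return 0;
}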
  • 39. Parallelism on Different Levels (diagram: programs are broken into tasks, tasks are mapped to processing elements (PE), PEs share memory within a node, and nodes are connected by a network) 39 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 40. Parallelism on Different Levels ■  A processor chip (socket) □  Chip multi-processing (CMP) ◊  Multiple CPUs per chip, called cores ◊  Multi-core / many-core □  Simultaneous multi-threading (SMT) ◊  Interleaved execution of tasks on one core ◊  Example: Intel Hyper-Threading □  Chip multi-threading (CMT) = CMP + SMT □  Instruction-level parallelism (ILP) ◊  Parallel processing of single instructions per core ■  Multiple processor chips in one machine (multi-processing) □  Symmetric multi-processing (SMP) ■  Multiple processor chips in many machines (multi-computer) 40 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
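Software usually has to discover how many processing elements these levels actually provide. On POSIX-like systems this can be queried with sysconf(); the _SC_NPROCESSORS_* constants are common extensions (e.g. in glibc) rather than part of every standard, and the returned number counts logical processors, i.e. it already includes SMT threads such as Hyper-Threading. A minimal sketch:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Logical processors currently online: cores times SMT threads per core. */
    long online = sysconf(_SC_NPROCESSORS_ONLN);
    /* Logical processors configured in the system. */
    long configured = sysconf(_SC_NPROCESSORS_CONF);
    printf("online: %ld, configured: %ld\n", online, configured);
    return 0;
}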
  • 41. Parallelism on Different Levels (diagram: CMP architecture with several cores on one chip, each core providing ILP and SMT) [arstechnica.com] 41 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 42. Parallel Programming Concepts OpenHPI Course Week 1 : Terminology and fundamental concepts Unit 1.5: Memory Architectures Dr. Peter Tröger + Teaching Team
  • 43. Parallelism on Different Levels (diagram: programs are broken into tasks, tasks are mapped to processing elements (PE), PEs share memory within a node, and nodes are connected by a network) 43 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 44. Shared Memory vs. Shared Nothing ■  Organization of parallel processing hardware as … □  Shared memory system ◊  Tasks can directly access a common address space ◊  Implemented as memory hierarchy with different cache levels □  Shared nothing system ◊  Tasks can only access local memory ◊  Global coordination of parallel execution by explicit communication (e.g. messaging) between tasks □  Hybrid architectures possible in practice ◊  Cluster of shared memory systems ◊  Accelerator hardware in a shared memory system ●  Dedicated local memory on the accelerator ●  Example: SIMD GPU hardware in SMP computer system 44 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 45. Shared Memory vs. Shared Nothing ■  Pfister: “shared memory” vs. “distributed memory” ■  Foster: “multiprocessor” vs. “multicomputer” ■  Tanenbaum: “shared memory” vs. “private memory” (diagram: shared memory — tasks on processing elements access common data in a shared memory; shared nothing — tasks on processing elements keep local data and exchange messages) 45 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
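The distinction on this slide can be made concrete in a few lines of C: in a shared memory system, tasks exchange data simply by writing to a common address space (as in the race condition sketch earlier), while in a shared nothing system they must send explicit messages. The sketch below imitates the shared nothing style within a single machine by using two processes connected through a pipe; real distributed systems would use a network and a library such as MPI, which is covered later in the course.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    pipe(fd);                        /* message channel between the two tasks */

    if (fork() == 0) {               /* child: worker task with its own private memory */
        close(fd[0]);
        const char *msg = "partial result: 42";
        write(fd[1], msg, strlen(msg) + 1);  /* explicit message, no shared data */
        close(fd[1]);
        return 0;
    }

    /* Parent: master task receives the result as a message. */
    close(fd[1]);
    char buf[64];
    read(fd[0], buf, sizeof(buf));
    close(fd[0]);
    wait(NULL);
    printf("received: %s\n", buf);
    return 0;
}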
  • 46. Shared Memory ■  Processing elements act independently ■  Use the same global address space ■  Changes are visible to all processing elements ■  Uniform memory access (UMA) system □  Equal access time for all PEs to all memory locations □  Default approach for SMP systems of the past ■  Non-uniform memory access (NUMA) system □  Delay on memory access according to the accessed region □  Typically due to core / processor interconnect technology ■  Cache-coherent NUMA (CC-NUMA) system ◊  NUMA system that keeps all caches consistent ◊  Transparent hardware mechanisms ◊  Became the standard approach with recent x86 chips 46 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 47. UMA Example ■  Two dual-core processor chips in an SMP system ■  Level 1 cache (fast, small), Level 2 cache (slower, larger) ■  Hardware manages cache coherency among all cores (diagram: two sockets, each with two cores, private L1 caches and a shared L2 cache, connected over a system bus to a chipset / memory controller and the RAM modules) 47 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 48. NUMA Example ■  Eight cores on 2 sockets in an SMP system ■  Memory controllers + chip interconnect realize a single memory address space for the software (diagram: two sockets, each with four cores and private L1/L2 caches, a shared L3 cache, an integrated memory controller and local RAM, connected by a chip interconnect) 48 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 49. NUMA Example: 4-way Intel Nehalem SMP (diagram: four sockets with four cores each, per-socket L3 cache, memory controller, local memory and I/O, connected by QPI links) 49 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 50. Shared Nothing ■  Processing elements no longer share a common global memory ■  Easy scale-out by adding machines to the messaging network ■  Cluster computing: Combine machines with cheap interconnect □  Compute cluster: Speedup for an application ◊  Batch processing, data parallelism □  Load-balancing cluster: Better throughput for some service □  High Availability (HA) cluster: Fault tolerance ■  Cluster to the extreme □  High Performance Computing (HPC) □  Massively Parallel Processing (MPP) hardware □  TOP500 list of the fastest supercomputers 50 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 52. Shared Nothing Example (diagram: several machines, each with a socket of two cores, private L1/L2 caches, a shared L3 cache, a memory controller, local RAM and a network interface, connected by an interconnection network) 52 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 53. Hybrid Example (diagram: two machines, each containing two sockets coupled by a chip interconnect; every socket has two cores with L1D/L2 caches, a shared L3 cache, a memory controller and local RAM; the machines communicate through network interfaces over an interconnection network) 53 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 54. Example: Cluster of Nehalem SMPs Network 54 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 55. The Parallel Programming Problem ■  Execution environment has a particular type (SIMD, MIMD, UMA, NUMA, …) ■  Execution environment may be configurable (number of resources) ■  Parallel application must be mapped to available resources (diagram: parallel application matched to the execution environment — the type is fixed, the configuration is flexible) 55 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 56. Parallel Programming Concepts OpenHPI Course Week 1 : Terminology and fundamental concepts Unit 1.6: Speedup and Scaleup Dr. Peter Tröger + Teaching Team
  • 57. Which One Is Faster ? ■  Usage scenario □  Transporting a fridge ■  Usage environment □  Driving through a forest ■  Perception of performance □  Maximum speed □  Average speed □  Acceleration ■  We need some kind of application-specific benchmark 57 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 58. Parallelism for … ■  Speedup – compute faster ■  Throughput – compute more in the same time ■  Scalability – compute faster / more with additional resources ■  … (diagram: scaling up by adding processing elements to one shared-memory machine vs. scaling out by adding further machines) 58 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 59. Metrics ■  Parallelization metrics are application-dependent, but follow a common set of concepts □  Speedup: Adding more resources leads to less time for solving the same problem. □  Linear speedup: n times more resources → n times speedup □  Scaleup: Adding more resources solves a larger version of the same problem in the same time. □  Linear scaleup: n times more resources → n times larger problem solvable ■  The most important goal depends on the application □  Throughput demands scalability of the software □  Response time demands speedup of the processing 59 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 60. Speedup ■  Idealized assumptions □  All tasks are equal sized □  All code parts can run in parallel ■  Example: application of v = 12 tasks □  With N = 1 processing element: time needed T1 = 12 □  With N = 3 processing elements: time needed T3 = 4 □  (Linear) speedup: T1/T3 = 12/4 = 3 60 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 61. Speedup with Load Imbalance ■  Assumptions □  Tasks have different sizes; best-possible speedup depends on optimized resource usage □  All code parts can run in parallel ■  Example: application of v = 12 unequal tasks □  With N = 1 processing element: time needed T1 = 16 □  With N = 3 processing elements: time needed T3 = 6 □  Speedup: T1/T3 = 16/6 ≈ 2.67 61 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
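Both speedup figures follow directly from the definition S = T1 / TN. The small C sketch below reproduces the two cases from the slides; the only assumption added here is that, in the imbalanced case, the busiest of the three processing elements determines T3 = 6 while the total work still sums to T1 = 16.

#include <stdio.h>

/* Speedup as defined on the slides: time on one processing element
 * divided by time on N processing elements. */
static double speedup(double t1, double tn) {
    return t1 / tn;
}

int main(void) {
    /* Balanced case: 12 equal tasks, N = 3, T3 = 4. */
    printf("balanced:   S = %.2f\n", speedup(12.0, 4.0));
    /* Imbalanced case: total work T1 = 16, but the busiest of the
     * 3 processing elements needs T3 = 6, so the speedup drops. */
    printf("imbalanced: S = %.2f\n", speedup(16.0, 6.0));
    return 0;
}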
  • 62. Speedup with Serial Parts ■  Each application has inherently non-parallelizable serial parts □  Algorithmic limitations □  Shared resources acting as bottleneck □  Overhead for program start □  Communication overhead in shared-nothing systems (diagram: execution alternates between serial phases tSER1, tSER2, tSER3 and parallel phases tPAR1, tPAR2 that spread the tasks over the processing elements) 62 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 63. Amdahl’s Law ■  Gene Amdahl. “Validity of the single processor approach to achieving large scale computing capabilities”. AFIPS 1967 □  Serial parts TSER = tSER1 + tSER2 + tSER3 + … □  Parallelizable parts TPAR = tPAR1 + tPAR2 + tPAR3 + … □  Execution time with one processing element: T1 = TSER + TPAR □  Execution time with N parallel processing elements: TN >= TSER + TPAR / N ◊  Equal only on perfect parallelization, e.g. no load imbalance □  Amdahl’s Law for the maximum speedup with N processing elements: S = T1 / TN = (TSER + TPAR) / (TSER + TPAR / N) 63 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
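A direct translation of Amdahl’s formula into code makes the limitation easy to explore, for example for the 90% parallelizable case mentioned on a later slide. This is a minimal sketch; the serial fraction of 0.1 and the list of processor counts are illustrative assumptions.

#include <stdio.h>

/* Amdahl's law: maximum speedup with N processing elements,
 * given the serial and parallelizable shares of the runtime. */
static double amdahl(double t_ser, double t_par, double n) {
    return (t_ser + t_par) / (t_ser + t_par / n);
}

int main(void) {
    /* 10% serial, 90% parallelizable (T1 normalized to 1). */
    double ns[] = {2, 4, 16, 256, 65536};
    for (int i = 0; i < 5; i++)
        printf("N = %6.0f  ->  S = %5.2f\n", ns[i], amdahl(0.1, 0.9, ns[i]));
    /* The speedup approaches, but never exceeds, 1 / 0.1 = 10. */
    return 0;
}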
  • 64. Amdahl’s Law 64 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 65. Amdahl’s Law ■  Speedup through parallelism is hard to achieve ■  For unlimited resources, speedup is bound by the serial parts: □  Assume T1 = 1, then S(N→∞) = T1 / T(N→∞) = 1 / TSER ■  Parallelization problem relates to all system layers □  Hardware offers some degree of parallel execution □  Speedup gained is bound by serial parts: ◊  Limitations of hardware components ◊  Necessary serial activities in the operating system, virtual runtime system, middleware and the application ◊  Overhead for the parallelization itself 65 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 66. Amdahl’s Law ■  “Everyone knows Amdahl’s law, but quickly forgets it.” [Thomas Puzak, IBM] ■  90% parallelizable code leads to no more than 10x speedup □  Regardless of the number of processing elements ■  Parallelism is only useful … □  … for a small number of processing elements, or □  … for highly parallelizable code ■  So what’s the sense in big parallel / distributed hardware setups? ■  Relevant assumptions □  Put the same problem on different hardware □  Assumption of fixed problem size □  Only consideration of execution time for one problem 66 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
  • 67. Gustafson-Barsis’ Law (1988) ■  Gustafson and Barsis: People are typically not interested in the shortest execution time □  Rather solve a bigger problem in reasonable time ■  Problem size could then scale with the number of processors □  Typical in simulation and farmer / worker problems □  Leads to larger parallel fraction with increasing N □  Serial part is usually fixed or grows slower ■  Maximum scaled speedup with N processors: S = (TSER + N · TPAR) / (TSER + TPAR) ■  Linear speedup now becomes possible ■  Software needs to ensure that serial parts remain constant ■  Other models exist (e.g. Work-Span model, Karp-Flatt metric) 67 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger
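The contrast between the two laws becomes obvious when both formulas are evaluated side by side for the same serial share. A small sketch, again with T1 normalized to 1 and an assumed serial fraction of 0.1:

#include <stdio.h>

/* Amdahl: fixed problem size.  Gustafson-Barsis: problem size grows with N. */
static double amdahl(double t_ser, double t_par, double n) {
    return (t_ser + t_par) / (t_ser + t_par / n);
}

static double gustafson(double t_ser, double t_par, double n) {
    return (t_ser + n * t_par) / (t_ser + t_par);
}

int main(void) {
    double ns[] = {4, 16, 64, 256};
    for (int i = 0; i < 4; i++)
        printf("N = %3.0f:  Amdahl S = %6.2f   Gustafson S = %6.2f\n",
               ns[i], amdahl(0.1, 0.9, ns[i]), gustafson(0.1, 0.9, ns[i]));
    /* Amdahl saturates at 10, while the scaled speedup keeps growing
     * almost linearly as long as the serial part stays constant. */
    return 0;
}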
  • 68. Summary: Week 1 ■  Moore’s Law and the Power Wall □  Processing element speed no longer increases ■  ILP Wall and Memory Wall □  Memory access is not fast enough for modern hardware ■  Parallel Hardware Classification □  From ILP to SMP, SIMD vs. MIMD ■  Memory Architectures □  UMA vs. NUMA ■  Speedup and Scaleup □  Amdahl’s Law and Gustafson’s Law Since we need parallelism for speedup, how can we express it in software? 68 OpenHPI | Parallel Programming Concepts | Dr. Peter Tröger