Evaluating Computers:
Bigger, better, faster, more?
1
What do you want in a computer?
2
What do you want in a computer?
• Low latency -- one unit of work in minimum time
• 1/latency = responsiveness
• High throughput -- maximum work per time
• High bandwidth (BW)
• Low cost
• Low power -- minimum joules per second
• Low energy -- minimum joules per unit of work
• Reliability -- Mean time to failure (MTTF)
• Derived metrics
• responsiveness/dollar
• BW/$
• BW/Watt
• Work/Joule
• Energy * latency -- Energy delay product
• MTTF/$
3
Latency
• This is the simplest kind of performance
• How long does it take the computer to perform
a task?
• The task at hand depends on the situation.
• Usually measured in seconds
• Also measured in clock cycles
• Caution: if you are comparing two different systems, you
must ensure that the cycle times are the same.
4
Measuring Latency
• Stop watch!
• System calls
• gettimeofday()
• System.currentTimeMillis()
• Command line
• time <command>
5
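A minimal sketch of the system-call approach in C (do_work() is just a stand-in for whatever task you want to time):

#include <stdio.h>
#include <sys/time.h>

/* Stand-in for the task being timed. */
static void do_work(void) {
    volatile long sum = 0;
    for (long i = 0; i < 100000000L; i++)
        sum += i;
}

int main(void) {
    struct timeval start, end;

    gettimeofday(&start, NULL);   /* wall-clock time before the task */
    do_work();
    gettimeofday(&end, NULL);     /* wall-clock time after the task  */

    double latency = (end.tv_sec - start.tv_sec)
                   + (end.tv_usec - start.tv_usec) / 1e6;
    printf("latency = %f seconds\n", latency);
    return 0;
}

From the shell, "time ./a.out" reports the same wall-clock latency plus a user/system breakdown, with no changes to the code.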
Where latency matters
• Application responsiveness
• Any time a person is waiting.
• GUIs
• Games
• Internet services (from the user's perspective)
• “Real-time” applications
• Tight constraints enforced by the real world
• Anti-lock braking systems
• Manufacturing control
• Multi-media applications
• The cost of poor latency
• If you are selling computer time, latency is money.
6
Latency and Performance
• By definition:
• Performance = 1/Latency
• If Performance(X) > Performance(Y), X is faster.
• If Perf(X)/Perf(Y) = S, X is S times faster than Y.
• Equivalently: Latency(Y)/Latency(X) = S
• When we need to talk about other kinds of
“performance” we must be more specific.
7
The Performance Equation
• We would like to model how architecture impacts
performance (latency)
• This means we need to quantify performance in
terms of architectural parameters.
• Instructions -- this is the basic unit of work for a
processor
• Cycle time -- these two give us a notion of time.
• Cycles
• The first fundamental theorem of computer
architecture:
Latency = Instructions * Cycles/Instruction *
Seconds/Cycle
8
The Performance Equation
• The units work out! Remember your
dimensional analysis!
• Cycles/Instruction == CPI
• Seconds/Cycle == 1/(clock frequency in Hz)
• Example:
• 1GHz clock
• 1 billion instructions
• CPI = 4
• What is the latency?
9
Latency = Instructions * Cycles/Instruction * Seconds/Cycle
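Worked answer for the example above: 10^9 instructions * 4 cycles/instruction * 1 ns/cycle (1 GHz) = 4 * 10^9 ns = 4 seconds.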
Examples
• gcc runs in 100 sec on a 1 GHz machine
– How many cycles does it take?
• gcc runs in 75 sec on a 600 MHz machine
– How many cycles does it take?
100G cycles
45G cycles
Latency = Instructions * Cycles/Instruction * Seconds/Cycle
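Working these out: 100 s * 10^9 cycles/s = 100G cycles; 75 s * 600 * 10^6 cycles/s = 45G cycles.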
How can this be?
• Different Instruction count?
• Different ISAs ?
• Different compilers ?
• Different CPI?
• underlying machine implementation
• Microarchitecture
• Different cycle time?
• New process technology
• Microarchitecture
11
Latency = Instructions * Cycles/Instruction * Seconds/Cycle
Computing Average CPI
• Instruction execution time depends on the instruction
type (we’ll get into why this is so later on)
• Integer +, -, <<, |, & -- 1 cycle
• Integer *, /, -- 5-10 cycles
• Floating point +, - -- 3-4 cycles
• Floating point *, /, sqrt() -- 10-30 cycles
• Loads/stores -- variable
• All these values depend on the particular implementation,
not the ISA
• Total CPI depends on the workload’s Instruction mix
-- how many of each type of instruction executes
• What program is running?
• How was it compiled?
12
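As a small worked example with a made-up mix: a workload that is 50% integer ALU ops (1 cycle), 20% loads/stores (5 cycles), 20% FP adds (3 cycles), and 10% branches (1 cycle) has an average CPI of 0.5*1 + 0.2*5 + 0.2*3 + 0.1*1 = 2.2.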
The Compiler’s Role
• Compilers affect CPI…
• Wise instruction selection
• “Strength reduction”: x*2^n -> x << n
• Use registers to eliminate loads and stores
• More compact code -> less waiting for instructions
• …and instruction count
• Common sub-expression elimination
• Use registers to eliminate loads and stores
13
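A small C illustration of two of these ideas (the function is made up):

/* Before: a multiply by a power of two and a repeated sub-expression. */
int before(int x, int a, int b) {
    return x * 8 + (a + b) * (a + b);
}

/* After, roughly what the compiler does: strength reduction turns the
   multiply into a shift, and common sub-expression elimination computes
   a + b once and keeps it in a register. */
int after(int x, int a, int b) {
    int t = a + b;
    return (x << 3) + t * t;
}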
Stupid Compiler
int i, sum = 0;
for(i=0;i<10;i++)
sum += i;
sw 0($sp), $0 #sum = 0
sw 4($sp), $0 #i = 0
loop:
lw $1, 4($sp)
sub $3, $1, 10
beq $3, $0, end
lw $2, 0($sp)
add $2, $2, $1
sw 0($sp), $2
addi $1, $1, 1
sw 4($sp), $1
b loop
end:
Type    CPI   Static #   Dynamic #
mem      5       6          42
int      1       3          30
br       1       2          20
Total   2.8     11          92

Average CPI = (5*42 + 1*30 + 1*20)/92 = 2.8
Smart Compiler
int i, sum = 0;
for(i=0;i<10;i++)
sum += i;
add $1, $0, $0 # i
add $2, $0, $0 # sum
loop:
sub $3, $1, 10
beq $3, $0, end
add $2, $2, $1
addi $1, $1, 1
b loop
end:
sw 0($sp), $2
Type    CPI   Static #   Dynamic #
mem      5       1           1
int      1       5          32
br       1       2          20
Total   1.08     8          53

Average CPI = (5*1 + 1*32 + 1*20)/53 ≈ 1.08
Live demo
16
Program inputs affect CPI too!
int rand[1000] = {random 0s and 1s }
for(i=0;i<1000;i++)
if(rand[i]) sum -= i;
else sum *= i;
int ones[1000] = {1, 1, ...}
for(i=0;i<1000;i++)
if(ones[i]) sum -= i;
else sum *= i;
• Data-dependent computation
• Data-dependent micro-architectural behavior
–Processors are faster when the computation is
predictable (more later)
Live demo
18
• Meaningful CPI exists only:
• For a particular program with a particular compiler
• ....with a particular input.
• You MUST consider all 3 to get accurate latency estimations
or machine speed comparisons
• Instruction Set
• Compiler
• Implementation of Instruction Set (386 vs Pentium)
• Processor Freq (600 MHz vs 1 GHz)
• Same high level program with same input
• “wall clock” measurements are always comparable.
• If the workloads (app + inputs) are the same
19
Making Meaningful Comparisons
Latency = Instructions * Cycles/Instruction * Seconds/Cycle
The Performance Equation
• Clock rate =
• Instruction count =
• Latency =
• Find the CPI!
20
Latency = Instructions * Cycles/Instruction * Seconds/Cycle
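To find the CPI, rearrange the equation: CPI = (Latency * Clock rate) / Instruction count. With made-up values of a 2 GHz clock, 4 billion instructions, and a 3 second latency: CPI = (3 s * 2*10^9 cycles/s) / (4*10^9 instructions) = 1.5.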
Today
• Quiz 3
• DRAM
• Amdahl’s law
21
Key Points
• Amdahl’s law and how to apply it in a variety of
situations
• Its role in guiding optimization of a system
• Its role in determining the impact of localized
changes on the entire system
22
Limits on Speedup: Amdahl’s Law
• “The fundamental theorem of performance
optimization”
• Coined by Gene Amdahl (one of the designers of the
IBM 360)
• Optimizations do not (generally) uniformly affect the
entire program
– The more widely applicable a technique is, the more valuable it
is
– Conversely, limited applicability can (drastically) reduce the
impact of an optimization.
Always heed Amdahl’s Law!!!
It is central to many many optimization problems
Amdahl’s Law in Action
• SuperJPEG-O-Rama2010 ISA extensions
**
–Speeds up JPEG decode by 10x!!!
–Act now! While Supplies Last!
** Increases processor cost by 45%
Amdahl’s Law in Action
• SuperJPEG-O-Rama2010 in the wild
• PictoBench spends 33% of its time doing
JPEG decode
• How much does JOR2k help?
JPEG decode time: 30s w/o JOR2k, 21s w/ JOR2k
Performance: 30/21 = 1.4x speedup != 10x
Is this worth the 45% increase in cost?
Amdahl
ate our
Speedup!
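The same answer via Amdahl's law (next slide): x = 0.33, S = 10, so Stot = 1/(0.33/10 + 0.67) = 1/0.703, roughly 1.42x, which matches the 30s-to-21s picture.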
Amdahl’s Law
• The second fundamental theorem of computer
architecture.
• If we can speed up a fraction x of the program by S times
• Amdahl’s Law gives the total speed up, Stot
Stot = 1 / (x/S + (1-x))

Sanity check: x = 1  =>  Stot = 1/(1/S + (1-1)) = 1/(1/S) = S
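A minimal C helper for experimenting with these numbers (not from the slides):

#include <stdio.h>

/* Amdahl's law: a fraction x of the work is sped up by a factor of S. */
double amdahl(double x, double S) {
    return 1.0 / (x / S + (1.0 - x));
}

int main(void) {
    printf("%f\n", amdahl(0.33, 10.0)); /* JPEG example: ~1.42x       */
    printf("%f\n", amdahl(1.00, 10.0)); /* sanity check: x = 1 gives S */
    return 0;
}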
Amdahl’s Corollary #1
• Maximum possible speedup, Smax
Smax = 1 / (1-x)    (in the limit S = infinity)
Amdahl’s Law Practice
• Protein String Matching Code
–200 hours to run on current machine, spends 20% of
time doing integer instructions
–How much faster must you make the integer unit to
make the code run 10 hours faster?
–How much faster must you make the integer unit to
make the code run 50 hours faster?
A)1.1
B)1.25
C)1.75
D)1.33
E) 10.0
F) 50.0
G) 1 million times
H) Other
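One way to check your answer: integer work is 20% of 200 hours = 40 hours. To finish 10 hours sooner, the integer time must drop from 40 to 30 hours, so the integer unit must get 40/30, about 1.33x, faster. Finishing 50 hours sooner is impossible: only 40 hours of integer time exist to eliminate.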
Amdahl’s Law Practice
• Protein String Matching Code
–4 days execution time on current machine
• 20% of time doing integer instructions
• 35% of time doing I/O
–Which is the better tradeoff?
• Compiler optimization that reduces number of
integer instructions by 25% (assume each integer
inst takes the same amount of time)
• Hardware optimization that makes I/O run 20%
faster?
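Working both options through Amdahl's law: cutting integer instructions by 25% removes 0.25 * 20% = 5% of the run time, a speedup of 1/0.95, about 1.05x. Making I/O 20% faster shrinks the 35% I/O fraction to 0.35/1.2, about 29%, a speedup of 1/(0.29 + 0.65), about 1.06x, so the I/O optimization is slightly better.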
Amdahl’s Law Applies All Over
30
• SSDs use 10x less power than HDs
• But they only save you ~50% overall.
Amdahl’s Law in Memory
31
Memory Device
[Figure: memory device block diagram. The address splits into high-order bits feeding the row decoder and low-order bits feeding the column decoder; the storage array is read out through the sense amps onto the data lines.]
• Storage array 90% of area
• Row decoder 4%
• Column decoder 2%
• Sense amps 4%
• What’s the benefit of
reducing bit size by 10%?
• Reducing column decoder
size by 90%?
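Treating area like time in Amdahl's law: shrinking the storage array cells by 10% saves about 10% of 90% = 9% of the total area, while shrinking the column decoder by 90% saves only 90% of 2%, about 1.8%. The common case (the array) is where the leverage is.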
Amdahl’s Corollary #2
• Make the common case fast (i.e., x should be
large)!
–Common == “most time consuming” not necessarily
“most frequent”
–The uncommon case doesn’t make much difference
–Be sure of what the common case is
–The common case changes.
• Repeat…
–With optimization, the common becomes uncommon
and vice versa.
Amdahl’s Corollary #2: Example
[Figure: repeatedly speeding up the current common case. A 7x speedup of the first common case gives 1.4x overall, then a 4x speedup of the next gives 1.3x, then 1.3x gives 1.1x; total = 20/10 = 2x.]
• In the end, there is no common case!
• Options:
– Global optimizations (faster clock, better compiler)
– Find something common to work on (e.g., memory latency)
– War of attrition
– Total redesign (You are probably well-prepared for this)
Amdahl’s Corollary #3
• Benefits of parallel processing
• p processors
• a fraction x of the work is p-way parallelizable
• maximum speedup, Spar
Spar = 1 / (x/p + (1-x))
x is pretty small for desktop applications, even for p = 2
Does Intel’s 80-core processor make much sense?
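For example, if half the work is parallelizable (x = 0.5), then even p = 80 gives Spar = 1/(0.5/80 + 0.5), about 1.98x, barely better than the 2x limit that infinitely many cores would give, so the extra cores buy little for such a workload.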
Amdahl’s Corollary #4
• Amdahl’s law for latency (L)
Lnew = Lbase * (1/Speedup)
Lnew = Lbase * (x/S + (1-x))
Lnew = (Lbase/S)*x + Lbase*(1-x)
• If you can speed up a fraction y of the remaining (1-x) by S2, you can apply
Amdahl's law recursively:
Lnew = (Lbase/S1)*x + (Lbase*(1-x)*y)/S2 + Lbase*(1-x)*(1-y)
Amdahl’s Non-Corollary
• Amdahl’s law does not bound slowdown
Lnew = (Lbase /S)*x + Lbase*(1-x)
• Lnew is linear in 1/S
• Example: x = 0.01 of execution, Lbase = 1
–S = 0.001;
• Lnew = 1000*Lbase*0.01 + Lbase*(0.99) ~ 10*Lbase
–S = 0.00001;
• Lnew = 100000*Lbase*0.01 + Lbase*(0.99) ~ 1000*Lbase
• Things can only get so fast, but they can get
arbitrarily slow.
–Do not hurt the non-common case too much!
Today
• Projects and 141L
• Amdahl’s Law practice
• Bandwidth, power, and derived metrics
• Beginning of single-cycle datapath
• Key points
• Amdahl’s law
• Bandwidth and power in processors
• Why are benchmarks important?
• Derived metrics
37
License your ISA!
• We have one group in 141L in search of an ISA
• If you are interested you can try to license your
ISA to them.
• Create a short presentation (5 slides) explaining why
your ISA is awesome
• Send it to me.
• Set up a time with the potential customers
• If they select you, you get a grade bump on your
project
• Let me know if you are interested.
38
Amdahl’s Practice
• Memory operations currently take 30% of
execution time.
• A new widget called a “cache” speeds up 80% of
memory operations by a factor of 4
• A second new widget called an “L2 cache” speeds
up 1/2 of the remaining 20% by a factor of 2.
• What is the total speed up?
39
40
Answer in Pictures
[Figure: execution-time bars. Before: not memory 0.70, L1-targeted memory 0.24, L2-targeted memory 0.03, other memory 0.03 (70% / 24% / 3% / 3%), total = 1. After both speedups: not memory 0.70, L1-sped-up 0.06, L2-sped-up 0.015, other memory 0.03, total = 0.805.]
Speed up = 1/0.805 = 1.242
Amdahl’s Practice
• Just the L1 cache
• S1 = 4
• x1 = .8
• StotL1 = 1/(x1/S1 + (1-x1))
• StotL1 = 1/(.8*.3/4 + (1-(.8*.3))) = 1/(.06 + .76) = 1.2195 times
• Add the L2 cache
• StotL2’ = 1/(0.8*0.1/2 + (1-0.8*0.1)) = 1/(.04 + .92) = 1.04 times
• StotL2 = StotL2’ * StotL1 = 1.04*1.21 = 1.258
• What’s wrong? -- after we do the L1 cache, the execution
time changes, so .1 is no longer correct for x2
41
What went wrong
42
[Figure: execution-time bars. Original: not memory 0.70, L1-targeted 0.24, L2-targeted 0.03, other memory 0.03, total = 1 (70% / 24% / 3% / 3%). After the L1 speedup: not memory 0.70, L1 sped up 0.06, L2-targeted 0.03, other 0.03, total = 0.82, so the L2-targeted work is no longer 3% of the total (the slide relabels this bar 85% / 8.6% / 4.2% / 4.2%). After both speedups: 0.70, 0.06, 0.015, 0.03, total = 0.805.]
Amdahl’s Practice
• Add the L2 cache separately and correctly
• StotL2’ = 1/(0.042/2 + (1-0.042)) = 1/(.021 + .958) = 1.02 times
• StotL2 = StotL2’ * StotL1 = 1.02*1.21 = 1.24
• Combine both the L1 and the L2
• S2 = 2
• x2 = .1
• StotL2 = 1/(x1/S1 + x2/S2 + (1 - x1 - x2))
• StotL2 = 1/(0.8*0.3/4 + 0.1*0.3/2 + (1-(0.8*0.3)-(0.1*0.3)))
= 1/(0.06 + 0.015 + 0.73) = 1.24 times
• Remember: Amdahl’s law is about the fraction of time spent in
the optimized and un-optimized portions.
43
Bandwidth
• The amount of work (or data) per time
• MB/s, GB/s -- network BW, disk BW, etc.
• Frames per second -- Games, video transcoding
• Also called “throughput”
44
Measuring Bandwidth
• Measure how much work is done
• Measure latency
• Divide
45
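A minimal sketch in C, measuring memory-copy bandwidth (the 256 MB buffer size and memcpy are just stand-ins for whatever work you care about):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>

int main(void) {
    size_t bytes = 256 * 1024 * 1024;          /* the amount of work: 256 MB */
    char *src = malloc(bytes), *dst = malloc(bytes);
    if (!src || !dst) return 1;
    memset(src, 1, bytes);                     /* touch the pages first */

    struct timeval start, end;
    gettimeofday(&start, NULL);
    memcpy(dst, src, bytes);                   /* the work being measured */
    gettimeofday(&end, NULL);

    double seconds = (end.tv_sec - start.tv_sec)
                   + (end.tv_usec - start.tv_usec) / 1e6;
    printf("BW = %.1f MB/s\n", bytes / (1024.0 * 1024.0) / seconds);

    free(src); free(dst);
    return 0;
}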
Latency-BW Trade-offs
• Often, you can increase BW across many tasks at the
cost of increased latency for each individual task.
• Think of waiting in line for one of 4 bank tellers
• If the line is empty, your response time is minimized, but
throughput is low because utilization is low.
• If there is always a line, you wait longer (your latency
goes up), but there is always work available for tellers.
• Much of computer performance is about
scheduling work onto resources
• Network links.
• Memory ports.
• Processors, functional units, etc.
• IO channels.
• Increasing contention for these resources generally
increases throughput but hurts latency.
46
Live Demo
47
Reliability Metrics
• Mean time to failure (MTTF)
• Average time before a system stops working
• Very complicated to calculate for complex systems
• Why would a processor fail?
• Electromigration
• High-energy particle strikes
• cracks due to heat/cooling
• It used to be that processors would last longer
than their useful lifetime. This is becoming less
true.
48
Power/Energy Metrics
• Energy == joules
• You buy electricity in joules.
• Battery capacity is in joules
• To minimize operating costs, minimize energy
• You can also think of this as the amount of work that the
computer must actually do
• Power == joules/sec
• Power is how fast your machine uses joules
• It determines battery life
• It also determines how much cooling you need. Big
systems need 0.3-1 Watt of cooling for every watt of
compute.
49
Power in Processors
• P = a * C * V^2 * f
• a = activity factor (what fraction of the xtrs switch every
cycle)
• C = total capacitance (i.e., how many xtrs there are on
the chip)
• V = supply voltage
• f = clock frequency
• Generally, f is linear in V, so P is roughly proportional to f^3
• Architects can improve
• a -- make the microarchitecture more efficient. Fewer
useless xtr switches
• C -- smaller chips, with fewer xtrs
50
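A rough worked example of the f^3 relationship: if V scales with f, then dropping the frequency by 20% (and the voltage with it) cuts dynamic power to roughly 0.8^3 ≈ 0.51 of its original value, about half the power for a 20% loss in clock speed.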
Metrics in the wild
• Millions of instructions per second (MIPS)
• Floating point operations per second (FLOPS)
• Giga-(integer)operations per second (GOPS)
• Why are these all bandwidth metrics?
• Peak bandwidth is workload independent, so these
metrics describe a hardware capability
• When you see these, they are generally GNTE
(Guaranteed not to exceed) numbers.
51
Benchmarks: Standard Candles for
Performance
• It’s hard to convince manufacturers to run your program
(unless you’re a BIG customer)
• A benchmark is a set of programs that are representative of a
class of problems.
• To increase predictability, collections of benchmark
applications, called benchmark suites, are popular
– “Easy” to set up
– Portable
– Well-understood
– Stand-alone
– Standardized conditions
– These are all things that real software is not.
Classes of benchmarks
• Microbenchmark – measure one feature of system
– e.g. memory accesses or communication speed
• Kernels – most compute-intensive part of applications
– e.g. Linpack and NAS kernel b’marks (for supercomputers)
• Full application:
– SpecInt / SpecFP (int and float) (for Unix workstations)
– Other suites for databases, web servers, graphics,...
More Complex Metrics
• For instance, want low power and low latency
• Power * Latency
• More concerned about Power?
• Power^2 * Latency
• High bandwidth, low cost?
• (MB/s)/$
• In general, put the good things in the numerator,
the bad things in the denominator.
• MIPS^2/W
54
Stationwagon Digression
• IPv6 Internet 2: 272,400 terabit-meters per second
–585GB in 30 minutes over 30,000 Km
–9.08 Gb/s
• Subaru outback wagon
– Max load = 408Kg
– 21Mpg
• MHX2 BT 300 Laptop drive
– 300GB/Drive
– 0.135Kg
• 906TB
• Legal speed: 75MPH (33.3 m/s)
• BW = 8.2 Gb/s
• Latency = 10 days
• 241,535 terabit-meters per second
Prius Digression
• IPv6 Internet 2: 272,400 terabit-meters per second
–585GB in 30 minutes over 30,000 Km
–9.08 Gb/s
• My Toyota Prius
– Max load = 374Kg
– 44Mpg (2x power efficiency)
• MHX2 BT 300
– 300GB/Drive
– 0.135Kg
• 831TB
• Legal speed: 75MPH (33.3 m/s)
• BW = 7.5 Gb/s
• Latency = 10 days
• 221,407 terabit-meters per second (13%
performance hit)