Cache Memory
E048-Shailesh Tanwar
E061-Yash Nair
E017-Shairal Neema
E019-Aashita Nyati
Agenda
 Memory Hierarchy
 What is Cache Memory
 Working of Cache
 Structure of Cache
 Cache Write Policies
 Levels of Cache
 Cache Organization
 Mapping Techniques
 Replacement Algorithms
Memory Hierarchy – Diagram
Moving down the hierarchy: decreasing cost per bit, increasing capacity, increasing access time, and decreasing frequency of access of the memory by the processor.
What is Cache Memory?
 Cache memory is used to achieve higher CPU performance by allowing the CPU to access data at a faster speed.
 It is placed closest to the processor in the computer assembly.
 It is far more expensive per bit than main memory.
 Although it is also a type of memory, its high cost keeps it from being used as primary memory.
An Insight into the Working of Cache
Structure of the Cache Memory
Each cache line has two parts:
 Tag field: contains the address of the actual data fetched from main memory.
 Data field: contains the actual data fetched from main memory.
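As a minimal sketch, a cache line can be modeled as a record holding both fields; the field names and extra bookkeeping bits here are illustrative, not taken from the slides:

    from dataclasses import dataclass

    @dataclass
    class CacheLine:
        tag: int             # high-order address bits identifying the memory block held
        data: bytes          # the actual data fetched from main memory
        valid: bool = False  # becomes True once the line holds a real block
        dirty: bool = False  # used by the write-back policy on the next slide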
Cache Write Policies
When we write, should we write to the cache or to memory?
 Write-through cache – write to both the cache and main memory. Cache and memory are always consistent.
 Write-back cache – write only to the cache and set a "dirty bit". When the block gets replaced from the cache, write it out to memory.
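A toy sketch of the two policies, assuming the CacheLine record above; the Memory class and function names are made up for illustration:

    class Memory:
        """Toy backing store: a dict from block number to data."""
        def __init__(self):
            self.blocks = {}

    def write_through(line, memory, block_no, data):
        # Write-through: update the cache AND main memory on every write,
        # so the two are always consistent.
        line.data = data
        memory.blocks[block_no] = data

    def write_back(line, data):
        # Write-back: update only the cache and mark the line dirty;
        # main memory is brought up to date later, at eviction time.
        line.data = data
        line.dirty = True

    def evict(line, memory, block_no):
        # A dirty line must be flushed to main memory before being reused.
        if line.dirty:
            memory.blocks[block_no] = line.data
            line.dirty = False
        line.valid = False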
Levels of Cache
 L1: fastest
 L2: fast
 L3: less fast
 Main memory: slow
Cache Organization
Typical organization: the cache connects the processor to the system bus through an address buffer on the address lines, a data buffer on the data lines, and control lines on both sides.
Mapping Techniques
 Direct mapping
 Associative mapping
 Set associative mapping
Direct Mapping
 The simplest technique.
 Each block of main memory is mapped into only one possible cache line:
i = j modulo m
where
i = cache line number
j = main memory block number
m = number of lines in the cache
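A one-line check of the mapping rule; m = 4 is an arbitrary example size:

    def direct_map_line(j, m):
        """Direct mapping: main memory block j can only occupy cache line j mod m."""
        return j % m

    # With m = 4 cache lines, blocks 0, 4, 8, ... all compete for line 0:
    for j in (0, 1, 4, 8, 9):
        print(f"block {j} -> line {direct_map_line(j, 4)}")
    # block 0 -> line 0, block 1 -> line 1, block 4 -> line 0,
    # block 8 -> line 0, block 9 -> line 1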
For direct mapping:
 Address length = (s + w) bits
 Number of addressable units = 2^(s+w) words or bytes
 Block size = line size = 2^w words or bytes
 Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
 Number of lines in cache = m = 2^r
 Size of tag = (s – r) bits
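The field widths above imply the following address split; this sketch and the example sizes (w = 2, r = 14 within a 24-bit address) are ours, not the slides':

    def split_direct(address, w, r):
        """Split an (s + w)-bit address into (tag, line, word) for a direct-mapped
        cache: word is the low w bits, line the next r bits, tag the remaining s - r."""
        word = address & ((1 << w) - 1)
        line = (address >> w) & ((1 << r) - 1)
        tag = address >> (w + r)
        return tag, line, word

    # 24-bit address, 4-byte blocks (w = 2), 2^14 lines (r = 14), 8-bit tag:
    print(split_direct(0x123456, w=2, r=14))  # (18, 3349, 2)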
ASSOCIATIVE MAPPING
 It overcomes the disadvantage of direct mapping.
 It permits each main memory block to be loaded into any line of the cache.
For associative mapping:
 Address length = (s + w) bits
 Number of addressable units = 2^(s+w) words or bytes
 Block size = line size = 2^w words or bytes
 Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
 Number of lines in cache = undetermined
 Size of tag = s bits
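Because any block may sit in any line, a lookup must compare the tag against every line; a simple scan stands in here for the parallel comparison done in hardware, and the data structure is illustrative:

    def associative_lookup(cache, tag):
        """Fully associative search: compare the tag against all lines."""
        for line in cache:
            if line is not None and line["tag"] == tag:
                return line["data"]  # hit
        return None                  # miss: the block must be fetched from memory

    cache = [{"tag": 5, "data": "block5"}, None, {"tag": 9, "data": "block9"}]
    print(associative_lookup(cache, 9))  # block9
    print(associative_lookup(cache, 7))  # None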
SET ASSOCIATIVE MAPPING
 The relationships followed here are:
m = v * k
i = j modulo v
where
i = cache set number
j = main memory block number
m = number of lines in the cache
v = number of sets
k = number of lines in each set
 This is called k-way set associative mapping.
For set associative mapping:
 Address length = (s + w) bits
 Number of addressable units = 2^(s+w) words or bytes
 Block size = line size = 2^w words or bytes
 Number of blocks in main memory = 2^s
 Number of lines in set = k
 Number of sets = v = 2^d
 Number of lines in cache = k * v = k * 2^d
 Size of tag = (s – d) bits
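The corresponding address split, with d set-number bits; again a sketch with made-up parameter values:

    def split_set_associative(address, w, d):
        """Split an (s + w)-bit address into (tag, set, word): word is the low w
        bits, the set number the next d bits, the tag the remaining s - d bits."""
        word = address & ((1 << w) - 1)
        set_no = (address >> w) & ((1 << d) - 1)
        tag = address >> (w + d)
        return tag, set_no, word

    def set_of_block(j, v):
        """Block j may go in any of the k lines of set j mod v."""
        return j % v

    # w = 2, d = 13 (v = 8192 sets), 24-bit address:
    print(split_set_associative(0x123456, w=2, d=13))  # (36, 3349, 2)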
Replacement Algorithms
When the cache is full, existing items must be discarded to add new ones.
Replacement Algorithms
 A replacement algorithm decides which cache block to discard to make room for a new one.
 In direct mapping no choice is needed: each block maps to only one cache line, so that line is simply overwritten.
 Associative and set associative mapping need a replacement algorithm to choose among the candidate lines.
Common replacement algorithms:
 Least recently used (LRU)
 First in first out (FIFO)
 Least frequently used (LFU)
 Random
Least Recently Used
 The most effective.
 Keeps track of which block was used when.
 Discards the least recently used block first.
 For a two-way set, a single USE bit (0 or 1) per pair of lines is enough to mark which line was referenced more recently.
Example: reference string 2 3 4 2 1 3 7 with 3 frames (H = page hit, F = page fault):

Step:    1  2  3  4  5  6  7
Ref:     2  3  4  2  1  3  7
Frame 1: 2  2  2  2  2  2  7
Frame 2: -  3  3  3  1  1  1
Frame 3: -  -  4  4  4  3  3
Result:  F  F  F  H  F  F  F

One page hit (the second reference to 2) and six page faults.
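A short simulation that reproduces the table, keeping a list ordered from least to most recently used (the function name is ours):

    def simulate_lru(refs, frames):
        cache, hits = [], 0          # front of the list = least recently used
        for block in refs:
            if block in cache:
                hits += 1
                cache.remove(block)  # re-appended below: now most recently used
            elif len(cache) == frames:
                cache.pop(0)         # evict the least recently used block
            cache.append(block)
        return hits, len(refs) - hits

    print(simulate_lru([2, 3, 4, 2, 1, 3, 7], frames=3))  # (1, 6): 1 hit, 6 faults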
First In First Out
 The simplest algorithm.
 Bad performance.
 The block that entered first is discarded first.
 Replaces the block that has been in the cache the longest.
Example: the same reference string 2 3 4 2 1 3 7 with 3 frames (at step 7, FIFO evicts 3, the oldest resident block, even though it was just referenced at step 6):

Step:    1  2  3  4  5  6  7
Ref:     2  3  4  2  1  3  7
Frame 1: 2  2  2  2  1  1  1
Frame 2: -  3  3  3  3  3  7
Frame 3: -  -  4  4  4  4  4
Result:  F  F  F  H  F  H  F

Two page hits (2 at step 4, 3 at step 6) and five page faults.
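The same harness with FIFO eviction; note that a hit does not change a block's position in the queue:

    from collections import deque

    def simulate_fifo(refs, frames):
        queue, hits = deque(), 0
        for block in refs:
            if block in queue:
                hits += 1            # hit: arrival order is left untouched
            else:
                if len(queue) == frames:
                    queue.popleft()  # evict the block that entered first
                queue.append(block)
        return hits, len(refs) - hits

    print(simulate_fifo([2, 3, 4, 2, 1, 3, 7], frames=3))  # (2, 5): 2 hits, 5 faults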
Least Frequently Used
 Counts how often a block is needed.
 Every block has its own counter, initially set to 0.
 Each time the block is referenced, its counter is incremented.
 Replaces the block with the lowest reference count.
Example: reference string 2 3 4 2 1 3 7 with 3 frames (ties in the counts are broken by evicting the block that has been resident longest):

Step:    1  2  3  4  5  6  7
Ref:     2  3  4  2  1  3  7
Frame 1: 2  2  2  2  2  2  2
Frame 2: -  3  3  3  1  1  7
Frame 3: -  -  4  4  4  3  3
Result:  F  F  F  H  F  F  F

Block 2 ends with a count of 2 (II); every other resident block has a count of 1 (I). One page hit and six page faults.
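An LFU sketch matching the table above; the oldest-first tie-break is our assumption, since the slide does not state one:

    def simulate_lfu(refs, frames):
        cache, count, arrival, hits = set(), {}, {}, 0
        for t, block in enumerate(refs):
            if block in cache:
                hits += 1
            else:
                if len(cache) == frames:
                    # Evict the lowest-count block; break ties by earliest arrival.
                    victim = min(cache, key=lambda b: (count[b], arrival[b]))
                    cache.remove(victim)
                cache.add(block)
                count[block] = 0     # a re-fetched block starts counting afresh
                arrival[block] = t
            count[block] += 1
        return hits, len(refs) - hits

    print(simulate_lfu([2, 3, 4, 2, 1, 3, 7], frames=3))  # (1, 6): 1 hit, 6 faults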
Random
 Randomly selects a block.
 Discards it to make space.
 Does not keep track of access history.
 This eliminates the overhead cost of tracking page references.
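For completeness, random replacement in the same harness; the fixed seed only makes the demo repeatable:

    import random

    def simulate_random(refs, frames, seed=0):
        rng = random.Random(seed)    # fixed seed: repeatable demo runs
        cache, hits = set(), 0
        for block in refs:
            if block in cache:
                hits += 1
            else:
                if len(cache) == frames:
                    cache.remove(rng.choice(sorted(cache)))  # evict an arbitrary block
                cache.add(block)
        return hits, len(refs) - hits

    print(simulate_random([2, 3, 4, 2, 1, 3, 7], frames=3))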
Thank You
