Computer Architecture

Parallel Processing

Prepared By RAJESH JAIN
Multiple Processor Organization
•   Single instruction, single data stream - SISD
•   Single instruction, multiple data stream - SIMD
•   Multiple instruction, single data stream - MISD
•   Multiple instruction, multiple data stream- MIMD
Single Instruction, Single Data Stream -
SISD
•   Single processor
•   Single instruction stream
•   Data stored in single memory
•   Uni-processor
Single Instruction, Multiple Data Stream -
SIMD
• Single machine instruction
• Controls simultaneous execution
• Number of processing elements
• Lockstep basis
• Each processing element has associated data
  memory
• Each instruction executed on a different set of data by different
  processors (see the sketch below)
• Vector and array processors
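A minimal sketch of the SIMD idea in C, assuming an x86 processor with SSE and the <xmmintrin.h> intrinsics (none of this is on the slide; the intrinsic calls are only an illustration): one instruction, _mm_add_ps, performs four additions in lockstep on four data lanes.

    /* One machine instruction, four data elements, executed in lockstep */
    #include <stdio.h>
    #include <xmmintrin.h>

    int main(void) {
        float a[4] = {1, 2, 3, 4};
        float b[4] = {10, 20, 30, 40};
        float c[4];

        __m128 va = _mm_loadu_ps(a);        /* load four floats */
        __m128 vb = _mm_loadu_ps(b);
        __m128 vc = _mm_add_ps(va, vb);     /* single add instruction, four results */
        _mm_storeu_ps(c, vc);

        for (int i = 0; i < 4; i++)
            printf("%.0f ", c[i]);          /* prints: 11 22 33 44 */
        printf("\n");
        return 0;
    }

Vector and array processors apply the same principle at much larger scale, with many processing elements each holding its own slice of the data.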
Multiple Instruction, Single Data Stream -
MISD
• Sequence of data
• Transmitted to set of processors
• Each processor executes different instruction
  sequence
• Never been implemented
Multiple Instruction, Multiple Data
Stream- MIMD
• Set of processors
• Simultaneously execute different instruction sequences (see the
  sketch below)
• Different sets of data
• SMPs, clusters and NUMA systems
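A minimal MIMD sketch, assuming POSIX threads on a multiprocessor (the worker functions and data are made up for illustration): two threads run different instruction sequences on different data sets at the same time.

    #include <pthread.h>
    #include <stdio.h>

    static void *sum_ints(void *arg) {             /* one instruction stream */
        int *v = arg, s = 0;
        for (int i = 0; i < 4; i++)
            s += v[i];
        printf("sum = %d\n", s);
        return NULL;
    }

    static void *scale_floats(void *arg) {         /* a different instruction stream */
        float *v = arg;
        for (int i = 0; i < 4; i++)
            v[i] *= 2.0f;
        printf("last scaled value = %.1f\n", v[3]);
        return NULL;
    }

    int main(void) {
        int   a[4] = {1, 2, 3, 4};                 /* one data set */
        float b[4] = {1.5f, 2.5f, 3.5f, 4.5f};     /* a different data set */
        pthread_t t1, t2;
        pthread_create(&t1, NULL, sum_ints, a);
        pthread_create(&t2, NULL, scale_floats, b);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }

On an SMP the two threads would typically run on different processors; on a cluster the same idea appears as separate processes on separate nodes.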
Taxonomy of Parallel Processor
Architectures
MIMD - Overview
• General purpose processors
• Each can process all instructions necessary
• Further classified by method of processor
  communication
Tightly Coupled - SMP
• Processors share memory
• Communicate via that shared memory (see the sketch below)
• Symmetric Multiprocessor (SMP)
  —Share single memory or pool
  —Shared bus to access memory
  —Memory access time to given area of memory is
   approximately the same for each processor
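A minimal sketch of communication through shared memory, again assuming POSIX threads (counter and lock are illustrative names, not from the slide): both threads read and write the same location, and the mutex stands in for the arbitration a shared-memory multiprocessor needs.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                       /* the shared memory */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);             /* arbitrate access */
            counter++;                             /* communicate via shared data */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);        /* 200000 */
        return 0;
    }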
Tightly Coupled - NUMA
• Nonuniform memory access
• Access times to different regions of memory
  may differ
Loosely Coupled - Clusters
• Collection of independent uniprocessors or
  SMPs
• Interconnected to form a cluster
• Communication via fixed path or network
  connections
Parallel Organizations - SISD
Parallel Organizations - SIMD
Parallel Organizations - MIMD Shared
Memory
Parallel Organizations - MIMD
Distributed Memory
Symmetric Multiprocessors
• A stand-alone computer with the following
  characteristics
   — Two or more similar processors of comparable capacity
   — Processors share same memory and I/O
   — Processors are connected by a bus or other internal connection
   — Memory access time is approximately the same for each
     processor
   — All processors share access to I/O
      – Either through same channels or different channels giving paths to
        same devices
   — All processors can perform the same functions (hence
     symmetric)
   — System controlled by integrated operating system
      – providing interaction between processors
      – Interaction at job, task, file and data element levels
SMP Advantages
• Performance
   —If some work can be done in parallel (see the note after this slide)
• Availability
   —Since all processors can perform the same functions,
    failure of a single processor does not halt the system
• Incremental growth
   —User can enhance performance by adding additional
    processors
• Scaling
   —Vendors can offer range of products based on
    number of processors
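A hedged aside that is not on the slide: the usual way to quantify the Performance bullet is Amdahl's law. If a fraction p of the work can run in parallel on N processors, the speedup is

    S(N) = \frac{1}{(1 - p) + p/N}

so, for example, p = 0.9 and N = 8 gives a speedup of about 4.7, not 8; the serial fraction limits how much adding processors can help.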
Block Diagram of Tightly Coupled
Multiprocessor
Organization Classification
• Time shared or common bus
• Multiport memory
• Central control unit
Time Shared Bus
• Simplest form
• Structure and interface similar to single
  processor system
• Following features provided
   —Addressing - distinguish modules on bus
   —Arbitration - any module can be temporary master
   —Time sharing - if one module has the bus, others
    must wait and may have to suspend
• Now have multiple processors as well as
  multiple I/O modules
Shared Bus
Time Share Bus - Advantages
• Simplicity
• Flexibility
• Reliability
Time Share Bus - Disadvantages
• Performance limited by bus cycle time
• Each processor should have local cache
  —Reduce number of bus accesses
• Leads to problems with cache coherence
  —Solved in hardware - see later
Multiport Memory
• Direct independent access of memory modules
  by each processor
• Logic required to resolve conflicts
• Little or no modification to processors or
  modules required
Multiport Memory Diagram
Multiport Memory - Advantages and
Disadvantages
• More complex
  —Extra logic in memory system
• Better performance
  —Each processor has dedicated path to each module
• Can configure portions of memory as private to
  one or more processors
  —Increased security
• Write through cache policy
Clusters
•   Alternative to SMP
•   High performance
•   High availability
•   Server applications


•   A group of interconnected whole computers
•   Working together as unified resource
•   Illusion of being one machine
•   Each computer called a node
Cluster Benefits
•   Absolute scalability
•   Incremental scalability
•   High availability
•   Superior price/performance
Cluster Configurations - Standby Server,
No Shared Disk
Cluster Configurations -
Shared Disk
Operating Systems Design Issues
• Failure Management
  —High availability
  —Fault tolerant
  —Failover
     – Switching applications & data from failed system to
       alternative within cluster
  —Failback
     – Restoration of applications and data to original system
     – After problem is fixed
• Load balancing
  —Incremental scalability
  —Automatically include new computers in scheduling
  —Middleware needs to recognise that processes may
   switch between machines
Parallelizing
• Single application executing in parallel on a
  number of machines in cluster
   —Compiler
     – Determines at compile time which parts can be executed in
       parallel
     – Split off for different computers
  —Application
     –   Application written from scratch to be parallel
      –   Message passing to move data between nodes (see the sketch
          after this slide)
     –   Hard to program
     –   Best end result
  —Parametric computing
     – If a problem is repeated execution of algorithm on different
       sets of data
     – e.g. simulation using different scenarios
     – Needs effective tools to organize and run
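A minimal message-passing sketch, assuming an MPI implementation such as Open MPI or MPICH is available on the cluster (the rank numbers and the value sent are illustrative): each rank is a separate process, possibly on a different node, so all data movement between them is explicit.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, value;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* which process am I? */

        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* to rank 1 */
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);                          /* from rank 0 */
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }

Built with mpicc and launched with something like mpirun -np 2 ./a.out, this is the hand-written, hard-to-program but best-result style of parallel application the slide describes.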
Cluster Computer Architecture
Cluster v. SMP
• Both provide multiprocessor support to high
  demand applications.
• Both available commercially
  —SMP for longer
• SMP:
  —Easier to manage and control
  —Closer to single processor systems
     – Scheduling is main difference
     – Less physical space
     – Lower power consumption
• Clustering:
  —Superior incremental & absolute scalability
  —Superior availability
     – Redundancy
