SEMINAR ON
PARALLEL COMPUTING
   Niranjana Ambadi
      B090404EC
What does “parallel” mean?

ACCORDING TO WEBSTER, PARALLEL IS
“AN ARRANGEMENT OR STATE THAT
PERMITS SEVERAL OPERATIONS OR
TASKS TO BE PERFORMED
SIMULTANEOUSLY RATHER THAN
CONSECUTIVELY.”
What is a parallel computer?

“A LARGE COLLECTION OF
PROCESSING ELEMENTS THAT CAN
COMMUNICATE AND COOPERATE
TO SOLVE LARGE PROBLEMS FAST.”
PARALLELISM
• Parallel computing is a form of computation in which
  many calculations are carried out simultaneously.
• Parallel computers can be roughly classified according
  to the level at which the hardware supports parallelism.
• Multi-core and multi-processor computers have multiple
  processing elements within a single machine; clusters and
  grids use multiple computers to work on the same task.
  Specialized parallel architectures (e.g. GPUs) are used
  alongside traditional processors to accelerate specific
  tasks.
Flynn’s taxonomy

• SISD (Single Instruction, Single Data)
• SIMD (Single Instruction, Multiple Data) - available on
  CPUs; a single operation is applied to multiple data
  items at once (see the sketch below).
• MISD (Multiple Instruction, Single Data)
• MIMD (Multiple Instruction, Multiple Data) - e.g. several
  cores on a single die.
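A minimal sketch of the SIMD idea in Python, using NumPy's vectorised operations as a stand-in for hardware SIMD instructions; the function names and data are illustrative only:

```python
import numpy as np

# Scalar (SISD-style) loop: one instruction handles one data element per iteration.
def scale_scalar(values, factor):
    out = []
    for v in values:
        out.append(v * factor)
    return out

# SIMD-style: one vectorised operation is applied to many data elements at once.
def scale_vectorised(values, factor):
    return np.asarray(values, dtype=float) * factor

data = list(range(8))
assert scale_scalar(data, 2.0) == list(scale_vectorised(data, 2.0))
print(scale_vectorised(data, 2.0))
```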
Parallelism - How?
• Task parallelism
• Data parallelism (see the sketch below)
• Recent CPUs use several parallelisation techniques:
  branch prediction, out-of-order execution, superscalar
  execution.
• These techniques increase complexity, limiting the number
  of cores that fit on a single chip.
• In a GPU each processing unit is simple, but a large
  number of them fit on a single chip.
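A minimal sketch of the two styles using Python's multiprocessing module; the worker functions and the file name are hypothetical placeholders:

```python
from multiprocessing import Pool, Process

def square(x):          # same operation on different data -> data parallelism
    return x * x

def compress(path):     # one task (hypothetical)
    print("compressing", path)

def index(path):        # a different task (hypothetical)
    print("indexing", path)

if __name__ == "__main__":
    # Data parallelism: one function mapped over many data items.
    with Pool(processes=4) as pool:
        print(pool.map(square, range(10)))

    # Task parallelism: different functions run concurrently.
    tasks = [Process(target=compress, args=("data.bin",)),
             Process(target=index, args=("data.bin",))]
    for t in tasks: t.start()
    for t in tasks: t.join()
```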
Parallel Architectures
Three popular ones:
1. Shared memory (uniform memory access and
   symmetric multiprocessing),
2. Distributed memory (clusters and networks of
   workstations), and
3. Distributed shared memory (non-uniform memory
   access).
A minimal contrast of shared memory and message passing is
sketched below.
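A minimal sketch, in Python, contrasting the shared-memory style (one address space, direct access plus locking) with the distributed-memory style (explicit message passing). This is an analogy at the process level, not an implementation of any specific architecture:

```python
from multiprocessing import Process, Value, Pipe

# Shared memory: all workers update the same counter directly.
def bump(counter):
    with counter.get_lock():
        counter.value += 1

# Distributed memory: no shared address space, so data moves via messages.
def worker(conn):
    x = conn.recv()          # receive a message ...
    conn.send(x + 1)         # ... and send the result back

if __name__ == "__main__":
    counter = Value("i", 0)
    procs = [Process(target=bump, args=(counter,)) for _ in range(4)]
    for p in procs: p.start()
    for p in procs: p.join()
    print("shared-memory counter:", counter.value)      # 4

    parent, child = Pipe()
    p = Process(target=worker, args=(child,))
    p.start()
    parent.send(41)
    print("message-passing result:", parent.recv())     # 42
    p.join()
```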
Difference With Distributed Computing

In parallel computing, different processors/computers work on a
single common goal.

E.g. ten men pulling one rope to lift up one rock.
Supercomputers implement parallel computing.

Distributed computing is where several different computers work
separately on a multi-faceted computing workload.

E.g. ten men pulling ten ropes to lift ten different rocks;
employees in an office each doing their own work.
Difference With Cluster Computing

A computer cluster is a group of linked computers, working together
closely so that in many respects they form a single computer.


E.g., in an office of 50 employees, a group of 15 does one piece of
work, 25 another, and the remaining 10 something else.

Similarly, in a network of 20 computers, 16 work on one common
goal, whereas 4 work on some other common goal.

Cluster Computing is a specific case of parallel computing.
Difference With Grid Computing

Grid Computing makes use of computers communicating over the
Internet to work on a given problem.


E.g. three people, one in the USA, one in Japan, and one in Norway,
working together online on a common project.


Websites like Wikipedia, Yahoo! Answers, YouTube, and Flickr, or an
open-source OS like Linux, are examples of grid computing.


Again, this is a form of parallel computing.
Cluster Computing
• A loosely coupled network of nodes (computers) connected
  via a high-speed LAN.
• Orchestrated by "clustering middleware".
• Relies on a centralized management approach that makes
  the nodes available as orchestrated shared servers.
GPU - Graphics Processing Unit
• The dominant massively parallel architecture available to
  the masses.
• Simple yet energy-efficient computational cores.
• Thousands of simultaneously active fine-grained threads.
Where are GPUs used?
Designed for a particular class of applications
 with the following characteristics:

Computational requirements are large.
Parallelism is substantial.
Throughput is more important than latency.
Fixed function GPUs
• The hardware in any given stage could exploit
  data parallelism within that stage, processing
  multiple elements at the same time.
• Each stage’s hardware is customized for its given task.
• The result is a lengthy, feed-forward GPU pipeline with many
  stages, each typically accelerated by special-purpose
  parallel hardware.
• Advantage: high throughput.
• Disadvantage: load balancing across stages (see the sketch
  below).
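A back-of-the-envelope sketch of the load-balancing problem: a feed-forward pipeline runs no faster than its slowest stage, so the special-purpose hardware in the other stages sits partly idle. The stage names and throughput numbers below are invented for illustration:

```python
# Hypothetical per-stage throughputs (millions of elements/s) of a fixed-function pipeline.
stage_rate = {"vertex": 400.0, "rasterize": 900.0, "fragment": 150.0, "blend": 600.0}

# A feed-forward pipeline is limited by its slowest stage ...
pipeline_rate = min(stage_rate.values())

# ... so the hardware in every faster stage is partly idle for this workload.
for stage, rate in stage_rate.items():
    print(f"{stage:10s} utilisation = {100 * pipeline_rate / rate:5.1f} %")

print("pipeline throughput =", pipeline_rate, "M elements/s")
```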
GPU evolution
Six years ago:
• A fixed-function processor built around the graphics
  pipeline.
• Best described as additions of programmability to the
  fixed-function pipeline.
Today:
• A full-fledged, programmable parallel processor.
• Both the application programming interfaces (APIs) and the
  hardware increasingly focus on the programmable aspects of
  the GPU: vertex programs and fragment programs.
Remote Sensing Processing

• On-the-fly processing: the image is processed part by part.
• Most algorithms do not consider the neighborhood of each
  pixel (see the sketch below).
• The development of languages like CUDA and OpenCL has drawn
  programmers to heterogeneous processing platforms.
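Because each pixel is processed independently of its neighbourhood, an image can be cut into parts and processed in parallel. A minimal CPU-side sketch with Python's multiprocessing; the per-pixel rescaling function, image size, and chunk count are illustrative assumptions, not an algorithm taken from the slides:

```python
from multiprocessing import Pool
import numpy as np

def process_chunk(chunk):
    # Purely per-pixel operation (no neighbourhood): e.g. a radiometric rescale.
    return np.clip(chunk * 0.0001, 0.0, 1.0)

if __name__ == "__main__":
    image = np.random.randint(0, 10000, size=(2048, 2048)).astype(np.float32)
    parts = np.array_split(image, 8)              # "part by part" processing
    with Pool(processes=8) as pool:
        result = np.vstack(pool.map(process_chunk, parts))
    print(result.shape, result.min(), result.max())
```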
Challenges for parallel-computing chips
1. Power-supply voltage scaling is diminishing.
2. Memory bandwidth improvement is slowing down.
3. Programmability:
  – Memory model
  – Degree of parallelism
  – Heterogeneity
4. Research is still going strong in parallel computing.
Cluster memory
• Increased CPU utilisation requires limiting the number of
  parallel processes.
• However, as the problem size increases, page faults occur.
Cluster Memory
Effective memory usage is limited by two factors:

Memory fragmentation
• Total memory is distributed into discrete chunks.
• Uneven and inefficient utilisation.

Paging overhead
• Disk paging in heavily loaded nodes has a high cost.
• Hard disks are far slower than RAM.
NETWORK RAM
• Applications can allocate more memory than is locally
  available.
• The idle memory of other machines is used over a fast
  interconnecting network.
• No page faults.

Memory hierarchy: RAM, then Network RAM, then Disk (a toy
allocation sketch follows).
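A toy sketch of the allocation order this hierarchy implies: satisfy a request from local RAM first, then from idle remote RAM, and only then spill to disk. The function and the page counts are hypothetical:

```python
def place_pages(n_pages, local_free, remote_free):
    """Decide where n_pages go: local RAM, then network RAM, then disk."""
    local = min(n_pages, local_free)
    remote = min(n_pages - local, remote_free)
    disk = n_pages - local - remote
    return {"local_ram": local, "network_ram": remote, "disk": disk}

# e.g. a job needing 12,000 pages on a node with 8,000 free local pages
# and 3,000 idle pages available on other nodes:
print(place_pages(12_000, local_free=8_000, remote_free=3_000))
# {'local_ram': 8000, 'network_ram': 3000, 'disk': 1000}
```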
Disadvantages of existing NRAM
• A parallel job divides into processes that need to be
  synchronised regularly.
• Nodes request NRAM independently, so uneven amounts may be
  granted and processes run at different speeds.
• The whole job is limited by the speed of the slowest
  process (see the sketch below).
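The slowest-process effect in a few lines: with a barrier at each synchronisation point, every step lasts as long as the slowest process, so an uneven NRAM grant wastes time on all the other nodes. The per-step times below are invented:

```python
# Hypothetical per-step compute times (s) when NRAM grants are uneven.
step_times = {"p0": 1.0, "p1": 1.1, "p2": 1.0, "p3": 2.4}   # p3 got little NRAM and pages to disk

# With a barrier after every step, each step lasts as long as the slowest process.
per_step = max(step_times.values())
print("time per synchronised step:", per_step, "s")
print("idle fraction of p0:", 1 - step_times["p0"] / per_step)  # about 0.58
```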
[Figure] Diagram of Parallel Network-RAM: Application 2 is assigned
to nodes P3, P4, and P5, but utilizes the available memory space on
other nodes, such as P2, P6, and P7.
Generic Description
• All nodes host PNR servants; a servant acts as both client
  and server.
• Managers (a subset of the servants) coordinate client
  requests.
• If a server has more unallocated memory than a threshold, it
  grants the NRAM request and allocates memory to the manager.
• Read and write requests go directly from the clients to the
  servers.
Generic Description

1. The client attempts to allocate and de-allocate NRAM on
   behalf of its hosting node.
2. Once memory is allocated, the client is informed which
   nodes are the servers and how much memory was allocated.
3. The client then sends pages to the servers for storage and
   later retrieval (see the message-flow sketch below).
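A schematic sketch of this client/manager/server exchange; the function names, the threshold rule, and the data structures are illustrative assumptions, not the paper's actual protocol:

```python
def manager_grant(request_pages, server_free_pages, threshold):
    """Manager side: grant space only on servers above the free-memory threshold."""
    grant, remaining = {}, request_pages
    for server, free in server_free_pages.items():
        if remaining == 0:
            break
        if free > threshold:
            take = min(free - threshold, remaining)
            grant[server] = take
            remaining -= take
    return grant            # server -> number of pages granted

def client_store(grant, pages):
    """Client side: once told the servers and amounts, ship pages out for storage."""
    placement, it = {}, iter(pages)
    for server, amount in grant.items():
        placement[server] = [next(it) for _ in range(amount)]
    return placement        # later reads go directly to these servers

grant = manager_grant(5, {"P2": 4, "P6": 3, "P7": 6}, threshold=2)
print(grant)                                  # e.g. {'P2': 2, 'P6': 1, 'P7': 2}
print(client_store(grant, list(range(5))))
```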
Network RAM Designs
Centralised (CEN) Strategy


Client (CLI) Strategy


Local Managers (MAN) Strategy


Backbone (BB) Strategy
CEN Strategy
• Only one manager coordinates all client requests.
• All servants know the manager.
• Advantage: no broadcast of memory load information is
  needed.
• Disadvantage: the network connection to the manager node
  becomes a bottleneck.
CLI Strategy
• Each client is its own manager and sends allocation requests
  directly to servers.
• Advantage: no synchronisation overhead; NRAM is allocated
  quickly.
• Disadvantage: some clients may receive large amounts of NRAM
  while others receive little, worsening the overall
  performance.
MAN Strategy
• When a job starts or stops, one client volunteers as the
  manager.
• Every servant must agree on the selected manager node.
• Drawback: broadcasting memory load information causes
  congestion.
BB Strategy
• A subset of the servants act as managers.
• All clients associated with a job must agree on which
  manager to contact.
• More scalable than the centralized solution, since the load
  is shared among many servants and fewer messages are used
  for synchronization.
Models
• Each node: 33 MHz CPU, 32 MB local RAM, hard disk with 9 ms
  seek time and 50 MB/s transfer rate.
• 100 Mbps Ethernet in a star topology.
• Each link has 50 ns latency; the central switch has an 80
  microsecond processing delay.
• No collisions.
• System tasks are handled by separate dedicated processors.
• One centralised scheduler for the system.
• A cache hit ratio of 50% and a memory access every 4 clock
  cycles are assumed.
(A rough paging-cost estimate based on these figures is
sketched below.)
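Using the model parameters above, a rough estimate of why a remote-RAM page fetch beats a disk page fetch; the 4 KB page size (and ignoring the remote node's processing time) are assumptions not stated on the slide:

```python
PAGE = 4 * 1024                      # assumed page size (bytes); not given on the slide

# Disk paging: 9 ms seek + transfer at 50 MB/s.
disk_time = 9e-3 + PAGE / 50e6

# Network RAM: two 50 ns link hops + 80 us switch delay + transfer at 100 Mbit/s.
net_time = 2 * 50e-9 + 80e-6 + (PAGE * 8) / 100e6

print(f"disk page fetch    = {disk_time * 1e3:.2f} ms")   # about 9.08 ms
print(f"network page fetch = {net_time * 1e3:.2f} ms")    # about 0.41 ms
print(f"speedup            = {disk_time / net_time:.0f}x")
```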
Metrics
• To directly compare DP ("disk paging", a system without PNR)
  to the various PNR designs, we create another metric based on
  average response time (R): the optimization ratio, which is
  defined as follows.
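A plausible form of the optimization ratio, consistent with values quoted later as percentages approaching 100% (this exact expression is an assumption, not reproduced from the slide):

$$ \mathrm{OR} = \frac{R_{\mathrm{DP}} - R_{\mathrm{PNR}}}{R_{\mathrm{DP}}} \times 100\% $$

where $R_{\mathrm{DP}}$ and $R_{\mathrm{PNR}}$ are the average response times under disk paging and under the given PNR design, respectively.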
Experimental setup
• We evaluate the performance of PNR under the following
  situations:
1. Varying memory loads
2. Varying network speeds
3. Different network topologies
4. Different scheduling strategies
Varying memory load
• Vary the RAM at each node
• Memory demands of the jobs held constant

Varying network performance
• Vary the link bandwidth and switch processing delay

Schedulers
• Gang scheduler
• Space-sharing scheduler

Paging methods
• Base method: disk paging
• Four PNR methods

Topologies
• Bus
• Star
• Fully connected network
Results-1
• As the memory load increases, both PNR and DP tend toward
  infinite response times.
• As the memory load decreases, the response time converges to
  a constant value.
• Adding PNR to systems loaded within certain bounds (and with
  adequate communication links) leads to a performance benefit.
Results-2
• PNR is very sensitive to network performance.
• PNR response time tends to infinity as the network service
  time is increased, and converges to a constant value when
  the service time is decreased.
• DP does not follow this model.
• PNR should not be considered on networks with low bandwidth
  or communication bottlenecks.
Results-3
• In a space-sharing system, only one process is allowed on a
  node at a time.
• In the low-load case, CLI is the best choice.
• Under heavy load, coordination of NRAM allocation is a
  limiting factor.
• In gang scheduling, network performance is crucial.
• Lighter load: OR around 12%; heavier load: OR > 90%.
Future work
• For some experiments, PNR memory usage was even more
  non-uniform than DP’s.
• More work is needed to ensure that PNR itself does not
  create more overloaded nodes.
• The coordination of memory-resource allocation and the
  resulting communication overhead need further attention.
CONCLUSION
• Using a coordinating PNR method under heavier loads
  is essential for good performance.
• Coordinating PNR methods offer the best performance
  enhancement under moderate load.
• Performance gains can be as high as 100 percent.
• CLI can provide acceptable or superior results under
  light load only.
• All PNR methods offer little benefit under very heavy
  or very light loads.
• Good network performance is crucial for good PNR
  performance.