Cluster computing
A cluster is a type of parallel or distributed computer system, which consists of a collection of inter-connected stand-alone computers working together as a single integrated computing resource.

1. Raja’ Masa’deh
2. • How to Run Applications Faster?
• What is a Cluster?
• Motivation for Using Clusters.
• Key Benefits of Clusters.
• Major Issues in Cluster Design.
• Cluster Architecture.
• Cluster Components.
• Types of Clusters.
• Cluster Classification.
• Advantages & Disadvantages of Cluster Computing.
3. • There are three ways to improve performance:
◦ Work harder
◦ Work smarter
◦ Get help
• To speed up computation in a computer system:
◦ Use faster hardware, e.g. reduce the time per instruction.
◦ Use optimized algorithms and techniques.
◦ Use parallel processing: multiple processors or multiple computers cooperate to solve the problem (a minimal threaded sketch follows this slide).
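To make the parallel-processing bullet concrete, here is a minimal sketch, assuming a single multiprocessor node and POSIX threads (one of the environments listed later on slide 16); the array size, thread count, and input values are invented for illustration, not taken from the slides.

/* Minimal sketch (an assumption, not from the slides): split a sum across
 * POSIX threads on one multiprocessor node.
 * Build with: cc <file>.c -lpthread                                        */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
#define NTHREADS 4

static double data[N];

struct part { int lo, hi; double sum; };

/* Each thread sums its own slice of the array. */
static void *partial_sum(void *arg)
{
    struct part *p = arg;
    p->sum = 0.0;
    for (int i = p->lo; i < p->hi; i++)
        p->sum += data[i];
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct part parts[NTHREADS];

    for (int i = 0; i < N; i++)
        data[i] = 1.0;                               /* trivial input */

    int chunk = N / NTHREADS;
    for (int t = 0; t < NTHREADS; t++) {
        parts[t].lo = t * chunk;
        parts[t].hi = (t == NTHREADS - 1) ? N : (t + 1) * chunk;
        pthread_create(&tid[t], NULL, partial_sum, &parts[t]);
    }

    double total = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);                  /* wait, then combine */
        total += parts[t].sum;
    }
    printf("total = %f\n", total);
    return 0;
}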
4. • A cluster is a type of parallel or distributed computer system which consists of a collection of inter-connected stand-alone computers working together as a single integrated computing resource.
• The adoption of cluster platforms was driven by a number of academic projects, such as Beowulf, Berkeley NOW (Network Of Workstations), and HPVM (High Performance Virtual Machine), which proved the advantages of clusters over other traditional platforms.
5. [Photos: a Beowulf cluster and the NOW-1 cluster]
6. • Many science and engineering problems today require large amounts of computational resources and cannot be executed on a single machine.
• High cost of high-performance computers.
• Large numbers of under-utilized machines with wasted computational power.
• Communication bandwidth between computers is increasing.
7. • High performance: the reason for the growth in the use of clusters is that they have significantly reduced the cost of processing power.
• Scalability: a cluster uses the combined processing power of its compute nodes to run cluster-enabled applications.
• System availability: clusters offer inherently high system availability due to the redundancy of hardware, operating systems, and applications.
8. • Scalable performance: this refers to the fact that scaling the resources (cluster nodes, memory capacity, I/O bandwidth, etc.) leads to a proportional increase in performance. Of course, both scaling up and scaling down are needed, depending on application demand or cost-effectiveness considerations.
• Availability support: clusters can provide cost-effective high availability with extensive redundancy in processors, memories, disks, I/O devices, networks, operating system images, etc.
9. • Cluster job management: clusters aim for high system utilization from traditional workstation or PC nodes that are normally not highly utilized. Job management software is needed to provide batching, load balancing, parallel processing, and other functionality.
• Fault tolerance and recovery: a cluster of machines can be designed to eliminate all single points of failure. Through redundancy, the cluster can tolerate faulty conditions up to a certain extent.
10. • The key components of a cluster include:
• Multiple standalone computers (PCs, workstations, or SMPs).
• Operating systems.
• High-performance interconnections.
• Middleware.
• Parallel programming environments.
• Applications.
11. • Clusters are built using commercial off-the-shelf (COTS) hardware components.
• Such as: personal computers (PCs), workstations, and symmetric multiprocessors (SMPs).
• These technologies are a cost-effective solution for parallel computing because of their availability and low cost.
12. • A cluster operating system is desired to have the following features:
1. Manageability: ability to manage and administer local and remote resources.
2. Stability: robustness against system failures, with system recovery.
3. Performance: all types of operations should be optimized and efficient.
4. Extensibility: easy integration of cluster-specific extensions.
13. 5. Scalability: able to scale without impact on performance.
6. Support: user and system administrator support is essential.
7. Heterogeneity: portability across multiple architectures to support a cluster consisting of heterogeneous hardware components.
• Popular operating systems used on cluster nodes:
• Linux
• Microsoft Windows NT
• Sun Solaris
14. • Clusters need to incorporate fast interconnection technologies in order to support high-bandwidth and low-latency inter-processor communication between cluster nodes. Examples of network technologies:
• Fast Ethernet (100 Mbps)
• Gigabit Ethernet (1 Gbps)
• SCI (Dolphin - MPI - 12 usec latency)
• ATM
• Myrinet (1.2 Gbps)
15. • Middleware resides between the OS and applications and offers an infrastructure for supporting:
◦ Single System Image (SSI)
◦ System Availability (SA)
• SSI makes a collection of computers appear as a single machine (a globalized view of system resources).
• SA supports checkpointing and process migration.
16. • Threads (PCs, SMPs, NOW, ...)
◦ POSIX Threads
◦ Java Threads
• MPI (see the sketch after this slide)
◦ available on Linux, NT, and many supercomputers
• PVM
• Software DSMs
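A minimal MPI sketch, included only to illustrate the message-passing environment named on this slide; the per-rank value and the sum gathered at rank 0 are invented for illustration. It would typically be built with mpicc and launched across the cluster nodes with mpirun.

/* Minimal MPI sketch (illustrative, not from the slides): every rank reports
 * itself, then rank 0 gathers a sum with MPI_Reduce.                        */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);            /* this process's id   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);            /* total process count */

    printf("Hello from rank %d of %d\n", rank, size);

    int local = rank + 1;                            /* arbitrary local value */
    int total = 0;
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %d\n", size, total);

    MPI_Finalize();
    return 0;
}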
17. • Sequential
• Parallel/distributed (cluster-aware applications)
Grand challenge applications:
Weather forecasting
Quantum chemistry
Molecular biology modeling
Engineering analysis (CAD/CAM)
Web servers, data mining
18. 1. High-availability (HA) clusters.
2. Network load-balancing clusters.
3. High-performance clusters.
19. • High-availability clusters (also known as failover clusters) are implemented for the purpose of improving the availability of the services that the cluster provides.
• They provide redundant nodes that can act as backup systems in the event of a failure (a failover sketch follows this slide).
• They support mission-critical applications.
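As a rough illustration of the failover idea (an assumption, not something the slides describe), the sketch below simulates a backup node promoting itself once heartbeats from the primary stop; the tick loop, crash point, and timeout value are invented for the example.

/* Illustrative heartbeat/failover sketch (an assumption, not from the slides):
 * a backup node takes over when the primary stops sending heartbeats.        */
#include <stdio.h>
#include <stdbool.h>

#define TIMEOUT 3   /* missed ticks tolerated before failover */

int main(void)
{
    bool primary_alive = true;
    int missed = 0;

    for (int tick = 0; tick < 10; tick++) {
        if (tick == 4)
            primary_alive = false;          /* simulate a primary crash */

        if (primary_alive) {
            missed = 0;                     /* heartbeat received */
            printf("tick %d: primary serving requests\n", tick);
        } else if (++missed >= TIMEOUT) {
            printf("tick %d: no heartbeat for %d ticks -> backup takes over\n",
                   tick, missed);
            break;
        } else {
            printf("tick %d: heartbeat missed (%d/%d)\n", tick, missed, TIMEOUT);
        }
    }
    return 0;
}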
20. • Load balancing is a computer-networking methodology for distributing workload across multiple computers.
• Load-balancing clusters operate by routing all of the workload through one or more load-balancing front-end nodes, which then distribute it efficiently among the remaining active back-end nodes (a round-robin sketch follows this slide).
• Example: web servers, where all available servers process requests.
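A minimal sketch of the front-end's dispatch decision, assuming a simple round-robin policy (the slide does not specify how the workload is distributed); the back-end count and number of requests are arbitrary.

/* Illustrative round-robin dispatch (an assumption, not from the slides):
 * a front-end node cycling incoming requests over the active back ends.   */
#include <stdio.h>

#define NBACKENDS 3

/* Pick the next back-end node for a request in round-robin order. */
static int next_backend(void)
{
    static int cursor = 0;
    int chosen = cursor;
    cursor = (cursor + 1) % NBACKENDS;
    return chosen;
}

int main(void)
{
    for (int request = 0; request < 7; request++)
        printf("request %d -> back-end node %d\n", request, next_backend());
    return 0;
}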
21. • HPC clusters are used to solve advanced computation problems.
• They are designed to take advantage of the parallel processing power of multiple nodes.
• They are commonly used to perform functions that require nodes to communicate as they perform their tasks, i.e. when calculation results from one node will affect future results from another (see the exchange sketch after this slide).
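To illustrate nodes whose next results depend on another node's output, here is a minimal MPI sketch (an assumption, not from the slides): each rank passes its local result around a ring and folds the value received from its neighbour into its next step.

/* Illustrative nearest-neighbour exchange (an assumption, not from the
 * slides): each rank sends its result to the next rank in a ring, and its
 * own next step depends on the value received from the previous rank.     */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;            /* neighbours in a ring */
    int left  = (rank - 1 + size) % size;

    double my_result = (double)rank;          /* stand-in for a local calculation */
    double from_left = 0.0;

    /* Pass my result to the right while receiving from the left. */
    MPI_Sendrecv(&my_result, 1, MPI_DOUBLE, right, 0,
                 &from_left, 1, MPI_DOUBLE, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* The next step uses the neighbour's result. */
    double next = 0.5 * (my_result + from_left);
    printf("rank %d: next value = %f\n", rank, next);

    MPI_Finalize();
    return 0;
}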
22. Cluster classification attributes:
Packaging: Compact / Slack
Control: Centralized / Decentralized
Homogeneity: Homogeneous / Heterogeneous
Security: Enclosed / Exposed
23. • The cluster nodes can be compactly or slackly packaged.
• In a compact cluster, the nodes are closely packaged in one or more racks sitting in a room, and the nodes are not attached to peripherals (monitors, keyboards, mice, etc.).
• In a slack cluster, the nodes are attached to their usual peripherals (i.e. they are complete SMPs, workstations, and PCs), and they may be located in different rooms, different buildings, or even spread over a wide area in remote regions.
24. • Centralized cluster: all the nodes are owned, controlled, and managed by a central administrator.
• Decentralized cluster: the nodes have individual owners, which makes the system administration of such a cluster very difficult. It also requires special techniques for process scheduling, workload migration, checkpointing, etc.
25. • A homogeneous cluster means that the nodes adopt the same platform (the same processor architecture and the same OS).
• A heterogeneous cluster uses nodes of different platforms.
• In a homogeneous cluster, a process can migrate to another node and continue execution.
• This is not feasible in a heterogeneous cluster, because the platforms differ and the binary code will not be executable on another node.
26. • Intracluster communication can be either exposed or enclosed.
• In an exposed cluster, the communication paths among the nodes are exposed to the outside world.
• An outside machine can access the communication paths, and thus individual nodes.
27. • Such exposed clusters are easy to implement, but they have several disadvantages:
◦ Exposed intracluster communication is not secure unless the communication subsystem performs additional work to ensure privacy and security.
◦ Outside communications may disrupt intracluster communications in an unpredictable manner.
28. • In an enclosed cluster, intracluster communication is shielded from the outside world (more secure).
• A disadvantage is that there is currently no standard for efficient, enclosed intracluster communication.
29. • Manageability: in a cluster, large numbers of components are combined to work as a single entity, so management becomes easy.
• Single System Image: this illusion means the user does not have to worry about the individual cluster components; they only need to manage a single system image.
• High Availability: if one component fails, another component can take its place, and the user can continue to work with the system.
30. • Programmability issues: when the components differ from each other in terms of software, there may be issues when combining all of them into a single entity.
• Problems in finding faults: it is difficult to locate a fault and determine which component has the problem.
• Difficult for a non-specialist to handle: cluster computing involves merging different (or identical) components together, so a non-professional may find it difficult to manage.
31. • We have discussed the motivation for cluster computing as well as the technologies available for building cluster systems using commodity-based hardware and software components to achieve high performance, availability, and scalability.
• Cluster computing is a more cost-effective platform compared to traditional high-performance platforms.
32. • Kiranjot Kaur, Anjandeep Kaur Rai, "A Comparative Analysis: Grid, Cluster and Cloud Computing," International Journal of Advanced Research in Computer and Communication Engineering, Vol. 3, Issue 3, March 2014.
• Kai Hwang, Geoffrey Fox, and Jack Dongarra, "Distributed Computing: Cluster, Grids and Clouds," May 2, 2010.
• Domenico Laforenza et al., "Grid and Cluster Computing: Models, Middleware and Architectures," Springer-Verlag Berlin Heidelberg, 2006.
• R. Buyya, "High Performance Cluster Computing: Architectures and Systems," Vol. 1, Prentice Hall, 1999.
