The document presents a design alternative for a parallel computing system. It introduces parallel computing, the use of multiple CPUs to solve a problem concurrently by breaking it into discrete parts, and outlines the proposed hardware resources (PCs and networking equipment) and software tools (Linux and compilers). It then describes implementing NFS file sharing and developing an algorithm that distributes tasks among nodes according to their load. Initial results covered executing applications on a single machine. It concludes that implementing parallelism at all levels allows cost-effective utilization of available resources to develop a mini supercomputer.
1. Design Alternative for Parallel System
[root@aissms ~ ]# mount /dev/Parallex /mnt/presentation
Presented by:
Amit Kumar B32*****7
Ankit Singh B32*****8
Sushant Bhadkamkar B32*****2
GUIDE: Mr. Anil J. Kadam
Department of Computer Engineering,
AISSMS College of Engineering,
Pune - 1
[root@aissms]# cat /mnt/presentation/AUTHORS
2. Overview
[root@aissms ~]# tree /mnt/presentation
Introduction
What is parallel computing?
- Introduction to parallel computing
- Who uses parallel computing?
- Why parallel computing?
Hardware & software resources.
Technical design overview
Implementation briefing
Phase I results
Applications
Advantages
Conclusion
References
3. Introduction
- What is Parallel computing?
Parallel computing is the simultaneous execution of the
same task (split up and specially adapted) on multiple processors in
order to obtain results faster.
In the simplest sense, parallel computing is the simultaneous use of
multiple compute resources to solve a computational problem:
- The problem is run using multiple CPUs
- The problem is broken into discrete parts that can be solved
concurrently
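As a toy illustration (a made-up example, not from the slides) of breaking a problem into discrete parts solved concurrently: summing the integers 1..1000 in four background shell jobs, then merging the partial sums.

```shell
# Toy example: split the sum of 1..1000 into four chunks, run each
# chunk as a concurrent background job, then combine the partial results.
for i in 0 1 2 3; do
  ( seq $((i * 250 + 1)) $(((i + 1) * 250)) \
      | awk '{ s += $1 } END { print s }' > "/tmp/part$i" ) &
done
wait                          # block until all four jobs finish
cat /tmp/part0 /tmp/part1 /tmp/part2 /tmp/part3 \
  | awk '{ s += $1 } END { print s }'        # prints 500500
```

The same split-compute-merge shape carries over to the cluster setting: each chunk runs on a different node instead of a background job.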
[root@aissms ~]# grep Introduction /mnt/parallex/*
4. Introduction
[root@aissms ~]# grep Introduction /mnt/parallex/*
- Amdahl's Law
If the sequential component of an algorithm accounts for 1/s of
the program's execution time, then the maximum possible speedup that
can be achieved on a parallel computer is s.
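The bound above can be written out explicitly; here f denotes the sequential fraction 1/s of the run time and p the number of processors:

```latex
% Amdahl's Law: a serial fraction f of the run time cannot be parallelized,
% so speedup on p processors is bounded, and the bound saturates at 1/f.
\[
  S(p) = \frac{1}{f + \frac{1 - f}{p}},
  \qquad
  \lim_{p \to \infty} S(p) = \frac{1}{f} = s
\]
% Example: f = 1/10 (10% serial) caps the speedup at 10x,
% no matter how many nodes the cluster has.
```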
6. [root@aissms ~]# sed -n '/PARALLEL/p' /mnt/parallex
- Why parallel computing?
The primary reasons for using parallel computing:
-Save time - wall clock time
-Solve larger problems
-Provide concurrency (do multiple things at the same time)
-Taking advantage of non-local resources
-Cost savings
Limits to serial computing :
-Transmission speeds
-Limits to miniaturization
-Economic limitations
7. [root@aissms ~]# cat Hardware | more
Hardware:
i686-class PCs (with intranet connection)
Switch
Serial port connectors
100BASE-T LAN cable, RJ-45 connectors
Software:
Linux (2.6.x kernel)
Intel Compiler suite (Noncommercial)
LSB (Linux Standard Base) set of GNU kits with GNU CC/C++/F77/LD/AS
Hardware and Software Resources
9. Phase I Implementation
[root@aissms ~]# echo
- NFS mounted on all nodes (providing shared storage)
- Status of nodes monitored
- A test application sent to all hosts to determine the current load on
each processor
- Developed a distribution algorithm to split the task according to the
load capacity of each processor, as reported by the test app
- All tasks are received back by the server, which integrates the results
and prints the output on the server terminal
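A minimal sketch of the load-based split described above. The node names, load figures, and file paths are hypothetical; the slides do not show the actual Parallex distribution algorithm.

```shell
# Hypothetical sketch: divide TOTAL work units among nodes in inverse
# proportion to each node's reported load average (data is made up).
TOTAL=100
cat > /tmp/loads.txt <<'EOF'
node1 0.20
node2 0.80
EOF
awk -v total="$TOTAL" '
  { node[NR] = $1; cap[NR] = 1 / (1 + $2); sum += cap[NR] }  # lighter load -> bigger share
  END { for (i = 1; i <= NR; i++)
          printf "%s %.0f\n", node[i], total * cap[i] / sum }
' /tmp/loads.txt > /tmp/shares.txt
cat /tmp/shares.txt
```

With these numbers the lightly loaded node1 receives 60 units and node2 receives 40; in the real system the shares would come from the test application's load reports rather than a static file.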
23. Applications
[root@aissms ~]#
- High processing requirement tasks
- Molecular dynamics
- Astronomical modeling
- Data mining
- Image rendering
- Clustering is now used for mission-critical applications such as web and FTP
servers
- Google uses an ever-growing cluster composed of tens of thousands of
computers
- Scientific computations involving complex numerical calculations
24. Advantages
[root@aissms ~]# ls -lh Advantages*
- Implemented parallelism at every level.
- Parallel system implemented on available hardware.
- Diskless technology:
  - Cost (central storage solution)
  - Error recovery
  - Initialization
- Optimum utilization of available resources.
25. Conclusion
[root@aissms ~]# echo $CONCLUSION
By implementing parallelism at all levels and making efficient use of the
available hardware resources, we attempt to provide a cost-effective solution
for small and medium scale businesses and research institutes.
And we are in the process of developing a mini supercomputer.
26. References
[root@aissms ~]# find / -name "*Parallex*"
[1] David Culler, Parallel Computer Architectures: A Hardware/Software
Approach. Morgan Kaufmann, San Francisco, CA.
[2] Kevin Dowd and Charles Severance, High Performance Computing, 2nd
Edition. O'Reilly and Associates, Sebastopol, CA.
[3] Jack Dongarra et al., Sourcebook of Parallel Computing. Morgan
Kaufmann, San Francisco, CA.
[4] Joseph Sloan, High Performance Linux Clusters. O'Reilly Media Inc.,
Sebastopol, CA.
[5] Alexey L. Lastovetsky, Parallel Computing on Heterogeneous Networks.
[6] Kernel sources from http://www.kernel.org