The document discusses the limits of information and communication technologies (ICT) such as computing power, data storage, and network bandwidth. It proposes that future networks will need to scale in both size and functionality through approaches such as federating multiple networks. Cloud computing is presented as a potential way to tackle these limits by providing on-demand access to shared computing resources over a network in a scalable and elastic manner. However, cloud computing is still surrounded by a great deal of marketing hype, and open questions remain about its impact and how it can integrate with existing technologies.
Cloud Computing (雲端運算) – 林誠謙 (Simon C. Lin), Academia Sinica Grid Project Leader
3. Water, water, everywhere, nor any drop to drink – S. T. Coleridge, 1797
5. The Internet Hourglass
Application layer: Voice, Video, P2P, Email, YouTube, …
Protocols: TCP, UDP, SCTP, ICMP, …
IP – the narrow waist: Everything on IP, IP on Everything; a homogeneous networking abstraction (IPv6, IPvX)
Link layer: Ethernet, Wi-Fi (802.11), ATM, SONET/SDH, Frame Relay, modem, ADSL, Cable, Bluetooth, …
Changing/updating the Internet core is difficult or impossible! (e.g. Multicast, Mobile IP, QoS, …) Disruptive approaches need a disruptive architecture.
27. Volunteer Computing – SETI@home: 1,000,000 CPUs, the "poor man's Grid"
BOINC (Berkeley Open Infrastructure for Network Computing) platform launched in 2003: a general-purpose open-source platform for the client-server model of distributed computing. >50 volunteer computing projects today, in a wide range of sciences; not all use BOINC.
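The client-server model that BOINC generalises can be sketched in a few lines. This is an illustrative toy, not BOINC's actual API: the function names and the placeholder workload are assumptions made for the example.

```python
# Minimal sketch of volunteer computing's client-server model (illustrative;
# not the real BOINC API): a server splits a job into independent work units,
# volunteer clients compute them, and the server aggregates the results.

def make_work_units(data, chunk_size):
    # Server side: split a large job into independent work units.
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def volunteer_compute(unit):
    # Client side: what a volunteer's machine runs (a toy computation here).
    return sum(unit)

def server_aggregate(results):
    # Server side: combine the results returned by volunteers.
    return sum(results)

data = list(range(100))                          # the "science" input
units = make_work_units(data, 10)                # 10 independent work units
results = [volunteer_compute(u) for u in units]  # done by many clients in reality
total = server_aggregate(results)
```

The key property the sketch captures is that work units are independent, so the server never needs the clients to coordinate with each other; in real BOINC, redundant computation and result validation are layered on top of this loop.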
28. Folding@home: >1 petaflop
Sony pre-installs Folding@home on the PlayStation 3 – users can choose to run it in the background. 50k PS3s made the first distributed petaflop machine (Guinness World Record, Sept 2007). Often of a philanthropic nature.
29. LHC@home: >3000 CPU-years, >60K volunteers
The Fortran program SixTrack simulates LHC proton beam stability for 10^5–10^6 orbits, including real magnet parameters and beam-beam effects, to predict stable operating conditions. Citizen cyber-science projects.
33. History of volunteer computing (1995–now)
Applications: distributed.net, GIMPS (~1995); SETI@home, Folding@home (~2000); Climateprediction.net, [email_address], IBM World Community Grid, [email_address], [email_address], … (~2005 onward)
Middleware: commercial – Entropia, United Devices, …; academic – Bayanihan, Javelin, …; BOINC (~2005–now)
34. ASGC Introduction
A worldwide Grid infrastructure: >280 sites, 45 countries; >80,000 CPUs, >25 petabytes; >14,000 users, >200 VOs; >250,000 jobs/day.
Asia Pacific Regional Operation Center.
Applications: Large Hadron Collider (LHC); Avian Flu Drug Discovery (Best Demo Award of EGEE'07); Grid Application Platform, a lightweight problem-solving framework.
1. Most reliable T1: 98.83%. 2. Very highly performing and most stable site in CCRC08.
Max CERN/T1–ASGC point-to-point inbound: 7.3 Gbps.
35. From Europe to Taiwan, two Encyclopædia Britannicas can be transferred every second – a historic record. Max CERN/T1 → ASGC inbound: 7.3 Gbps, outbound: 5.9 Gbps.
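The "two encyclopaedias per second" claim is a simple unit conversion of the 7.3 Gbps peak. A quick back-of-the-envelope check, where the ~0.45 GB assumed for one encyclopaedia's full text is my assumption, not a figure from the talk:

```python
# Sanity check of "two encyclopaedias per second" at 7.3 Gbps.
# The 0.45 GB size of one encyclopaedia's text is an assumed figure.

link_gbps = 7.3                     # peak CERN/T1 -> ASGC inbound rate
gigabytes_per_second = link_gbps / 8  # 8 bits per byte -> ~0.91 GB/s
encyclopedia_gb = 0.45              # assumed size of one full text
per_second = gigabytes_per_second / encyclopedia_gb  # ~2 per second
```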
61. Is it buzzword evolution? One distributed-computing buzzword per decade: metacomputing (~1987, L. Smarr); Grid computing (~1997, I. Foster, C. Kesselman); cloud computing (~2007, E. Schmidt?)
63. Cloud hype 2
"It's stupidity. It's worse than stupidity: it's a marketing hype campaign. Somebody is saying this is inevitable — and whenever you hear somebody saying that, it's very likely to be a set of businesses campaigning to make it true." – Richard Stallman, quoted in The Guardian, September 29, 2008
"The interesting thing about Cloud Computing is that we've redefined Cloud Computing to include everything that we already do. … I don't understand what we would do differently in the light of Cloud Computing other than change the wording of some of our ads." – Larry Ellison, quoted in the Wall Street Journal, September 26, 2008
65. Cloud hype 4 (illustration by David Simonds, The Economist, April 2009)
The Open Cloud Manifesto: "The industry needs an objective, straightforward conversation about how this new computing paradigm will impact organizations, how it can be used with existing technologies, and the potential pitfalls of proprietary technologies that can lead to lock-in and limited choice." OpenCloudManifesto.org
70. Definitions 2
Cloud computing is an on-demand service offering a large pool of easily usable and accessible virtualized resources (such as hardware, development platforms and/or software services) in a pay-per-use model. Clouds are usually commercial and use proprietary interfaces.
Service grids are systems that federate, share and coordinate distributed resources from different organizations which are not subject to centralized control, using standard, open, general-purpose protocols and interfaces to deliver non-trivial qualities of service. Service grids are used by Virtual Organisations: thematic groups of users crossing administrative and geographical boundaries.
74. Grids vs. Clouds 3 – Cost of 1 teraflop-year
Cloud: $1.75M (Amazon EC2 rates)
Cluster: $145K (hardware – computing, network, storage; power; infrastructure; sysadmin)
Volunteer: $1K–$10K (server hardware; sysadmin; web development)
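The $1.75M cloud figure is the kind of number you get from simple rate arithmetic. The per-instance throughput and hourly rate below are illustrative assumptions chosen to show the shape of the calculation, not Amazon's published 2009 pricing:

```python
# How a teraflop-year cost estimate like the slide's is built up.
# The gigaflops-per-instance and $/hour figures are assumptions for
# illustration, not actual EC2 pricing.

HOURS_PER_YEAR = 365 * 24               # 8760 hours

instance_gflops = 5.0                   # assumed sustained Gflops per instance
rate_per_hour = 1.0                     # assumed $ per instance-hour
instances = 1000.0 / instance_gflops    # 1 teraflop = 1000 gigaflops
cost = instances * rate_per_hour * HOURS_PER_YEAR  # ~$1.75M per teraflop-year
```

The cluster and volunteer figures come out far lower because hardware is amortised over several years and, in the volunteer case, the compute hardware and its power bill are donated.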
77. Conclusion
"IT is a cost center, after all, not so dissimilar from janitorial and cafeteria services, both of which have long been outsourced at most enterprises. Security concerns won't necessarily prevent companies from wholesale outsourcing of data services: businesses have long outsourced payroll and customer data to trusted providers. Much will depend on the specific company, of course, but it's unlikely that smaller enterprises will resist the economic logic of utility computing. Bigger corporations will simply take longer to make the shift." – Nicholas Carr in The Big Switch: Rewiring the World, from Edison to Google
79. Grid & Cloud: opportunities
Many cloud offerings = competition: Amazon Elastic Compute Cloud (EC2); IBM Blue Cloud; Microsoft Azure Services Platform; Sun Open Cloud Initiative; Google App Engine; Salesforce.com Force.com Cloud; GoGrid Cloud Hosting; Rackspace Cloud Hosting; FlexiScale Utility Computing on Demand…
Some academic/open-source variants emerging: Eucalyptus; Nimbus; OpenNebula; …
Experiments in Grid + Cloud: BalticCloud, StratusLab, VirtCloud, … some to be discussed in this session.
81. Production & Volunteer grids EGEE-III INFSO-RI-222667 EGEE - Bob Jones - OGF25/User Forum - 2-6 March 2009 Enabling Grids for E-sciencE
87. Grid Computing
My preferred definition: grid computing is distributed computing performed transparently across multiple administrative domains. – Peter Coveney (P.V.Coveney@ucl.ac.uk)
Notes: computing means any activity involving digital information – no distinction between numeric/symbolic, or numeric/data/visualization. Transparency implies minimal complexity for users of the technology.
Moore’s Law – individual computers double in processing power every 18 months.
Storage Law – disk storage capacity doubles every 12 months.
Gilder’s Law – network bandwidth doubles every 9 months (but is harder to install).
This exponential growth profoundly changes the landscape of information technology: (high-speed) access to networked information becomes the dominant feature of future computing. For large-scale images, secure remote access eventually becomes routine.
1. Moore’s Law – transistor density doubles, and thus individual computers double in processing power, every 18 months.
2. This exponential growth profoundly changes the landscape of information technology.
1. Moore’s Law implies 101.6× growth in 10 years and 256× in 12 years.
2. In the last 10 years in particular, a 10,000× performance gain; this is mainly due to clever use of parallel programming in some application areas.
3. Programmability and ease of use of massively parallel computers are essential!
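The growth figures in these notes follow directly from doubling-time arithmetic, which can be checked in a couple of lines:

```python
# A quantity that doubles every `doubling_months` months grows by a factor
# of 2 ** (elapsed_months / doubling_months).

def growth(years, doubling_months):
    return 2 ** (years * 12 / doubling_months)

moore_10y = growth(10, 18)   # ~101.6x: Moore's Law over 10 years
moore_12y = growth(12, 18)   # 256x over 12 years
gilder_10y = growth(10, 9)   # bandwidth, on a 9-month doubling, grows far faster
```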
1. Higher transistor integration and higher frequency alone produce too much heat to yield more performance.
2. One needs a holistic approach to deal with the heat problem.
3. A multicore architecture with multithreading helps spread power dissipation over a greater area.
Since the global e-Infrastructure is being established quickly, we believe this is an excellent opportunity to take advantage of sharing and collaboration to bridge the gap between Asia and the world, as well as to create new regional cooperation opportunities in Asia Pacific. ASGC is one of the earliest partners in Asia.
ASGC is actually a center for e-Science; HEP is the largest and first (power) user of ASGC, and we have also learnt a lot from the HEP community about collaboration. EUAsiaGrid: 11 Asian partners and 4 European partners.
Are we able to do e-Science? (Sharing sometimes also means complementing.)
…from site deployment, certification, and services through to sustainable operations. Our participation in other EGEE activities, such as application support and development, dissemination and training, will not be discussed here; only the sustainable model for the AP region is raised here.
AARNet also has 2 × 10 Gb links to the US.
[Introduce the ASGCNet backbone first] The first 10G network between Asia and Europe, with a 2.5G link to the US extended to NL as a backup for the Europe link. In Asia, we upgraded to 2.5G links to both JP and HK from 2008, plus another 622 Mb link to Singapore. We are keen to share all of this network with APAN, TEIN and our Asian partners.
What we are doing in Taiwan for EUAsiaGrid and the regional collaboration is to provide the gLite infrastructure and to enable e-Infrastructure services for users.
There is also an important component of EGEE-II in business partnerships and industry take-up, such as the Industry Task Force and the Industry Forum. It also organises the EGEE Business Associates (EBA).
Prediction of earthquakes is still not possible at this moment, but by simulating wave propagation and impacts from an assumed hypocenter we can understand the possible threat to areas scattered around the epicenter. The strategy for mitigation and protection can then be verified accordingly. Here is an example of simulating 3D amplification effects in the Taipei basin with a wave-field propagation simulation that takes the complex geometric structure into account; extraordinary hazards can be identified even when the epicenter is far from the target area.
1. For collaboration at this unprecedented scale, the High Energy Physics community has the most experience.
2. Strategically, the worldwide funding agencies are using the HEP community to build the first global production Grid.
3. The hope is that this will extend to other e-Science and HPC application areas through EGEE and OSG!