1. ESnet Status Update William E. Johnston ESnet Department Head and Senior Scientist wej@es.net, www.es.net This talk is available at www.es.net/ESnet4 Energy Sciences Network Lawrence Berkeley National Laboratory Networking for the Future of Science ESCC January 23, 2008 (Aloha!)
6. Building ESnet4 - Starting Point: Ia. ESnet 3 with Sites and Peers (Early 2007) [network map]
Sites: 42 end user sites; Office of Science sponsored (22), NNSA sponsored (12), Laboratory sponsored (6), joint sponsored (3), other sponsored (NSF LIGO, NOAA).
Core: ESnet IP core (packet over SONET optical ring and hubs) plus the ESnet Science Data Network (SDN) core; core hubs at SEA, SNV, ELP, ALB, CHI, NYC, ATL, DC.
Link legend: international (high speed); 10 Gb/s SDN core; 10 Gb/s IP core; 2.5 Gb/s IP core; MAN rings (≥ 10 Gb/s); lab-supplied links; OC12 ATM (622 Mb/s); OC12 / GigEthernet; OC3 (155 Mb/s); 45 Mb/s and less.
International peers: Canada (CA*net4), France, GLORIAD (Russia, China), Korea (Kreonet2), Japan (SINet), Australia (AARNet), Taiwan (TANet2, ASCC), Singaren, Netherlands, Russia (BINP), CERN (USLHCnet, DOE+CERN funded), GÉANT (France, Germany, Italy, UK, etc.), AMPATH (S. America).
Peering: high-speed peering points with Internet2/Abilene; other R&E peering points (MREN, StarTap, Starlight, MAN LAN, MAXGPoP, NSF/IRNC funded); commercial peering points (MAE-E, PAIX-PA, Equinix, etc.).
7. ESnet 3 Backbone as of January 1, 2007 [map]. Hubs: Seattle, Sunnyvale, San Diego, Albuquerque, El Paso, Chicago, New York City, Washington DC, Atlanta. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); MAN rings (≥ 10 Gb/s); lab-supplied links; ESnet hubs and future ESnet hubs.
8. ESnet 4 Backbone as of April 15, 2007 [map]. Hubs: Seattle, Sunnyvale, San Diego, Albuquerque, El Paso, Chicago, Cleveland, Boston, New York City, Washington DC, Atlanta. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links; ESnet hubs and future ESnet hubs.
9. ESnet 4 Backbone as of May 15, 2007 [map]. Hubs: Seattle, Sunnyvale (SNV), San Diego, Albuquerque, El Paso, Chicago, Cleveland, Boston, New York City, Washington DC, Atlanta. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links; ESnet hubs and future ESnet hubs.
10. ESnet 4 Backbone as of June 20, 2007 [map]. Hubs: Seattle, Sunnyvale, San Diego, Albuquerque, El Paso, Denver, Kansas City, Houston, Chicago, Cleveland, Boston, New York City, Washington DC, Atlanta. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links; ESnet hubs and future ESnet hubs.
11. ESnet 4 Backbone August 1, 2007 (last JT meeting at FNAL) [map]. Hubs: Seattle, Sunnyvale, Los Angeles, San Diego, Albuquerque, El Paso, Denver, Kansas City, Houston, Chicago, Cleveland, Boston, New York City, Washington DC, Atlanta. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links; ESnet hubs and future ESnet hubs.
12. ESnet 4 Backbone September 30, 2007 [map]. Hubs: Seattle, Boise, Sunnyvale, Los Angeles, San Diego, Albuquerque, El Paso, Denver, Kansas City, Houston, Chicago, Cleveland, Boston, New York City, Washington DC, Atlanta. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links; ESnet hubs and future ESnet hubs.
13. ESnet 4 Backbone December 2007 [map]. Hubs: Seattle, Boise, Sunnyvale, Los Angeles, San Diego, Albuquerque, El Paso, Denver, Kansas City, Houston, Nashville, Chicago, Cleveland, Boston, New York City, Washington DC, Atlanta. Legend: 10 Gb/s SDN core (NLR); 2.5 Gb/s IP tail (Qwest); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links; ESnet hubs and future ESnet hubs.
14. ESnet 4 Backbone Projected for December, 2008 [map]. Hubs: Seattle, Sunnyvale, Los Angeles, San Diego, Albuquerque, El Paso, Denver, Kansas City, Houston, Nashville, Chicago, Cleveland, Boston, New York City, Washington DC, Atlanta; a second 10 Gb/s wave (x2) is projected on several core segments. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links; ESnet hubs and future ESnet hubs.
15. ESnet Provides Global High-Speed Internet Connectivity for DOE Facilities and Collaborators (12/2007) [network map; DOE geography is only representational]
Sites: ~45 end user sites; Office of Science sponsored (22), NNSA sponsored (13+), Laboratory sponsored (6), joint sponsored (3), other sponsored (NSF LIGO, NOAA).
Core hubs: SEA, SUNN, SNV1, DENV, ALBU, ELPA, CHIC, CHI-SL, NASH, ATLA, WASH, NEWY, Salt Lake.
Link legend: international (1-10 Gb/s); 10 Gb/s SDN core (I2, NLR); 10 Gb/s IP core; MAN rings (≥ 10 Gb/s); lab-supplied links; OC12 / GigEthernet; OC3 (155 Mb/s); 45 Mb/s and less.
International peers: Canada (CA*net4), France, GLORIAD (Russia, China), Korea (Kreonet2), Japan (SINet), Australia (AARNet), Taiwan (TANet2, ASCC), Singaren, KAREN/REANNZ, ODN Japan Telecom America, Russia (BINP), CERN (USLHCnet: DOE+CERN funded), GÉANT (France, Germany, Italy, UK, etc.; USLHCNet to GÉANT), AMPATH (S. America).
R&E and commercial peering: Internet2/Abilene, NLR-PacketNet, NLR, Starlight, MAN LAN, NYSERNet, MREN, StarTap, MAXGPoP, PacWave, NSF/IRNC-funded links; commercial peering points (PAIX-PA, Equinix, etc.).
16. ESnet4 End-Game [map]. Core networks: 50-60 Gbps by 2009-2010 (10 Gb/s circuits), 500-600 Gbps by 2011-2012 (100 Gb/s circuits). Core network fiber path is ~14,000 miles / 24,000 km. Hubs: Seattle, Boise, Sunnyvale, LA, San Diego, Denver, Albuquerque, El Paso, Houston, Tulsa, Kansas City, Chicago, Cleveland, Nashville, Atlanta, Jacksonville, New York, Boston, Washington DC. International connections: CERN (30+ Gbps, via USLHCNet), Canada (CANARIE), Europe (GEANT), Asia-Pacific, Australia, GLORIAD (Russia and China), South America (AMPATH). Legend: production IP core (10 Gbps); SDN core (20-30-40-50 Gbps); MANs (20-60 Gbps) or backbone loops for site access; international connections; primary DOE Labs; high-speed cross-connects with Internet2/Abilene; possible hubs; IP core hubs; SDN hubs. Segment lengths noted on the map: 1625 miles / 2545 km and 2700 miles / 4300 km.
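The jump from 10 Gb/s to 100 Gb/s circuits in the end-game matters most for bulk science data movement. A minimal sketch of the arithmetic (illustrative only; the dataset size and the `transfer_hours` helper are assumptions, not from the slide):

```python
# Illustrative: time to move a large dataset across one core circuit at the
# ESnet4 end-game rates (10 Gb/s circuits now, 100 Gb/s circuits by 2011-12).
def transfer_hours(terabytes: float, gbps: float, efficiency: float = 1.0) -> float:
    """Hours to move `terabytes` at `gbps` line rate, derated by `efficiency`."""
    bits = terabytes * 1e12 * 8              # decimal terabytes -> bits
    return bits / (gbps * 1e9 * efficiency) / 3600

# A hypothetical 100 TB dataset:
transfer_hours(100, 10)    # ~22 hours on a single 10 Gb/s circuit
transfer_hours(100, 100)   # ~2.2 hours on a single 100 Gb/s circuit
```

Real transfers run below line rate, which is what the `efficiency` derating stands in for; halving efficiency doubles the transfer time.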
17. A Tail of Two ESnet4 Hubs [photos: MX960 switch and T320 router at the Sunnyvale, CA hub; 6509 switch and T320 routers at the Chicago hub]. ESnet's SDN backbone is implemented with Layer 2 switches, Cisco 6509s and Juniper MX960s; each presents its own unique challenges.
19. ESnet Traffic Continues to Exceed 2 Petabytes/Month: 2.7 PBytes in July 2007, up from 1 PByte in April 2006. ESnet traffic historically has increased 10x every 47 months. Overall traffic tracks the very large science use of the network.
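The 10x-every-47-months trend is exponential growth, so the two data points on the slide can be checked against it. A minimal sketch (the function name and the use of April 2006 as the anchor point are assumptions for illustration):

```python
# Project monthly traffic from the historical "10x every 47 months" trend.
def projected_traffic_pb(baseline_pb: float, months_elapsed: float,
                         tenfold_period_months: float = 47.0) -> float:
    """Traffic after `months_elapsed`, growing 10x every `tenfold_period_months`."""
    return baseline_pb * 10 ** (months_elapsed / tenfold_period_months)

# April 2006 (1 PByte/month, per the slide) to July 2007 is 15 months.
trend = projected_traffic_pb(1.0, 15)   # ~2.1 PBytes/month from the trend alone
```

The observed 2.7 PBytes/month in July 2007 runs somewhat ahead of the long-term trend line, consistent with the slide's point that very large science flows are driving current usage.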
20. When a few large data sources/sinks dominate traffic, it is not surprising that overall network usage follows the patterns of the very large users. This trend will reverse in the next few weeks as the next round of LHC data challenges kicks off. [Chart: FNAL outbound traffic]
21. FNAL Traffic is Representative of All CMS Traffic: accumulated data (in terabytes) received by CMS data centers ("Tier 1" sites) and many analysis centers ("Tier 2" sites) during the past 12 months; 15 petabytes of data in total. [LHC/CMS]
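As a back-of-envelope check on what 15 petabytes in 12 months implies for sustained network load (illustrative arithmetic; the helper and the 30-day-month approximation are assumptions, not from the slide):

```python
# Average sustained rate implied by moving a given volume over a given period.
def avg_rate_gbps(petabytes: float, months: float) -> float:
    """Average rate in Gb/s for `petabytes` transferred over `months` (30-day months)."""
    bits = petabytes * 1e15 * 8            # decimal petabytes -> bits
    seconds = months * 30 * 24 * 3600      # approximate month length
    return bits / seconds / 1e9

rate = avg_rate_gbps(15, 12)   # ~3.9 Gb/s sustained, roughly 40% of one 10 Gb/s circuit
```

Peak demand is far burstier than this average, which is part of why the backbone slides above show multiple parallel 10 Gb/s waves rather than a single circuit.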
22. ESnet Continues to be Highly Reliable, Even During the Transition [availability chart: "5 nines" (>99.995%), "4 nines" (>99.95%), "3 nines" (>99.5%); dually connected sites]. Note: these availability measures are only for the ESnet infrastructure; they do not include site-related problems. Some sites, e.g. PNNL and LANL, provide circuits from the site to an ESnet hub, and therefore the ESnet-site demarc is at the ESnet hub (there is no ESnet equipment at the site). In this case, circuit outages between the ESnet equipment and the site are considered site issues and are not included in the ESnet availability metric.
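The "nines" bands on this slide translate directly into allowed outage time per year. A small helper (the thresholds are the slide's bands; the downtime arithmetic is the standard conversion):

```python
# Convert an availability percentage into maximum downtime per year.
def downtime_hours_per_year(availability_pct: float) -> float:
    """Hours of outage per year permitted at the given availability percentage."""
    return (1 - availability_pct / 100) * 365 * 24

# The slide's bands:
downtime_hours_per_year(99.995)  # ~0.4 hours/year
downtime_hours_per_year(99.95)   # ~4.4 hours/year
downtime_hours_per_year(99.5)    # ~44 hours/year
```

The gap between bands is a factor of ten in outage budget, which is why dual connectivity (shown for the dually connected sites) is the usual route to the top band.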
43. DOEGrids CA (one of several CAs) Usage Statistics (report as of Jan 17, 2008):
Total No. of Requests: 25470
Total No. of Certificates Issued: 21095
Total No. of Active Certificates: 7547
Total No. of Expired Certificates: 11797
Total No. of Revoked Certificates: 1776
User Certificates: 6549
Host & Service Certificates: 14545
ESnet SSL Server CA Certificates: 49
FusionGRID CA certificates: 113
44. DOEGrids CA (Active Certificates) Usage Statistics [chart; report as of Jan 17, 2008]. The US LHC ATLAS project adopts the ESnet CA service.
The point here is that most modern, large-scale science and engineering is multi-disciplinary and must use geographically distributed components; this is what has motivated an investment of roughly $50-75M/yr in Grid technology by the US, UK, European, Japanese, and Korean governments.