10:00am                          Coffee
10:30am                          Morning Session

NeSI Presentation
An overview of high performance computing (HPC) and NeSI provided by NeSI's Director Nick Jones.

Demonstrations
NeSI staff will be demonstrating some of the features of the HPC facilities, explaining how they can fit into a
research project.

12:30pm                          Lunch
1:30pm                           Afternoon Session

Local Case Study
An overview of a local researcher’s use of HPC facilities and aspirations for the national facilities provided by NeSI.

Group discussions facilitated by the NeSI team
A chance for researchers to connect and discuss how NeSI can benefit their projects.
Staff from REANNZ and Tuakiri, the New Zealand Access Federation, will also be on hand to discuss the Science DMZ
and Tuakiri respectively.
Image: Natural Palette by Seth Darling and Muruganathan Ramanathan, Argonne's Center for Nanoscale Materials
Creative Commons Attribution Share-Alike - http://creativecommons.org/licenses/by-sa/2.0/




Computing Facilities for NZ Researchers
National HPC Roadshow 2012

Nick Jones, Director

A partnership between NZ’s research institutions delivering advanced computational services for leading edge discovery
NeSI, a national research infrastructure
• High Performance Computing (HPC) & eScience services
• … for all NZ researchers


NeSI HPC Facilities


NeSI’s Roadmap


Progress Report
Genomics, Genetics, Bioinformatics, Molecular Modelling:
• NZ Genomics Ltd
• Maurice Wilkins Centre
• Allan Wilson Centre
• Virtual Institute of Statistical Genetics

Wind Energy, Geothermal & Minerals Exploration:
• GNS Exploration
• Institute for Earth Science and Engineering
• Centre for Atmospheric Research

Nanotechnology and High Technology Materials:
• MacDiarmid Institute
• Materials TRST

Earthquakes, Tsunami, Volcanoes:
• Natural Hazards Research Platform
• DEVORA Auckland Volcanic Field
• GeoNet

Human Development, Bioengineering, Social Statistics:
• National Research Centre for Growth and Development
• Auckland Bioengineering Institute
• Liggins Institute
• Malaghan Institute
• Social Sciences Data Service

Invasive Species, Water / Land Use, Emissions:
• Agricultural Greenhouse Gas Centre
• Bio-Protection Research Centre
• National Climate Change Centre
Research e-Infrastructure Roadmap - 2010
• Genomics:  NZ Genomics Ltd
• Data:      –
• eScience:  BeSTGRID Services
• HPC:       BeSTGRID Cluster; BlueFern BlueGene/L; NIWA Cray → NeSI @ NIWA: HPC – P575
• Identity:  BeSTGRID Federation
• Network:   REANNZ KAREN Advanced Research Network
(Timeline: 2006 – 2014)
How did NeSI arise?
In October 2010, an Investment Case was prepared, entitled:

     “National eScience Infrastructure (NeSI) High Performance Computational
     Platforms and Services for NZ’s Research Communities”
•   Submitted to the Minister of Research, Science & Technology
•   Prepared by a Working Group of representatives from: UoA, UC, UoO, Landcare, AgResearch & NIWA
    (under an independent Chair)

Investment Case asserted that:
     – HPC and related eScience infrastructure are indispensable components of modern science, and are
       having a major impact on almost every branch of research;
     – By taking a sector approach, more efficient coordination and cooperation would be achieved, leading
       to strategically targeted investment in HPC;
     – Thereby providing international-scale HPC to a wide range of communities and disciplines.

Formulated following a “Needs Analysis” during 2010




What “we” said about our HPC needs…
In 2010, NZ Researchers were surveyed to determine their existing and anticipated
HPC requirements. We (~194 of us) said (e.g.):

  Processors to run a code      < 10        10 - 100    100 - 1000     >10,000
  In 2010                       40%         39%         15%            5%
  In 2015                       12%         32%         38%            17%

  File Space per experiment     < 100GB     1 - 10 TB    100 TB        >1PB
  In 2010                       64%         35%          0%            0.6%
  In 2015                       27%         58%          13%           3%

  Off-Site Data Transfers/day   < 100MB     1GB          1TB           >10TB
  In 2010                       33%         42%          21%           4%
  In 2015                       17%         28%          33%           23%




Why hadn’t this already happened?
Coordination failure..
    a collective failure to coordinate investment decisions for the greater benefit (value, efficiency)

    We see a coordination failure (coordination of capital and resources) leading to HPC facilities that don’t achieve the
    scale required for demanding research applications

Funding strategy aligns Opex and Capex – Crown and collaborators fund both, clarifying intent

Value to Crown: Why do we need Government investment?
•   overcome coordination failure
•   reach scale and maturity
•   efficient investment in critical infrastructure

Value to Institutions: Why co-invest into national infrastructure?
•   new investment into institutions and sector
•   create scalable mature facilities
•   trust and control ensure good fit to needs
What did we decide on?
    Shared facilities
    shared facilities accessible nationally




    National Access scheme
    single Access Policy for all HPC facility access




    National Team
    single Team nationally sharing practices and expertise
Investment partners
Crown
• Invest over 3 years, plus out-years

Governance (Principal Investors & Independent Directors & Crown Observer)
• Transparent
• Collaborative

National eScience Infrastructure
• Procurement
• Operational management
• Data and Compute Access
• Outreach
• Scientific Computing Experts

Auckland / Landcare / Otago (Cluster & Services)
• People
• Commodity cluster
• Storage
• Portion of facilities, power, depreciation costs
• Virtual hosting

Canterbury (HPC & Services)
• People
• HPC
• New storage
• Portion of facilities, power, depreciation costs

NIWA (HPC & Services)
• People
• HPC capacity
• Storage
• Portion of facilities, power, depreciation costs

Facilities → Researchers (Managed Access)
(Legend: financial flows; access)

Private Industry
• Access through a “single front door”
• Capacity scaled out from partners’ capabilities
• Managed and metered access to resources, across all resource types
• Access fee calculated as full costs of metered use of resources

Institutional Investors
• Access through a “single front door”
• Specialised concentrations of capability at each institution
• Receive government co-investment
• Capacity available to institutions reflects their level of investment
• Managed and metered access to resources, across all resource types
• Institutions’ access costs covered by investment

Research institutions (non-investors)
• Access through a “single front door”
• Capacity scaled out from partners’ capabilities
• Managed and metered access to resources, across all resource types
• Access fee initially calculated as partial costs of metered use of resources and reviewed annually
How is NeSI Funded?
There are three anchor partners:
      –   The University of Auckland           (receives Crown funding of ~$2.2M pa);
            •   including two associates (Associate Investors): Landcare and University of Otago.
      –   University of Canterbury             (receives Crown funding of ~$2.6M pa);
      –   NIWA                                 (receives Crown funding of ~$1.0M pa);

Crown investment is $27M over 3 years
Institutional investment is $21M over 3 years
      –   $15.4M in Capital Expenditure & $5.6M in Operational Expenditure

Which provides:
      –   4 × HPC systems at Universities of Auckland & Canterbury, and at NIWA
      –   NeSI Directorate at Auckland (5 staff positions + admin): 5.5 FTEs
      –   Systems Engineers ~ 5.5 FTEs, Services & Application Engineers ~ 5.5 FTEs, Site Management 1.8 FTEs
      –   HPC specialist Scientific Programmers ~5.7 FTEs

And access is allocated in the following way:
• anchor partners (called Collaborators) reserve 60% of the HPC capacity for their own purposes
• the remaining 40% is available to any researcher at a public research institution in NZ with an active peer-reviewed grant




What are NeSI’s objectives?
1.   Creating an advanced, scalable computing infrastructure to support New Zealand’s research
     communities;
     – i.e. International scale as opposed to institutional/department scale.

2.   Providing the grid middleware, research tools and applications, data management, user-
     support, and community engagement needed for the best possible uptake (of HPC);
     – i.e. enable efficient use of and easy access to these systems.

3.   Encouraging a high level of coordination and cooperation within the research sector;
     – i.e. fit the science to the HPC as opposed to “my institution’s resources” (shared services and
       resources).

4.   Contributing to high quality research outputs from the application of advanced computing and
     data management techniques and associated services, which support the Government’s
     published priorities for science.
     – i.e. It’s all about better science to underpin national outcomes.




Research e-Infrastructure Roadmap - 2012
• Genomics: NZ Genomics Ltd → NZ Genomics Ltd – Bioinformatics Cloud
• Data: → Research Data Infrastructure
• eScience: BeSTGRID Services → NeSI eScience
• HPC: BeSTGRID Cluster → NeSI @ Auckland: Intel + GPU Cluster;
       BlueFern BlueGene/L → NeSI @ BlueFern: BlueGene/P + P7;
       NIWA Cray → NeSI @ NIWA: HPC – P575
• NeSI phases: New Zealand eScience Infrastructure 2011 – 2014, then 2014 – 2019
• Identity: BeSTGRID Federation → Tuakiri – Research Access Federation
• Network: REANNZ KAREN Advanced Research Network → KAREN 2.0
(Timeline: 2006 – 2014)
An Introduction to Supercomputing
Dr. Michael J. Uddstrom
Director, NIWA High Performance Computing Facility
michael.uddstrom@niwa.co.nz
Processor Trends and Limits
What is a Supercomputer (or HPC)?
• There are two “basic” types:
    – Capability (aka Supercomputers): provide the maximum computing power available to
      solve large problems: the emphasis is on problem size (large memory, lots of cores), e.g.:
         • IBM p775/p7 & p575/p6, Cray XK6, IBM BG/Q & BG/P
    – Capacity: typically use efficient cost-effective computing components: the emphasis is
      on throughput (dealing with loads larger than a single PC/small cluster), e.g.:
         • IBM iDataPlex, HP Cluster Platform n000

• The essential differences between Capability & Capacity systems are:
    – the interconnect fabric performance;
    – the processor performance; and
    – reliability (i.e. resiliency to component failure).
• Supercomputers have high efficiency:
    – Efficiency = sustained-performance / peak-performance (%).




What types of HPC do we need?
• It depends on the problem (and the data locality)
• Is it “Embarrassingly Parallel” (EP)?
    – This means the problem can be split into independent tasks, with each sent to a
      different processor (see the sketch after this list):
         • e.g. image rendering, classification, Monte Carlo calculations, BLAST, etc.
• If not EP, then is it highly scalable?
    – This means the problem does not place high demands on processor performance –
      because the coupling between processors is relatively “loose” – and you can use very
      many:
         • e.g. materials codes, Schrödinger’s equation, DCA++, LSMS, NWChem…
• If not EP and not highly scalable, then:
    – the problem will place high demands on processor performance and
      on the interconnect between processors:
         • e.g. numerical weather and climate prediction, variational data
           assimilation, combustion, seismology, POP, …




NWP: The Computational Challenge
[Figure: Numerical Weather Prediction Model Scaling (grid size 744 × 928 × 70, no science I/O);
wall clock time (sec) versus number of cores (124, 256, 504, 1004, 2020).]
•    Speedup on 2020 4.7GHz cores: 7.4
•    Relative number of cores: 16.3, i.e. 45% of possible scaling
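The scaling efficiency quoted above follows directly from the two numbers on the slide; a quick check (illustrative only):

    # Scaling efficiency implied by the figure: a 7.4x speedup for a 16.3x
    # increase in cores is roughly 45% of ideal (linear) scaling.
    base_cores, scaled_cores = 124, 2020
    speedup = 7.4                              # measured wall-clock ratio from the slide
    core_ratio = scaled_cores / base_cores     # ~16.3
    efficiency = speedup / core_ratio          # ~0.45
    print(f"core ratio = {core_ratio:.1f}x, efficiency = {efficiency:.0%}")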
The Limits to Scalability: Amdahl’s Law
Amdahl’s law states that if P is the proportion of a program that can be made parallel, and (1 − P) is the
proportion that cannot be parallelized (remains serial), then the maximum speedup that can be achieved by
using N processors is:

     Speedup(N) = 1 / ((1 − P) + P/N)

• In the limit, as N tends to infinity, the maximum speedup tends to 1 / (1 − P). In practice, the
  performance to price ratio falls rapidly as N is increased once there is even a small component of (1 − P).

   e.g. if P = 0.99 and N = 64,   then speedup = 39.3 (perfect = 64)
        if P = 0.99 and N = 1024, then speedup = 91.2 (perfect = 1024)
        if P = 0.90 and N = 1024, then speedup = 9.9  (perfect = 1024)

[Figure: speedup versus number of processors (1 to 8192) for parallel fractions P = 1, 0.99, 0.95, 0.9, 0.5 and 0.2.]
It’s a Challenge… but this is “our” future!!
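A quick numerical check of the example figures quoted above (a minimal sketch; the printed values match the slide’s examples):

    # Amdahl's law: Speedup(N) = 1 / ((1 - P) + P / N)
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for p, n in [(0.99, 64), (0.99, 1024), (0.90, 1024)]:
        print(f"P={p:.2f}, N={n:4d}: speedup = {amdahl_speedup(p, n):5.1f}")
    # -> P=0.99, N=  64: speedup =  39.3
    # -> P=0.99, N=1024: speedup =  91.2
    # -> P=0.90, N=1024: speedup =   9.9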




Load Balancing
• Consider a global atmospheric model:
   – Decompose the grid across the processors (chessboard-like);
   – ~50% of the planet is in darkness at any one time, so only
     longwave radiation calculations are needed there;
   – ~50% of the planet is in sunlight at any one time, so both
     longwave and shortwave radiation calculations are needed there;
   – P is going to be less than optimal (without some effort) – see the sketch below.
• Then there is the matter of
  the poles.
• But… HPC means we can at
  last solve really large problems!
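A toy sketch of that day/night imbalance (purely illustrative; the per-subdomain costs are assumed, not measured): every rank does longwave radiation, but only the sunlit half also does shortwave, so the time step is set by the slowest ranks.

    # Toy load-balance estimate: half the subdomains carry extra (shortwave) work.
    LONGWAVE_COST = 1.0     # assumed work units per subdomain, everywhere
    SHORTWAVE_COST = 1.0    # assumed extra work on sunlit subdomains

    n_ranks = 64
    costs = [LONGWAVE_COST + (SHORTWAVE_COST if r < n_ranks // 2 else 0.0)
             for r in range(n_ranks)]          # first half "sunlit", rest in darkness

    step_time = max(costs)                     # everyone waits for the slowest rank
    efficiency = sum(costs) / (n_ranks * step_time)
    print(f"load-balance efficiency ~= {efficiency:.0%}")   # ~75% with these assumed costs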
Applications running on Jaguar at ORNL (2011)

Domain area   Code name   Institution      # of cores   Performance   Notes
Materials     DCA++       ORNL             213,120      1.9 PF        2008 Gordon Bell Prize Winner
Materials     WL-LSMS     ORNL/ETH         223,232      1.8 PF        2009 Gordon Bell Prize Winner
Chemistry     NWChem      PNNL/ORNL        224,196      1.4 PF        2008 Gordon Bell Prize Finalist
Nanoscience   OMEN        Duke             222,720      > 1 PF        2010 Gordon Bell Prize Finalist
Biomedical    MoBo        GaTech           196,608      780 TF        2010 Gordon Bell Prize Winner
Chemistry     MADNESS     UT/ORNL          140,000      550 TF
Materials     LS3DF       LBL              147,456      442 TF        2008 Gordon Bell Prize Winner
Seismology    SPECFEM3D   USA (multiple)   149,784      165 TF        2008 Gordon Bell Prize Finalist
Combustion    S3D         SNL              147,456       83 TF        Just 3.6% of DCA++ performance
Weather       WRF         USA (multiple)   150,000       50 TF

Highly scalable codes (top of table) achieve a high fraction of peak performance; tightly coupled problems (bottom of table) achieve only a small fraction of peak performance.
Wednesday, July 4, 2012                     New Zealand HPC Applications Workshop, Wellington
NeSI HPCs in Summary
• University of Auckland:
    – IBM iDataPlex Intel processor Cluster, large node memory + some exotic hardware (i.e.
      GPGPUs) (Pan)
        • General purpose HPC cluster
        • Optimised for EP and Highly Scalable problems;
• University of Canterbury:
    – IBM BlueGene/P Supercomputer
        • Optimised for EP and Highly Scalable problems;
    – IBM p755/POWER7 cluster
        • General purpose / capability HPC cluster
    – IBM iDataPlex Visualisation Cluster
• NIWA High Performance Computing Facility:
    – IBM p575/POWER6 Supercomputer (FitzRoy)
        • Optimised for tightly coupled (large) problems
        • Operational/production-ready (i.e. IBM Support arrangements)




Other Points about HPCs
• Provide ultimate compute performance;
• Provide ultimate I/O performance (think scalability):
   – Parallel file systems (read / write simultaneously from/to multiple
     cores);
   – Very high I/O throughput (e.g. multi-GB/s).
• Data management matters – generate vast amounts of data:
   – Hierarchical Storage Management systems.
• Jobs run in a batch-processing environment:
   – There will be time and resource limits on queues;
   – Need to be able to “self checkpoint” – i.e. write restart files (see the sketch after this list);
   – It’s a shared resource.
• Often characterised by a small number of large users.
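A minimal self-checkpointing pattern, as a sketch only (the file name, step counts and the “work” are placeholders, not a NeSI recipe): the job periodically writes a restart file so that it can be resubmitted and resume after hitting a queue limit.

    import json, os

    RESTART_FILE = "restart.json"      # hypothetical restart file name
    TOTAL_STEPS = 100_000
    CHECKPOINT_EVERY = 1_000

    # Resume from the last checkpoint if one exists, otherwise start fresh.
    if os.path.exists(RESTART_FILE):
        with open(RESTART_FILE) as f:
            state = json.load(f)
    else:
        state = {"step": 0, "value": 0.0}

    while state["step"] < TOTAL_STEPS:
        state["value"] += 1.0e-3       # stand-in for the real computation
        state["step"] += 1
        if state["step"] % CHECKPOINT_EVERY == 0:
            with open(RESTART_FILE, "w") as f:
                json.dump(state, f)    # safe point: the job can be killed and restarted here

    print("done:", state["value"])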
Image: Lagoon by Paul Podsiadlo and Elena Shevchenko, Argonne's Center for Nanoscale Materials
Creative Commons Attribution Share-Alike - http://creativecommons.org/licenses/by-sa/2.0/




NeSI Roadmap
Access Scheme
Allocation Classes
• Proposal Development             Free
• Research (peer reviewed)         80% subsidy by NeSI
• Educational                      Free
• Collaborator                     Free
• Private Industry                 Full cost recovery

Access Policy Advisory Group
• Development of an Access Policy, in partnership with funding agencies
• Members are senior representatives of science and funding providers

… more from Technical Qualification Panel Chair, Mark Cheeseman, shortly..
Complexity & Computation
Challenges for researchers
•   Multiple competencies required for modern computational approaches
    – “Computational Thinking”
•   Computational facilities access and efficient use
•   Maintaining links with leaders in relevant fields
•   Communicating with institutions and facilities and colleagues in other disciplines

Challenges for institutions
•   Comprehending opportunities that support and grow research
•   Education and skills training
•   Supporting distributed teamwork
•   Sustaining and growing research computing infrastructure
•   Coordination and collaboration across institutional boundaries
Scaling scientific computing
NeSI plays a variety of roles in addressing these
needs…

Research computation – scaling up/out
     –   Code porting, tuning and scaling
     –   Bridging from Campus to National
     –   Bridging from National to International
     –   Computation as a Service
     (Scientific applications experts, available through our Research allocation class)

Infrastructure Partnerships – scaling platforms
     –   Research communities scaling up, collaboration nationally & internationally
     –   Sector-wide research infrastructures, e.g. HPC, Research Data
     –   Institutional/campus bridging
     –   Innovation, new facilities/services to meet evolving needs
     (Partners: REANNZ, Tuakiri, Institutional IT Services, AeRO, Caudit)
NZers are Computational Research leaders…

Medical & Health Sciences
Biological Sciences
Physical Sciences
Engineering
Math & Statistics
Humanities
Social Sciences
Outreach & Education
Communities of Practice:
3rd year for…
•     NZ eResearch symposium – annual meeting that highlights work in the community to address researcher needs
•     2013, Christchurch

1st year for...
•     NZ HPC Applications Workshop – focused on building a NZ community of researchers developing scientific applications
•     2013, Christchurch

Planned…
•    eResearch leaders forum

Education:
3rd year for…
•     Summer of eResearch – annual summer scholarship programme bringing together software engineering students with
      researchers to tackle IT problems at a national scale
•     Aim is to develop a balanced programme, where institutions and Crown jointly invest into building capability through
      education

Planned…
•    Recruiting a national NeSI Education Lead, who will act as a colleague of those teaching HPC and eScience across the sector,
     collaborate on curriculum development, design, and delivery, and engage in training
eScience Services
•   DataFabric (Federated Data Sharing Service)
     –   Hosted at multiple redundant sites
     –   Using best of breed platform: Integrated Rule Oriented Data Service (iRODS)
     –   Tuakiri authentication
     –   WWW / WebDAV / FUSE / GridFTP
     –   Status: Prototype (BeSTGRID migration to NeSI)


•   Data Transfer
     –   High throughput transfer of data across REANNZ network nationally, and internationally
     –   GridFTP and GlobusOnline
      –   Status: Prototype (BeSTGRID migration to NeSI)


•   Hosting (virtual) for national services
     –   Hosting of databases and core NeSI services
     –   Status: Prototype (New platform)
Image: Natural Palette by Seth Darling and Muruganathan Ramanathan, Argonne's Center for Nanoscale Materials
Creative Commons Attribution Share-Alike - http://creativecommons.org/licenses/by-sa/2.0/




Progress Report
Collaboration
“Big scientific problems require big teams these days
and our current institutional arrangements, with
their high transaction costs and researcher-scale
accountabilities, are ill-suited to meet such
challenges. Putting together large, multi-
institutional teams to tackle complex problems
remains depressingly difficult in the New Zealand
environment.”
               03/2012, Shaun Hendy, President, NZAS
NeSI Status
•   Creating an advanced, scalable computing infrastructure to support New Zealand’s
    research communities
     – 3 international scale HPCs operating

•   Providing the grid middleware, research tools and applications, data management,
    user-support, and community engagement needed for the best possible uptake (of
    HPC)
     – NeSI is staffed (a few open positions remain), grid middleware is being developed, and community
       engagement is underway (HPC Workshop, presentations beginning)

•   Encouraging a high level of coordination and cooperation within the research sector
     – Has been a challenge – but Auckland, Canterbury & NIWA working together for the good of NZ Science

•   Contributing to high quality research outputs… which support the Government’s
    published priorities for science
     – Too early to tell – but we need to do so within the next 12 months




NeSI Statistics
Operational HPC facilities since January 2012

First Call for proposals for national allocations: May 2012

Current Call (September 2012): the third call

28 Projects across development and research classes

         Allan Wilson Centre, ANU, Auckland, Auckland DHB, Canterbury, CSIRO, ESR,
         GNS, Lincoln, Livestock Improvement Corporation, IRL, MacDiarmid Institute,
         Massey, Met Service, NIWA, Ohio State, Otago, Scion, VUW, Waikato
http://www.nesi.org.nz/projects
Strategic opportunities
Research community and sector wide partnerships

e.g.
   – National Science Challenges
   – Research Data Management
   – Callaghan Institute (Advanced Technology Institute)



Leveraging further institutional capabilities?

   – Significant institutional investments into eScience and HPC resources
     that might be better coordinated nationally
Strategy & Plans


2012
• Strategic Plan under development
• Seeking sector feedback late 2012/early 2013

2013
• September: Contract review, to inform renegotiation and continuation from July 2014
Summary
NeSI is a big new investment (by the Crown and Sector, into HPC for NZ Science)
• It is making a world-class HPC ecosystem available to NZ Science
• It is a collaboration between Universities & CRIs
• Initial contract is funded until June 2014 – but NeSI will need to prove its success by
  September 2013 – it is in the long-term budget, if we perform

To be successful… (i.e. attract ongoing investment to support national HPC access)
    – NZ Scientists will need to demonstrate their need for HPC (see User Needs Survey)
    – This means transitioning from the problems that I can solve on “my” PC, and/or
      departmental cluster – to large scale HPC provided by NeSI
        • This is a function of the funding-round cycle too…
        • It will take time to learn new programming methods & tools: MPI, OpenMP, and new
          Operating Environments (a minimal MPI example follows below)
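As a taste of what that transition looks like, here is a minimal MPI sketch in Python, assuming the mpi4py package and an MPI runtime are available on the target system (an illustration, not a NeSI-specific recipe):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()      # this process's id within the parallel job
    size = comm.Get_size()      # total number of MPI processes

    # Each rank sums its own slice of the range, then the results are combined.
    local_sum = sum(range(rank, 1_000_000, size))
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"{size} ranks computed total = {total}")

Launched with, for example, "mpirun -np 4 python example.py", every rank runs the same script but works on a different slice of the problem.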




Summary
and, importantly:
    – PIs will have to demonstrate the value of NeSI by funding 20% of their access costs
        • This has implications for the way our Institutions provide access to Operational Expenditure
          (most prefer to provide Capital Expenditure)
    – Research Projects using NeSI will need to generate excellent science (that could not
      be done without these HPC facilities)
    – Research Projects and NeSI need to contribute to Government Outcomes


.. in which case we can expect a long period of HPC funding in the years ahead

.. and can aspire to a broader scope of eScience infrastructure, too…




Image: Natural Palette by Seth Darling and Muruganathan Ramanathan, Argonne's Center for Nanoscale Materials
Creative Commons Attribution Share-Alike - http://creativecommons.org/licenses/by-sa/2.0/

NeSI                     http://www.nesi.org.nz
Access Policy            http://www.nesi.org.nz/access-policy
    Eligibility          http://www.nesi.org.nz/eligibility
    Allocation Classes   http://www.nesi.org.nz/allocations
Application Forms        http://www.nesi.org.nz/apply
Calls Timetable          http://www.nesi.org.nz/timetable

Case Studies             http://www.nesi.org.nz/case-studies
Projects                 http://www.nesi.org.nz/projects

Facilities
    Centre for eResearch, Auckland
        http://www.eresearch.auckland.ac.nz/uoa
    BlueFern SuperComputer, Canterbury
        http://www.bluefern.canterbury.ac.nz
    FitzRoy HPCF, NIWA
        http://www.niwa.co.nz/our-services/hpcf

Mais conteúdo relacionado

Semelhante a NeSI HPC Roadshow 2012

100503 bioinfo instsymp
100503 bioinfo instsymp100503 bioinfo instsymp
100503 bioinfo instsympNick Jones
 
BeSTGRID OpenGridForum 29 GIN session
BeSTGRID OpenGridForum 29 GIN sessionBeSTGRID OpenGridForum 29 GIN session
BeSTGRID OpenGridForum 29 GIN sessionNick Jones
 
Federation and Interoperability in the Nectar Research Cloud
Federation and Interoperability in the Nectar Research CloudFederation and Interoperability in the Nectar Research Cloud
Federation and Interoperability in the Nectar Research CloudOpenStack
 
National Grid Singapore (Jon Lau Khee Erng)
National Grid Singapore (Jon Lau Khee Erng)National Grid Singapore (Jon Lau Khee Erng)
National Grid Singapore (Jon Lau Khee Erng)James Chan
 
Matthew Purss_"Skinning the multi-dimensional cat": how TERN and the Unlockin...
Matthew Purss_"Skinning the multi-dimensional cat": how TERN and the Unlockin...Matthew Purss_"Skinning the multi-dimensional cat": how TERN and the Unlockin...
Matthew Purss_"Skinning the multi-dimensional cat": how TERN and the Unlockin...TERN Australia
 
Why manage research data?
Why manage research data?Why manage research data?
Why manage research data?Graham Pryor
 
Ausplots Training - Session 1
Ausplots Training - Session 1Ausplots Training - Session 1
Ausplots Training - Session 1bensparrowau
 
SKA NZ R&D BeSTGRID Infrastructure
SKA NZ R&D BeSTGRID InfrastructureSKA NZ R&D BeSTGRID Infrastructure
SKA NZ R&D BeSTGRID InfrastructureNick Jones
 
Towards the Wikipedia of World Wide Sensors
Towards the Wikipedia of World Wide SensorsTowards the Wikipedia of World Wide Sensors
Towards the Wikipedia of World Wide SensorsCybera Inc.
 
Big Data is today: key issues for big data - Dr Ben Evans
Big Data is today: key issues for big data - Dr Ben EvansBig Data is today: key issues for big data - Dr Ben Evans
Big Data is today: key issues for big data - Dr Ben EvansARDC
 
ApacheCon NA 2013 VFASTR
ApacheCon NA 2013 VFASTRApacheCon NA 2013 VFASTR
ApacheCon NA 2013 VFASTRLucaCinquini
 
Welcome & Workshop Objectives: Introduction to COMPRES by Jay Bass, Universit...
Welcome & Workshop Objectives: Introduction to COMPRES by Jay Bass, Universit...Welcome & Workshop Objectives: Introduction to COMPRES by Jay Bass, Universit...
Welcome & Workshop Objectives: Introduction to COMPRES by Jay Bass, Universit...EarthCube
 
Scalable Data Mining and Archiving in the Era of the Square Kilometre Array
Scalable Data Mining and Archiving in the Era of the Square Kilometre ArrayScalable Data Mining and Archiving in the Era of the Square Kilometre Array
Scalable Data Mining and Archiving in the Era of the Square Kilometre ArrayChris Mattmann
 
National bee diagnostic centre beaverlodge, alberta canada
National bee diagnostic centre   beaverlodge, alberta canadaNational bee diagnostic centre   beaverlodge, alberta canada
National bee diagnostic centre beaverlodge, alberta canadaGPRC Research & Innovation
 

Semelhante a NeSI HPC Roadshow 2012 (20)

100503 bioinfo instsymp
100503 bioinfo instsymp100503 bioinfo instsymp
100503 bioinfo instsymp
 
100503 bioinfo instsymp
100503 bioinfo instsymp100503 bioinfo instsymp
100503 bioinfo instsymp
 
BeSTGRID OpenGridForum 29 GIN session
BeSTGRID OpenGridForum 29 GIN sessionBeSTGRID OpenGridForum 29 GIN session
BeSTGRID OpenGridForum 29 GIN session
 
Memory
MemoryMemory
Memory
 
Federation and Interoperability in the Nectar Research Cloud
Federation and Interoperability in the Nectar Research CloudFederation and Interoperability in the Nectar Research Cloud
Federation and Interoperability in the Nectar Research Cloud
 
National Grid Singapore (Jon Lau Khee Erng)
National Grid Singapore (Jon Lau Khee Erng)National Grid Singapore (Jon Lau Khee Erng)
National Grid Singapore (Jon Lau Khee Erng)
 
Matthew Purss_"Skinning the multi-dimensional cat": how TERN and the Unlockin...
Matthew Purss_"Skinning the multi-dimensional cat": how TERN and the Unlockin...Matthew Purss_"Skinning the multi-dimensional cat": how TERN and the Unlockin...
Matthew Purss_"Skinning the multi-dimensional cat": how TERN and the Unlockin...
 
Why manage research data?
Why manage research data?Why manage research data?
Why manage research data?
 
Ausplots Training - Session 1
Ausplots Training - Session 1Ausplots Training - Session 1
Ausplots Training - Session 1
 
Doc (2)
Doc (2)Doc (2)
Doc (2)
 
SKA NZ R&D BeSTGRID Infrastructure
SKA NZ R&D BeSTGRID InfrastructureSKA NZ R&D BeSTGRID Infrastructure
SKA NZ R&D BeSTGRID Infrastructure
 
Towards the Wikipedia of World Wide Sensors
Towards the Wikipedia of World Wide SensorsTowards the Wikipedia of World Wide Sensors
Towards the Wikipedia of World Wide Sensors
 
Big Data is today: key issues for big data - Dr Ben Evans
Big Data is today: key issues for big data - Dr Ben EvansBig Data is today: key issues for big data - Dr Ben Evans
Big Data is today: key issues for big data - Dr Ben Evans
 
eResearch@UCT
eResearch@UCTeResearch@UCT
eResearch@UCT
 
ApacheCon NA 2013 VFASTR
ApacheCon NA 2013 VFASTRApacheCon NA 2013 VFASTR
ApacheCon NA 2013 VFASTR
 
Cifar
CifarCifar
Cifar
 
Welcome & Workshop Objectives: Introduction to COMPRES by Jay Bass, Universit...
Welcome & Workshop Objectives: Introduction to COMPRES by Jay Bass, Universit...Welcome & Workshop Objectives: Introduction to COMPRES by Jay Bass, Universit...
Welcome & Workshop Objectives: Introduction to COMPRES by Jay Bass, Universit...
 
ELIXIR
ELIXIRELIXIR
ELIXIR
 
Scalable Data Mining and Archiving in the Era of the Square Kilometre Array
Scalable Data Mining and Archiving in the Era of the Square Kilometre ArrayScalable Data Mining and Archiving in the Era of the Square Kilometre Array
Scalable Data Mining and Archiving in the Era of the Square Kilometre Array
 
National bee diagnostic centre beaverlodge, alberta canada
National bee diagnostic centre   beaverlodge, alberta canadaNational bee diagnostic centre   beaverlodge, alberta canada
National bee diagnostic centre beaverlodge, alberta canada
 

Último

Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticscarlostorres15106
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slidespraypatel2
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...HostedbyConfluent
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsEnterprise Knowledge
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 3652toLead Limited
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersThousandEyes
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Paola De la Torre
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking MenDelhi Call girls
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonetsnaman860154
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationSafe Software
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?XfilesPro
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024BookNet Canada
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsMaria Levchenko
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j
 

Último (20)

Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slides
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping Elbows
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food Manufacturing
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
 

NeSI HPC Roadshow 2012

  • 1. 10:00am Coffee 10:30am Morning Session NeSI Presentation An overview of high performance computing (HPC) and NeSI provided by NeSI's Director Nick Jones. Demonstrations NeSI staff will be demonstrating some of the features of the HPC facilities, explaining how it they can fit into a research project. 12:30pm Lunch 1:30pm Afternoon Session Local Case Study An overview of a local researcher’s use of HPC facilities and aspirations for the national facilities provided by NeSI. Group discussions facilitated by the NeSI team A chance for researchers to connect and discuss how NeSI can benefit their projects. Staff from REANNZ and Tuakiri, the New Zealand Access Federation, will also be on hand to discuss the Science DMZ and Tuakiri respectively.
  • 2. Image: Natural Palette by Seth Darling and Muruganathan Ramanathan, Argonne's Center for Nanoscale Materials Creative Commons Attribution Share-Alike - http://creativecommons.org/licenses/by-sa/2.0/ Computing Facilities for NZ Researchers National HPC Roadshow 2012 Nick Jones, Director A partnership between NZ’s research institutions delivering advanced computational services for leading edge discovery
  • 3. NeSI, a national research infrastructure  High Performance Computing (HPC)  & eScience services  … for all NZ researchers NeSI HPC Facilities NeSI’s Roadmap Progress Report
  • 4. Genomics, Genetics, Bioinformatics, Molecular Modelling: NZ Genomics Ltd; Maurice Wilkins Centre; Alan Wilson Centre; Virtual Institute of Statistical Genetics
       Wind Energy, Geothermal & Minerals Exploration: GNS Exploration; Institute for Earth Science and Engineering; Centre for Atmospheric Research
       Earthquakes, Tsunami, Volcanoes: Natural Hazards Research Platform; DEVORA Auckland Volcanic Field; GeoNet
       Nanotechnology and High Technology Materials: MacDiarmid Institute; Materials TRST
       Human Development, Bioengineering, Social Statistics: National Research Centre for Growth and Development; Auckland Bioengineering Institute; Liggins Institute; Malaghan Institute; Social Sciences Data Service
       Invasive Species, Water / Land Use, Emissions: Agricultural Greenhouse Gas Centre; Bio-Protection Research Centre; National Climate Change Centre
  • 5. Research e-Infrastructure Roadmap – 2010 (timeline diagram, 2006–2014):
       Genomics: NZ Genomics Ltd
       Data: –
       eScience Services: BeSTGRID
       Cluster: BeSTGRID
       HPC: BlueFern BlueGene/L; NIWA Cray; NeSI @ NIWA: P575
       Identity: BeSTGRID Federation
       Network: REANNZ KAREN Advanced Research Network
  • 6. How did NeSI arise? In October 2010 an Investment Case entitled “National eScience Infrastructure (NeSI) High Performance Computational Platforms and Services for NZ’s Research Communities” was: • Submitted to the Minister of Research, Science & Technology • Prepared by a Working Group of representatives from: UoA, UC, UoO, Landcare, AgResearch & NIWA (under an independent Chair) The Investment Case asserted that: – HPC and related eScience infrastructure are indispensable components of modern science, and are having a major impact on almost every branch of research; – By taking a sector approach, more efficient coordination and cooperation would be achieved, leading to strategically targeted investment in HPC; – Thereby providing international-scale HPC to a wide range of communities and disciplines. Formulated following a “Needs Analysis” during 2010
  • 7. What “we” said about our HPC needs… In 2010, NZ researchers were surveyed to determine their existing and anticipated HPC requirements. We (~194 of us) said, for example:
       Processors to run a code: <10 | 10–100 | 100–1000 | >10,000
         In 2010: 40% | 39% | 15% | 5%
         In 2015: 12% | 32% | 38% | 17%
       File space per experiment: <100 GB | 1–10 TB | 100 TB | >1 PB
         In 2010: 64% | 35% | 0% | 0.6%
         In 2015: 27% | 58% | 13% | 3%
       Off-site data transfers/day: <100 MB | 1 GB | 1 TB | >10 TB
         In 2010: 33% | 42% | 21% | 4%
         In 2015: 17% | 28% | 33% | 23%
  • 8. Why hadn’t this already happened? Coordination failure: a collective failure to coordinate investment decisions for the greater benefit (value, efficiency). We see a coordination failure (coordination of capital and resources) leading to HPC facilities that don’t achieve the scale required for demanding research applications. Funding strategy aligns Opex and Capex – Crown and collaborators fund both, clarifying intent.
       Value to Crown – why do we need Government investment? • overcome coordination failure • reach scale and maturity • efficient investment in critical infrastructure
       Value to Institutions – why co-invest into national infrastructure? • new investment into institutions and sector • create scalable mature facilities • trust and control ensure good fit to needs
  • 9. What did we decide on? Shared facilities shared facilities accessible nationally National Access scheme single Access Policy for all HPC facility access National Team single Team nationally sharing practices and expertise
  • 11. [Diagram: NeSI investment, governance and access model] The Crown invests over 3 years, plus out-years, alongside three collaborating sites: Auckland / Landcare / Otago (people, commodity cluster, storage, virtual hosting, collaborative cluster & services), Canterbury (people, HPC, new storage, HPC & services) and NIWA (people, HPC capacity, storage). Each partner also contributes a portion of facilities, power and depreciation costs, on a transparent-costs basis. Governance is by the Principal Investors and Independent Directors, with a Crown Observer. NeSI’s national role covers procurement, operational management, data and compute access, outreach, and scientific computing experts. Researchers, institutional investors, other research institutions (non-investors) and private industry all reach the facilities through a “single front door”, with managed and metered access to resources across all resource types; capacity is scaled out from the partners’ capabilities, with specialised concentrations of capability at each institution. Capacity available to investing institutions reflects their level of investment and their access costs are covered by that investment, while access fees for others are calculated from the full (or initially partial) costs of metered use of resources and reviewed annually.
  • 12. How is NeSI Funded? There are three anchor partners: – The University of Auckland (receives Crown funding of ~$2.2M pa); • including two associates (Associate Investors): Landcare and University of Otago. – University of Canterbury (receives Crown funding of ~$2.6M pa); – NIWA (receives Crown funding of ~$1.0M pa). Crown investment is $27M over 3 years. Institutional investment is $21M over 3 years – $15.4M in Capital Expenditure & $5.6M in Operational Expenditure. Which provides: – 4 × HPC systems at the Universities of Auckland & Canterbury, and at NIWA – NeSI Directorate at Auckland (5 staff positions + admin) 5.5 FTEs – Systems Engineers ~5.5 FTEs, Services & Application Engineers ~5.5 FTEs, Site Management 1.8 FTEs – HPC specialist Scientific Programmers ~5.7 FTEs. And access is allocated in the following way: • anchor partners (called Collaborators) reserve 60% of the HPC capacity for their purposes • and 40% is available to any researcher at a public research institution in NZ with an active peer-reviewed grant
  • 13. What are NeSI’s objectives? 1. Creating an advanced, scalable computing infrastructure to support New Zealand’s research communities; – i.e. International scale as opposed to institutional/department scale. 2. Providing the grid middleware, research tools and applications, data management, user- support, and community engagement needed for the best possible uptake (of HPC); – i.e. enable efficient use of and easy access to these systems. 3. Encouraging a high level of coordination and cooperation within the research sector; – i.e. fit the science to the HPC as opposed to “my institution’s resources” (shared services and resources). 4. Contributing to high quality research outputs from the application of advanced computing and data management techniques and associated services, which support the Government’s published priorities for science. – i.e. It’s all about better science to underpin national outcomes.
  • 14.
  • 15. Research e-Infrastructure Roadmap – 2012 (timeline diagram, 2006–2014 and beyond), showing NeSI (New Zealand eScience Infrastructure), contracted 2011–2014 with a horizon of 2014–2019:
       Genomics: NZ Genomics Ltd; NZ Genomics Ltd Bioinformatics Cloud
       Data: Research Data Infrastructure
       eScience Services: BeSTGRID; NeSI eScience services
       Cluster: BeSTGRID; NeSI @ Auckland: Intel + GPU Cluster
       HPC: BlueFern BlueGene/L; NeSI @ BlueFern: BlueGene/P + P7; NIWA Cray; NeSI @ NIWA: P575
       Identity: BeSTGRID Federation; Tuakiri – Research Access Federation
       Network: REANNZ KAREN Advanced Research Network; KAREN 2.0
  • 16. An Introduction to Supercomputing Dr. Michael J. Uddstrom Director, NIWA High Performance Computing Facility michael.uddstrom@niwa.co.nz
  • 18. What is a Supercomputer (or HPC)? • There are two “basic” types: – Capability (aka Supercomputers): provide the maximum computing power available to solve large problems: the emphasis is on problem size (large memory, lots of cores), e.g.: • IBM p775/p7 & p575/p6, Cray XK6, IBM BG/Q & BG/P – Capacity: typically use efficient cost-effective computing components: the emphasis is on throughput (dealing with loads larger than a single PC/small cluster), e.g.: • IBM iDataPlex, HP Cluster Platform n000 • The essential differences between Capability & Capacity systems are: – the interconnect fabric performance; – the processor performance; and – reliability (i.e. resiliency to component failure). • Supercomputers have high efficiency: – Efficiency = sustained performance / peak performance (%).
  • 19.
  • 20. What types of HPC do we need? • It depends on the problem (and the data locality) • Is it “Embarrassingly Parallel” (EP)? – This means the problem can be split into independent tasks, with each sent to a different processor: • e.g. image rendering, classification, Monte Carlo calculations, BLAST, etc. (see the sketch below) • If not EP, then is it highly scalable? – This means the problem does not place high demands on processor performance – because the coupling between processors is relatively “loose” and you can use very many: • e.g. materials codes, Schrödinger’s equation, DCA++, LSMS, NWChem… • If not EP and not highly scalable, then – the problem will place high demands on processor performance and on the interconnect between processors: • Examples: numerical weather and climate prediction, variational data assimilation, combustion, seismology, POP, …
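To make the EP case concrete, here is a minimal, illustrative Python sketch (not from the original slides): a Monte Carlo estimate of pi split across independent worker processes. Because every sample is independent, the work divides cleanly across cores with no communication between them, which is exactly what makes such problems a good fit for capacity clusters.

```python
# Minimal illustration of an embarrassingly parallel (EP) workload:
# Monte Carlo estimation of pi, split across independent processes.
# Hypothetical example for illustration only; not part of the NeSI slides.
import random
from multiprocessing import Pool

def count_hits(n_samples: int) -> int:
    """Count random points that land inside the unit quarter-circle."""
    rng = random.Random()
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    n_workers, samples_per_worker = 8, 1_000_000
    with Pool(n_workers) as pool:
        # Each task is independent, so no communication is needed between workers.
        hits = pool.map(count_hits, [samples_per_worker] * n_workers)
    pi_estimate = 4.0 * sum(hits) / (n_workers * samples_per_worker)
    print(f"pi ~ {pi_estimate:.5f}")
```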
  • 21. NWP: The Computational Challenge. [Chart: Numerical Weather Prediction model scaling (grid size: 744 × 928 × 70, no science I/O); wall-clock time (seconds) versus core count at 124, 256, 504, 1004 and 2020 cores.] • Speedup on 2020 4.7 GHz cores: 7.4 • Relative number of cores: 16.3 • i.e. about 45% of possible scaling
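The 45% figure is simply the measured speedup divided by the ideal (linear) speedup implied by the increase in core count. A minimal sketch using only the numbers quoted on the slide:

```python
# Scaling efficiency for the NWP example above:
# a measured speedup of 7.4 when the core count grows from 124 to 2020 cores.
base_cores, scaled_cores = 124, 2020
measured_speedup = 7.4

ideal_speedup = scaled_cores / base_cores      # ~16.3x if scaling were perfect
efficiency = measured_speedup / ideal_speedup  # fraction of the ideal scaling achieved
print(f"ideal speedup ~ {ideal_speedup:.1f}x, efficiency ~ {efficiency:.0%}")  # ~16.3x, ~45%
```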
  • 22. The Limits to Scalability: Amdahl’s Law. Amdahl’s law states that if P is the proportion of a program that can be made parallel, and (1 − P) is the proportion that cannot be parallelized (remains serial), then the maximum speedup that can be achieved by using N processors is: Speedup(N) = 1 / ((1 − P) + P/N). In the limit, as N tends to infinity, the maximum speedup tends to 1 / (1 − P). In practice, the performance-to-price ratio falls rapidly as N is increased once there is even a small component of (1 − P). [Chart: speedup versus number of processors (1 to 8192) for P = 0.99, 0.95, 0.9, 0.5 and 0.2.] e.g. if P = 0.99 and N = 64, then speedup = 39.3 (perfect = 64); if P = 0.99 and N = 1024, then speedup = 91.2 (perfect = 1024); if P = 0.90 and N = 1024, then speedup = 9.9 (perfect = 1024). It’s a challenge… but this is “our” future!!
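A minimal sketch (illustrative, not from the slides) that evaluates Amdahl's formula for the worked examples above:

```python
# Amdahl's law: maximum speedup on N processors when a fraction P of the
# program can be parallelised and (1 - P) must remain serial.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p, n in [(0.99, 64), (0.99, 1024), (0.90, 1024)]:
    print(f"P = {p:.2f}, N = {n:4d} -> speedup = {amdahl_speedup(p, n):5.1f} (perfect = {n})")

# P = 0.99, N =   64 -> speedup =  39.3 (perfect = 64)
# P = 0.99, N = 1024 -> speedup =  91.2 (perfect = 1024)
# P = 0.90, N = 1024 -> speedup =   9.9 (perfect = 1024)
```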
  • 23. Load Balancing • Consider a global atmospheric model: – Decompose the grid across the processors (chess-board like); – ~50% of the planet is in darkness at any given time, so only longwave radiation calculations are needed there; – ~50% of the planet is in sunlight at any given time, so both longwave and shortwave radiation calculations are needed there; – P is going to be less than optimal (without some effort)! See the toy sketch below. • Then there is the matter of the poles. • But… HPC means we can at last solve really large problems!
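As a toy illustration of the day/night imbalance (the 2× cost ratio is an assumed number, not from the slides): if sunlit grid columns cost roughly twice as much as dark ones because of the extra shortwave radiation work, a decomposition that ignores day/night leaves the "dark" processors waiting on the "sunlit" ones every time step.

```python
# Toy illustration of day/night load imbalance in a domain-decomposed atmospheric model.
# Assumption (for illustration only): a sunlit column costs 2 units of work, a dark column 1 unit.
def parallel_efficiency(work_per_rank):
    """Mean work per rank divided by max work per rank; the slowest rank gates each step."""
    return sum(work_per_rank) / (len(work_per_rank) * max(work_per_rank))

n_ranks = 64
# Naive decomposition: half the ranks hold only sunlit columns, half hold only dark columns.
naive = [2.0] * (n_ranks // 2) + [1.0] * (n_ranks // 2)
# Balanced decomposition: every rank holds an equal mix of sunlit and dark columns.
balanced = [1.5] * n_ranks

print(f"naive efficiency    ~ {parallel_efficiency(naive):.0%}")     # ~75%
print(f"balanced efficiency ~ {parallel_efficiency(balanced):.0%}")  # 100%
```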
  • 24. Applications running on Jaguar at ORNL (2011) – domain / code / institution / number of cores / sustained performance / notes:
       – Materials: DCA++, ORNL, 213,120 cores, 1.9 PF (2008 Gordon Bell Prize Winner)
       – Materials: WL-LSMS, ORNL/ETH, 223,232 cores, 1.8 PF (2009 Gordon Bell Prize Winner)
       – Chemistry: NWChem, PNNL/ORNL, 224,196 cores, 1.4 PF (2008 Gordon Bell Prize Finalist)
       – Nanoscience: OMEN, Duke, 222,720 cores, > 1 PF (2010 Gordon Bell Prize Finalist)
       – Biomedical: MoBo, GaTech, 196,608 cores, 780 TF (2010 Gordon Bell Prize Winner)
       – Chemistry: MADNESS, UT/ORNL, 140,000 cores, 550 TF
       – Materials: LS3DF, LBL, 147,456 cores, 442 TF (2008 Gordon Bell Prize Winner)
       – Seismology: SPECFEM3D, USA (multiple), 149,784 cores, 165 TF (2008 Gordon Bell Prize Finalist)
       – Combustion: S3D, SNL, 147,456 cores, 83 TF
       – Weather: WRF, USA (multiple), 150,000 cores, 50 TF (just 3.6% of DCA++ performance)
       Highly scalable codes sustain a high fraction of peak performance; tightly coupled problems sustain only a small fraction of peak. (New Zealand HPC Applications Workshop, Wellington, Wednesday, July 4, 2012)
  • 25. NeSI HPCs in Summary • University of Auckland: – IBM iDataPlex Intel processor Cluster, large node memory + some exotic hardware (i.e. GPGPUs) (Pan) • General purpose HPC cluster • Optimised for EP and Highly Scalable problems; • University of Canterbury: – IBM BlueGene/P Supercomputer • Optimised for EP and Highly Scalable problems; – IBM p755/POWER7 cluster • General purpose / capability HPC cluster – IBM iDataPlex Visualisation Cluster • NIWA High Performance Computing Facility: – IBM p575/POWER6 Supercomputer (FitzRoy) • Optimised for tightly coupled (large) problems • Operational/production-ready (i.e. IBM Support arrangements)
  • 26. Other Points about HPCs • Provide ultimate compute performance; • Provide ultimate I/O performance (think scalability): – Parallel file systems (read / write simultaneously from/to multiple cores); – Very high I/O throughput (e.g. multi-GB/s). • Data management matters – these systems generate vast amounts of data: – Hierarchical Storage Management systems. • Jobs run in a batch-processing environment: – There will be time and resource limits on queues; – Need to be able to “self checkpoint” – i.e. write restart files (see the sketch below); – It’s a shared resource. • Often characterised by a small number of large users.
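Because queues have wall-time limits, a long simulation rarely finishes in one job: it must periodically write a restart file and resume from it in the next job. A minimal, hypothetical sketch of that pattern in Python (the file name, limits and state here are illustrative assumptions, not a NeSI-specific interface):

```python
# Minimal self-checkpointing pattern for a batch environment with wall-time limits.
# All names and numbers below are illustrative assumptions, not NeSI conventions.
import json
import os
import time

CHECKPOINT = "restart.json"
WALL_LIMIT_SECONDS = 11.5 * 3600   # assume a 12-hour queue limit, minus a safety margin
CHECKPOINT_EVERY = 1000            # steps between restart files

def load_state():
    """Resume from the last restart file if one exists, otherwise start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"step": 0, "value": 0.0}

def save_state(state):
    """Write the restart file atomically so a crash never leaves a half-written file."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT)

start = time.time()
state = load_state()
while state["step"] < 1_000_000:
    state["value"] += 1.0          # stand-in for one simulation time step
    state["step"] += 1
    if state["step"] % CHECKPOINT_EVERY == 0:
        save_state(state)
    if time.time() - start > WALL_LIMIT_SECONDS:
        save_state(state)          # stop cleanly; the next batch job resumes from here
        break
```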
  • 27. Image: Lagoon by Paul Podsiadlo and Elena Shevchenko, Argonne's Center for Nanoscale Materials Creative Commons Attribution Share-Alike - http://creativecommons.org/licenses/by-sa/2.0/ NeSI Roadmap
  • 28. Access Scheme Allocation Classes • Proposal Development Free • Research (peer reviewed) 80% subsidy by NeSI • Educational Free • Collaborator Free • Private Industry Full cost recovery Access Policy Advisory Group • Development of an Access Policy, in partnership with funding agencies • Members are senior representatives of science and funding providers … more from Technical Qualification Panel Chair, Mark Cheeseman, shortly..
  • 29. Complexity & Computation
       Challenges for researchers: • Multiple competencies required for modern computational approaches – “Computational Thinking” • Computational facilities access and efficient use • Maintaining links with leaders in relevant fields • Communicating with institutions and facilities and colleagues in other disciplines
       Challenges for institutions: • Comprehending opportunities that support and grow research • Education and skills training • Supporting distributed teamwork • Sustaining and growing research computing infrastructure • Coordination and collaboration across institutional boundaries
  • 30. Scaling scientific computing. NeSI plays a variety of roles in addressing these needs…
       Research computation – scaling up/out: – Code porting and tuning and scaling scientific applications – Bridging from Campus to National – Bridging from National to International – Computation as a Service
       Infrastructure Partnerships – scaling platforms: – Research communities scaling up, collaboration nationally & internationally – Sector-wide research infrastructures, e.g. HPC, Research Data – Institutional/campus bridging – Innovation, new facilities/services to meet evolving needs
       Experts: available through our Research allocation class. Partners: REANNZ, Tuakiri, Institutional IT Services, AeRO, Caudit
  • 31. NZers are Computational Research leaders… Medical & Health Sciences Biological Sciences Physical Sciences Engineering Math & Statistics Humanities Social Sciences
  • 32. Outreach & Education Communities of Practice: 3rd year for… • NZ eResearch symposium – annual meeting that highlights work in the community to address researcher needs • 2013, Christchurch 1st year for... • NZ HPC Applications Workshop – focused on building a NZ community of researchers developing scientific applications • 2013, Christchurch Planned… • eResearch leaders forum Education: 3rd year for… • Summer of eResearch – annual summer scholarship programme bringing together software engineering students with researchers to tackle IT problems at a national scale • Aim is to develop a balanced programme, where institutions and Crown jointly invest into building capability through education Planned… • Recruiting a national NeSI Education Lead, who will act as a colleague of those teaching HPC and eScience across the sector, collaborate on curriculum development, design, and delivery, and engage in training
  • 33. eScience Services • DataFabric (Federated Data Sharing Service) – Hosted at multiple redundant sites – Using best of breed platform: Integrated Rule Oriented Data Service (iRODS) – Tuakiri authentication – WWW / WebDAV / FUSE / GridFTP – Status: Prototype (BeSTGRID migration to NeSI) • Data Transfer – High throughput transfer of data across REANNZ network nationally, and internationally – GridFTP and GlobusOnline – Status: Prototype (BeSTGRID migration to NeSI) • Hosting (virtual) for national services – Hosting of databases and core NeSI services – Status: Prototype (New platform)
  • 34.
  • 35. Image: Natural Palette by Seth Darling and Muruganathan Ramanathan, Argonne's Center for Nanoscale Materials Creative Commons Attribution Share-Alike - http://creativecommons.org/licenses/by-sa/2.0/ Progress Report
  • 36. Collaboration “Big scientific problems require big teams these days and our current institutional arrangements, with their high transaction costs and researcher-scale accountabilities, are ill-suited to meet such challenges. Putting together large, multi-institutional teams to tackle complex problems remains depressingly difficult in the New Zealand environment.” 03/2012, Shaun Hendy, President, NZAS
  • 37. NeSI Status • Creating an advanced, scalable computing infrastructure to support New Zealand’s research communities – 3 international scale HPCs operating • Providing the grid middleware, research tools and applications, data management, user-support, and community engagement needed for the best possible uptake (of HPC) – NeSI is staffed (few open positions), grid middleware being developed, community engagement underway (HPC Workshop, Presentations beginning) • Encouraging a high level of coordination and cooperation within the research sector – Has been a challenge – but Auckland, Canterbury & NIWA working together for the good of NZ Science • Contributing to high quality research outputs… which support the Government’s published priorities for science – Too early to tell – but we need to do so within the next 12 months 52
  • 38. NeSI Statistics Operational HPC facilities since January 2012 First Call for proposals for national allocations: May 2012 Current Call – September: Third Call 28 Projects across development and research classes Allan Wilson Centre, ANU, Auckland, Auckland DHB, Canterbury, CSIRO, ESR, GNS, Lincoln, Livestock Improvement Corporation, IRL, MacDiarmid Institute, Massey, Met Service, NIWA, Ohio State, Otago, Scion, VUW, Waikato http://www.nesi.org.nz/projects
  • 39. Strategic opportunities Research community and sector wide partnerships e.g. – National Science Challenges – Research Data Management – Callaghan Institute (Advanced Technology Institute) Leveraging further institutional capabilities? – Significant institutional investments into eScience and HPC resources that might be better coordinated nationally
  • 40. Strategy & Plans. 2012: • Strategic Plan under development • Seeking sector feedback late 2012/early 2013. 2013: • September: Contract review, to inform renegotiation and continuation from July 2014.
  • 41. Summary NeSI is a big new investment (by the Crown and Sector, into HPC for NZ Science) • It is making a world-class HPC ecosystem available to NZ Science • It is a collaboration between Universities & CRIs • Initial contract funded till June 2014 – but will need to prove its success by September 2013 – is in long term budget, if we perform To be successful… (i.e. attract ongoing investment to support national HPC access) – NZ Scientists will need to demonstrate their need for HPC (see User Needs Survey) – This means transitioning from the problems that I can solve on “my” PC, and/or departmental cluster – to large scale HPC provided by NeSI • This is a function of the funding-round cycle too… • It will take time to learn new programming methods & tools: MPI, OpenMP, and new Operating Environments 56
  • 42. Summary and, importantly: – PIs will have to demonstrate the value of NeSI by funding 20% of their access costs • This has implications for the way our Institutions provide access to Operational Expenditure (most prefer to provide Capital Expenditure) – Research Projects using NeSI will need to generate excellent science (that could not be done without these HPC facilities) – Research Projects and NeSI need to contribute to Government Outcomes .. in which case we can expect a long period of HPC funding in the years ahead .. and can aspire to a broader scope of eScience infrastructure, too… 57
  • 43. Image: Natural Palette by Seth Darling and Muruganathan Ramanathan, Argonne's Center for Nanoscale Materials Creative Commons Attribution Share-Alike - http://creativecommons.org/licenses/by-sa/2.0/ NeSI http://www.nesi.org.nz Access Policy http://www.nesi.org.nz/access-policy Eligibility http://www.nesi.org.nz/eligibility Allocation Classes http://www.nesi.org.nz/allocations Application Forms http://www.nesi.org.nz/apply Calls Timetable http://www.nesi.org.nz/timetable Case Studies http://www.nesi.org.nz/case-studies Projects http://www.nesi.org.nz/projects Facilities Center for eResearch, Auckland http://www.eresearch.auckland.ac.nz/uoa BlueFern SuperComputer, Canterbury http://www.bluefern.canterbury.ac.nz FitzRoy HPCF, NIWA http://www.niwa.co.nz/our-services/hpcf

Editor's Notes

  1. Image credit: http://www.flickr.com/photos/argonne/3974231329/in/set-72157622501409938 – Natural Palette, by Seth Darling and Muruganathan Ramanathan. In order to invent new materials to use in better batteries, solar cells and other technological advances, scientists must delve deeply into the nanoscale—the nearly atomic scale where structures determine how materials react with each other. At the nanoscale, anything can happen; materials can change colors and form into astonishing structures. These are some of the results from studies at the nanoscale. ABOVE: An optical micrograph of a hybrid organic/inorganic polymer film that exhibits both nanoscale and microscale structures due to a competition between self-assembly and crystallization. These materials provide fundamental insights into polymer science and have potential application in nanoscale pattern transfer processes. Previously published as cover image in Soft Matter, DOI: 10.1039/b902114k (2009). Argonne National Laboratory.
  2. Image credits: NWP – 3D plot temp, moisture [BlueFern]; Gaston Flanagan
  3. Industry capability enhancement…
  4. Key points here are that a) Collaborators needed to put in serious money, b) by way of compensation – Collaborators get to control 60% of the HPC resources (core-hours, disk & tape) c) 40% is available for Merit Access – where the 40% is 40% of the resource purchased by NeSI… so corner cases are – i) that NeSI Merit Users could use all of the HPC for 40% of a year, or 40% of the cores over the whole year. i.e. The size of any problem tackled is not limited to 40% of the cores on the machine.
  5. Image credit: http://www.flickr.com/photos/argonne/3974992962/in/set-72157622501409938/ – Lagoon, by Paul Podsiadlo and Elena Shevchenko, Argonne's Center for Nanoscale Materials. In order to invent new materials to use in better batteries, solar cells and other technological advances, scientists must delve deeply into the nanoscale—the nearly atomic scale where structures determine how materials react with each other. At the nanoscale, anything can happen; materials can change colors and form into astonishing structures. These are some of the results from studies at the nanoscale. ABOVE: The colorful pattern represents a film of 7.5-nanometer lead sulfide nanocrystals evaporated on top of a silicon wafer. The islands are formed by micron-size “supercrystals”—faceted 3-D assemblies of the same nanocrystals. The picture is a true, unaltered image, obtained with an optical microscope in reflected light mode. Photo courtesy of Argonne National Laboratory.
  6. I think it is very important for potential users to understand the differences between “Capability” – i.e. BG/P and P575/P6 (FitzRoy) and “Capacity” – i.e. iDataPlex (pan) – and the types of problems that each system is optimised to solve. This will help them to understand why NeSI has the types of systems it does have.
  7. Obviously – machines that can handle the tightly coupled problems can also handle the simpler, less demanding “highly scalable” and EP problems… but you want to fill them up in that order (i.e. using them for their primary purpose first, and after that backfilling with the less demanding (on the interconnect) jobs).
  8. This law (see http://en.wikipedia.org/wiki/Amdahl%27s_law) places a very severe constraint on scalability for tightly coupled codes! If only 1% of the code cannot be parallelised – then you have almost no scalability beyond about 64 cores. This is the reason that underlies the “poor” scalability of weather codes on the Oak Ridge 200,000 core Cray that Thomas showed last week, and the reason why we need the highest performance processors available in order to compute solutions within a reasonable time – and the reason why NIWA purchased an IBM p575/p6 rather than a BG/P. There are variations on this point of view – e.g. Gustafson’s Law – but the essential point remains. Of course, for embarrassingly parallel (EP) problems P is essentially 1, and the size of the problem scales according to the number of processors.
  9. This law (see http://en.wikipedia.org/wiki/Amdahl%27s_law) places a very severe constraint on scalability for tightly coupled codes! If only 1% of the code cannot be parallelised – then you have almost no scalability beyond about 64 cores. This is the reason that underlies the “poor” scalability of weather codes on the Oak Ridge 200,000 core Cray that Thomas showed last week, and the reason why we need the highest performance processors available in order to compute solutions within a reasonable time – and the reason why NIWA purchased an IBM p575/p6 rather than a BG/P. There are variations on this point of view – e.g. Gustafson’s Law – but the essential point remains. Of course, for embarrassingly parallel (EP) problems P is essentially 1, and the size of the problem scales according to the number of processors.
  10. The point of this slide is to demonstrate the inherent scalability of some types of problems – e.g. materials codes scale well (e.g. will work well on BG, iDataPlex (and FitzRoy)), while weather codes (tightly coupled) do not – so they will require high processor performance – hence FitzRoy (and possibly iDataPlex). Of course these numbers are also a function of problem size… to get a problem to scale to very large processor counts, it does need to be very large in the first place (no point in having just a “few” data points on each core!)
  11. Note: The US State Department places restrictions on who may use some of these computers – nations from ITAR states are unlikely to be able to use the BG/P (i.e. as I understand it the embargoed countries according to the International Traffic in Arms Regulations (ITAR) section 126.1 are: http://www.pmddtc.state.gov/embargoed_countries/index.html). No such limitations apply to the p575/p6 (FitzRoy) or the UC p755 system, and I expect (TBC) that Pan (the iDataPlex) has no restrictions on access either…
  12. Image credit: http://www.flickr.com/photos/argonne/3974992962/in/set-72157622501409938/ – Lagoon, by Paul Podsiadlo and Elena Shevchenko, Argonne's Center for Nanoscale Materials. In order to invent new materials to use in better batteries, solar cells and other technological advances, scientists must delve deeply into the nanoscale—the nearly atomic scale where structures determine how materials react with each other. At the nanoscale, anything can happen; materials can change colors and form into astonishing structures. These are some of the results from studies at the nanoscale. ABOVE: The colorful pattern represents a film of 7.5-nanometer lead sulfide nanocrystals evaporated on top of a silicon wafer. The islands are formed by micron-size “supercrystals”—faceted 3-D assemblies of the same nanocrystals. The picture is a true, unaltered image, obtained with an optical microscope in reflected light mode. Photo courtesy of Argonne National Laboratory.
  13. I think it is very important for potential users to understand the differences between “Capability” – i.e. BG/P and P575/P6 (FitzRoy) and “Capacity” – i.e. iDataPlex (pan) – and the types of problems that each system is optimised to solve. This will help them to understand why NeSI has the types of systems it does have.
  14. This law (see http://en.wikipedia.org/wiki/Amdahl%27s_law) places a very severe constraint on scalability for tightly coupled codes! If only 1% of the code cannot be parallelised – then you have almost no scalability beyond about 64 cores. This is the reason that underlies the “poor” scalability of weather codes on the Oak Ridge 200,000 core Cray that Thomas showed last week, and the reason why we need the highest performance processors available in order to compute solutions within a reasonable time – and the reason why NIWA purchased an IBM p575/p6 rather than a BG/P. There are variations on this point of view – e.g. Gustafson’s Law – but the essential point remains. Of course, for embarrassingly parallel (EP) problems P is essentially 1, and the size of the problem scales according to the number of processors.
  15. Obviously – machines that can handle the tightly coupled problems can also handle the simpler, less demanding “highly scalable” and EP problems… but you want to fill them up in that order (i.e. using them for their primary purpose first, and after that backfilling with the less demanding (on the interconnect) jobs).
  16. Note: The US State Department places restrictions on who may use some of these computers – nations from ITAR states are unlikely to be able to use the BG/P (i.e. as I understand it the embargoed countries according to the International Traffic in Arms Regulations (ITAR) section 126.1 are: http://www.pmddtc.state.gov/embargoed_countries/index.html). No such limitations apply to the p575/p6 (FitzRoy) or the UC p755 system, and I expect (TBC) that Pan (the iDataPlex) has no restrictions on access either…
  17. * Westmere currently in use, Sandy Bridge to be commissioned July 2012
  18. * Westmere currently in use, Sandy Bridge to be commissioned July 2012
  19. FitzRoy Upgrade now expected in Q4 2012.
  20. So these codes are of the complex, multi-PDE kind (as opposed to the largely single-PDE materials and majority(?) life science codes) – which by their very nature are not highly scalable (see Amdahl’s Law slide), so to solve large problems they need the highest performance processors available. This explains why these codes don’t find a natural home on systems like BG – where the memory / node is small, and the individual processor performance is slow.
  21. Image credit: http://www.flickr.com/photos/argonne/3974992962/in/set-72157622501409938/ – Lagoon, by Paul Podsiadlo and Elena Shevchenko, Argonne's Center for Nanoscale Materials. In order to invent new materials to use in better batteries, solar cells and other technological advances, scientists must delve deeply into the nanoscale—the nearly atomic scale where structures determine how materials react with each other. At the nanoscale, anything can happen; materials can change colors and form into astonishing structures. These are some of the results from studies at the nanoscale. ABOVE: The colorful pattern represents a film of 7.5-nanometer lead sulfide nanocrystals evaporated on top of a silicon wafer. The islands are formed by micron-size “supercrystals”—faceted 3-D assemblies of the same nanocrystals. The picture is a true, unaltered image, obtained with an optical microscope in reflected light mode. Photo courtesy of Argonne National Laboratory.
  22. Approved grants: competitive peer-reviewed grants – international, national, institutional – with some examples. Money to cover costs can come from anywhere. Strategy? Free then paid? No, timeline too short due to review in Sept 2013. HPC World as a reference for best practice on the access policy.
  23. Exemplary computational software developed in NZ, for example Geneious, R, Beast (Alexei Drummond), Cylc (Hilary Oliver), Gerris (Stéphane Popinet). Plug: NZ Open Source Awards, Open Science category.
  24. Image credit: http://www.flickr.com/photos/argonne/3974231329/in/set-72157622501409938 – Natural Palette, by Seth Darling and Muruganathan Ramanathan. In order to invent new materials to use in better batteries, solar cells and other technological advances, scientists must delve deeply into the nanoscale—the nearly atomic scale where structures determine how materials react with each other. At the nanoscale, anything can happen; materials can change colors and form into astonishing structures. These are some of the results from studies at the nanoscale. ABOVE: An optical micrograph of a hybrid organic/inorganic polymer film that exhibits both nanoscale and microscale structures due to a competition between self-assembly and crystallization. These materials provide fundamental insights into polymer science and have potential application in nanoscale pattern transfer processes. Previously published as cover image in Soft Matter, DOI: 10.1039/b902114k (2009). Argonne National Laboratory.
  25. Image credit: http://www.flickr.com/photos/argonne/3974231329/in/set-72157622501409938 – Natural Palette, by Seth Darling and Muruganathan Ramanathan. In order to invent new materials to use in better batteries, solar cells and other technological advances, scientists must delve deeply into the nanoscale—the nearly atomic scale where structures determine how materials react with each other. At the nanoscale, anything can happen; materials can change colors and form into astonishing structures. These are some of the results from studies at the nanoscale. ABOVE: An optical micrograph of a hybrid organic/inorganic polymer film that exhibits both nanoscale and microscale structures due to a competition between self-assembly and crystallization. These materials provide fundamental insights into polymer science and have potential application in nanoscale pattern transfer processes. Previously published as cover image in Soft Matter, DOI: 10.1039/b902114k (2009). Argonne National Laboratory.