Price Modelling
Risk Analysis
High Performance Computing 2014/15
Technology Compass Special
Technology Compass
Table of Contents and Introduction
IBM Technical Computing
  Accelerate Insights and Results
  IBM Application Ready Solutions
Enterprise-Ready Cluster & Workload Management
  IBM Platform HPC
  What's New in IBM Platform LSF 8
  IBM Platform MPI 8.1
Scalable, Energy Efficient HPC Systems
  IBM NeXtScale System
General Parallel File System (GPFS)
  Technologies That Enable the Management of Big Data
  What's New in GPFS Version 3.5
  GPFS Storage Server – a home for Big Data
More than 30 years of experience in
Scientific Computing
1980 marked the beginning of a decade where numerous startups
were created, some of which later transformed into big players in
the IT market. Technical innovations brought dramatic changes to
the nascent computer market. In Tübingen, close to one of Germa-
ny’s prime and oldest universities, transtec was founded.
In the early days, transtec focused on reselling DEC computers and
peripherals, delivering high-performance workstations to univer-
sity institutes and research facilities. In 1987, SUN/Sparc and stor-
age solutions broadened the portfolio, enhanced by IBM/RS6000
products in 1991. These were the typical workstations and server
systems for high performance computing then, used by the major-
ity of researchers worldwide.
In the late 90s, transtec was one of the first companies to offer
highly customized HPC cluster solutions based on standard Intel
architecture servers, some of which entered the TOP500 list of the
world’s fastest computing systems.
Thus, given this background and history, it is fair to say that transtec
looks back on more than 30 years of experience in scientific
computing; our track record shows nearly 500 HPC installations.
With this experience, we know exactly what customers' demands
are and how to meet them. High performance and ease of manage-
ment – this is what customers require today. HPC systems are, as
their name indicates, required to deliver peak performance, but that
is not enough: they must also be easy to handle. Unwieldy design and
operational complexity must be avoided, or at least hidden from ad-
ministrators and particularly from users of HPC computer systems.
transtec HPC solutions deliver ease of management, both in the
Linux and Windows worlds, even where the customer's envi-
ronment is highly heterogeneous. Even dynamic provisioning
of HPC resources as needed poses no problem, further maximizing
utilization of the cluster.
transtec HPC solutions use the latest and most innovative technol-
ogy. Their superior performance goes hand in hand with energy ef-
ficiency, as you would expect from any leading-edge IT solution. We
regard these as basic characteristics.
In 2010, transtec entered into a strategic partnership with IBM, sure-
ly one of the biggest players in the HPC world, with a very strong
brand. The flexibility and long-standing experience of transtec, com-
bined with the power and quality of IBM HPC systems, constitute
a perfect symbiosis and provide customers with the best HPC
solution imaginable. IBM NeXtScale systems are highly opti-
mized for HPC workloads in datacenter environments with regard to
performance, flexibility, and energy, space and cooling efficiency.
Platform HPC and LSF are both enterprise-ready HPC cluster and
workload management solutions and are widespread in all kinds of
industrial HPC environments.
Choosing a transtec HPC solution means opting for the most
attentive customer care and the best service in HPC. Our experts will
be glad to bring in their expertise and support to assist you at any
stage, from HPC design to daily cluster operations to HPC Cloud
Services.
Last but not least, transtec HPC Cloud Services provide customers
with the possibility to have their jobs run on dynamically provided
nodes in a dedicated datacenter, professionally managed and indi-
vidually customizable. Numerous standard applications like ANSYS,
LS-Dyna and OpenFOAM, as well as many codes like Gromacs, NAMD,
VMD and others, are pre-installed, integrated into an enterprise-
ready cloud management environment, and ready to run.
Have fun reading the transtec HPC Compass 2014/15 IBM Special!
IBM Technical Computing
All companies want more compute power, faster networks, better access to data and
applications available everywhere and at all times. But simply deploying a fast computer
without considering all the elements involved in planning, deployment, installation,
administration and ongoing maintenance and updates can actually hobble an organization,
affecting productivity and possibly damaging profitability and brand value.
Today’s technical computing solutions require an end-to-end view of the hardware,
operating environment, applications, data management, software and services. It is about
the overall system – where purpose-built systems reside next to general purpose solutions
to form a compute and systems capability that meets the most demanding requirements.
Whether dealing with real-time trading systems, managing the smart grid, optimizing
real-time customer relationship management across multiple distribution channels, or
running computationally demanding electronic design automation workloads, you need
more than a one-size-fits-all approach to tackle your unique challenges.
Accelerate Insights and Results
Analyzing big data requires more than fast
processors
As technical computing moves towards a data centric model,
the ability to deal with large sets of fast-moving structured
and unstructured data becomes paramount. Whether ana-
lyzing market data to make critical business decisions, or
running data-intensive simulations to better understand
physical phenomena, analytic processes must be carried out
in ever-shorter time spans to be of value to an organization.
Generating insights from the exploding volume, velocity and
variety of data requires optimized systems specifically archi-
tected for that task. To maximize performance, systems opti-
mization must be done at every layer of the technology stack
to exploit unique processor, memory and storage character-
istics. The increasingly sophisticated technical computing
workflows require computing tuned to domain knowledge
and workload characteristics, hardware with multi-core ar-
chitectures and advanced threading, and software tuned
from the operating system through the middleware stack.
Meeting the challenges of your particular
operation
Multi-step, big data analytics also requires optimized workflows
– which means organizations can no longer pick a technical com-
puting solution based on a single benchmark, such as a server’s
maximum processing power or its ability to run a particular
workload faster than a competitor’s solution. Companies need
to examine the various tasks in their big data analytics work-
flows and match the requirements with suitable technical com-
puting solutions.
A broad portfolio of superior and innovative
products and technology
The IBM vision for technical computing is to bring together
technology, science, management and innovation to enable
major improvements in business and society – and help build a
smarter planet. IBM provides an extensive selection of technical
computing options from a portfolio of servers, storage, software,
services and financing components backed by access to subject
matter experts and world-class support. IBM solutions can help
you optimize workloads and overcome obstacles to parallelism
and other revolutionary approaches to supercomputing.
The sky’s the limit
IBM recognizes that one size does not fit all. That’s why we intro-
duced the IBM Engineering Solutions for Cloud. Based on proven
IBM technology, Engineering Solutions for Cloud let organizations
build a centralized, shared product development center that sup-
ports both interactive and batch design workloads. This solution
enables designers and engineers to access the Technical Computing
cloud environment from a laptop practically anywhere in the world,
using interactive applications with 2D or 3D remote visualization –
significantly saving cost and minimizing the amount of data that
must be transferred to and from the cloud.
Helping a broad array of industries
IBM is helping companies and organizations in more than a doz-
en industries. IBM has powerful, innovative solutions for compa-
nies' most challenging and complex problems that allow busi-
nesses and researchers to innovate, make critical technical and
business decisions, achieve breakthrough results, and establish
sustainable competitive advantage.
Making sense of dollars
Financial services firms are rethinking their strategies as they re-
spond to the sweeping changes in the markets and the regulato-
ry environment, and an incessant blizzard of data. In fact, some
financial organizations consume market data at rates exceeding
one million messages per second, twice the peak rates they ex-
perienced only a year ago. With an optimized technical comput-
ing system from IBM, financial services enterprises can process
vast amounts of structured and unstructured data in real time,
which means they won't be lost in a data whiteout.
Engineering a smarter planet
Meeting the demands of today’s automotive, aerospace, de-
fense and manufacturing engineers requires unprecedented
computing power for structural analysis, noise, vibration, and
harshness tests, crash analytics, and fluid dynamics. IBM offers
computer aided engineering (CAE) optimized solutions that in-
clude systems, storage and software from leading ISVs to help
you streamline your development environment, reduce design-
cycle times and infrastructure costs, and meet aggressive time-
to-market deadlines.
Searching for black gold
A tectonic shift is underway in upstream petroleum computing.
Reservoir modeling and sensor field data now interact in near
real-time to dramatically improve the fidelity of the analysis, its
accuracy, and reliability. With IBM technical computing systems,
energy companies can reduce the duration and cost of problem
solving in reservoir optimization and seismic imaging, continu-
ing to advance the field of exploration and production.
Community collaboration
Complete technical computing solutions may require
components supplied by specialized vendors, such as ap-
plications and tools from ISVs, hardware for intercon-
nection and acceleration of processing nodes, and state-
of-the-art cooling technology for greener operation,
to name a few. IBM maintains technical and business
relationships with all the leading technical computing
providers.
IBM also works closely with industry, open standards
consortia, and government agencies around the world
to facilitate technology advancement and deployment,
and collaborates with leading academic institutions
through our shared university research programs and
fellowships. Such collaborations drive value back to the
community and result in improved products.
Using insight to help support a smarter planet
Whether optimizing traffic flow to lower fuel consumption
and time wasted in traffic jams, or unraveling genetic codes to
develop new medicines and therapies, or increasing the pro-
duction of oil and gas from existing reservoirs, powerful and ef-
ficient technical computing solutions from IBM provide a foun-
dation to handle the associated computational challenges and
extract intelligence from complex systems of instrumented and
interconnected people and devices.
IBM Application Ready Solutions
Accelerate Time to Value for Technical Computing
Businesses in nearly every industry are looking for ways to improve
the efficiency of their technical computing environments. Compa-
nies that design aerospace or automotive products need systems
that can help them meet time-to-market requirements and maxi-
mize profitability. Organizations helping to find causes and cures for
disease need ways to increase productivity, foster innovation and
compete more effectively. For reservoir engineers, the rising cost of
oil and gas drilling means ever more accurate models are required
to pinpoint potential well sites and extract higher percentages of
oil and gas resources, and communications service providers need
a way to quickly analyze data and act on it.
Keys to overcoming technical computing
challenges
Most technical computing tasks involve vast amounts of data and
require thousands of complex calculations. Pressures to “do more
with less” create requirements for greater efficiency. Increasing ap-
plication performance and workload throughput is important–but
that is only part of the solution. Organizations can also realize dra-
matic efficiency benefits from simplified installation, deployment
and management of an optimized technical computing environ-
ment.
Additionally, many companies have limited IT resources to devote to
administering the high-performance systems required for sophisti-
cated design, analytics and research tasks. These companies require
a solution that is affordable and easy to use, and that will help make
the most of their infrastructure investment by ensuring compute re-
sources are fully utilized and prioritized.
IBM has created workload-optimized solutions designed to meet
these challenges. IBM Application Ready Solutions for Technical
Computing are based on IBM Platform Computing software and
powerful IBM systems, integrated and optimized for leading appli-
cations and backed by reference architectures. With IBM Application
Ready Solutions, organizations can spend more time solving scien-
tific and engineering problems, instead of administering computing
environments.
IBM Application Ready Solutions: Looking under the hood
IBM has created Application Ready Solution reference architec-
tures for target workloads and applications. Each of these refer-
ence architectures includes recommended small, medium and
large configurations designed to ensure optimal performance
at entry-level prices. These reference architectures are based on
powerful, predefined and tested infrastructure with a choice of
the following systems:
• IBM Flex System provides the ability to combine leading-edge compute nodes with integrated storage and networking in a highly dense, scalable blade system. The IBM Application Ready Solution supports IBM Flex System x240 (x86) compute nodes.
• IBM System x helps organizations address their most challenging and complex problems. The Application Ready Solution supports IBM NeXtScale System, a revolutionary new x86 high-performance system designed for modular flexibility and scalability, System x rack-mounted servers and System x iDataPlex dx360 M4 systems designed to optimize density, performance and graphics acceleration for remote 3-D visualization.
• IBM System Storage Storwize V3700 is an entry-level disk system delivering an ideal price/performance ratio and scalability – or choose the optional IBM Storwize V7000 Unified for enterprise-class, midrange storage designed to consolidate block-and-file workloads into a single system.
• IBM Intelligent Cluster is a factory-integrated, fully tested solution that helps simplify and expedite deployment of x86-based Application Ready Solutions.
The solutions also include pre-integrated IBM Platform Computing software designed to address technical computing challenges:
• IBM Platform HPC is a complete technical computing management solution in a single product, with a range of features designed to improve time-to-results and help researchers focus on their work rather than on managing workloads.
• IBM Platform LSF provides a comprehensive set of tools for intelligently scheduling workloads and dynamically allocating resources to help ensure optimal job throughput.
• IBM Platform Symphony delivers powerful enterprise-class management for running big data, analytics and compute-intensive applications.
• IBM Platform Cluster Manager – Standard Edition provides easy-to-use yet powerful cluster management for technical computing clusters that simplifies the entire process, from initial deployment through provisioning to ongoing maintenance.
• IBM General Parallel File System (GPFS) is a high-performance enterprise file management platform for optimizing data management.
Technical computing workloads optimized for
Application Ready Solutions
IBM Application Ready Solutions take the guesswork and com-
plexity out of deploying, managing and using high-performance
clusters, grids and clouds in industries such as automotive, aero-
space, life sciences, electronics, telecommunications, chemistry
and petroleum.
IBM Application Ready Solution for Abaqus
Developed in partnership with Dassault Systèmes, the IBM Ap-
plication Ready Solution for Abaqus provides the framework to
consolidate numerous tools into a single, unified modeling and
analysis computing environment. High performance IBM sys-
tems, workload and file management, networking and storage
combine to provide a complete integrated environment. Easy
access to Abaqus job-related data and remote job management
provides the means to solve entry-level or extremely large simu-
lation problems with fast turnaround times.
IBM Application Ready Solution for Accelrys
Designed for healthcare and life sciences, the Application Ready
Solution for Accelrys simplifies and accelerates mapping, variant
calling and annotation for the Accelrys Enterprise Platform (AEP)
NGS Collection. It addresses file system performance–the num-
ber-one challenge for NGS workloads on AEP – by integrating
IBM GPFS for scalable I/O performance. IBM systems provide the
computational power and high-performance storage required,
along with simplified cluster management to speed deployment
and provisioning.
IBM Application Ready Solution for ANSYS
ANSYS software helps engineers tackle demanding tasks such as
computational fluid dynamics (CFD) modeling, structural analysis
and digital wind-tunnel simulation. The IBM Application Ready Solu-
tion for ANSYS speeds deployment and optimizes performance for
the most demanding ANSYS Fluent and ANSYS Mechanical environ-
ments. Engineers can become productive quickly, easily submitting
simulations, sharing files with colleagues and enhancing insight
when optional remote 2-D and 3-D visualization is configured.
IBM Application Ready Solution for CLC bio
This integrated solution is architected for clients involved in ge-
nomics research in areas ranging from personalized medicine to
plant and food research. Combining CLC bio software with high-
performance IBM systems and GPFS, the solution accelerates high-
throughput sequencing and analysis of next-generation sequencing
data while improving the efficiency of CLC bio Genomic Server and
CLC Genomics Workbench environments.
IBM Application Ready Solution for Gaussian
Gaussian software is widely used by chemists, chemical engineers,
biochemists, physicists and other scientists performing molecular
electronic structure calculations in a variety of market segments.
The IBM Application Ready Solution is designed to help speed results
by integrating the latest version of the Gaussian series of programs
with powerful IBM Flex System blades and integrated storage. IBM
Platform Computing provides simplified workload and resource
management.
IBM Application Ready Solution for IBM InfoSphere
BigInsights
The Application Ready Solution for IBM InfoSphere BigInsights
provides a powerful big data MapReduce analytics environment
and reference architecture based on IBM PowerLinux servers,
IBM Platform Symphony, IBM GPFS and integrated storage. The
solution delivers balanced performance for data-intensive work-
loads, along with tools and accelerators to simplify and speed
application development. The solution is ideal for solving time-
critical, data-intensive analytics problems in a wide range of in-
dustry sectors.
IBM Application Ready Solution for MSC Software
The IBM Application Ready Solution for MSC Software features
an optimized platform designed to help manufacturers rapidly
deploy a high-performance simulation, modeling and data man-
agement environment, complete with process workflow and
other high-demand usability features. The platform features IBM
systems, workload management and parallel file system seam-
lessly integrated with MSC Nastran, MSC Patran and MSC Sim-
Manager to provide clients robust and agile engineering clusters
or HPC clouds for accelerated results and lower cost.
IBM Application Ready Solution for Schlumberger
Fine-tuned for accelerating reservoir simulations using Schlum-
berger ECLIPSE and INTERSECT, this Application Ready Solution
provides application templates to reduce setup time and sim-
plify job submission. Architected specifically for Schlumberger
applications, the solution enables users to perform significantly
more iterations of their simulations and analysis, ultimately
yielding more accurate results. Easy access to Schlumberger job-
related data and remote management improves user and administrator productivity.

Figure: Time savings with IBM Application Ready Solutions.
Complete, integrated solutions architected to
deliver real-world benefits
IBM Application Ready Solutions help organizations transform
environments to deliver results faster, better and at less expense.
The benefits start with pre-integrated and fully supported solu-
tions that reduce the complexity of the IT lifecycle and shorten
implementation time. Companies have one support number to
call for all IBM software and hardware components for dedicat-
ed assistance from technical computing industry experts. Exten-
sible IBM Application Ready Solutions also help protect a compa-
ny’s technical computing investment by scaling as requirements
grow. IBM Platform Computing software allows companies to
speed time-to-results and lower costs by simplifying manage-
ment of high-performance clusters and clouds. These products
enable research and development teams to easily access a pool
of shared resources to dramatically accelerate a wide range of
simulations and analytics. Job submission templates reduce
setup time while minimizing user errors during job submission,
and built-in workload management capabilities such as tracking
application license usage and scheduling jobs based on license
availability improve resource utilization and help ensure fastest
time-to-results. Designed, tested and optimized by experienced
technical computing architects from IBM and leading indepen-
dent software vendors (ISVs), IBM Application Ready Solutions
help deliver optimal application performance and robustness.
IBM high-performance systems, software and storage are de-
signed to accelerate even the most demanding workloads. Fur-
ther performance improvements are provided by the advanced
cluster file system, which improves efficiency and speed by re-
moving data-related bottlenecks.
Enterprise-Ready Cluster & Workload Management
High performance computing (HPC) is becoming a necessary tool for organizations to speed
up product design, scientific research, and business analytics. However, there are few
software environments more complex to manage and utilize than modern high performance
computing clusters. Therefore, addressing the problem of complexity in cluster
management is a key aspect of leveraging HPC to improve time to results and user
productivity.
IBM Platform HPC
Introduction
Clusters based on the Linux operating system have become
increasingly prevalent at large supercomputing centers and
continue to make significant in-roads in commercial and aca-
demic settings. This is primarily due to their superior price/per-
formance and flexibility, as well as the availability of commercial
applications that are based on the Linux OS.
Ironically, the same factors that make Linux a clear choice for
high performance computing often make the operating system
less accessible to smaller computing centers. These organiza-
tions may have Microsoft Windows administrators on staff, but
have little or no Linux or cluster management experience. The
complexity and cost of cluster management often outweigh the
benefits that make open, commodity clusters so compelling. Not
only can HPC cluster deployments be difficult, but the ongoing
need to deal with heterogeneous hardware and operating sys-
tems, mixed workloads, and rapidly evolving toolsets make de-
ploying and managing an HPC cluster a daunting task.
These issues create a barrier to entry for scientists and research-
ers who require the performance of an HPC cluster, but are lim-
ited to the performance of a workstation. This is why ease of use
is now mandatory for HPC cluster management. This paper re-
views the most complete and easy to use cluster management
solution, Platform HPC, which is now commercially available
from Platform Computing.
The cluster management challenge
To provide a proper HPC application environment, system admin-
istrators need to provide a full set of capabilities to their users,
as shown below. These capabilities include cluster provisioning
and node management, application workload management, and an environment that makes it easy to develop, run and manage distributed parallel applications.

Figure: Essential components of an HPC cluster solution – an application-centric interface and a unified management interface on top of provisioning & node management, workload management, parallel job enablement, and adaptive scheduling.
Modern application environments tend to be heterogeneous;
some workloads require Windows compute hosts while oth-
ers require particular Linux operating systems or versions.
The ability to change a node’s operating system on-the-fly in re-
sponse to changing application needs - referred to as adaptive
scheduling - is important since it allows system administrators
to maximize resource use, and present what appears to be a larg-
er resource pool to cluster users.
Learning how to use a command line interface to power-up, pro-
vision and manage a cluster is extremely time-consuming. Ad-
ministrators therefore need remote, web-based access to their
HPC environment that makes it easier for them to install and
manage an HPC cluster. An easy-to-use application-centric web
interface can have tangible benefits including improved produc-
tivity, reduced training requirements, reduced error rates, and
secure remote access.
While there are several cluster management tools that address
parts of these requirements, few address them fully, and some
tools are little more than collections of discrete open-source
software components.
Some cluster toolkits focus largely on the problem of cluster pro-
visioning and management. While they clearly simplify cluster
deployment, administrators wanting to make changes to node
configurations or customize their environment will quickly
find themselves hand-editing XML configuration files or writ-
ing their own shell scripts. Third-party workload managers and
various open-source MPI libraries might be included as part of
a distribution. However, these included components are loosely
integrated and often need to be managed separately from the
cluster manager. As a result the cluster administrator needs to
learn how to utilize each additional piece of software in order to
manage the cluster effectively.
Other HPC solutions are designed purely for application work-
load management. While these are all capable workload manag-
ers, most do not address cluster management, application
integration, or adaptive scheduling at all. If such capabili-
ties exist they usually require the purchase of additional soft-
ware products.
Parallel job management is also critical. One of the primary rea-
sons that customers deploy HPC clusters is to maximize applica-
tion performance. Processing problems in parallel is a common
way to achieve performance gains. The choice of MPI, its scalabil-
ity, and the degree to which it is integrated with various OFED
drivers and high performance interconnects has a direct impact
on delivered application performance. Furthermore, if the work-
load manager does not incorporate specific parallel job manage-
ment features, busy cluster users and administrators can find
themselves manually cleaning up after failed MPI jobs or writing
their own shell scripts to do the same.
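To make this concrete, a micro-benchmark of the following kind is commonly used to compare the round-trip latency that different MPI libraries and interconnects actually deliver. This is a generic MPI sketch in C, not taken from Platform HPC itself; the message size and repetition count are arbitrary illustration values.

#include <mpi.h>
#include <stdio.h>

#define REPS  1000   /* number of timed round trips (illustrative value) */
#define BYTES 8      /* message size in bytes (illustrative value)       */

int main(int argc, char **argv)
{
    int rank, size, i;
    char buf[BYTES] = {0};
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0) printf("run this benchmark with at least two ranks\n");
        MPI_Finalize();
        return 0;
    }

    MPI_Barrier(MPI_COMM_WORLD);                  /* start both ranks together       */
    t0 = MPI_Wtime();
    for (i = 0; i < REPS; i++) {
        if (rank == 0) {                          /* rank 0 sends and waits for echo */
            MPI_Send(buf, BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {                   /* rank 1 echoes each message back */
            MPI_Recv(buf, BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("average round trip: %.2f microseconds\n",
               (t1 - t0) / REPS * 1.0e6);

    MPI_Finalize();
    return 0;
}

Launched with two ranks placed on two different hosts, the very same binary will report markedly different round-trip times over Gigabit Ethernet than over InfiniBand, which is exactly why the MPI library's integration with the interconnect drivers matters for delivered application performance.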
“Platform HPC and Platform LSF have been known for many years as highest-quality, enterprise-ready cluster and workload management solutions, and we have many customers in academia and industry relying on them.”
Dr. Oliver Tennert, Director Technology Management & HPC Solutions
Complexity is a real problem. Many small organizations or de-
partments grapple with a new vocabulary full of cryptic com-
mands, configuring and troubleshooting Anaconda kickstart
scripts, finding the correct OFED drivers for specialized hard-
ware, and configuring open source monitoring systems like Gan-
glia or Nagios. Without an integrated solution administrators
may need to deal with dozens of distinct software components,
making managing HPC cluster implementations extremely te-
dious and time-consuming.
Re-thinking HPC clusters
Clearly these challenges demand a fresh approach to HPC clus-
ter management. Platform HPC represents a “re-think” of how
HPC clusters are deployed and managed. Rather than address-
ing only part of the HPC management puzzle, Platform HPC ad-
dresses all facets of cluster management. It provides:
• A complete, easy-to-use cluster management solution
• Integrated application support
• User-friendly, topology-aware workload management
• Robust workload and system monitoring and reporting
• Dynamic operating system multi-boot (adaptive scheduling)
• GPU scheduling
• Robust commercial MPI library (Platform MPI)
• Web-based interface for access anywhere
Most complete HPC cluster management solution
Platform HPC makes it easy to deploy, run and manage HPC clusters
while meeting the most demanding requirements for application
performance and predictable workload management. It is a com-
plete solution that provides a robust set of cluster management ca-
pabilities, from cluster provisioning and management to workload
management and monitoring. The easy-to-use unified web portal
provides a single point of access into the cluster, making it easy to
manage your jobs and optimize application performance.
Platform HPC is more than just a stack of software; it is a fully in-
tegrated and certified solution designed to ensure ease of use and
simplified troubleshooting.
Integrated application support
High performing, HPC-optimized MPI libraries come integrated
with Platform HPC, making it easy to get parallel applications up
and running. Scripting guidelines and job submission templates for
commonly used commercial applications simplify job submission,
reduce setup time and minimize operation errors. Once the applica-
tions are up and running, Platform HPC improves application per-
formance by intelligently scheduling resources based on workload
characteristics.
Fully certified and supported
Platform HPC unlocks cluster management to provide the easiest
and most complete HPC management capabilities while reducing
overall cluster cost and improving administrator productivity. It is
based on the industry’s most mature and robust workload manag-
er, Platform LSF, making it the most reliable solution on the market.
Other solutions are typically a collection of open-source tools,
which may also include pieces of commercially developed soft-
ware. They lack key HPC functionality and vendor support, relying
on the administrator’s technical ability and time to implement.
Platform HPC is a single product with a single installer and a uni-
fied web-based management interface. With the best support in the
HPC industry, Platform HPC provides the most complete solution
for HPC cluster management.
Complete solution
Platform HPC provides a complete set of HPC cluster manage-
ment features. In this section we’ll explore some of these unique
capabilities in more detail.
Easy-to-use cluster provisioning and management
With Platform HPC, administrators can quickly provision and
manage HPC clusters with unprecedented ease. It ensures maxi-
mum uptime and can transparently synchronize files to cluster
nodes without any downtime or re-installation.
Fast and efficient software installation – Platform HPC can be
installed on the head node and takes less than one hour using
three different mechanisms:
• Platform HPC DVD
• Platform HPC ISO file
• Platform partner's factory install bootable USB drive
Installing software on cluster nodes is simply a matter of associ-
ating cluster nodes with flexible provisioning templates through
the web-based interface.
Flexible provisioning – Platform HPC offers multiple options for
provisioning Linux operating environments that include:
• Package-based provisioning
• Image-based provisioning
• Diskless node provisioning
Large collections of hosts can be provisioned using the same pro-
visioning template. Platform HPC automatically manages details
such as IP address assignment and node naming conventions that
reflect the position of cluster nodes in data center racks.
Unlike competing solutions, Platform HPC deploys multiple oper-
ating systems and OS versions to a cluster simultaneously. This in-
cludes Red Hat Enterprise Linux, CentOS, Scientific Linux, and SUSE
Linux Enterprise Server. This provides administrators with greater
flexibility in how they serve their user communities and means that
HPC clusters can grow and evolve incrementally as requirements
change.
What’s New in IBM Platform LSF 8
Written with Platform LSF administrators in mind, this brief pro-
vides a short explanation of significant changes in Platform’s lat-
est release of Platform LSF, with a specific emphasis on schedul-
ing and workload management features.
About IBM Platform LSF 8
Platform LSF is the most powerful workload manager for de-
manding, distributed high performance computing environ-
ments. It provides a complete set of workload management
capabilities, all designed to work together to reduce cycle times
and maximize productivity in mission-critical environments.
This latest Platform LSF release delivers improvements in perfor-
mance and scalability while introducing new features that sim-
plify administration and boost user productivity. This includes:
• Guaranteed resources – Aligns business SLAs with infrastructure configuration for simplified administration and configuration
• Live reconfiguration – Provides simplified administration and enables agility
• Delegation of administrative rights – Empowers line of business owners to take control of their own projects
• Fairshare & pre-emptive scheduling enhancements – Fine-tunes key production policies
Platform LSF 8 Features
Guaranteed Resources Ensure Deadlines are Met
In Platform LSF 8, resource-based scheduling has been extended
to guarantee resource availability to groups of jobs. Resources
can be slots, entire hosts or user-defined shared resources such
as software licenses. As an example, a business unit might guar-
antee that it has access to specific types of resources within ten
minutes of a job being submitted, even while sharing resources
between departments. This facility ensures that lower priority
jobs using the needed resources can be pre-empted in order to
meet the SLAs of higher priority jobs.
Because jobs can be automatically attached to an SLA class via
access controls, administrators can enable these guarantees
without requiring that end-users change their job submission
procedures, making it easy to implement this capability in exist-
ing environments.
Live Cluster Reconfiguration
Platform LSF 8 incorporates a new live reconfiguration capabil-
ity, allowing changes to be made to clusters without the need
to re-start LSF daemons. This is useful to customers who need
to add hosts, adjust sharing policies or re-assign users between
groups “on the fly”, without impacting cluster availability or run-
ning jobs.
Changes to the cluster configuration can be made via the bconf
command line utility, or via new API calls. This functionality can
also be integrated via a web-based interface using Platform Ap-
plication Center. All configuration modifications are logged for
a complete audit history, and changes are propagated almost
instantaneously. The majority of reconfiguration operations are
completed in under half a second.
With Live Reconfiguration, down-time is reduced, and administra-
tors are free to make needed adjustments quickly rather than wait
for scheduled maintenance periods or non-peak hours. In cases
where users are members of multiple groups, controls can be put
in place so that a group administrator can only control jobs as-
sociated with their designated group rather than impacting jobs
related to another group submitted by the same user.
Delegation of Administrative Rights
With Platform LSF 8, the concept of group administrators has been
extended to enable project managers and line of business man-
agers to dynamically modify group membership and fairshare re-
source allocation policies within their group. The ability to make
these changes dynamically to a running cluster is made possible by
the Live Reconfiguration feature.
These capabilities can be delegated selectively depending on the
group and site policy. Different group administrators can manage
jobs, control sharing policies or adjust group membership.
More Flexible Fairshare Scheduling Policies
To enable better resource sharing flexibility with Platform LSF 8, the
algorithms used to tune dynamically calculated user priorities can
be adjusted at the queue level. These algorithms can vary based on
department, application or project team preferences. The Fairshare
parameters ENABLE_HIST_RUN_TIME and HIST_HOURS enable
administrators to control the degree to which LSF considers prior
resource usage when determining user priority. The flexibility of
Platform LSF 8 has also been improved by allowing a similar “decay
rate” to apply to currently running jobs (RUN_TIME_DECAY), either
system-wide or at the queue level. This is most useful for custom-
ers with long-running jobs, where setting this parameter results in
a more accurate view of real resource use for the fairshare schedul-
ing to consider.
Performance & Scalability Enhancements
Platform LSF has been extended to support an unparalleled
scale of up to 100,000 cores and 1.5 million queued jobs for
very high throughput EDA workloads. Even higher scalability
is possible for more traditional HPC workloads.
Specific areas of improvement include the time required to
start the master-batch daemon (MBD), bjobs query perfor-
mance, job submission and job dispatching as well as impres-
sive performance gains resulting from the new Bulk Job Sub-
mission feature. In addition, on very large clusters with large
numbers of user groups employing fairshare scheduling, the
memory footprint of the master batch scheduler in LSF has
been reduced by approximately 70% and scheduler cycle time
has been reduced by 25%, resulting in better performance and
scalability.
More Sophisticated Host-based Resource Usage for Parallel Jobs
Platform LSF 8 provides several improvements to how resource
use is tracked and reported with parallel jobs. Accurate tracking
of how parallel jobs use resources such as CPUs, memory and
swap, is important for ease of management, optimal scheduling
and accurate reporting and workload analysis. With Platform
LSF 8 administrators can track resource usage on a per-host
basis and an aggregated basis (across all hosts), ensuring that
resource use is reported accurately. Additional details, such as
the running PIDs and PGIDs of distributed parallel jobs, are
reported, which simplifies manual cleanup (if necessary) and the
development of scripts for managing parallel jobs. These improvements in resource
usage reporting are reflected in LSF commands including bjobs,
bhist and bacct.
Improved Ease of Administration for Mixed Windows and
Linux Clusters
The lspasswd command in Platform LSF enables Windows LSF
users to advise LSF of changes to their Windows level pass-
words. With Platform LSF 8, password synchronization between
environments has become much easier to manage because the
Windows passwords can now be adjusted directly from Linux
hosts using the lspasswd command. This allows Linux users to
conveniently synchronize passwords on Windows hosts without
needing to explicitly log in to the host.
Bulk Job Submission
When submitting large numbers of jobs with different resource
requirements or job level settings, Bulk Job Submission allows
for jobs to be submitted in bulk by referencing a single file con-
taining job details.
Simplified configuration changes – Platform HPC simplifies ad-
ministration and increases cluster availability by allowing chang-
es such as new package installations, patch updates, and changes
to configuration files to be propagated to cluster nodes automati-
cally without the need to re-install those nodes. It also provides a
mechanism whereby experienced administrators can quickly per-
form operations in parallel across multiple cluster nodes.
Repository snapshots / trial installations – Upgrading software
can be risky, particularly in complex environments. If a new soft-
ware upgrade introduces problems, administrators often need
to rapidly “rollback” to a known good state. With other cluster
managers this can mean having to re-install the entire cluster.
Platform HPC incorporates repository snapshots, which are “re-
store points” for the entire cluster. Administrators can snapshot a
known good repository, make changes to their environment, and
easily revert to a previous “known good” repository in the event of
an unforeseen problem. This powerful capability takes the risk out
of cluster software upgrades.
New hardware integration – When new hardware is added
to a cluster it may require new or updated device drivers that
are not supported by the OS environment on the installer
node. This means that a newly added node may not net-
work boot and provision until the head node of the cluster is
updated with a new operating system – a tedious and disrup-
tive process. Platform HPC includes a driver patching utility
that allows updated device drivers to be inserted into exist-
ing repositories, essentially future proofing the cluster, and
providing a simplified means of supporting new hardware
without needing to re-install the environment from scratch.
Figure: Resource monitoring.
Software updates with no re-boot – Some cluster managers al-
ways re-boot nodes when updating software, regardless of how
minor the change. This is a simple way to manage updates. How-
ever, scheduling downtime can be difficult and disruptive. Platform
HPC performs updates intelligently and selectively so that compute
nodes continue to run even as non-intrusive updates are applied.
The repository is automatically updated so that future installations
include the software update. Changes that require the re-installa-
tion of the node (e.g. upgrading an operating system) can be made
in a “pending” state until downtime can be scheduled.
User-friendly, topology aware workload
management
Platform HPC includes a robust workload scheduling capability,
which is based on Platform LSF - the industry’s most powerful,
comprehensive, policy driven workload management solution for
engineering and scientific distributed computing environments.
By scheduling workloads intelligently according to policy,
Platform HPC improves end user productivity with minimal sys-
tem administrative effort. In addition, it allows HPC user teams to
easily access and share all computing resources, while reducing
time between simulation iterations.
GPU scheduling – Platform HPC provides the capability to sched-
ule jobs to GPUs as well as CPUs. This is particularly advantageous
in heterogeneous hardware environments as it means that ad-
ministrators can configure Platform HPC so that only those jobs
that can benefit from running on GPUs are allocated to those re-
sources. This frees up CPU-based resources to run other jobs. Us-
ing the unified management interface, administrators can moni-
tor the GPU performance as well as detect ECC errors.
Unified management interface
Competing cluster management tools either do not have a web-
based interface or require multiple interfaces for managing dif-
ferent functional areas. In comparison, Platform HPC includes a
single unified interface through which all administrative tasks
can be performed including node-management, job-manage-
ment, jobs and cluster monitoring and reporting. Using the uni-
fied management interface, even cluster administrators with
very little Linux experience can competently manage a state of
the art HPC cluster.
Job management – While command line savvy users can contin-
ue using the remote terminal capability, the unified web portal
makes it easy to submit, monitor, and manage jobs. As changes
are made to the cluster configuration, Platform HPC automati-
cally re-configures key components, ensuring that jobs are allo-
cated to the appropriate resources.
The web portal is customizable and provides job data manage-
ment, remote visualization and interactive job support.
Workload/system correlation – Administrators can correlate
workload information with system load, so that they can make
timely decisions and proactively manage compute resources
against business demand. When it’s time for capacity planning,
the management interface can be used to run detailed reports
and analyses which quantify user needs and remove the guesswork from capacity expansion.
Simplified cluster management – The unified management con-
sole is used to administer all aspects of the cluster environment. It
enables administrators to easily install, manage and monitor their
cluster. It also provides an interactive environment to easily pack-
age software as kits for application deployment as well as pre-in-
tegrated commercial application support. One of the key features
of the interface is an operational dashboard that provides com-
prehensive administrative reports. As the image illustrates, Plat-
form HPC enables administrators to monitor and report on key
performance metrics such as cluster capacity, available memory
and CPU utilization. This enables administrators to easily identify
and troubleshoot issues.
The easy to use interface saves the cluster administrator time, and
means that they do not need to become an expert in the adminis-
tration of open-source software components. It also reduces the
possibility of errors and time lost due to incorrect configuration.
Cluster administrators enjoy the best of both worlds – easy ac-
cess to a powerful, web-based cluster manager without the need
to learn and separately administer all the tools that comprise the
HPC cluster environment.
Figure: Job submission templates.
Robust Commercial MPI library
Platform MPI – In order to make it easier to get parallel applica-
tions up and running, Platform HPC includes the industry’s most
robust and highest performing MPI implementation, Platform
MPI. Platform MPI provides consistent performance at application
run-time and for application scaling, resulting in top performance
results across a range of third-party benchmarks.
Open Source MPI – Platform HPC also includes various other in-
dustry standard MPI implementations. This includes MPICH1,
MPICH2 and MVAPICH1, which are optimized for cluster hosts con-
nected via InfiniBand, iWARP or other RDMA-based interconnects.
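As a simple illustration of what such a parallel application looks like at the source level, here is a minimal sketch that uses only the standard MPI API and is therefore independent of the particular MPI implementation chosen:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                   /* start the MPI runtime            */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* rank of this process             */
    MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total number of ranks in the job */
    MPI_Get_processor_name(host, &len);       /* node this rank was placed on     */

    printf("rank %d of %d running on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}

Because the program relies only on the standard interface, it can be built against Platform MPI or any of the bundled open-source implementations (typically via the implementation's compiler wrapper, such as mpicc) and launched across the cluster with mpirun, with Platform HPC handling the scheduling.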
Integrated application support
Job submission templates – Platform HPC comes complete with
job submission templates for ANSYS Mechanical, ANSYS Flu-
ent, ANSYS CFX, LS-DYNA, MSC Nastran, Schlumberger ECLIPSE,
Simulia Abaqus, NCBI Blast, NWChem, ClustalW, and HMMER. By
configuring these templates based on the application settings
in your environment, users can start using the cluster without
writing scripts.
Scripting Guidelines – Cluster users who utilize homegrown or
open-source applications can utilize the Platform HPC scripting
guidelines. These user-friendly interfaces help minimize job sub-
mission errors. They are also self-documenting, enabling users to
create their own job submission templates.
Benchmark tests – Platform HPC also includes standard bench-
mark tests to ensure that your cluster will deliver the best per-
formance without manual tuning.
Flexible OS provisioning
Platform HPC can deploy multiple operating system versions con-
currently on the same cluster and, based on job resource require-
ments, dynamically boot the Linux or Windows operating system
required to run the job. Administrators can also use a web inter-
face to manually switch nodes to the required OS to meet applica-
tion demands, providing them with the flexibility to support spe-
cial requests and accommodate unanticipated changes. Rather
than being an extra-cost item as it is with other HPC management
suites, this capability is included as a core feature of Platform HPC.
Commercial Service and support
Certified cluster configurations – Platform HPC is tested and
certified on all partner hardware platforms. By qualifying each
platform individually and providing vendor-specific software
with optimized libraries and drivers that take maximum advan-
tage of unique hardware features, Platform Computing has es-
sentially done the integration work in advance.
As a result, clusters can be deployed quickly and predictably
with minimal effort. As a testament to this, Platform HPC is certi-
fied under the Intel Cluster Ready program.
Enterprise class service and support – Widely regarded as hav-
ing the best HPC support organization in the business, Platform
Computing is uniquely able to support an integrated HPC plat-
form. Because support personnel have direct access to the Plat-
form HPC developers, Platform Computing is able to offer a high-
er level of support and ensure that any problems encountered
are resolved quickly and efficiently.
Summary
Platform HPC is the ideal solution for deploying and manag-
ing state of the art HPC clusters. It makes cluster management
simple, enabling analysts, engineers and scientists from organi-
zations of any size to easily exploit the power of Linux clusters.
Unlike other HPC solutions that address only parts of the HPC
management challenge, Platform HPC uniquely addresses all as-
pects of cluster and workload management, including:
• Easy-to-use cluster provisioning and management
• User-friendly, topology-aware workload management
• Unified management interface
• Robust commercial MPI library
• Integrated application support
• Flexible OS provisioning
• Commercial HPC service and support
By providing simplified management over the entire lifecycle of
a cluster, Platform HPC has a direct and positive impact on pro-
ductivity while helping to reduce complexity and cost.
The comprehensive web-based management interface, and fea-
tures like repository snapshots and the ability to update soft-
ware packages on the fly means that state-of-the-art HPC clus-
ters can be provisioned and managed even by administrators
with little or no Linux administration experience.
Capability / Feature (✓ = included in Platform HPC)

Cluster Provisioning and Management
  ✓ Initial cluster provisioning
  ✓ Multiple provisioning methods
  ✓ Web-based cluster management
  ✓ Node updates with no re-boot
  ✓ Repository snapshots
  ✓ Flexible node templates
  ✓ Multiple OS and OS versions

Workload Management & Application Integration
  ✓ Integrated workload management
  ✓ HPC libraries & toolsets
  ✓ NVIDIA CUDA SDK support
  ✓ Web-based job management
  ✓ Web-based job data management
  ✓ Multi-boot based on workload
  ✓ Advanced parallel job management
  ✓ Commercial application integrations

MPI Libraries
  ✓ Commercial grade MPI

Workload and system monitoring, reporting and correlation
  ✓ Workload monitoring
  ✓ Workload reporting
  ✓ System monitoring & reporting
  ✓ Workload and system load correlation
  ✓ Integration with 3rd party management tools
IBM Platform MPI 8.1
Benefits
• Superior application performance
• Reduced development and support costs
• Faster time-to-market
• The industry's best technical support

Features
• Supports the widest range of hardware, networks and operating systems
• Distributed by over 30 leading commercial software vendors
• Change interconnects or libraries with no need to re-compile
• Seamless compatibility across Windows and Linux environments
• Ensures a production-quality implementation

Ideal for:
• Enterprises that develop or deploy parallelized software applications on HPC clusters
• Commercial software vendors wanting to improve application performance over the widest range of computer hardware, interconnects and operating systems
The Standard for Scalable, Parallel Applications
Platform MPI is a high performance, production–quality imple-
mentation of the Message Passing Interface (MPI). It is widely
used in the high performance computing (HPC) industry and is
considered the de facto standard for developing scalable, paral-
lel applications.
Platform MPI maintains full backward compatibility with HP-MPI
and Platform MPI applications and incorporates advanced CPU
affinity features, dynamic selection of interface libraries, supe-
rior workload manager integrations and improved performance
and scalability.
Platform MPI supports the broadest range of industry standard
platforms, interconnects and operating systems helping ensure
that your parallel applications can run anywhere.
Focus on portability
Platform MPI allows developers to build a single executable that
transparently leverages the performance features of any type of
interconnect, thereby providing applications with optimal laten-
cy and bandwidth for each protocol. This reduces development
effort, and enables applications to use the “latest and greatest”
technologies on Linux or Microsoft Windows without the need
to re-compile and re-link applications.
Platform MPI is optimized for both distributed (DMP) and shared
memory (SMP) environments and provides a variety of flexible
CPU binding strategies for processes and threads, enabling bet-
ter performance on multi-core environments. With this capa-
bility, memory and cache conflicts are managed by more intel-
ligently distributing the load among multiple cores.
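As a rough sketch of what the combination of DMP and SMP parallelism looks like in source code, the following hybrid example (generic MPI plus OpenMP, not specific to Platform MPI) uses MPI ranks for the distributed part and a team of OpenMP threads inside each rank for the shared-memory part:

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* request thread support: only the main thread makes MPI calls here */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* distributed-memory level: one MPI rank per node or socket (DMP) */
    #pragma omp parallel
    {
        /* shared-memory level: several OpenMP threads per rank (SMP) */
        printf("rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}

Whether the threads of each rank stay on one socket or spread across a node is then determined by the CPU binding strategy chosen at launch time, not by the source code.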
With support for Windows HPC Server 2008 and the Microsoft job
scheduler, as well as other Microsoft operating environments,
Platform MPI allows developers targeting Windows platforms to
enjoy the benefits of a standard portable MPI and avoid propri-
etary lock-in.
Supported Operating Systems
• Red Hat Enterprise Linux 4.6, 5.x and 6.x
• SUSE Linux Enterprise Server 10 and 11
• CentOS 5.3
• Microsoft Windows® XP/Vista, Server 2003/Server 2008/HPC Server 2008, Windows 7

Supported Interconnects and Protocols
• Myrinet (Linux): GM & MX on x86-64 and Itanium2
• InfiniBand (Linux): OFED 1.1, 1.2, 1.3, 1.4, 1.5; PSM, uDAPL on x86-64 and Itanium2; SDR, DDR, QDR, ConnectX and ConnectX-2; Mellanox FCA
• GigE (Linux): RDMA, uDAPL, TCP/IP
• InfiniBand (Windows): WinOF 2.x, IBAL, WSD; SDR, DDR, QDR, ConnectX(2)
• GigE (Windows): TCP/IP on x86-64
Features and Benefits
Simplicity
ʎ Fully complies with the MPI 2.2 standard, providing dynamic processes,
one–sided communications, extended collectives, thread safety, and
updated ROMIO
ʎ Complete debugging, diagnostic and profiling tools
ʎ Auto-detection of interconnects and dynamic loading of libraries
ʎ No re-link required for debugging and profiling
ʎ Supported by the largest dedicated HPC support organization
ʎ Applications port easily to other platforms
ʎ Protects ISV software investment
ʎ Reduces time-to-market
ʎ Increased robustness and quality of applications
ʎ Technical problems resolved quickly and efficiently
Performance
ʎ Improved shared memory performance, incorporating code and methods from Platform MPI 5.6 (Scali MPI)
ʎ 75% reduction in job startup and shutdown at scale
ʎ Scalability to 17,000 ranks
ʎ RDMA message progression & coalescing enhancements
ʎ Flexible CPU binding options maximize cache effectiveness and bal-
ance applications to minimize latency
ʎ Automated benchmarking of collective operations
ʎ Takes maximum advantage of available hardware
ʎ Reduced latency for better performance
ʎ Performance improves without explicit developer action
ʎ Better message throughput in streaming applications
ʎ Easier to optimize application performance
Compatibility
ʎ Common source-code base between Linux and Windows
ʎ Binary compatible with applications developed for HP-MPI
ʎ MPICH-2 compatibility mode
ʎ Linux Standard Bindings ensure full compatibility across all major
Linux distributions
ʎ Scheduler agnostic with workload manager integrations for Windows
HPC, Platform LSF, PBS Pro, SLURM and other popular schedulers and
resource managers
ʎ Avoid the cost of separate releases for different platforms
ʎ Easily used with existing MPI applications
ʎ Common mpirun syntax between Linux and Windows
ʎ Customers avoid proprietary “lock-in”
ʎ Avoid floating point issues causing inconsistent results
Flexibility
ʎ Supports the widest variety of networks and interconnects
ʎ Select interconnects at run-time with no need to re-compile
ʎ Write applications once and deploy across multiple OS and hardware
topologies
ʎ CPU binding features well suited to GPU-aware applications
ʎ Develop applications that will run on more platforms
ʎ Reduce testing, maintenance and support costs
ʎ Enjoy strategic flexibility
transtec HPC as a Service
You will get a range of applications like LS-Dyna, ANSYS,
Gromacs, NAMD etc. from all kinds of areas pre-installed,
integrated into an enterprise-ready cloud and workload
management system, and ready to run. Is your application missing?
Ask us: HPC@transtec.de
transtec Platform as a Service
You will be provided with dynamically provisioned compute
nodes for running your individual code. The operating
system will be pre-installed according to your require-
ments. Common Linux distributions like RedHat, CentOS,
or SLES are the standard. Do you need another distribu-
tion?
Ask us: HPC@transtec.de
transtec Hosting as a Service
You will be provided with hosting space inside a profes-
sionally managed and secured datacenter where you
can have your machines hosted, managed, maintained,
according to your requirements. Thus, you can build up
your own private cloud. What range of hosting and main-
tenance services do you need?
Tell us: HPC@transtec.de
Services and Customer Care from A to Z (the transtec HPC service cycle): individual presales consulting; application-, customer- and site-specific sizing of the HPC solution; burn-in tests of systems; benchmarking of different systems; continual improvement; software & OS installation; application installation; onsite hardware assembly; integration into the customer’s environment; customer training; maintenance, support & managed services
HPC @ transtec:
Services and Customer Care from A to Z
transtec AG has over 30 years of experience in scientific computing
and is one of the earliest manufacturers of HPC clusters. For nearly a
decade, transtec has delivered highly customized High Performance
clusters based on standard components to academic and industry
customers across Europe with all the high quality standards and the
customer-centric approach that transtec is well known for.
Every transtec HPC solution is more than just a rack full of hardware
– it is a comprehensive solution with everything the HPC user, owner, and operator need.
In the early stages of any customer’s HPC project, transtec experts
provide extensive and detailed consulting to the customer – they
benefit from expertise and experience. Consulting is followed by
benchmarking of different systems with either specifically crafted
customer code or generally accepted benchmarking routines; this
aids customers in sizing and devising the optimal and detailed HPC
configuration.
Each and every piece of HPC hardware that leaves our factory undergoes a burn-in procedure of 24 hours or more if necessary. We make sure that any hardware shipped meets our and our customers’ quality requirements. transtec HPC solutions are turnkey solutions. By default, a transtec HPC cluster has everything installed and configured
– from hardware and operating system to important middleware
components like cluster management or developer tools and the
customer’s production applications. Onsite delivery means onsite
integration into the customer’s production environment, be it estab-
lishing network connectivity to the corporate network, or setting up
software and configuration parts.
transtec HPC clusters are ready-to-run systems – we deliver, you turn
the key, the system delivers high performance. Every HPC project entails transfer to production: IT operation processes and policies apply to the new HPC system. Effectively, IT personnel is trained hands-on, introduced to hardware components and software, with all operational aspects of configuration management.
transtec services do not stop when the implementation project ends. Beyond transfer to production, transtec takes care. transtec offers a variety of support and service options, tailored to the customer’s needs. When you are in need of a new installation, a major reconfiguration or an update of your solution – transtec is able to support your staff and, if you lack the resources for maintaining the cluster yourself, maintain the HPC solution for you. From Professional Services to Managed Services for daily operations and required service levels, transtec will be your complete HPC service and solution provider. transtec’s high standards of performance, reliability and dependability assure your productivity and complete satisfaction.
transtec’s offerings of HPC Managed Services offer customers the
possibility of having the complete management and administration
of the HPC cluster managed by transtec service specialists, in an ITIL
compliant way. Moreover, transtec’s HPC on Demand services help
provide access to HPC resources whenever they need them, for example, because they do not have the possibility of owning and running an HPC cluster themselves, due to lacking infrastructure, know-how, or admin staff.
transtec HPC Cloud Services
Last but not least, transtec’s services portfolio evolves as customers’ demands change. Starting this year, transtec is able to provide HPC Cloud Services. transtec uses a dedicated datacenter to provide computing power to customers who are in need of more capacity than they own, which is why this workflow model is sometimes called computing-on-demand. With these dynamically provided resources, customers have the possibility to run their jobs on HPC nodes in a dedicated datacenter, professionally managed and secured, and individually customizable. Numerous standard applications like ANSYS, LS-Dyna, OpenFOAM, as well as lots of codes like Gromacs, NAMD, VMD, and others are pre-installed, integrated into an enterprise-ready cloud and workload management environment, and ready to run.
Alternatively, whenever customers are in need of space for hosting their own HPC equipment because they do not have the space capacity or cooling and power infrastructure themselves, transtec is also able to provide Hosting Services to those customers who’d like to have their equipment professionally hosted, maintained, and managed. Customers can thus build up their own private cloud!
Are you interested in any of transtec’s broad range of HPC related services? Write us an email to HPC@transtec.de. We’ll be happy to hear from you!
Scalable & Energy Efficient HPC Systems
There is no end in sight to growing data and computing requirements – which poses a
serious challenge for space-constrained data centers. Also challenging for today’s
organizations is the need to perform a larger number and variety of functions – without
increasing budgets. IBM NeXtScale System, an economical addition to the IBM System x
family, offers an innovative approach to maximum usable density.
Optimized to handle a number of workloads, all demanding agil-
ity, NeXtScale System helps drive business velocity by providing
rapid procurement, deployment and flexible options. This sim-
ple, yet powerful, system can handle applications ranging from
technical computing, to grid deployments, to analytics work-
loads, to large-scale cloud and virtualization infrastructures.
Designed with industry-standard, off-the-shelf components, this
general-purpose platform enables users to create a flexible, mix-
and-match offering with compute, storage, and acceleration via
graphics processing unit (GPU) or Intel Xeon Phi coprocessor.
Customized solutions can be configured to provide an application-appropriate platform with a choice of servers, networking switches, adapters, and racks.
This modular system is designed to scale and grow along with
data center needs in order to protect and maximize IT invest-
ments. Since it is optimized for standard racks, users can easily
mix high-density NeXtScale server offerings and non-NeXtScale
components within the same rack. NeXtScale System also pro-
vides tremendous time to value by enabling users to get it up
and running – and to the production phase – faster.
Building upon a strong System x foundation
Extending the System x family to a larger range of users, the
customizable, space-saving NeXtScale System comprises pow-
erful compute nodes and an energy-efficient, low-cost 12-bay
chassis.
IBM NeXtScale nx360 M4 server
This powerful server provides a dense, flexible solution with
a low total cost of ownership. The half-wide, dual-socket
NeXtScale nx360 M4 server is designed for data centers
that require high performance but are constrained by floor
space. By taking up less physical space in the data center, the
NeXtScale server significantly enhances density. And it supports Intel Xeon E5-2600 v2 series processors with up to 130 W and 12 cores, thus providing more performance per server. The
nx360 M4 compute node contains only essential components
in the base architecture to provide a cost-optimized platform.
IBM NeXtScale n1200 Enclosure
The NeXtScale n1200 Enclosure is an efficient, 6U, 12-bay
chassis with no built-in networking or switching capabilities –
requiring no chassis-level management. Sensibly designed to
provide shared, high-efficiency power and cooling for housed
servers, the n1200 enclosure is designed to scale with your
business needs. Adding compute, storage, or acceleration ca-
pability is as simple as adding specific nodes to the chassis.
Because each node is independent and self-sufficient, there
is no contention for resources among nodes within the enclo-
sure. And while a typical rack holds only 42 1U systems, this
chassis doubles the density up to 84 compute nodes within
the same footprint.
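As a rough cross-check of that claim (assuming a standard 42U rack): seven 6U n1200 enclosures occupy 7 × 6U = 42U and house 7 × 12 = 84 half-wide nx360 M4 nodes, exactly twice the 42 servers the same rack holds when filled with conventional 1U systems.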
Flexible, IT your way
Developed at the solution level, the NeXtScale System archi-
tecture is extremely flexible – enabling different technologies
to easily fit into its design, for varied workloads. And since the
system allows compute, storage, and acceleration via GPU or
Intel Xeon Phi coprocessor to share the same chassis and archi-
tecture, it is very easy to deploy and grow. Front-access cabling
– either from the bottom or the top of the rack – and direct-dock
power capabilities enable users to make quick and easy changes
to nodes, cables and networking switches. Plus, NeXtScale Sys-
tem supports multiple networking topologies, including Ether-
net, InfiniBand and Fibre Channel.
System flexibility even extends to procurement: Organizations
can either receive the system fully configured, pretested, IBM in-
stalled, and ready to power on; or self-configure and install using
existing components to build a custom system.
Simple yet elegant
NeXtScale System makes choosing the right architecture for in-
dividual applications, budgets and data centers simple and eco-
nomical. It optimizes shared infrastructure with common fans
and power supplies leaving nodes to be completely indepen-
dent and self-sufficient. The nodes do not share resources such
as disks or memory. To manage costs, only essential components
are included in the base architecture, and nodes can be used for
either storage or GPU/coprocessor acceleration. This enables NeXtScale to be inserted easily into your infrastructure with your current tools and best practices. The ability of NeXtScale
System to work with any standard switch, rack or networking
card provides almost unlimited options to space- and budget-
conscious organizations in even the most demanding industries.
Scale for everyone
The high-performance NeXtScale System enables organizations
of all sizes and budgets to start small and scale rapidly, as need-
ed, into future requirements. Rather than requiring organiza-
tions to purchase large clusters, this system offers a complete
building-block approach in which users can start out with one
chassis and add systems and components as needed. Designed
to be easily run and simply managed at any scale – from a hand-
ful to thousands – NeXtScale System can help organizations
achieve maximum impact per dollar.
IBM NeXtScale nx360 M4 at a glance
Form factor/height Half-wide 1U
Processor Two Intel Xeon E5-2600 v2 series
Cache
Level 2: 256 KB per core
Level 3: 4 cores – 15 MB, 6 cores – 15 MB, 8 cores – 20 MB, 10 cores – 25 MB, 12 cores – 30 MB
Memory 8 DDR3/DDR3L LP, 128 GB maximum with 16 GB LP RDIMM
Chassis support NeXtScale n1200 Enclosure
Local Storage
One 3.5-inch, two 2.5-inch SAS/SATA hard disk drives (HDDs) or four 1.8-inch solid state drives, up to 4 TB maximum
capacity with one 4 TB 3.5-inch HDD
Storage Native Expansion (NEX) Tray Eight 3.5-inch SAS/SATA HDDs, up to 32 TB maximum capacity
Internal RAID Onboard SATA controller with RAID options
USB ports One internal USB key
Ethernet Two built-in 1 Gigabit Ethernet (GbE) ports standard
Input/output Two InfiniBand FDR ports (slotless option), two 10 GbE (slotless option), one PCIe (x16 PCI Express 3.0)
Power management Rack-level power capping and management via IBM Extreme Cloud Administration Toolkit (xCAT)
Systems management
IBM Integrated Management Module 2 (IMM2) with dedicated management port, IPMI 2.0 compliant, Platform LSF and
Platform HPC
Operating systems supported Microsoft Windows Server, SUSE Linux Enterprise Server, Red Hat Enterprise Linux, VMware vSphere Hypervisor (ESXi)
Limited warranty 3-year customer replaceable unit and onsite limited warranty, next business day 9x5, service upgrades available
IBM NeXtScale n1200 Enclosure at a glance
Form factor 6U NeXtScale, standard rack
Bays 12
Power supply Six 900 W hot-swappable power supplies, 80 PLUS Platinum certified for high energy efficiency, configurable as non-redundant, N+1 or N+N redundant
Fans 10 hot-swappable
Controller Fan and power controller
Big Data or Cloud Storage: whatever you call it, there is certainly increasing demand to store larger and larger amounts of unstructured data. The IBM General Parallel File System (GPFS™) has always been considered a pioneer of big data storage and continues today to lead in introducing industry-leading storage technologies. Since 1998, GPFS has led the industry with many technologies that make the storage of large quantities of file data possible. The latest version continues in that tradition: GPFS 3.5 represents a significant milestone in the evolution of big data management and introduces revolutionary new features that clearly demonstrate IBM’s commitment to providing industry-leading storage solutions.
General Parallel File System (GPFS)
Technologies That Enable the Management of Big Data
What is GPFS?
GPFS is more than clustered file system software; it is a full
featured set of file management tools. This includes advanced
storage virtualization, integrated high availability, automated
tiered storage management and the performance to effectively
manage very large quantities of file data.
GPFS allows a group of computers concurrent access to a com-
mon set of file data over a common SAN infrastructure, a net-
work or a mix of connection types. The computers can run any
mix of AIX, Linux or Windows Server operating systems. GPFS
provides storage management, information life cycle manage-
ment tools, centralized administration and allows for shared
access to file systems from remote GPFS clusters providing a
global namespace.
A GPFS cluster can be a single node, two nodes providing a high
availability platform supporting a database application, for
example, or thousands of nodes used for applications like the
modeling of weather patterns. The largest existing configura-
tions exceed 5,000 nodes. GPFS has been available since 1998 and has been field proven for more than 14 years on some of the world’s most powerful supercomputers to provide reliability and efficient use of infrastructure bandwidth.
GPFS was designed from the beginning to support high per-
formance parallel workloads and has since been proven very
effective for a variety of applications. Today it is installed in
clusters supporting big data analytics, gene sequencing, digi-
tal media and scalable file serving. These applications are used
across many industries including financial, retail, digital media,
biotechnology, science and government. GPFS continues to
push technology limits by being deployed in very demanding
large environments. You may not need multiple petabytes of
data today, but you will, and when you get there you can rest
assured GPFS has already been tested in these environments.
This leadership is what makes GPFS a solid solution for any size
application.
Supported operating systems for GPFS Version 3.5 include AIX,
Red Hat, SUSE and Debian Linux distributions and Windows
Server 2008.
The file system
A GPFS file system is built from a collection of arrays that contain
the file system data and metadata. A file system can be built from
a single disk or contain thousands of disks storing petabytes of
data. Each file system can be accessible from all nodes within
the cluster. There is no practical limit on the size of a file system.
The architectural limit is 2⁹⁹ bytes. As an example, current GPFS
customers are using single file systems up to 5.4PB in size and
others have file systems containing billions of files.
Application interfaces
Applications access files through standard POSIX file system in-
terfaces. Since all nodes see all of the file data, applications can scale out easily. Any node in the cluster can concurrently read
or update a common set of files. GPFS maintains the coheren-
cy and consistency of the file system using sophisticated byte
range locking, token (distributed lock) management and journal-
ing. This means that applications using standard POSIX locking
semantics do not need to be modified to run successfully on a
GPFS file system.
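As a simple illustration, the sketch below uses nothing but standard POSIX calls: open, fcntl byte-range locking and pwrite. The same code runs unchanged whether the file lives on a local disk or on a GPFS file system mounted on every node; the path shown is a hypothetical GPFS mount point chosen for illustration.

/* posix_io.c - plain POSIX I/O with a byte-range lock; on GPFS the lock
 * is honored cluster-wide by the distributed token manager */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/gpfs/fs1/shared/counter.dat";  /* hypothetical path */
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* lock the first 4 KiB for writing */
    struct flock lk = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
                        .l_start = 0, .l_len = 4096 };
    if (fcntl(fd, F_SETLKW, &lk) < 0) { perror("fcntl"); close(fd); return 1; }

    const char msg[] = "hello from a standard POSIX application\n";
    if (pwrite(fd, msg, strlen(msg), 0) < 0) perror("pwrite");

    lk.l_type = F_UNLCK;          /* release the byte-range lock */
    fcntl(fd, F_SETLK, &lk);
    close(fd);
    return 0;
}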
In addition to standard interfaces GPFS provides a unique set of
extended interfaces which can be used to provide advanced ap-
plication functionality. Using these extended interfaces an appli-
cation can determine the storage pool placement of a file, create
a file clone and manage quotas. These extended interfaces pro-
vide features in addition to the standard POSIX interface.
Performance and scalability
GPFS provides unparalleled performance for unstructured data.
GPFS achieves high performance I/O by:
ʎ Striping data across multiple disks attached to multiple nodes.
ʎ High performance metadata (inode) scans.
ʎ Supporting a wide range of file system block sizes to match I/O requirements.
ʎ Utilizing advanced algorithms to improve read-ahead and write-behind I/O operations.
ʎ Using block level locking based on a very sophisticated scalable token management system to provide data consistency while allowing multiple application nodes concurrent access to the files.
When creating a GPFS file system you provide a list of raw devices and they are assigned to GPFS as Network Shared Disks (NSDs). Once an NSD is defined, all of the nodes in the GPFS cluster can access the disk, using a local disk connection or using the GPFS NSD network protocol for shipping data over a TCP/IP or InfiniBand connection.
GPFS token (distributed lock) management coordinates access to NSDs, ensuring the consistency of file system data and metadata
when different nodes access the same file. Token management
responsibility is dynamically allocated among designated nodes
in the cluster. GPFS can assign one or more nodes to act as token
managers for a single file system. This allows greater scalabil-
ity when you have a large number of files with high transaction
workloads. In the event of a node failure the token management
responsibility is moved to another node.
All data stored in a GPFS file system is striped across all of the disks within a storage pool, whether the pool contains 2 LUNs or 2,000 LUNs. This wide data striping allows you to get the best performance for the available storage. When disks are added to or removed from a storage pool, existing file data can be redistributed across the new storage to improve performance. Data redistribution can be done automatically or can be scheduled. When redistributing data you can assign a single node to perform the task to control the impact on a production workload, or have all of the nodes in the cluster participate in data movement to complete the operation as quickly as possible. Online storage configuration is a good example of an enterprise class storage management feature included in GPFS.
To achieve the highest possible data access performance, GPFS recognizes typical access patterns, including sequential, reverse sequential and random, and optimizes I/O access for these patterns.
Along with distributed token management, GPFS provides scalable metadata management by allowing all nodes of the cluster accessing the file system to perform file metadata operations. This feature distinguishes GPFS from other cluster file systems, which typically have a centralized metadata server handling fixed regions of the file namespace. A centralized metadata server can often become a performance bottleneck for metadata intensive operations, limiting scalability and possibly introducing a single point of failure. GPFS solves this problem by enabling all nodes to manage metadata.
Administration
GPFS provides an administration model that is easy to use and is consistent with standard file system administration practices while providing extensions for the clustering aspects of GPFS. These functions support cluster management and other standard file system
administration functions such as user quotas, snapshots and ex-
tended access control lists.
GPFS administration tools simplify cluster-wide tasks. A single GPFS command can perform a file system function across the entire cluster and most can be issued from any node in the cluster. Optionally you can designate a group of administration nodes that can be used to perform all cluster administration tasks, or only authorize a single login session to perform admin commands cluster-wide. This allows for higher security by reducing the scope of node to node administrative access.
Rolling upgrades allow you to upgrade individual nodes in the clus-
ter while the file system remains online. Rolling upgrades are sup-
ported between two major version levels of GPFS (and service lev-
els within those releases). For example you can mix GPFS 3.4 nodes
with GPFS 3.5 nodes while migrating between releases.
Quotas enable the administrator to manage file system usage by users and groups across the cluster. GPFS provides commands to generate quota reports by user, group and on a sub-tree of a file system called a fileset. Quotas can be set on the number of files (inodes) and the total size of the files. New in GPFS 3.5, you can define per-fileset user and group quotas, which allows for more options in quota configuration. In addition to traditional quota management, the GPFS policy engine can be used to query the file system metadata and generate customized space usage reports.
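As a sketch of such a report, a list rule like the one below could be run with the mmapplypolicy command to identify every file larger than 1 GiB; the rule and list names are hypothetical, and the exact clause syntax should be verified against the GPFS 3.5 Administration Guide for your release.

RULE EXTERNAL LIST 'large-files' EXEC ''
RULE 'bigfiles' LIST 'large-files' WHERE FILE_SIZE > 1073741824

The resulting candidate list can then be post-processed into a usage report, or narrowed with additional WHERE conditions on attributes such as USER_ID or ACCESS_TIME.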
An SNMP interface allows monitoring by network management
applications. The SNMP agent provides information on the GPFS
cluster and generates traps when events occur in the cluster. For
example, an event is generated when a file system is mounted or if
a node fails. The SNMP agent runs on Linux and AIX. You can moni-
tor a heterogeneous cluster as long as the agent runs on a Linux or
AIX node.
You can customize the response to cluster events using GPFS
callbacks. A callback is an administrator defined script that is ex-
ecuted when an event occurs, for example, when a file system is
unmounted or a file system is low on free space. Callbacks can
be used to create custom responses to GPFS events and integrate
these notifications into various cluster monitoring tools.
GPFS provides support for the Data Management API (DMAPI) in-
terface which is IBM’s implementation of the X/Open data storage
management API. This DMAPI interface allows vendors of storage
management applications such as IBM Tivoli® Storage Manager
(TSM) and High Performance Storage System (HPSS) to provide Hi-
erarchical Storage Management (HSM) support for GPFS.
GPFS supports POSIX and NFS V4 access control lists (ACLs). NFS v4
ACLs can be used to serve files using NFSv4, but can also be used in
other deployments, for example, to provide ACL support to nodes
running Windows. To provide concurrent access from multiple op-
erating system types GPFS allows you to run mixed POSIX and NFS
v4 permissions in a single file system and map user and group IDs
between Windows and Linux/UNIX environments.
File systems may be exported to clients outside the cluster
through NFS. GPFS is often used as the base for a scalable NFS
file service infrastructure. The GPFS clustered NFS (cNFS) feature
provides data availability to NFS clients by providing NFS service
continuation if an NFS server fails. This allows a GPFS cluster to
provide scalable file service by providing simultaneous access to a
common set of data from multiple nodes. The clustered NFS tools
include monitoring of file services and IP address fail over. GPFS
cNFS supports NFSv3 only. You can export a GPFS file system using
NFSv4 but not with cNFS.
Data availability
GPFS is fault tolerant and can be configured for continued ac-
cess to data even if cluster nodes or storage systems fail. This is
accomplished through robust clustering features and support for
synchronous and asynchronous data replication.
GPFS software includes the infrastructure to handle data consis-
tency and availability. This means that GPFS does not rely on ex-
ternal applications for cluster operations like node failover. The
clustering support goes beyond who owns the data or who has
access to the disks. In a GPFS cluster all nodes see all of the data
and all cluster operations can be done by any node in the clus-
ter with a server license. All nodes are capable of performing all
tasks. What tasks a node can perform is determined by the type
of license and the cluster configuration.
As a part of the built-in availability tools GPFS continuously mon-
itors the health of the file system components. When failures are
detected appropriate recovery action is taken automatically. Ex-
tensive journaling and recovery capabilities are provided which
maintain metadata consistency when a node holding locks or
performing administrative services fails.
Snapshots can be used to protect the file system’s contents
against a user error by preserving a point in time version of the
file system or a sub-tree of a file system called a fileset. GPFS
implements a space efficient snapshot mechanism that gener-
ates a map of the file system or fileset at the time the snapshot
is taken. New data blocks are consumed only when the file sys-
tem data has been deleted or modified after the snapshot was
created. This is done using a redirect-on-write technique (some-
times called copy-on-write). Snapshot data is placed in existing
storage pools simplifying administration and optimizing the use
of existing storage. The snapshot function can be used with a
backup program, for example, to run while the file system is in
use and still obtain a consistent copy of the file system as it was
when the snapshot was created. In addition, snapshots provide
an online backup capability that allows files to be recovered eas-
ily from common problems such as accidental file deletion.
Data Replication
For an additional level of data availability and protection synchro-
nous data replication is available for file system metadata and
data. GPFS provides a very flexible replication model that allows
you to replicate a file, set of files, or an entire file system. The rep-
lication status of a file can be changed using a command or by us-
ing the policy based management tools. Synchronous replication
allows for continuous operation even if a path to an array, an array
itself or an entire site fails.
Synchronous replication is location aware which allows you to
optimize data access when the replicas are separated across a
WAN. GPFS has knowledge of what copy of the data is “local” so
read-heavy applications can get local data read performance even
when data is replicated over a WAN. Synchronous replication works
well for many workloads by replicating data across storage arrays
within a data center, within a campus or across geographical dis-
tances using high quality wide area network connections.
When wide area network connections are not high performance
or are not reliable, an asynchronous approach to data replication
is required. GPFS 3.5 introduces a feature called Active File Man-
agement (AFM). AFM is a distributed disk caching technology de-
veloped at IBM Research that allows the expansion of the GPFS
global namespace across geographical distances. It can be used to
provide high availability between sites or to provide local “copies”
of data distributed to one or more GPFS clusters. For more details
on AFM see the section entitled Sharing data between clusters.
For a higher level of cluster reliability GPFS includes advanced
clustering features to maintain network connections. If a network
connection to a node fails GPFS automatically tries to reestablish
the connection before marking the node unavailable. This can
provide for better uptime in environments communicating across
a WAN or experiencing network issues.
Using these features along with a high availability infrastructure
ensures a reliable enterprise class storage solution.
GPFS Native Raid (GNR)
Larger disk drives and larger file systems are creating challenges
for traditional storage controllers. Current RAID 5 and RAID 6
based arrays do not address the challenges of Exabyte scale stor-
age performance, reliability and management. To address these
challenges GPFS Native RAID (GNR) brings storage device man-
agement into GPFS. With GNR GPFS can directly manage thou-
sands of storage devices. These storage devices can be individual
disk drives or any other block device eliminating the need for a
storage controller.
GNR employs a de-clustered approach to RAID. The de-clustered
architecture reduces the impact of drive failures by spreading
data over all of the available storage devices improving appli-
cation IO and recovery performance. GNR provides very high
reliability through an 8+3 Reed Solomon based raid code that
divides each block of a file into 8 parts and associated parity.
This algorithm scales easily starting with as few as 11 storage
devices and growing to over 500 per storage pod. Spreading the
data over many devices helps provide predictable storage perfor-
mance and fast recovery times measured in minutes rather than
hours in the case of a device failure.
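As a back-of-the-envelope illustration of the 8+3 scheme: for every 8 data strips, 3 parity strips are written, so the raw-capacity overhead is 3/8 = 37.5 percent, yet any 3 of the 11 strips that make up a block can be lost, meaning up to three concurrent device failures are tolerated without data loss, one more than conventional RAID 6.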
In addition to performance improvements GNR provides ad-
vanced checksum protection to ensure data integrity. Checksum
information is stored on disk and verified all the way to the NSD
client.
Information lifecycle management (ILM) toolset
GPFS can help you to achieve data lifecycle management efficien-
cies through policy-driven automation and tiered storage man-
agement. The use of storage pools, filesets and user-defined poli-
cies provide the ability to better match the cost of your storage to
the value of your data.
Storage pools are used to manage groups of disks within a file sys-
tem. Using storage pools you can create tiers of storage by group-
ing disks based on performance, locality or reliability characteris-
tics. For example, one pool could contain high performance solid state disks (SSDs) and another more economical 7,200 RPM disk storage. These types of storage pools are called internal storage
pools. When data is placed in or moved between internal storage
pools all of the data management is done by GPFS. In addition to
internal storage pools GPFS supports external storage pools. Ex-
ternal storage pools are used to interact with an external storage
management application including IBM Tivoli Storage Manager
(TSM) and High Performance Storage System (HPSS). When moving
data to an external pool GPFS handles all of the metadata process-
ing then hands the data to the external application for storage on
alternate media, tape for example. When using TSM or HPSS data
can be retrieved from the external storage pool on demand, as a
result of an application opening a file or data can be retrieved in a
batch operation using a command or GPFS policy. A fileset is a sub-
tree of the file system namespace and provides a way to partition
the namespace into smaller, more manageable units.
Filesets provide an administrative boundary that can be used to
set quotas, take snapshots, define AFM relationships and be used
in user defined policies to control initial data placement or data
migration. Data within a single fileset can reside in one or more
storage pools. Where the file data resides and how it is managed
once it is created is based on a set of rules in a user defined policy.
There are two types of user defined policies in GPFS: file place-
ment and file management. File placement policies determine in
which storage pool file data is initially placed. File placement rules
are defined using attributes of a file known when a file is created
such as file name, fileset or the user who is creating the file. For
example a placement policy may be defined that states ‘place all
files with names that end in .mov onto the near-line SAS based
storage pool and place all files created by the CEO onto the SSD
based storage pool’ or ‘place all files in the fileset ‘development’
onto the SAS based storage pool’.
Once files exist in a file system, file management policies can be
used for file migration, deletion, changing file replication status
or generating reports.
You can use a migration policy to transparently move data from
one storage pool to another without changing the file’s location
in the directory structure. Similarly you can use a policy to change
the replication status of a file or set of files, allowing fine grained
control over the space used for data availability.
You can use migration and replication policies together, for exam-
ple a policy that says: ‘migrate all of the files located in the subdi-
rectory /database/payroll which end in *.dat and are greater than
1 MB in size to storage pool #2 and un-replicate these files’.
File deletion policies allow you to prune the file system, deleting
files as defined by policy rules. Reporting on the contents of a file
system can be done through list policies. List policies allow you to
quickly scan the file system metadata and produce information
listing selected attributes of candidate files.
File management policies can be based on more attributes of a file than placement policies because once a file exists there is more known about the file. For example, file management policies can utilize attributes such as last access time, size of the file or a mix of user and file size. This may result in policies like: ‘Delete all files
with a name ending in .temp that have not been accessed in the
last 30 days’, or ‘Migrate all files owned by Sally that are larger
than 4GB to the SATA storage pool’.
Rule processing can be further automated by including attri-
butes related to a storage pool instead of a file using the thresh-
old option. Using thresholds you can create a rule that moves
files out of the high performance pool if it is more than 80%
full, for example. The threshold option comes with the ability to
set high, low and pre-migrate thresholds. Pre-migrated files are
files that exist on disk and are migrated to tape. This method
is typically used to allow disk access to the data while allow-
ing disk space to be freed up quickly when a maximum space
threshold is reached. This means that GPFS begins migrating
data at the high threshold, until the low threshold is reached.
If a pre-migrate threshold is set GPFS begins copying data un-
til the pre-migrate threshold is reached. This allows the data to
continue to be accessed in the original pool until it is quickly
deleted to free up space the next time the high threshold is
reached. Thresholds allow you to fully utilize your highest per-
formance storage and automate the task of making room for
new high priority content.
Policy rule syntax is based on the SQL 92 syntax standard and
supports multiple complex statements in a single rule enabling
powerful policies. Multiple levels of rules can be applied to a
file system, and rules are evaluated in order for each file when the policy engine executes, allowing a high level of flexibility.
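To make the placement, migration and deletion rules described above concrete, the following sketch shows what such a policy could look like in this SQL-like syntax; the pool, fileset and rule names are hypothetical, and the exact clause syntax should be checked against the GPFS 3.5 Administration Guide for your release.

/* initial placement: rules are evaluated in order, first match wins */
RULE 'mov-files' SET POOL 'nearline-sas' WHERE LOWER(NAME) LIKE '%.mov'
RULE 'dev-files' SET POOL 'sas' FOR FILESET ('development')
RULE 'default' SET POOL 'system'

/* make room in the fast pool once it exceeds 80% occupancy,
   migrating the least recently accessed files until it drops to 60% */
RULE 'make-room' MIGRATE FROM POOL 'system' THRESHOLD(80,60)
     WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME) TO POOL 'nearline-sas'

/* prune scratch files that have not been accessed for 30 days */
RULE 'cleanup' DELETE WHERE LOWER(NAME) LIKE '%.temp'
     AND (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30

Placement rules of this kind are typically installed with mmchpolicy, while migration and deletion rules are executed, on demand or on a schedule, with mmapplypolicy.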
GPFS provides unique functionality through standard interfaces; an example of this is extended attributes. Extended attributes are a standard POSIX facility.
GPFS has long supported the use of extended attributes, though
in the past they were not commonly used, in part because of per-
formance concerns. In GPFS 3.4, a comprehensive redesign of the
extended attributes support infrastructure was implemented,
resulting in significant performance improvements. In GPFS 3.5,
extended attributes are accessible by the GPFS policy engine al-
lowing you to write rules that utilize your custom file attributes.
Executing file management operations requires the ability to
efficiently process the file metadata. GPFS includes a high per-
formance metadata scan interface that allows you to efficiently
process the metadata for billions of files. This makes the GPFS
ILM toolset a very scalable tool for automating file management.
This high performance metadata scan engine employs a scale-
out approach. The identification of candidate files and data
movement operations can be performed concurrently by one or
more nodes in the cluster. GPFS can spread rule evaluation and
data movement responsibilities over multiple nodes in the clus-
ter providing a very scalable, high performance rule processing
engine.
Cluster configurations
GPFS supports a variety of cluster configurations independent
of which file system features you use. Cluster configuration options can be characterized into four basic categories:
ʎ Shared disk
ʎ Network block I/O
ʎ Synchronously sharing data between clusters
ʎ Asynchronously sharing data between clusters
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
 
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
What is Artificial Intelligence?????????
What is Artificial Intelligence?????????What is Artificial Intelligence?????????
What is Artificial Intelligence?????????
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directions
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
 
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxThe Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
 

HPC Compass 2014/2015 IBM Special

  • 2. TechnologyCompass Table of Contents and Introduction IBM Technical Computing.....................................................4 Accelerate Insights and Results..................................................................6 IBM Application Ready Solutions ..........................................................10 Enterprise-Ready Cluster & Workload Management....................................................... 16 IBM Platform HPC..............................................................................................18 What’s New in IBM Platform LSF 8 .........................................................22 IBM Platform MPI 8.1.......................................................................................32 Scalable, Energy Efficient HPC Systems............................................................................... 38 IBM NeXtScale System....................................................................................40 General Parallel File System (GPFS).............................. 46 Technologies That Enable the Management of Big Data.........48 What´s New in GPFS Version 3.5..............................................................64 GPFS Storage Server – a home for Big Data......................................66
  • 3. 3 More than 30 years of experience in Scientific Computing 1980 marked the beginning of a decade where numerous startups were created, some of which later transformed into big players in the IT market. Technical innovations brought dramatic changes to the nascent computer market. In Tübingen, close to one of Germa- ny’s prime and oldest universities, transtec was founded. In the early days, transtec focused on reselling DEC computers and peripherals, delivering high-performance workstations to univer- sity institutes and research facilities. In 1987, SUN/Sparc and stor- age solutions broadened the portfolio, enhanced by IBM/RS6000 products in 1991. These were the typical workstations and server systems for high performance computing then, used by the major- ity of researchers worldwide. In the late 90s, transtec was one of the first companies to offer highly customized HPC cluster solutions based on standard Intel architecture servers, some of which entered the TOP500 list of the world’s fastest computing systems. Thus, given this background and history, it is fair to say that trans- tec looks back upon a more than 30 years’ experience in scientific computing; our track record shows nearly 500 HPC installations. With this experience, we know exactly what customers’ demands are and how to meet them. High performance and ease of manage- ment – this is what customers require today. HPC systems are for sure required to peak-perform, as their name indicates, but that is not enough: they must also be easy to handle. Unwieldy design and operationalcomplexitymustbeavoidedoratleasthiddenfromad- ministrators and particularly users of HPC computer systems. transtec HPC solutions deliver ease of management, both in the Linux and Windows worlds, and even where the customer´s envi- ronment is of a highly heterogeneous nature. Even the dynamical provisioning of HPC resources as needed does not constitute any problem, thus further leading to maximal utilization of the cluster. transtec HPC solutions use the latest and most innovative technol- ogy. Their superior performance goes hand in hand with energy ef- ficiency, as you would expect from any leading edge IT solution. We regard these basic characteristics. In2010,transtecenteredintoastrategicpartnershipwithIBM,sure- ly one of the biggest players in the HPC world with a very strong brand. The flexibility and long-year experience of transtec, com- bined with the power and quality of IBM HPC systems constitute a perfect symbiosis and provide customers with the most optimal HPC solution imaginable. IBM NeXtScale systems are highly opti- mized for HPC workload in datacenter environments, regarding performance, flexibility, and energy, space and cooling efficiency. Platform HPC and LSF are both enterprise-ready HPC cluster and workloadmanagementsolutionsandarewidespreadinallkindsof industrial HPC environments. Your decision for a transtec HPC solution means you opt for most intensive customer care and best service in HPC. Our experts will be glad to bring in their expertise and support to assist you at any stage, from HPC design to daily cluster operations, to HPC Cloud Services. 
Last but not least, transtec HPC Cloud Services provide customers with the possibility to have their jobs run on dynamically provided nodes in a dedicated datacenter, professionally managed and individually customizable. Numerous standard applications like ANSYS, LS-Dyna, OpenFOAM, as well as lots of codes like Gromacs, NAMD, VMD, and others are pre-installed, integrated into an enterprise-ready cloud management environment, and ready to run. Have fun reading the transtec HPC Compass 2014/15 IBM Special!
  • 4.
  • 5. IBMTechnical Computing SciencesRiskAnalysisSimulationBigDataAnalyticsCADHighPerformanceComputing All companies want more compute power, faster networks, better access to data and applications available everywhere and at all times. But simply deploying a fast computer without considering all the elements involved in planning, deployment, installation, administration and ongoing maintenance and updates can actually hobble an organization, affecting productivity and possibly damaging profitability and brand value. Today’s technical computing solutions require an end-to-end view of the hardware, operating environment, applications, data management, software and services. It is about the overall system – where purpose-built systems reside next to general purpose solutions to form a compute and systems capability that meets the most demanding requirements. Whether dealing with real-time trading systems, managing the smart grid, optimizing realtime customer relationship management across multiple distribution channels, or running computationally demanding electronic design automation workloads, you need more than one size fits all to tackle your unique challenges.
  • 6. 6 Analyzing big data requires more than fast processors As technical computing moves towards a data centric model, the ability to deal with large sets of fast-moving structured and unstructured data becomes paramount. Whether ana- lyzing market data to make critical business decisions, or running data-intensive simulations to better understand physical phenomena, analytic processes must be carried out in ever-shorter time spans to be of value to an organization. Generating insights from the exploding volume, velocity and variety of data requires optimized systems specifically archi- tected for that task. To maximize performance, systems opti- mization must be done at every layer of the technology stack to exploit unique processor, memory and storage character- istics. The increasingly sophisticated technical computing workflows require computing tuned to domain knowledge and workload characteristics, hardware with multi core ar- chitectures and advanced threading, and software tuned from the operating system through the middleware stack. Meeting the challenges of your particular operation Multi-step, big data analytics also requires optimized workflows – which means organizations can no longer pick a technical com- puting solution based on a single benchmark, such as a server’s maximum processing power or its ability to run a particular workload faster than a competitor’s solution. Companies need to examine the various tasks in their big data analytics work- flows and match the requirements with suitable technical com- puting solutions. IBMTechnicalComputing Accelerate Insights and Results
  • 7. 7 A broad portfolio of superior and innovative products and technology The IBM vision for technical computing is to bring together technology, science, management and innovation to enable major improvements in business and society – and help build a smarter planet. IBM provides an extensive selection of technical computing options from a portfolio of servers, storage, software, services and financing components backed by access to subject matter experts and world-class support. IBM solutions can help you optimize workloads and overcome obstacles to parallelism and other revolutionary approaches to supercomputing. The sky’s the limit IBM recognizes that one size does not fit all. That’s why we intro- duced the IBM Engineering Solutions for Cloud. Based on proven IBM technology, Engineering Solutions for Cloud let organizations build a centralized, shared product development center that sup- ports both interactive and batch design workloads. This solution enablesdesignersandengineerstoaccesstheTechnicalComputing cloud environment from a laptop practically anywhere in the world, using interactive applications with 2D or 3D remote visualization significantly saving cost and minimizing the amount of data that must be transferred to and from the cloud. Helping a broad array of industries IBM is helping companies and organizations in more than a doz- en industries. IBM has powerful, innovative solutions to compa- nies’ most challenging and complex problems, that allow busi- nesses and researchers to innovate, make critical technical and business decisions, achieve breakthrough results, and establish sustainable competitive advantage. Making sense of dollars Financial services firms are rethinking their strategies as they re- spond to the sweeping changes in the markets and the regulato- ry environment, and an incessant blizzard of data. In fact, some financial organizations consume market data at rates exceeding one million messages per second, twice the peak rates they ex- perienced only a year ago. With an optimized technical comput- ing system from IBM, financial services enterprise can process vast amounts of structured and unstructured data, in real time which means you won’t be lost in a data whiteout. Engineering a smarter planet Meeting the demands of today’s automotive, aerospace, de- fense and manufacturing engineers requires unprecedented computing power for structural analysis, noise, vibration, and harshness tests, crash analytics, and fluid dynamics. IBM offers computer aided engineering (CAE) optimized solutions that in- clude systems, storage and software, from leading ISVs to help you streamline your development environment, reduce design- cycle times and infrastructure costs, and meet aggressive time- to-market deadlines. Searching for black gold A tectonic shift is underway in upstream petroleum computing. Reservoir modeling and sensor field data now interact in near real-time to dramatically improve the fidelity of the analysis, its accuracy, and reliability. With IBM technical computing systems, energy companies can reduce the duration and cost of problem solving in reservoir optimization and seismic imaging, continu- ing to advance the field of exploration and production.
  • 8. Community collaboration Complete technical computing solutions may require components supplied by specialized vendor such as ap- plications and tools from ISVs, hardware for intercon- nection and acceleration of processing nodes, and state- of-the-art cooling technology for greener operation, to name a few. IBM maintains technical and business relationships with all the leading technical computing providers. IBM also works closely with industry, open standards consortia, and government agencies around the world to facilitate technology advancement and deployment, and collaborates with leading academic institutions through our shared university research programs and fellowships. Such collaborations drive value back to the community and result in improved products. IBMTechnicalComputing Accelerate Insights and Results 8 Using insight to help support a smarter planet Whether optimizing traffic flow to lowering fuel consumption and time wasted in traffic jams, or unraveling genetic codes to develop new medicines and therapies, or increasing the pro- duction of oil and gas from existing reservoirs, powerful and ef- ficient technical computing solutions from IBM provide a foun- dation to handle the associated computational challenges and extract intelligence from complex systems of instrumented and interconnected people and devices.
  • 9. 9
  • 10. Accelerate Time to Value for Technical Computing Businesses in nearly every industry are looking for ways to improve the efficiency of their technical computing environments. Companies that design aerospace or automotive products need systems that can help them meet time-to-market requirements and maximize profitability. Organizations helping to find causes and cures for disease need ways to increase productivity, foster innovation and compete more effectively. For reservoir engineers, the rising cost of oil and gas drilling means ever more accurate models are required to pinpoint potential well sites and extract higher percentages of oil and gas resources, and communications service providers need a way to quickly analyze data and act on it. Keys to overcoming technical computing challenges Most technical computing tasks involve vast amounts of data and require thousands of complex calculations. Pressures to “do more with less” create requirements for greater efficiency. Increasing application performance and workload throughput is important, but that is only part of the solution. Organizations can also realize dramatic efficiency benefits from simplified installation, deployment and management of an optimized technical computing environment. Additionally, many companies have limited IT resources to devote to administering the high-performance systems required for sophisticated design, analytics and research tasks. These companies require a solution that is affordable and easy to use, and that will help make the most of their infrastructure investment by ensuring compute resources are fully utilized and prioritized.
  • 11. IBM has created workload-optimized solutions designed to meet these challenges. IBM Application Ready Solutions for Technical Computing are based on IBM Platform Computing software and powerful IBM systems, integrated and optimized for leading applications and backed by reference architectures. With IBM Application Ready Solutions, organizations can spend more time solving scientific and engineering problems, instead of administering computing environments. IBM Application Ready Solutions: Looking under the hood IBM has created Application Ready Solution reference architectures for target workloads and applications. Each of these reference architectures includes recommended small, medium and large configurations designed to ensure optimal performance at entry-level prices. These reference architectures are based on powerful, predefined and tested infrastructure with a choice of the following systems: ʎ IBM Flex System provides the ability to combine leading-edge compute nodes with integrated storage and networking in a highly dense, scalable blade system. The IBM Application Ready Solution supports IBM Flex System x240 (x86) compute nodes. ʎ IBM System x helps organizations address their most challenging and complex problems. The Application Ready Solution supports IBM NeXtScale System, a revolutionary new x86 high-performance system designed for modular flexibility and scalability, System x rack-mounted servers and System x iDataPlex dx360 M4 systems designed to optimize density, performance and graphics acceleration for remote 3-D visualization. ʎ IBM System Storage Storwize V3700 is an entry-level disk system delivering an ideal price/performance ratio and scalability – or choose the optional IBM Storwize V7000 Unified for enterprise-class, midrange storage designed to consolidate block-and-file workloads into a single system. ʎ IBM Intelligent Cluster is a factory-integrated, fully tested solution that helps simplify and expedite deployment of x86-based Application Ready Solutions. The solutions also include pre-integrated IBM Platform Computing software designed to address technical computing challenges: ʎ IBM Platform HPC is a complete technical computing management solution in a single product, with a range of features designed to improve time-to-results and help researchers focus on their work rather than on managing workloads. ʎ IBM Platform LSF provides a comprehensive set of tools for intelligently scheduling workloads and dynamically allocating resources to help ensure optimal job throughput. ʎ IBM Platform Symphony delivers powerful enterprise-class management for running big data, analytics and compute-intensive applications. ʎ IBM Platform Cluster Manager – Standard Edition provides easy-to-use yet powerful cluster management for technical computing clusters that simplifies the entire process, from initial deployment through provisioning to ongoing maintenance. ʎ IBM General Parallel File System (GPFS) is a high-performance enterprise file management platform for optimizing data management.
  • 12. 12 Technical computing workloads optimized for Application Ready Solutions IBM Application Ready Solutions take the guesswork and com- plexity out of deploying, managing and using high-performance clusters, grids and clouds in industries such as automotive, aero- space, life sciences, electronics, telecommunications, chemistry and petroleum. IBM Application Ready Solution for Abaqus Developed in partnership with Dassault Systèmes, the IBM Ap- plication Ready Solution for Abaqus provides the framework to consolidate numerous tools into a single, unified modeling and analysis computing environment. High performance IBM sys- tems, workload and file management, networking and storage combine to provide a complete integrated environment. Easy access to Abaqus job-related data and remote job management provides the means to solve entry-level or extremely large simu- lation problems with fast turnaround times. IBM Application Ready Solution for Accelrys Designed for healthcare and life sciences, the Application Ready Solution for Accelrys simplifies and accelerates mapping, variant calling and annotation for the Accelrys Enterprise Platform (AEP) NGS Collection. It addresses file system performance–the num- ber-one challenge for NGS workloads on AEP – by integrating IBM GPFS for scalable I/O performance. IBM systems provide the computational power and high-performance storage required, along with simplified cluster management to speed deployment and provisioning. IBMTechnicalComputing IBM Application Ready Solutions
  • 13. IBM Application Ready Solution for ANSYS ANSYS software helps engineers tackle demanding tasks such as computational fluid dynamics (CFD) modeling, structural analysis and digital wind-tunnel simulation. The IBM Application Ready Solution for ANSYS speeds deployment and optimizes performance for the most demanding ANSYS Fluent and ANSYS Mechanical environments. Engineers can become productive quickly, easily submitting simulations, sharing files with colleagues and enhancing insight when optional remote 2-D and 3-D visualization is configured. IBM Application Ready Solution for CLC bio This integrated solution is architected for clients involved in genomics research in areas ranging from personalized medicine to plant and food research. Combining CLC bio software with high-performance IBM systems and GPFS, the solution accelerates high-throughput sequencing and analysis of next-generation sequencing data while improving the efficiency of CLC bio Genomic Server and CLC Genomics Workbench environments. IBM Application Ready Solution for Gaussian Gaussian software is widely used by chemists, chemical engineers, biochemists, physicists and other scientists performing molecular electronic structure calculations in a variety of market segments. The IBM Application Ready Solution is designed to help speed results by integrating the latest version of the Gaussian series of programs with powerful IBM Flex System blades and integrated storage. IBM Platform Computing provides simplified workload and resource management. IBM Application Ready Solution for IBM InfoSphere BigInsights The Application Ready Solution for IBM InfoSphere BigInsights provides a powerful big data MapReduce analytics environment and reference architecture based on IBM PowerLinux servers, IBM Platform Symphony, IBM GPFS and integrated storage. The solution delivers balanced performance for data-intensive workloads, along with tools and accelerators to simplify and speed application development. The solution is ideal for solving time-critical, data-intensive analytics problems in a wide range of industry sectors. IBM Application Ready Solution for MSC Software The IBM Application Ready Solution for MSC Software features an optimized platform designed to help manufacturers rapidly deploy a high-performance simulation, modeling and data management environment, complete with process workflow and other high-demand usability features. The platform features IBM systems, workload management and parallel file system seamlessly integrated with MSC Nastran, MSC Patran and MSC SimManager to provide clients robust and agile engineering clusters or HPC clouds for accelerated results and lower cost. IBM Application Ready Solution for Schlumberger Fine-tuned for accelerating reservoir simulations using Schlumberger ECLIPSE and INTERSECT, this Application Ready Solution provides application templates to reduce setup time and simplify job submission. Architected specifically for Schlumberger applications, the solution enables users to perform significantly more iterations of their simulations and analysis, ultimately yielding more accurate results. Easy access to Schlumberger job-
  • 14. Time savings with IBM Application Ready Solutions. 14 IBMTechnicalComputing IBM Application Ready Solutions related data and remote management improves user and admin- istrator productivity. Complete, integrated solutions architected to deliver real-world benefits IBM Application Ready Solutions help organizations transform environments to deliver results faster, better and at less expense. The benefits start with pre-integrated and fully supported2 solu- tions that reduce the complexity of the IT lifecycle and shorten implementation time. Companies have one support number to call for all IBM software and hardware components for dedicat- ed assistance from technical computing industry experts. Exten- sible IBM Application Ready Solutions also help protect a compa- ny’s technical computing investment by scaling as requirements grow. IBM Platform Computing software allows companies to speed time-to-results and lower costs by simplifying manage- ment of high-performance clusters and clouds. These products enable research and development teams to easily access a pool of shared resources to dramatically accelerate a wide range of simulations and analytics. Job submission templates reduce
  • 15. 15 setup time while minimizing user errors during job submission, and built-in workload management capabilities such as tracking application license usage and scheduling jobs based on license availability improve resource utilization and help ensure fastest time-to-results. Designed, tested and optimized by experienced technical computing architects from IBM and leading indepen- dent software vendors (ISVs), IBM Application Ready Solutions help deliver optimal application performance and robustness. IBM high-performance systems, software and storage are de- signed to accelerate even the most demanding workloads. Fur- ther performance improvements are provided by the advanced cluster file system, which improves efficiency and speed by re- moving data-related bottlenecks.
  • 16.
  • 17. Enterprise-Ready Cluster & Workload Management High performance computing (HPC) is becoming a necessary tool for organizations to speed up product design, scientific research, and business analytics. However, there are few software environments more complex to manage and utilize than modern high performance computing clusters. Therefore, addressing the problem of complexity in cluster management is a key aspect of leveraging HPC to improve time to results and user productivity.
  • 18. 18 Introduction Clusters based on the Linux operating system have become increasingly prevalent at large supercomputing centers and continue to make significant in-roads in commercial and aca- demic settings. This is primarily due to their superior price/per- formance and flexibility, as well as the availability of commercial applications that are based on the Linux OS. Ironically, the same factors that make Linux a clear choice for high performance computing often make the operating system less accessible to smaller computing centers. These organiza- tions may have Microsoft Windows administrators on staff, but have little or no Linux or cluster management experience. The complexity and cost of cluster management often outweigh the benefits that make open, commodity clusters so compelling. Not only can HPC cluster deployments be difficult, but the ongoing need to deal with heterogeneous hardware and operating sys- tems, mixed workloads, and rapidly evolving toolsets make de- ploying and managing an HPC cluster a daunting task. These issues create a barrier to entry for scientists and research- ers who require the performance of an HPC cluster, but are lim- ited to the performance of a workstation. This is why ease of use is now mandatory for HPC cluster management. This paper re- views the most complete and easy to use cluster management solution, Platform HPC, which is now commercially available from Platform Computing. The cluster management challenge To provide a proper HPC application environment, system admin- istrators need to provide a full set of capabilities to their users, as shown below. These capabilities include cluster provisioning and node management, application workload management, and Enterprise-ReadyCluster& WorkloadManagement IBM Platform HPC
  • 19. Application-Centric Interface Unified Management Interface Provisioning & Node Management Workload Management Parallel Job Enablement Adaptive Scheduling Essential components of an HPC cluster solution 19 an environment that makes it easy to develop, run and manage distributed parallel applications. Modern application environments tend to be heterogeneous; some workloads require Windows compute hosts while oth- ers require particular Linux operating systems or versions. The ability to change a node’s operating system on-the-fly in re- sponse to changing application needs - referred to as adaptive scheduling - is important since it allows system administrators to maximize resource use, and present what appears to be a larg- er resource pool to cluster users. Learning how to use a command line interface to power-up, pro- vision and manage a cluster is extremely time-consuming. Ad- ministrators therefore need remote, web-based access to their HPC environment that makes it easier for them to install and manage an HPC cluster. An easy-to-use application-centric web interface can have tangible benefits including improved produc- tivity, reduced training requirements, reduced errors rates, and secure remote access. While there are several cluster management tools that address parts of these requirements, few address them fully, and some tools are little more than collections of discrete open-source software components. Some cluster toolkits focus largely on the problem of cluster pro- visioning and management. While they clearly simplify cluster deployment, administrators wanting to make changes to node configurations or customize their environment will quickly find themselves hand-editing XML configuration files or writ- ing their own shell scripts. Third-party workload managers and various open-source MPI libraries might be included as part of a distribution. However, these included components are loosely integrated and often need to be managed separately from the cluster manager. As a result the cluster administrator needs to learn how to utilize each additional piece of software in order to manage the cluster effectively. Other HPC solutions are designed purely for application work- load management. While these are all capable workload manag- ers, most do not address at all the issue of cluster management, application integration, or adaptive scheduling. If such capabili- ties exist they usually require the purchase of additional soft- ware products. Parallel job management is also critical. One of the primary rea- sons that customers deploy HPC clusters is to maximize applica- tion performance. Processing problems in parallel is a common way to achieve performance gains. The choice of MPI, its scalabil- ity, and the degree to which it is integrated with various OFED drivers and high performance interconnects has a direct impact on delivered application performance. Furthermore, if the work- load manager does not incorporate specific parallel job manage- ment features, busy cluster users and administrators can find themselves manually cleaning up after failed MPI jobs or writing their own shell scripts to do the same.
  • 20. “Platform HPC and Platform LSF have been known for many years as highest-quality and enterprise-ready cluster and workload management solutions, and we have many customers in academia and industry relying on them.” Dr. Oliver Tennert, Director Technology Management & HPC Solutions. Complexity is a real problem. Many small organizations or departments grapple with a new vocabulary full of cryptic commands, configuring and troubleshooting Anaconda kickstart scripts, finding the correct OFED drivers for specialized hardware, and configuring open source monitoring systems like Ganglia or Nagios. Without an integrated solution, administrators may need to deal with dozens of distinct software components, making managing HPC cluster implementations extremely tedious and time-consuming. Re-thinking HPC clusters Clearly these challenges demand a fresh approach to HPC cluster management. Platform HPC represents a “re-think” of how HPC clusters are deployed and managed. Rather than addressing only part of the HPC management puzzle, Platform HPC addresses all facets of cluster management. It provides: ʎʎ A complete, easy-to-use cluster management solution ʎʎ Integrated application support ʎʎ User-friendly, topology-aware workload management ʎʎ Robust workload and system monitoring and reporting ʎʎ Dynamic operating system multi-boot (adaptive scheduling) ʎʎ GPU scheduling ʎʎ Robust commercial MPI library (Platform MPI) ʎʎ Web-based interface for access anywhere Most complete HPC cluster management solution Platform HPC makes it easy to deploy, run and manage HPC clusters while meeting the most demanding requirements for application performance and predictable workload management. It is a complete solution that provides a robust set of cluster management capabilities; from cluster provisioning and management to workload
  • 21. management and monitoring. The easy-to-use unified web portal provides a single point of access into the cluster, making it easy to manage your jobs and optimize application performance. Platform HPC is more than just a stack of software; it is a fully integrated and certified solution designed to ensure ease of use and simplified troubleshooting. Integrated application support High performing, HPC-optimized MPI libraries come integrated with Platform HPC, making it easy to get parallel applications up and running. Scripting guidelines and job submission templates for commonly used commercial applications simplify job submission, reduce setup time and minimize operation errors. Once the applications are up and running, Platform HPC improves application performance by intelligently scheduling resources based on workload characteristics. Fully certified and supported Platform HPC unlocks cluster management to provide the easiest and most complete HPC management capabilities while reducing overall cluster cost and improving administrator productivity. It is based on the industry’s most mature and robust workload manager, Platform LSF, making it the most reliable solution on the market. Other solutions are typically a collection of open-source tools, which may also include pieces of commercially developed software. They lack key HPC functionality and vendor support, relying on the administrator’s technical ability and time to implement. Platform HPC is a single product with a single installer and a unified web-based management interface. With the best support in the HPC industry, Platform HPC provides the most complete solution for HPC cluster management. Complete solution Platform HPC provides a complete set of HPC cluster management features. In this section we’ll explore some of these unique capabilities in more detail. Easy-to-use cluster provisioning and management With Platform HPC, administrators can quickly provision and manage HPC clusters with unprecedented ease. It ensures maximum uptime and can transparently synchronize files to cluster nodes without any downtime or re-installation. Fast and efficient software installation – Platform HPC can be installed on the head node and takes less than one hour using three different mechanisms: ʎʎ Platform HPC DVD ʎʎ Platform HPC ISO file ʎʎ Platform partner’s factory install bootable USB drive Installing software on cluster nodes is simply a matter of associating cluster nodes with flexible provisioning templates through the web-based interface. Flexible provisioning – Platform HPC offers multiple options for provisioning Linux operating environments that include: ʎʎ Package-based provisioning ʎʎ Image-based provisioning ʎʎ Diskless node provisioning Large collections of hosts can be provisioned using the same provisioning template. Platform HPC automatically manages details such as IP address assignment and node naming conventions that reflect the position of cluster nodes in data center racks.
  • 22. 22 Unlike competing solutions, Platform HPC deploys multiple oper- ating systems and OS versions to a cluster simultaneously. This in- cludes Red Hat Enterprise Linux, CentOS, Scientific Linux, and SUSE Linux Enterprise Server. This provides administrators with greater flexibilityinhowtheyservetheirusercommunitiesandmeansthat HPC clusters can grow and evolve incrementally as requirements change. Enterprise-ReadyCluster& WorkloadManagement What’s New in IBM Platform LSF 8
  • 23. 23 What’s New in IBM Platform LSF 8 Written with Platform LSF administrators in mind, this brief pro- vides a short explanation of significant changes in Platform’s lat- est release of Platform LSF, with a specific emphasis on schedul- ing and workload management features. About IBM Platform LSF 8 Platform LSF is the most powerful workload manager for de- manding, distributed high performance computing environ- ments. It provides a complete set of workload management capabilities, all designed to work together to reduce cycle times and maximize productivity in missioncritical environments. This latest Platform LSF release delivers improvements in perfor- mance and scalability while introducing new features that sim- plify administration and boost user productivity. This includes: ʎ Guaranteed resources – Aligns business SLA’s with infra- structure configuration for simplified administration and configuration ʎ Live reconfiguration – Provides simplified administration and enables agility ʎ Delegation of administrative rights – Empowers line of busi- ness owners to take control of their own projects ʎ Fairshare & pre-emptive scheduling enhancements – Fine tunes key production policies Platform LSF 8 Features Guaranteed Resources Ensure Deadlines are Met In Platform LSF 8, resource-based scheduling has been extended to guarantee resource availability to groups of jobs. Resources can be slots, entire hosts or user-defined shared resources such as software licenses. As an example, a business unit might guar- antee that it has access to specific types of resources within ten minutes of a job being submitted, even while sharing resources between departments. This facility ensures that lower priority jobs using the needed resources can be pre-empted in order to meet the SLAs of higher priority jobs. Because jobs can be automatically attached to an SLA class via access controls, administrators can enable these guarantees without requiring that end-users change their job submission procedures, making it easy to implement this capability in exist- ing environments. Live Cluster Reconfiguration Platform LSF 8 incorporates a new live reconfiguration capabil- ity, allowing changes to be made to clusters without the need to re-start LSF daemons. This is useful to customers who need to add hosts, adjust sharing policies or re-assign users between groups “on the fly”, without impacting cluster availability or run- ning jobs.
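As a concrete illustration of attaching work to an SLA class at submission time, as described under Guaranteed Resources above, the sketch below submits a job against a hypothetical service class named finance_sla using the -sla option of bsub. The class name, queue and job command are assumptions for illustration only; the guarantee policy behind such a class is something the administrator would define in the LSF configuration.

    # Submit a 16-slot job under the hypothetical service class "finance_sla".
    # LSF then schedules (and, if necessary, pre-empts lower-priority work)
    # so that the resource guarantee associated with the class can be honoured.
    bsub -sla finance_sla -n 16 -q normal -o risk.%J.out ./run_risk_model

    # Review the job's details, including the service class it is attached to
    bjobs -l

Because the text notes that jobs can be attached to an SLA class automatically via access controls, end users would often not even need the -sla flag in practice; the example simply makes the association explicit.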
  • 24. Changes to the cluster configuration can be made via the bconf command line utility, or via new API calls. This functionality can also be integrated via a web-based interface using Platform Application Center. All configuration modifications are logged for a complete audit history, and changes are propagated almost instantaneously. The majority of reconfiguration operations are completed in under half a second. With Live Reconfiguration, downtime is reduced, and administrators are free to make needed adjustments quickly rather than wait for scheduled maintenance periods or non-peak hours. In cases where users are members of multiple groups, controls can be put in place so that a group administrator can only control jobs associated with their designated group rather than impacting jobs related to another group submitted by the same user. Delegation of Administrative Rights With Platform LSF 8, the concept of group administrators has been extended to enable project managers and line of business managers to dynamically modify group membership and fairshare resource allocation policies within their group. The ability to make these changes dynamically to a running cluster is made possible by the Live Reconfiguration feature. These capabilities can be delegated selectively depending on the group and site policy. Different group administrators can manage jobs, control sharing policies or adjust group membership. More Flexible Fairshare Scheduling Policies To enable better resource sharing flexibility with Platform LSF 8, the algorithms used to tune dynamically calculated user priorities can be adjusted at the queue level. These algorithms can vary based on
  • 25. 25 department, application or project team preferences. The Fairshare parameters ENABLE_HIST_RUN_TIME and HIST_HOURS enable administrators to control the degree to which LSF considers prior resource usage when determining user priority. The flexibility of Platform LSF 8 has also been improved by allowing a similar “decay rate” to apply to currently running jobs (RUN_TIME_DECAY), either system-wide or at the queue level. This is most useful for custom- ers with long-running jobs, where setting this parameter results in a more accurate view of real resource use for the fairshare schedul- ing to consider. Performance & Scalability Enhancements Platform LSF has been extended to support an unparalleled scale of up to 100,000 cores and 1.5 million queued jobs for very high throughput EDA workloads. Even higher scalability is possible for more traditional HPC workloads. Specific areas of improvement include the time required to start the master-batch daemon (MBD), bjobs query perfor- mance, job submission and job dispatching as well as impres- sive performance gains resulting from the new Bulk Job Sub- mission feature. In addition, on very large clusters with large numbers of user groups employing fairshare scheduling, the memory footprint of the master batch scheduler in LSF has been reduced by approximately 70% and scheduler cycle time has been reduced by 25%, resulting in better performance and scalability. More Sophisticated Host-based Resource Usage for Parallel Jobs Platform LSF 8 provides several improvements to how resource use is tracked and reported with parallel jobs. Accurate tracking of how parallel jobs use resources such as CPUs, memory and swap, is important for ease of management, optimal scheduling and accurate reporting and workload analysis. With Platform LSF 8 administrators can track resource usage on a per-host basis and an aggregated basis (across all hosts), ensuring that resource use is reported accurately. Additional details such as running PIDs and PGIDs for distributed parallel jobs, manual cleanup (if necessary) and the development of scripts for manag- ing parallel jobs are simplified. These improvements in resource usage reporting are reflected in LSF commands including bjobs, bhist and bacct. Improved Ease of Administration for Mixed Windows and Linux Clusters The lspasswd command in Platform LSF enables Windows LSF users to advise LSF of changes to their Windows level pass- words. With Platform LSF 8, password synchronization between environments has become much easier to manage because the Windows passwords can now be adjusted directly from Linux hosts using the lspasswd command. This allows Linux users to conveniently synchronize passwords on Windows hosts without needing to explicitly login into the host. Bulk Job Submission When submitting large numbers of jobs with different resource requirements or job level settings, Bulk Job Submission allows for jobs to be submitted in bulk by referencing a single file con- taining job details.
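The decay-related parameters named above (ENABLE_HIST_RUN_TIME, HIST_HOURS and RUN_TIME_DECAY) are set by the administrator in the LSF configuration files. The fragment below is a rough sketch of what a queue-level fairshare stanza in lsb.queues might look like; the queue name, share values and exact parameter placement are assumptions and should be checked against the LSF 8 configuration reference before use.

    # lsb.queues (sketch): a fairshare queue with tuned history decay
    Begin Queue
    QUEUE_NAME           = engineering
    PRIORITY             = 40
    FAIRSHARE            = USER_SHARES[[design_grp, 10] [test_grp, 5]]
    ENABLE_HIST_RUN_TIME = Y    # include historical run time in dynamic priority
    HIST_HOURS           = 12   # decay window applied to historical usage
    RUN_TIME_DECAY       = Y    # also decay the run time of currently running jobs
    DESCRIPTION          = Fairshare queue with queue-level decay settings
    End Queue

After editing the file, the change could be picked up through the live reconfiguration mechanism described earlier, or with a conventional badmin reconfig on setups that do not use it.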
  • 26. 26 Simplified configuration changes – Platform HPC simplifies ad- ministration and increases cluster availability by allowing chang- es such as new package installations, patch updates, and changes to configuration files to be propagated to cluster nodes automati- cally without the need to re-install those nodes. It also provides a mechanism whereby experienced administrators can quickly per- form operations in parallel across multiple cluster nodes. Repository snapshots / trial installations – Upgrading software can be risky, particularly in complex environments. If a new soft- ware upgrade introduces problems, administrators often need to rapidly “rollback” to a known good state. With other cluster managers this can mean having to re-install the entire cluster. Platform HPC incorporates repository snapshots, which are “re- store points” for the entire cluster. Administrators can snapshot a known good repository, make changes to their environment, and easily revert to a previous “known good” repository in the event of anunforeseenproblem.Thispowerfulcapabilitytakestheriskout of cluster software upgrades. New hardware integration – When new hardware is added to a cluster it may require new or updated device drivers that are not supported by the OS environment on the installer node. This means that a newly updated node may not net- work boot and provision until the head node on the cluster is updated with a new operating system; a tedious and disrup- tive process. Platform HPC includes a driver patching utility that allows updated device drivers to be inserted into exist- ing repositories, essentially future proofing the cluster, and providing a simplified means of supporting new hardware without needing to re-install the environment from scratch. Enterprise-ReadyCluster& WorkloadManagement IBM Platform HPC
  • 27. Resource monitoring. Software updates with no re-boot – Some cluster managers always re-boot nodes when updating software, regardless of how minor the change. This is a simple way to manage updates. However, scheduling downtime can be difficult and disruptive. Platform HPC performs updates intelligently and selectively so that compute nodes continue to run even as non-intrusive updates are applied. The repository is automatically updated so that future installations include the software update. Changes that require the re-installation of the node (e.g. upgrading an operating system) can be made in a “pending” state until downtime can be scheduled. User-friendly, topology aware workload management Platform HPC includes a robust workload scheduling capability, which is based on Platform LSF – the industry’s most powerful, comprehensive, policy driven workload management solution for engineering and scientific distributed computing environments. By scheduling workloads intelligently according to policy, Platform HPC improves end user productivity with minimal system administrative effort. In addition, it allows HPC user teams to easily access and share all computing resources, while reducing time between simulation iterations. GPU scheduling – Platform HPC provides the capability to schedule jobs to GPUs as well as CPUs. This is particularly advantageous in heterogeneous hardware environments as it means that administrators can configure Platform HPC so that only those jobs that can benefit from running on GPUs are allocated to those resources. This frees up CPU-based resources to run other jobs. Using the unified management interface, administrators can monitor the GPU performance as well as detect ECC errors. Unified management interface Competing cluster management tools either do not have a web-based interface or require multiple interfaces for managing different functional areas. In comparison, Platform HPC includes a single unified interface through which all administrative tasks can be performed, including node management, job management, and jobs and cluster monitoring and reporting. Using the unified management interface, even cluster administrators with very little Linux experience can competently manage a state-of-the-art HPC cluster. Job management – While command line savvy users can continue using the remote terminal capability, the unified web portal makes it easy to submit, monitor, and manage jobs. As changes are made to the cluster configuration, Platform HPC automatically re-configures key components, ensuring that jobs are allocated to the appropriate resources. The web portal is customizable and provides job data management, remote visualization and interactive job support.
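As a hedged illustration of the GPU scheduling capability described above, the submission below asks for eight CPU slots and one unit of a GPU resource through a resource requirement string. The resource name ngpus is a site-defined consumable resource (an assumption here, typically reported by a load-information script on GPU-equipped hosts), and the executable name is hypothetical.

    # Request 8 slots and reserve one unit of the assumed "ngpus" resource,
    # so the job is only dispatched to GPU-equipped hosts.
    bsub -n 8 -R "rusage[ngpus=1]" -o gpu_job.%J.out ./gpu_solver

Scheduling against such a resource keeps CPU-only jobs off the GPU nodes, which is exactly the effect the paragraph above describes.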
  • 28. 28 Workload/system correlation – Administrators can correlate workload information with system load, so that they can make timely decisions and proactively manage compute resources against business demand. When it’s time for capacity planning, the management interface can be used to run detailed reports and analyses which quantify user needs and remove the guess work from capacity expansion. Simplified cluster management – The unified management con- sole is used to administer all aspects of the cluster environment. It enables administrators to easily install, manage and monitor their cluster. It also provides an interactive environment to easily pack- age software as kits for application deployment as well as pre-in- tegrated commercial application support. One of the key features of the interface is an operational dashboard that provides com- prehensive administrative reports. As the image illustrates, Plat- form HPC enables administrators to monitor and report on key performance metrics such as cluster capacity, available memory and CPU utilization. This enables administrators to easily identify and troubleshoot issues. The easy to use interface saves the cluster administrator time, and means that they do not need to become an expert in the adminis- tration of open-source software components. It also reduces the possibility of errors and time lost due to incorrect configuration. Cluster administrators enjoy the best of both worlds – easy ac- cess to a powerful, web-based cluster manager without the need to learn and separately administer all the tools that comprise the HPC cluster environment. Enterprise-ReadyCluster& WorkloadManagement IBM Platform HPC
  • 29. Job submission templates. Robust Commercial MPI library Platform MPI – In order to make it easier to get parallel applications up and running, Platform HPC includes the industry’s most robust and highest performing MPI implementation, Platform MPI. Platform MPI provides consistent performance at application run-time and for application scaling, resulting in top performance results across a range of third-party benchmarks. Open Source MPI – Platform HPC also includes various other industry standard MPI implementations. This includes MPICH1, MPICH2 and MVAPICH1, which are optimized for cluster hosts connected via InfiniBand, iWARP or other RDMA based interconnects. Integrated application support Job submission templates – Platform HPC comes complete with job submission templates for ANSYS Mechanical, ANSYS Fluent, ANSYS CFX, LS-DYNA, MSC Nastran, Schlumberger ECLIPSE, Simulia Abaqus, NCBI Blast, NWChem, ClustalW, and HMMER. By configuring these templates based on the application settings in your environment, users can start using the cluster without writing scripts. Scripting Guidelines – Cluster users that utilize homegrown or open-source applications can utilize the Platform HPC scripting guidelines. These user-friendly interfaces help minimize job submission errors. They are also self-documenting, enabling users to create their own job submission templates. Benchmark tests – Platform HPC also includes standard benchmark tests to ensure that your cluster will deliver the best performance without manual tuning. Flexible OS provisioning Platform HPC can deploy multiple operating system versions concurrently on the same cluster and, based on job resource requirements, dynamically boot the Linux or Windows operating system required to run the job. Administrators can also use a web interface to manually switch nodes to the required OS to meet application demands, providing them with the flexibility to support special requests and accommodate unanticipated changes. Rather than being an extra-cost item as it is with other HPC management suites, this capability is included as a core feature of Platform HPC. Commercial Service and support Certified cluster configurations – Platform HPC is tested and certified on all partner hardware platforms. By qualifying each platform individually and providing vendor-specific software with optimized libraries and drivers that take maximum advantage of unique hardware features, Platform Computing has essentially done the integration work in advance.
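For users who prefer scripts over the web templates, a conventional LSF batch script gives a feel for what a submission template generates under the covers. The sketch below is generic: the job name, queue, core count and solver binary are placeholders, and the MPI launch line would in practice be replaced by the vendor-documented integration for the application in question.

    #!/bin/bash
    # Hypothetical batch script in the spirit of a Platform HPC submission template
    #BSUB -J cfd_wing            # job name
    #BSUB -q normal              # target queue
    #BSUB -n 32                  # number of slots
    #BSUB -o cfd.%J.out          # stdout; %J expands to the job ID
    #BSUB -e cfd.%J.err          # stderr

    # Launch the (placeholder) solver across the allocated hosts with the bundled MPI;
    # a real template would use the documented LSF/MPI integration for host selection.
    mpirun -np 32 ./my_cfd_solver input.cas

The script would be submitted with "bsub < run_cfd.sh", after which bjobs and bhist (both referenced elsewhere in this document) can be used to follow its progress.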
  • 30. 30
As a result, clusters can be deployed quickly and predictably with minimal effort. As a testament to this, Platform HPC is certified under the Intel Cluster Ready program.
Enterprise-class service and support – Widely regarded as having the best HPC support organization in the business, Platform Computing is uniquely able to support an integrated HPC platform. Because support personnel have direct access to the Platform HPC developers, Platform Computing is able to offer a higher level of support and ensure that any problems encountered are resolved quickly and efficiently.
Summary
Platform HPC is the ideal solution for deploying and managing state-of-the-art HPC clusters. It makes cluster management simple, enabling analysts, engineers and scientists from organizations of any size to easily exploit the power of Linux clusters. Unlike other HPC solutions that address only parts of the HPC management challenge, Platform HPC uniquely addresses all aspects of cluster and workload management, including:
- Easy-to-use cluster provisioning and management
- User-friendly, topology-aware workload management
- Unified management interface
- Robust commercial MPI library
- Integrated application support
- Flexible OS provisioning
- Commercial HPC service and support
  • 31. 31
By providing simplified management over the entire lifecycle of a cluster, Platform HPC has a direct and positive impact on productivity while helping to reduce complexity and cost. The comprehensive web-based management interface, and features like repository snapshots and the ability to update software packages on the fly, mean that state-of-the-art HPC clusters can be provisioned and managed even by administrators with little or no Linux administration experience.
Capability / Feature – Platform HPC
Cluster Provisioning and Management ✓
- Initial cluster provisioning ✓
- Multiple provisioning methods ✓
- Web-based cluster management ✓
- Node updates with no re-boot ✓
- Repository snapshots ✓
- Flexible node templates ✓
- Multiple OS and OS versions ✓
Workload Management & Application Integration ✓
- Integrated workload management ✓
- HPC libraries & toolsets ✓
- NVIDIA CUDA SDK support ✓
- Web-based job management ✓
- Web-based job data management ✓
- Multi-boot based on workload ✓
- Advanced parallel job management ✓
- Commercial application integrations ✓
MPI Libraries ✓
- Commercial grade MPI ✓
Workload and system monitoring, reporting and correlation ✓
- Workload monitoring ✓
- Workload reporting ✓
- System monitoring & reporting ✓
- Workload and system load correlation ✓
- Integration with 3rd party management tools ✓
  • 32. 32 IBM Platform MPI 8.1
Benefits
- Superior application performance
- Reduced development and support costs
- Faster time-to-market
- The industry's best technical support
Features
- Supports the widest range of hardware, networks and operating systems
- Distributed by over 30 leading commercial software vendors
- Change interconnects or libraries with no need to re-compile
- Seamless compatibility across Windows and Linux environments
- Ensures a production-quality implementation
Ideal for:
- Enterprises that develop or deploy parallelized software applications on HPC clusters
- Commercial software vendors wanting to improve application performance over the widest range of computer hardware, interconnects and operating systems
The Standard for Scalable, Parallel Applications
Platform MPI is a high-performance, production-quality implementation of the Message Passing Interface (MPI). It is widely used in the high performance computing (HPC) industry and is considered the de facto standard for developing scalable, parallel applications.
  • 33. 33
Platform MPI maintains full backward compatibility with HP-MPI and Platform MPI applications and incorporates advanced CPU affinity features, dynamic selection of interface libraries, superior workload manager integrations and improved performance and scalability. Platform MPI supports the broadest range of industry-standard platforms, interconnects and operating systems, helping ensure that your parallel applications can run anywhere.
Focus on portability
Platform MPI allows developers to build a single executable that transparently leverages the performance features of any type of interconnect, thereby providing applications with optimal latency and bandwidth for each protocol. This reduces development effort and enables applications to use the "latest and greatest" technologies on Linux or Microsoft Windows without the need to re-compile and re-link applications.
Platform MPI is optimized for both distributed-memory (DMP) and shared-memory (SMP) environments and provides a variety of flexible CPU binding strategies for processes and threads, enabling better performance in multi-core environments. With this capability, memory and cache conflicts are managed by more intelligently distributing the load among multiple cores.
With support for Windows HPC Server 2008 and the Microsoft job scheduler, as well as other Microsoft operating environments, Platform MPI allows developers targeting Windows platforms to enjoy the benefits of a standard, portable MPI and avoid proprietary lock-in.
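Because Platform MPI implements the standard MPI interface, the portability described above lives in ordinary application code. The short program below is an illustrative sketch using only standard MPI calls – it is generic MPI, not Platform MPI-specific and not taken from this brochure – of the kind of source that, built against an MPI library like this, can be launched unchanged over whichever interconnect the runtime selects.

/* hello_mpi.c - minimal sketch of a portable MPI program.
 * Only standard MPI calls are used, so the same source (and, per the
 * text above, the same binary) can run over TCP/IP, InfiniBand, etc. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank = 0, size = 0, len = 0;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                 /* start the MPI runtime        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks        */
    MPI_Get_processor_name(host, &len);     /* node the rank is running on  */

    printf("rank %d of %d on %s\n", rank, size, host);

    MPI_Finalize();                         /* shut down cleanly            */
    return 0;
}

The interconnect choice and any CPU-binding policy are then typically supplied at launch time through the mpirun command line rather than compiled into the program, which is what makes the "compile once, run anywhere" behaviour described above possible.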
  • 34. 34
Supported Operating Systems
- Red Hat Enterprise Linux 4.6, 5.x and 6.x
- SUSE Linux Enterprise Server 10 and 11
- CentOS 5.3
- Microsoft Windows® XP/Vista, Server 2003/Server 2008/HPC Server 2008, Windows 7
Supported Interconnects and Protocols
- Myrinet (Linux): GM & MX on x86-64 and Itanium2
- InfiniBand (Linux): OFED, PSM, uDAPL on x86-64 and Itanium2; OFED 1.1, 1.2, 1.3, 1.4, 1.5; SDR, DDR, QDR, ConnectX and ConnectX-2; Mellanox FCA
- GigE (Linux): RDMA, uDAPL, TCP/IP
- InfiniBand (Windows): WinOF 2.x, IBAL, WSD, SDR, DDR, QDR, ConnectX(2)
- GigE (Windows): TCP/IP on x86-64
  • 35. 35
Features and Benefits
Simplicity
Features:
- Fully complies with the MPI 2.2 standard, providing dynamic processes, one-sided communications, extended collectives, thread safety, and updated ROMIO
- Complete debugging, diagnostic and profiling tools
- Auto-detection of interconnects and dynamic loading of libraries
- No re-link required for debugging and profiling
- Supported by the largest dedicated HPC support organization
Benefits:
- Applications port easily to other platforms
- Protects ISV software investment
- Reduces time-to-market
- Increased robustness and quality of applications
- Technical problems resolved quickly and efficiently
Performance
Features:
- Improved shared memory performance, incorporating code and methods from Platform MPI 5.6 (Scali MPI)
- 75% reduction in job startup and shutdown at scale
- Scalability to 17,000 ranks
- RDMA message progression & coalescing enhancements
- Flexible CPU binding options maximize cache effectiveness and balance applications to minimize latency
- Automated benchmarking of collective operations
Benefits:
- Takes maximum advantage of available hardware
- Reduced latency for better performance
- Performance improves without explicit developer action
- Better message throughput in streaming applications
- Easier to optimize application performance
Compatibility
Features:
- Common source-code base between Linux and Windows
- Binary compatible with applications developed for HP-MPI
- MPICH-2 compatibility mode
- Linux Standard Bindings ensure full compatibility across all major Linux distributions
- Scheduler agnostic, with workload manager integrations for Windows HPC, Platform LSF, PBS Pro, SLURM and other popular schedulers and resource managers
Benefits:
- Avoid the cost of separate releases for different platforms
- Easily used with existing MPI applications
- Common mpirun syntax between Linux and Windows
- Customers avoid proprietary "lock-in"
- Avoid floating point issues causing inconsistent results
Flexibility
Features:
- Supports the widest variety of networks and interconnects
- Select interconnects at run-time with no need to re-compile
- Write applications once and deploy across multiple OS and hardware topologies
- CPU binding features well suited to GPU-aware applications
Benefits:
- Develop applications that will run on more platforms
- Reduce testing, maintenance and support costs
- Enjoy strategic flexibility
  • 36.
transtec HPC as a Service
You will get a range of applications like LS-Dyna, ANSYS, Gromacs, NAMD etc. from all kinds of areas pre-installed, integrated into an enterprise-ready cloud and workload management system, and ready to run. Do you miss your application? Ask us: HPC@transtec.de
transtec Platform as a Service
You will be provided with dynamically provisioned compute nodes for running your individual code. The operating system will be pre-installed according to your requirements. Common Linux distributions like RedHat, CentOS, or SLES are the standard. Do you need another distribution? Ask us: HPC@transtec.de
transtec Hosting as a Service
You will be provided with hosting space inside a professionally managed and secured datacenter where you can have your machines hosted, managed and maintained according to your requirements. Thus, you can build up your own private cloud. What range of hosting and maintenance services do you need? Tell us: HPC@transtec.de
Services and Customer Care from A to Z
- individual presales consulting
- application-, customer-, and site-specific sizing of the HPC solution
- burn-in tests of systems
- benchmarking of different systems
- continual improvement
- software & OS installation
- application installation
- onsite hardware assembly
- integration into the customer's environment
- customer training
- maintenance, support & managed services
36 HPC @ transtec: Services and Customer Care from A to Z
transtec AG has over 30 years of experience in scientific computing and is one of the earliest manufacturers of HPC clusters. For nearly a decade, transtec has delivered highly customized High Performance clusters based on standard components to academic and industry customers across Europe, with all the high quality standards and the customer-centric approach that transtec is well known for.
Every transtec HPC solution is more than just a rack full of hardware – it is a comprehensive solution with everything the HPC user, owner, and operator need. In the early stages of any customer's HPC project, transtec experts provide extensive and detailed consulting to the customer – they benefit from expertise and experience. Consulting is followed by benchmarking of different systems with either specifically crafted
  • 37. 37
customer code or generally accepted benchmarking routines; this aids customers in sizing and devising the optimal and detailed HPC configuration.
Each and every piece of HPC hardware that leaves our factory undergoes a burn-in procedure of 24 hours or more if necessary. We make sure that any hardware shipped meets our and our customers' quality requirements. transtec HPC solutions are turnkey solutions. By default, a transtec HPC cluster has everything installed and configured – from hardware and operating system to important middleware components like cluster management or developer tools and the customer's production applications. Onsite delivery means onsite integration into the customer's production environment, be it establishing network connectivity to the corporate network, or setting up software and configuration parts.
transtec HPC clusters are ready-to-run systems – we deliver, you turn the key, the system delivers high performance. Every HPC project entails transfer to production: IT operation processes and policies apply to the new HPC system. Effectively, IT personnel are trained hands-on, introduced to hardware components and software, with all operational aspects of configuration management.
transtec services do not stop when the implementation project ends. Beyond transfer to production, transtec takes care. transtec offers a variety of support and service options, tailored to the customer's needs. When you are in need of a new installation, a major reconfiguration or an update of your solution – transtec is able to support your staff and, if you lack the resources for maintaining the cluster yourself, maintain the HPC solution for you. From Professional Services to Managed Services for daily operations and required service levels, transtec will be your complete HPC service and solution provider. transtec's high standards of performance, reliability and dependability assure your productivity and complete satisfaction.
transtec's offerings of HPC Managed Services give customers the possibility of having the complete management and administration of the HPC cluster handled by transtec service specialists, in an ITIL-compliant way. Moreover, transtec's HPC on Demand services help provide access to HPC resources whenever customers need them, for example because they do not have the possibility of owning and running an HPC cluster themselves, due to lacking infrastructure, know-how, or admin staff.
transtec HPC Cloud Services
Last but not least, transtec's services portfolio evolves as customers' demands change. Starting this year, transtec is able to provide HPC Cloud Services. transtec uses a dedicated datacenter to provide computing power to customers who are in need of more capacity than they own, which is why this workflow model is sometimes called computing-on-demand. With these dynamically provided resources, customers have the possibility to have their jobs run on HPC nodes in a dedicated datacenter, professionally managed and secured, and individually customizable. Numerous standard applications like ANSYS, LS-Dyna, OpenFOAM, as well as lots of codes like Gromacs, NAMD, VMD, and others are pre-installed, integrated into an enterprise-ready cloud and workload management environment, and ready to run.
Alternatively, whenever customers are in need of space for hosting their own HPC equipment because they do not have the space capacity or cooling and power infrastructure themselves, transtec is also able to provide Hosting Services to those customers who'd like to have their equipment professionally hosted, maintained, and managed. Customers can thus build up their own private cloud!
Are you interested in any of transtec's broad range of HPC related services? Write us an email to HPC@transtec.de. We'll be happy to hear from you!
  • 38.
  • 39. Scalable & Energy Efficient HPC Systems
There is no end in sight to growing data and computing requirements – which poses a serious challenge for space-constrained data centers. Also challenging for today's organizations is the need to perform a larger number and variety of functions – without increasing budgets. IBM NeXtScale System, an economical addition to the IBM System x family, offers an innovative approach to maximum usable density.
High Throughput Computing, CAD, Big Data Analytics, Simulation, Aerospace, Automotive
  • 40. 40 Optimized to handle a number of workloads, all demanding agil- ity, NeXtScale System helps drive business velocity by providing rapid procurement, deployment and flexible options. This sim- ple, yet powerful, system can handle applications ranging from technical computing, to grid deployments, to analytics work- loads, to large-scale cloud and virtualization infrastructures. Designed with industry-standard, off-the-shelf components, this generalpurpose platform enables users to create a flexible, mix- and-match offering with compute, storage, and acceleration via graphics processing unit (GPU) or Intel Xeon Phi coprocessor. Customized solutions can be configured to provide application- appropriate platform with choice of servers, networking switch- es, adapters, and racks. This modular system is designed to scale and grow along with data center needs in order to protect and maximize IT invest- ments. Since it is optimized for standard racks, users can easily mix high-density NeXtScale server offerings and non-NeXtScale components within the same rack. NeXtScale System also pro- vides tremendous time to value by enabling users to get it up and running – and to the production phase – faster. Building upon a strong System x foundation Extending the System x family to a larger range of users, the customizable, space-saving NeXtScale System comprises pow- erful compute nodes and an energy-efficient, low-cost 12-bay chassis. IBM NeXtScale nx360 M4 server This powerful server provides a dense, flexible solution with a low total cost of ownership. The half-wide, dual-socket NeXtScale nx360 M4 server is designed for data centers Scalable,EnergyEfficient HPCSystems IBM NeXtScale System
  • 41. IBM NeXtScale nx360 M4 IBM NeXtScale n1200 Enclosure 41 that require high performance but are constrained by floor space. By taking up less physical space in the data center, the NeXtScale server significantly enhances density. And it sup- ports Intel Xeon E5-2600 v2 series up to 130 W and 12-core processors thus providing more performance per server. The nx360 M4 compute node contains only essential components in the base architecture to provide a cost-optimized platform. IBM NeXtScale n1200 Enclosure The NeXtScale n1200 Enclosure is an efficient, 6U, 12-bay chassis with no built-in networking or switching capabilities – requiring no chassis-level management. Sensibly designed to provide shared, high-efficiency power and cooling for housed servers, the n1200 enclosure is designed to scale with your business needs. Adding compute, storage, or acceleration ca- pability is as simple as adding specific nodes to the chassis. Because each node is independent and self-sufficient, there is no contention for resources among nodes within the enclo- sure. And while a typical rack holds only 42 1U systems, this chassis doubles the density up to 84 compute nodes within the same footprint. Flexible, IT your way Developed at the solution level, the NeXtScale System archi- tecture is extremely flexible – enabling different technologies to easily fit into its design, for varied workloads. And since the system allows compute, storage, and acceleration via GPU or Intel Xeon Phi coprocessor to share the same chassis and archi- tecture, it is very easy to deploy and grow. Front-access cabling – either from the bottom or the top of the rack – and direct-dock power capabilities enable users to make quick and easy changes to nodes, cables and networking switches. Plus, NeXtScale Sys- tem supports multiple networking topologies, including Ether- net, InfiniBand and Fibre Channel.
  • 42. 42 System flexibility even extends to procurement: Organizations can either receive the system fully configured, pretested, IBM in- stalled, and ready to power on; or self-configure and install using existing components to build a custom system. Simple yet elegant NeXtScale System makes choosing the right architecture for in- dividual applications, budgets and data centers simple and eco- nomical. It optimizes shared infrastructure with common fans and power supplies leaving nodes to be completely indepen- dent and self-sufficient. The nodes do not share resources such as disks or memory. To manage costs, only essential components are included in the base architecture, and nodes can be used for either storage or GPU/coprocessor acceleration. This enables NeXtScale for an easy insertion into your infrastructure with your current tools and best practices. The ability of NeXtScale System to work with any standard switch, rack or networking card provides almost unlimited options to space- and budget- conscious organizations in even the most demanding industries. Scale for everyone The high-performance NeXtScale System enables organizations of all sizes and budgets to start small and scale rapidly, as need- ed, into future requirements. Rather than requiring organiza- tions to purchase large clusters, this system offers a complete building-block approach in which users can start out with one chassis and add systems and components as needed. Designed to be easily run and simply managed at any scale – from a hand- ful to thousands – NeXtScale System can help organizations achieve maximum impact per dollar. Scalable,EnergyEfficient HPCSystems IBM NeXtScale System
  • 43. 43
IBM NeXtScale nx360 M4 at a glance
Form factor/height: Half-wide 1U
Processor: Two Intel Xeon E5-2600 v2 series
Cache: Level 2: 256 KB per core; Level 3: 4 cores – 15 MB, 6 cores – 15 MB, 8 cores – 20 MB, 10 cores – 25 MB, 12 cores – 30 MB
Memory: 8 DDR3/DDR3L LP, 128 GB maximum with 16 GB LP RDIMM
Chassis support: NeXtScale n1200 Enclosure
Local storage: One 3.5-inch, two 2.5-inch SAS/SATA hard disk drives (HDDs) or four 1.8-inch solid state drives; up to 4 TB maximum capacity with one 4 TB 3.5-inch HDD
Storage Native Expansion (NEX) tray: Eight 3.5-inch SAS/SATA HDDs, up to 32 TB maximum capacity
Internal RAID: Onboard SATA controller with RAID options
USB ports: One internal USB key
Ethernet: Two built-in 1 Gigabit Ethernet (GbE) ports standard
Input/output: Two InfiniBand FDR ports (slotless option), two 10 GbE (slotless option), one PCIe slot (x16 PCI Express 3.0)
Power management: Rack-level power capping and management via IBM Extreme Cloud Administration Toolkit (xCAT)
Systems management: IBM Integrated Management Module 2 (IMM2) with dedicated management port, IPMI 2.0 compliant, Platform LSF and Platform HPC
Operating systems supported: Microsoft Windows Server, SUSE Linux Enterprise Server, Red Hat Enterprise Linux, VMware vSphere Hypervisor (ESXi)
Limited warranty: 3-year customer replaceable unit and onsite limited warranty, next business day 9x5, service upgrades available
  • 45. 45
IBM NeXtScale n1200 Enclosure at a glance
Form factor: 6U NeXtScale, standard rack
Bays: 12
Power supply: Six hot-swappable 900 W 80 PLUS Platinum high-efficiency power supplies; non-redundant, N+N or N+1 redundant configurations
Fans: 10 hot-swappable
Controller: Fan and power controller
  • 46.
  • 47. Big Data, Cloud Storage – it doesn't matter what you call it, there is certainly increasing demand to store larger and larger amounts of unstructured data. The IBM General Parallel File System (GPFS™) has always been considered a pioneer of big data storage and continues today to lead in introducing industry-leading storage technologies. Since 1998, GPFS has led the industry with many technologies that make the storage of large quantities of file data possible. The latest version continues in that tradition: GPFS 3.5 represents a significant milestone in the evolution of big data management. GPFS 3.5 introduces some revolutionary new features that clearly demonstrate IBM's commitment to providing industry-leading storage solutions.
General Parallel File System (GPFS)
Life Sciences, CAE, High Performance Computing, Big Data Analytics, Simulation, CAD
  • 48. 48
What is GPFS?
GPFS is more than clustered file system software; it is a full-featured set of file management tools. This includes advanced storage virtualization, integrated high availability, automated tiered storage management and the performance to effectively manage very large quantities of file data.
GPFS allows a group of computers concurrent access to a common set of file data over a common SAN infrastructure, a network or a mix of connection types. The computers can run any mix of AIX, Linux or Windows Server operating systems. GPFS provides storage management, information lifecycle management tools and centralized administration, and allows for shared access to file systems from remote GPFS clusters, providing a global namespace.
A GPFS cluster can be a single node, two nodes providing a high-availability platform supporting a database application, for example, or thousands of nodes used for applications like the modeling of weather patterns. The largest existing configurations exceed 5,000 nodes. GPFS has been available since 1998 and has been field proven for more than 14 years on some of the world's most powerful supercomputers, providing reliability and efficient use of infrastructure bandwidth.
GPFS was designed from the beginning to support high-performance parallel workloads and has since been proven very effective for a variety of applications. Today it is installed in clusters supporting big data analytics, gene sequencing, digital media and scalable file serving. These applications are used across many industries including financial, retail, digital media, biotechnology, science and government. GPFS continues to push technology limits by being deployed in very demanding large environments. You may not need multiple petabytes of
  • 49. 49
data today, but you will, and when you get there you can rest assured GPFS has already been tested in these environments. This leadership is what makes GPFS a solid solution for any size application. Supported operating systems for GPFS Version 3.5 include AIX, Red Hat, SUSE and Debian Linux distributions and Windows Server 2008.
The file system
A GPFS file system is built from a collection of arrays that contain the file system data and metadata. A file system can be built from a single disk or contain thousands of disks storing petabytes of data. Each file system can be accessible from all nodes within the cluster. There is no practical limit on the size of a file system. The architectural limit is 2^99 bytes. As an example, current GPFS customers are using single file systems up to 5.4 PB in size and others have file systems containing billions of files.
Application interfaces
Applications access files through standard POSIX file system interfaces. Since all nodes see all of the file data, applications can scale out easily. Any node in the cluster can concurrently read or update a common set of files. GPFS maintains the coherency and consistency of the file system using sophisticated byte-range locking, token (distributed lock) management and journaling. This means that applications using standard POSIX locking semantics do not need to be modified to run successfully on a GPFS file system.
In addition to standard interfaces, GPFS provides a unique set of extended interfaces which can be used to provide advanced application functionality. Using these extended interfaces an application can determine the storage pool placement of a file, create a file clone and manage quotas. These extended interfaces provide features in addition to the standard POSIX interface.
Performance and scalability
GPFS provides unparalleled performance for unstructured data. GPFS achieves high-performance I/O by:
- Striping data across multiple disks attached to multiple nodes.
- High-performance metadata (inode) scans.
- Supporting a wide range of file system block sizes to match I/O requirements.
- Utilizing advanced algorithms to improve read-ahead and write-behind I/O operations.
- Using block-level locking, based on a very sophisticated scalable token management system, to provide data consistency while allowing multiple application nodes concurrent access to the files.
When creating a GPFS file system you provide a list of raw devices and they are assigned to GPFS as Network Shared Disks (NSDs). Once an NSD is defined, all of the nodes in the GPFS cluster can access the disk, using a local disk connection, or using the GPFS NSD network protocol for shipping data over a TCP/IP or InfiniBand connection.
GPFS token (distributed lock) management coordinates access to NSDs, ensuring the consistency of file system data and metadata when different nodes access the same file. Token management responsibility is dynamically allocated among designated nodes in the cluster. GPFS can assign one or more nodes to act as token managers for a single file system. This allows greater scalability when you have a large number of files with high transaction workloads. In the event of a node failure the token management responsibility is moved to another node.
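To illustrate the point about POSIX semantics, the sketch below uses nothing but standard POSIX calls – open, fcntl byte-range locking and pwrite – and is exactly the kind of code that runs unmodified on a GPFS file system. It is not taken from this brochure, and the /gpfs/fs1 path is only a placeholder.

/* posix_lock.c - minimal sketch of ordinary POSIX I/O with byte-range
 * locking. Nothing here is GPFS-specific; per the text above, code like
 * this needs no changes to run on GPFS. The path is a placeholder. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/gpfs/fs1/shared.dat", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct flock lk;
    memset(&lk, 0, sizeof(lk));
    lk.l_type   = F_WRLCK;   /* exclusive write lock            */
    lk.l_whence = SEEK_SET;
    lk.l_start  = 0;         /* lock only the first 4 KiB ...   */
    lk.l_len    = 4096;      /* ... other ranges stay available */

    if (fcntl(fd, F_SETLKW, &lk) == -1) { perror("lock"); close(fd); return 1; }

    const char msg[] = "updated by one of many cluster nodes\n";
    if (pwrite(fd, msg, sizeof(msg) - 1, 0) < 0) perror("pwrite");

    lk.l_type = F_UNLCK;     /* release the byte range */
    fcntl(fd, F_SETLK, &lk);
    close(fd);
    return 0;
}

Because the lock covers only a byte range rather than the whole file, many nodes can update disjoint regions of the same file at once – the access pattern that GPFS's byte-range locking and distributed token management are designed to keep consistent.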
  • 50. 50
All data stored in a GPFS file system is striped across all of the disks within a storage pool, whether the pool contains 2 LUNs or 2,000 LUNs. This wide data striping allows you to get the best performance for the available storage. When disks are added to or removed from a storage pool, existing file data can be redistributed across the new storage to improve performance. Data redistribution can be done automatically or can be scheduled. When redistributing data you can assign a single node to perform the task, to control the impact on a production workload, or have all of the nodes in the cluster participate in data movement to complete the operation as quickly as possible. Online storage configuration is a good example of an enterprise-class storage management feature included in GPFS.
To achieve the highest possible data access performance, GPFS recognizes typical access patterns, including sequential, reverse sequential and random, and optimizes I/O access for these patterns.
Along with distributed token management, GPFS provides scalable metadata management by allowing all nodes of the cluster accessing the file system to perform file metadata operations. This feature distinguishes GPFS from other cluster file systems, which typically have a centralized metadata server handling fixed regions of the file namespace. A centralized metadata server can often become a performance bottleneck for metadata-intensive operations, limiting scalability and possibly introducing a single point of failure. GPFS solves this problem by enabling all nodes to manage metadata.
Administration
GPFS provides an administration model that is easy to use and is consistent with standard file system administration practices while providing extensions for the clustering aspects of GPFS. These functions support cluster management and other standard file system
  • 51. 51
administration functions such as user quotas, snapshots and extended access control lists.
GPFS administration tools simplify cluster-wide tasks. A single GPFS command can perform a file system function across the entire cluster, and most can be issued from any node in the cluster. Optionally, you can designate a group of administration nodes that can be used to perform all cluster administration tasks, or only authorize a single login session to perform admin commands cluster-wide. This allows for higher security by reducing the scope of node-to-node administrative access.
Rolling upgrades allow you to upgrade individual nodes in the cluster while the file system remains online. Rolling upgrades are supported between two major version levels of GPFS (and service levels within those releases). For example, you can mix GPFS 3.4 nodes with GPFS 3.5 nodes while migrating between releases.
Quotas enable the administrator to manage file system usage by users and groups across the cluster. GPFS provides commands to generate quota reports by user, by group and on a sub-tree of a file system called a fileset. Quotas can be set on the number of files (inodes) and the total size of the files. New in GPFS 3.5, you can now define per-fileset user and group quotas, which allows for more options in quota configuration. In addition to traditional quota management, the GPFS policy engine can be used to query the file system metadata and generate customized space usage reports.
An SNMP interface allows monitoring by network management applications. The SNMP agent provides information on the GPFS cluster and generates traps when events occur in the cluster. For example, an event is generated when a file system is mounted or if a node fails. The SNMP agent runs on Linux and AIX. You can monitor a heterogeneous cluster as long as the agent runs on a Linux or AIX node.
You can customize the response to cluster events using GPFS callbacks. A callback is an administrator-defined script that is executed when an event occurs, for example, when a file system is unmounted or a file system is low on free space. Callbacks can be used to create custom responses to GPFS events and integrate these notifications into various cluster monitoring tools.
GPFS provides support for the Data Management API (DMAPI) interface, which is IBM's implementation of the X/Open data storage management API. This DMAPI interface allows vendors of storage management applications such as IBM Tivoli® Storage Manager (TSM) and High Performance Storage System (HPSS) to provide Hierarchical Storage Management (HSM) support for GPFS.
GPFS supports POSIX and NFS V4 access control lists (ACLs). NFS v4 ACLs can be used to serve files using NFSv4, but can also be used in other deployments, for example, to provide ACL support to nodes running Windows. To provide concurrent access from multiple operating system types, GPFS allows you to run mixed POSIX and NFS v4 permissions in a single file system and map user and group IDs between Windows and Linux/UNIX environments.
File systems may be exported to clients outside the cluster through NFS. GPFS is often used as the base for a scalable NFS file service infrastructure. The GPFS clustered NFS (cNFS) feature provides data availability to NFS clients by providing NFS service continuation if an NFS server fails. This allows a GPFS cluster to provide scalable file service by providing simultaneous access to a common set of data from multiple nodes.
The clustered NFS tools include monitoring of file services and IP address fail over. GPFS cNFS supports NFSv3 only. You can export a GPFS file system using NFSv4 but not with cNFS.
  • 52. 52
Data availability
GPFS is fault tolerant and can be configured for continued access to data even if cluster nodes or storage systems fail. This is accomplished through robust clustering features and support for synchronous and asynchronous data replication.
GPFS software includes the infrastructure to handle data consistency and availability. This means that GPFS does not rely on external applications for cluster operations like node failover. The clustering support goes beyond who owns the data or who has access to the disks. In a GPFS cluster all nodes see all of the data, and all cluster operations can be done by any node in the cluster with a server license. All nodes are capable of performing all tasks. What tasks a node can perform is determined by the type of license and the cluster configuration.
As part of the built-in availability tools, GPFS continuously monitors the health of the file system components. When failures are detected, appropriate recovery action is taken automatically. Extensive journaling and recovery capabilities are provided which maintain metadata consistency when a node holding locks or performing administrative services fails.
Snapshots can be used to protect the file system's contents against user error by preserving a point-in-time version of the file system or a sub-tree of a file system called a fileset. GPFS implements a space-efficient snapshot mechanism that generates a map of the file system or fileset at the time the snapshot is taken. New data blocks are consumed only when the file system data has been deleted or modified after the snapshot was created. This is done using a redirect-on-write technique (sometimes called copy-on-write). Snapshot data is placed in existing storage pools, simplifying administration and optimizing the use of existing storage. The snapshot function can be used with a
  • 53. 53
backup program, for example, to run while the file system is in use and still obtain a consistent copy of the file system as it was when the snapshot was created. In addition, snapshots provide an online backup capability that allows files to be recovered easily from common problems such as accidental file deletion.
Data Replication
For an additional level of data availability and protection, synchronous data replication is available for file system metadata and data. GPFS provides a very flexible replication model that allows you to replicate a file, a set of files, or an entire file system. The replication status of a file can be changed using a command or by using the policy-based management tools. Synchronous replication allows for continuous operation even if a path to an array, an array itself or an entire site fails.
Synchronous replication is location aware, which allows you to optimize data access when the replicas are separated across a WAN. GPFS has knowledge of which copy of the data is "local", so read-heavy applications can get local data read performance even when data is replicated over a WAN. Synchronous replication works well for many workloads by replicating data across storage arrays within a data center, within a campus or across geographical distances using high quality wide area network connections.
When wide area network connections are not high performance or are not reliable, an asynchronous approach to data replication is required. GPFS 3.5 introduces a feature called Active File Management (AFM). AFM is a distributed disk caching technology developed at IBM Research that allows the expansion of the GPFS global namespace across geographical distances. It can be used to provide high availability between sites or to provide local "copies" of data distributed to one or more GPFS clusters. For more details on AFM see the section entitled Sharing data between clusters.
For a higher level of cluster reliability, GPFS includes advanced clustering features to maintain network connections. If a network connection to a node fails, GPFS automatically tries to reestablish the connection before marking the node unavailable. This can provide better uptime in environments communicating across a WAN or experiencing network issues. Using these features along with a high-availability infrastructure ensures a reliable enterprise-class storage solution.
GPFS Native RAID (GNR)
Larger disk drives and larger file systems are creating challenges for traditional storage controllers. Current RAID 5 and RAID 6 based arrays do not address the challenges of exabyte-scale storage performance, reliability and management. To address these challenges, GPFS Native RAID (GNR) brings storage device management into GPFS. With GNR, GPFS can directly manage thousands of storage devices. These storage devices can be individual disk drives or any other block device, eliminating the need for a storage controller.
GNR employs a de-clustered approach to RAID. The de-clustered architecture reduces the impact of drive failures by spreading data over all of the available storage devices, improving application I/O and recovery performance. GNR provides very high reliability through an 8+3 Reed-Solomon based RAID code that divides each block of a file into 8 parts and associated parity. This algorithm scales easily, starting with as few as 11 storage devices and growing to over 500 per storage pod.
Spreading the data over many devices helps provide predictable storage performance and fast recovery times, measured in minutes rather than hours, in the case of a device failure.
  • 54. 54
In addition to performance improvements, GNR provides advanced checksum protection to ensure data integrity. Checksum information is stored on disk and verified all the way to the NSD client.
Information lifecycle management (ILM) toolset
GPFS can help you to achieve data lifecycle management efficiencies through policy-driven automation and tiered storage management. The use of storage pools, filesets and user-defined policies provides the ability to better match the cost of your storage to the value of your data.
Storage pools are used to manage groups of disks within a file system. Using storage pools you can create tiers of storage by grouping disks based on performance, locality or reliability characteristics. For example, one pool could contain high-performance solid state disks (SSDs) and another more economical 7,200 RPM disk storage. These types of storage pools are called internal storage pools. When data is placed in or moved between internal storage pools, all of the data management is done by GPFS. In addition to internal storage pools, GPFS supports external storage pools. External storage pools are used to interact with an external storage management application, including IBM Tivoli Storage Manager (TSM) and High Performance Storage System (HPSS). When moving data to an external pool, GPFS handles all of the metadata processing and then hands the data to the external application for storage on alternate media, tape for example. When using TSM or HPSS, data can be retrieved from the external storage pool on demand, as a result of an application opening a file, or data can be retrieved in a batch operation using a command or GPFS policy. A fileset is a sub-tree of the file system namespace and provides a way to partition the namespace into smaller, more manageable units.
  • 55. 55
Filesets provide an administrative boundary that can be used to set quotas, take snapshots, define AFM relationships and be used in user-defined policies to control initial data placement or data migration. Data within a single fileset can reside in one or more storage pools. Where the file data resides and how it is managed once it is created is based on a set of rules in a user-defined policy.
There are two types of user-defined policies in GPFS: file placement and file management. File placement policies determine in which storage pool file data is initially placed. File placement rules are defined using attributes of a file known when a file is created, such as file name, fileset or the user who is creating the file. For example, a placement policy may be defined that states 'place all files with names that end in .mov onto the near-line SAS based storage pool and place all files created by the CEO onto the SSD based storage pool' or 'place all files in the fileset 'development' onto the SAS based storage pool'.
Once files exist in a file system, file management policies can be used for file migration, deletion, changing file replication status or generating reports. You can use a migration policy to transparently move data from one storage pool to another without changing the file's location in the directory structure. Similarly, you can use a policy to change the replication status of a file or set of files, allowing fine-grained control over the space used for data availability. You can use migration and replication policies together, for example a policy that says: 'migrate all of the files located in the subdirectory /database/payroll which end in *.dat and are greater than 1 MB in size to storage pool #2 and un-replicate these files'. File deletion policies allow you to prune the file system, deleting files as defined by policy rules. Reporting on the contents of a file system can be done through list policies. List policies allow you to quickly scan the file system metadata and produce information listing selected attributes of candidate files.
File management policies can be based on more attributes of a file than placement policies, because once a file exists more is known about it. For example, file management policies can utilize attributes such as last access time, size of the file or a mix of user and file size. This may result in policies like: 'Delete all files with a name ending in .temp that have not been accessed in the last 30 days', or 'Migrate all files owned by Sally that are larger than 4 GB to the SATA storage pool'.
Rule processing can be further automated by including attributes related to a storage pool instead of a file, using the threshold option. Using thresholds you can create a rule that moves files out of the high-performance pool if it is more than 80% full, for example. The threshold option comes with the ability to set high, low and pre-migrate thresholds. Pre-migrated files are files that exist on disk and are also migrated to tape. This method is typically used to allow disk access to the data while allowing disk space to be freed up quickly when a maximum space threshold is reached. This means that GPFS begins migrating data at the high threshold, until the low threshold is reached. If a pre-migrate threshold is set, GPFS begins copying data until the pre-migrate threshold is reached. This allows the data to continue to be accessed in the original pool until it is quickly deleted to free up space the next time the high threshold is reached.
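The prose examples above map naturally onto GPFS's SQL-like policy rule language (described in the following paragraphs). The rules below are an illustrative sketch only – the pool names, fileset name and file patterns are placeholders, not part of this brochure – but they follow the documented RULE forms for placement, threshold-driven migration and deletion.

/* Initial placement: video files go to the near-line SAS pool,
   everything in the 'development' fileset goes to the SAS pool,
   everything else to the default pool. Placeholder pool names. */
RULE 'videos'   SET POOL 'nearline_sas' WHERE LOWER(NAME) LIKE '%.mov'
RULE 'devfiles' SET POOL 'sas' FOR FILESET ('development')
RULE 'default'  SET POOL 'system'

/* Threshold-driven migration: when the high-performance pool passes
   80% full, migrate the least recently accessed files to the SATA
   pool until utilization drops back to 60%. */
RULE 'spill' MIGRATE FROM POOL 'system' THRESHOLD(80,60)
     WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME) TO POOL 'sata'

/* Housekeeping: delete scratch files not touched for 30 days. */
RULE 'cleanup' DELETE WHERE LOWER(NAME) LIKE '%.temp'
     AND (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30

Rules are evaluated in order, so the catch-all placement rule comes last; placement policies are typically installed with mmchpolicy, while migration and deletion rules are run by the policy engine via mmapplypolicy.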
Thresholds allow you to fully utilize your highest-performance storage and automate the task of making room for new high-priority content. Policy rule syntax is based on the SQL 92 syntax standard and supports multiple complex statements in a single rule, enabling powerful policies. Multiple levels of rules can be applied to a
  • 56. 56
file system, and rules are evaluated in order for each file when the policy engine executes, allowing a high level of flexibility.
GPFS provides unique functionality through standard interfaces; an example of this is extended attributes. Extended attributes are a standard POSIX facility. GPFS has long supported the use of extended attributes, though in the past they were not commonly used, in part because of performance concerns. In GPFS 3.4, a comprehensive redesign of the extended attributes support infrastructure was implemented, resulting in significant performance improvements. In GPFS 3.5, extended attributes are accessible by the GPFS policy engine, allowing you to write rules that utilize your custom file attributes.
Executing file management operations requires the ability to efficiently process the file metadata. GPFS includes a high-performance metadata scan interface that allows you to efficiently process the metadata for billions of files. This makes the GPFS ILM toolset a very scalable tool for automating file management. This high-performance metadata scan engine employs a scale-out approach. The identification of candidate files and data movement operations can be performed concurrently by one or more nodes in the cluster. GPFS can spread rule evaluation and data movement responsibilities over multiple nodes in the cluster, providing a very scalable, high-performance rule processing engine.
Cluster configurations
GPFS supports a variety of cluster configurations, independent of which file system features you use. Cluster configuration options can be characterized into four basic categories:
- Shared disk
- Network block I/O
- Synchronously sharing data between clusters
- Asynchronously sharing data between clusters