Pinnacle Compute Blades
     Intel Xeon and AMD Opteron 2-in-1U compute blades




advanced clustering technologies
  www.advancedclustering.com • 866.802.8222
what is a compute blade?
• Features and benefits of a modular “blade”
  system, but designed specifically for the needs of
  HPC cluster environments
• Consists of 2 pieces:
 • Blade Housing - enclosure to hold the
    compute modules, mounted into the rack
    cabinet
 • Compute Blade Module - complete
    independent high-end server that slides into
    the Blade Housing

blade housing
1U enclosure that holds two compute blade modules;
standard 19” design fits in any cabinet.




blade module

Complete independent compute
node - contains CPUs, RAM, disk
drives and power supply




compute blade key points
• 2x computing power in the same space
• Each node is independent - no impact on other nodes
• 80%+ efficient power supplies, low-power CPU and drive options
• Nodes equipped with a management engine: IPMI and iKVM (see the sketch below)
• Each compute node is modular, removable and tool-less
• Mix and match architectures in the same blade housing
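
Each blade's management engine speaks standard IPMI 2.0 over its dedicated management LAN, so nodes can be monitored and power-controlled with any stock IPMI client. Below is a minimal sketch, assuming the common ipmitool utility and placeholder address/credentials (not values from this deck), of checking one node's health from Python:

```python
import subprocess

# Minimal sketch: talk to one blade's IPMI 2.0 management engine over its
# dedicated management LAN using the standard ipmitool CLI.
# BMC_HOST and the credentials are placeholders, not defaults from this deck.
BMC_HOST = "10.0.0.101"
BMC_USER = "admin"
BMC_PASS = "changeme"

def ipmi(*args):
    """Run a single ipmitool command against the blade's BMC and return stdout."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(ipmi("chassis", "status"))   # power state and fault flags
    print(ipmi("sensor", "list"))      # temperatures, fan speeds, voltages
    # Power-cycling one blade does not affect the other module in the housing:
    # ipmi("chassis", "power", "cycle")
```

The iKVM and remote disk emulation features are typically reached through the management controller's own remote console rather than this CLI.
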
compute blade vs 1U twin

Twin system
• Nodes fixed into enclosure
• Must take both nodes down even if servicing only 1 node
• Single shared power supply

Our Compute Blade
• Individual removable nodes
• Nodes can run outside of housing for testing and serviceability
• Dedicated 80%+ efficient power supply
• Mix and match CPU architectures

blade product highlights
• High density without sacrificing performance
• High reliability - independent power supply and
  removable blade modules
• Easy serviceability - each module is removable
  and usable without the blade housing
• Tool-less design for easy replacement of failed
  components
• Multiple system architectures - available with
  both AMD Opteron and Intel Xeon

compute blade - front




1  Power LED
2  Power Switch
3  HDD LED
4  Slide-out ID label area
5  Quick release handles

compute blade - inside




Blade modules independently slide out of the housing




1  Power supply
2  Drive bay (1x 3.5” or 2x 2.5”)
3  Cooling fans
4  System memory
5  Processors
6  Low-profile expansion card
compute blade - features
• Easy-swap, tool-less fan housing
• Fans and hard drives shock mounted to prevent vibration and failure
• 80%+ efficient power supply per blade
• Thumbscrew installation for easy replacement

compute blade - density
Standard 1U, dual-CPU servers:  max 42 servers per rack, 336 cores
Compute blade servers:          max 84 servers per rack, 672 cores
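
These figures follow from simple arithmetic: 42 usable 1U slots per rack, one server per U for standard systems versus two per U for the blades, and 8 cores per dual quad-core server. A quick sanity check, treating the 42U rack and worst-case power draw as assumptions rather than figures from this deck:

```python
# Back-of-the-envelope check of the density figures above.
RACK_UNITS = 42        # usable 1U slots per rack (assumption, full compute rack)
CORES_PER_SERVER = 8   # dual quad-core Xeon 5500 / Opteron 2300 configuration

servers_1u    = RACK_UNITS * 1   # standard 1U servers
servers_blade = RACK_UNITS * 2   # 2-in-1U compute blades

print(servers_1u, servers_1u * CORES_PER_SERVER)        # 42 servers, 336 cores
print(servers_blade, servers_blade * CORES_PER_SERVER)  # 84 servers, 672 cores

# Rough worst-case power budget for a fully populated blade rack, taking the
# 400W rated output and >80% supply efficiency from the specs:
watts_at_wall = 400 / 0.80                               # ~500W per module
print(round(servers_blade * watts_at_wall / 1000, 1), "kW per rack, upper bound")
```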




compute blade models

                1BX5501                                    1BA2301
Processor       Dual quad-core Intel Xeon 5500 series      Dual AMD Opteron 2300 (4-core) or
                                                           2400 (6-core) series
Chipset         Intel 5500 chipset with QPI interconnect   NVIDIA MCP55 Pro with HyperTransport
System Memory   Maximum of 12 DDR3 DIMMs or 96GB           Maximum of 8 DDR2 DIMMs or 64GB
Expansion Slot  1x PCI-e Gen 2.0 x16                       1x PCI-e x16
LAN             2x 1Gbps RJ-45 Ethernet ports              2x 1Gbps RJ-45 Ethernet ports
InfiniBand      Optional onboard ConnectX DDR              Optional onboard ConnectX DDR
Manageability   Dedicated LAN for IPMI 2.0 and iKVM        Dedicated LAN for IPMI 2.0
Power supply    80%+ efficient power supply                80%+ efficient power supply



compute blade - 1BX5501
• Processor (per blade)
  • Two Intel Xeon 5500 Series processors
  • Next generation "Nehalem" microarchitecture
  • Integrated memory controller and 2x QPI interconnects per processor
  • 45nm process technology
• Chipset (per blade)
  • Intel 5500 I/O controller hub
• Memory (per blade)
  • 800MHz, 1066MHz, or 1333MHz DDR3 memory
  • Twelve DIMM sockets supporting up to 144GB of memory
• Storage (per blade)
  • One 3.5" SATA2 drive bay or two 2.5" SATA2 drive bays
  • Supports RAID levels 0-1 with Linux software RAID (with 2.5" drives; see the sketch after this slide)
  • Drives shock mounted into enclosure to prevent vibration-related failures
  • Support for high-performance solid state drives
• Management (per blade)
  • Integrated IPMI 2.0 module
  • Integrated management controller providing iKVM and remote disk emulation
  • Dedicated RJ45 LAN for management network
• I/O connections (per blade)
  • One open PCI-Express 2.0 expansion slot running at x16
  • Two independent 10/100/1000Base-T (Gigabit) RJ-45 Ethernet interfaces
  • Two USB 2.0 ports
  • One DB-9 serial port (RS-232)
  • One VGA port
  • Optional ConnectX DDR InfiniBand CX4 connector
• Electrical Requirements (per module)
  • High-efficiency power supply (greater than 80%)
  • Output Power: 400W
  • Universal input voltage 100V to 240V
  • Frequency: 50Hz to 60Hz, single phase

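The storage spec above calls out RAID 0-1 via Linux software RAID when two 2.5" drives are fitted. A minimal sketch, assuming the standard mdadm tool and hypothetical device names (/dev/sda and /dev/sdb; confirm with lsblk on the actual node), of building the mirrored pair:

```python
import subprocess

# Minimal sketch: mirror the blade's two 2.5" SATA drives with Linux
# software RAID (mdadm). Device names are hypothetical examples - verify
# with `lsblk` before running anything like this on a real node.
DEVICES = ["/dev/sda", "/dev/sdb"]
ARRAY = "/dev/md0"

def run(cmd):
    """Echo and execute one command, raising on failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # RAID level 1 (mirror) across the two drive bays.
    run(["mdadm", "--create", ARRAY, "--level=1",
         "--raid-devices=2", *DEVICES])
    run(["mkfs.ext4", ARRAY])            # put a filesystem on the new array
    run(["mdadm", "--detail", ARRAY])    # confirm the mirror is assembling
```
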
compute blade - 1BX5501




compute blade - 1BA2301
• Processor (per node)
  • Two AMD Opteron 2300 or 2400 Series processors (4-core or 6-core)
  • Next generation "Istanbul" or "Shanghai" microarchitectures
  • Integrated memory controller per processor
  • 45nm process technology
• Chipset (per node)
  • NVIDIA MCP55 Pro
• Memory (per node)
  • 667MHz or 800MHz DDR2 memory
  • Eight DIMM sockets supporting up to 64GB of memory
• Storage (per node)
  • One 3.5" SATA2 drive bay or two 2.5" SATA2 drive bays
  • Supports RAID levels 0-1 with Linux software RAID (with 2.5" drives)
  • Drives shock mounted into enclosure to prevent vibration-related failures
  • Support for high-performance solid state drives
• Management (per node)
  • Integrated IPMI 2.0 module
  • Dedicated RJ45 LAN for management network
• I/O connections (per node)
  • One open PCI-Express expansion slot running at x16
  • Two independent 10/100/1000Base-T (Gigabit) RJ-45 Ethernet interfaces
  • Two USB 2.0 ports
  • One DB-9 serial port (RS-232)
  • One VGA port
  • Optional ConnectX DDR InfiniBand CX4 connector
• Electrical Requirements (per node)
  • High-efficiency power supply (greater than 80%)
  • Output Power: 400W
  • Universal input voltage 100V to 240V
  • Frequency: 50Hz to 60Hz, single phase



compute blade - 1BA2301




availability and pricing
• Both the 1BX5501 and 1BA2301 are available and shipping now
• Systems are available online for remote testing
• For pricing and custom configurations, contact your Account Representative:
 • (866) 802-8222
 • sales@advancedclustering.com
 • http://www.advancedclustering.com/go/blade

