4. Cisco UCS Results
• 5400 UCS customers at the end of 2010
• 400%+ Growth year over year
• 30 world-record performance benchmarks
Cisco Unified Computing System
5. Clips from “The Worst Predictions in
History” Video
Cisco Unified Computing System
http://www.youtube.com/watch?v=O0oIffAjuqI&feature=youtu.be
6. Market Share Data
#3 vendor in x86 blade servers WW with 10.5% revenue share
#3 vendor in total blade servers WW with 9.4% revenue share
#2 vendor in US x86 blades with 19.7% revenue share
Tied for #2 in North America x86 blades with 18.8% revenue share
IDC press release:
http://www.idc.com/getdoc.jsp?containerId=prUS22841411
Customer-facing SlideShare:
http://www.cisco.com/web/solutions/data_center/ucs_marketshare.html
http://www.slideshare.net/MeredithSabye/ucs-impact-of-innovation
http://www.flickr.com/photos/cisco_pics/5328636375/in/set-72157626675863887/
Cisco Unified Computing System
7. WW UCS momentum is fueled by game-changing innovation
X86 Server Blade Market Share, Q1 CY11 (1)
• Cisco is quickly passing established players in the fastest-growing segment of the x86 computing market (2)
• UCS #3 WW with 10.5% after two short years
• UCS #2 in the US with 19.7%
• 5400 UCS customers WW
Cisco Unified Computing System
Source: (1) IDC Worldwide Quarterly Server Tracker, Q1 2011, May 2011; (2) IDC Q4 CY10 Server Forecaster, 2010-2015 CAGR of x86 Blade Servers
8. WW X86 Server Blade Market Share
Customer adoption of UCS is changing the server industry landscape:
• Cisco growth is out-pacing the market
• Market appetite for innovation fuels UCS growth, with UCS at #3 and climbing
• Customers have shifted over 10% of the global x86 blade server market to Cisco, and nearly 20% in the US
Demand for data center innovation has vaulted Cisco Unified Computing System (UCS) to the #3 position in the fast-growing x86 blade segment of the server market
Cisco Unified Computing System
Source: IDC Worldwide Quarterly Server Tracker, Q1 2011, May 2011
9. Cisco Unified Computing System (UCS) has moved the industry forward by unifying compute, network, storage access, and virtualization into one cohesive system.
Cisco data center customers report tangible business results due to transformative improvements in IT efficiency and agility.
UCS is designed to solve key customer challenges in the data center:
• Manual solution assembly
• Inflexible infrastructure
• Operational friction
• Virtualization complexity
• Inefficient scaling
• Compliance and audit control
Cisco Unified Computing System
11. “Fabric computing is a fixture on the radar screen of many IT groups, driven by the
increased penetration of virtualization and prospects for cloud computing.”
—Gartner
Fabric computing has emerged as
the preferred infrastructure for data
center virtualization and cloud
computing, and Cisco is the market
leader in this industry transition
Cisco Unified Computing System
Gartner report: Fabric Computing Poised as a Preferred Infrastructure for Virtualization and Cloud
Computing, February 11, 2011, George J. Weiss and Andrew Butler Report. ID number:
G00210438.
12. Which vendor would you perceive to be the most competent to deliver on a fabric-based strategy in your enterprise?
• Cisco
• Dell
• Egenera
• HP
• IBM
• VMware
• Other
• Don’t Know/Not Sure
(Chart: % of respondents, 0-45% scale)
Gartner report: Fabric Computing Poised as a Preferred Infrastructure for Virtualization and
Cloud Computing, February 11, 2011, George J. Weiss and Andrew Butler Report. ID
number: G00210438.
Cisco Unified Computing System
You can read the full Gartner report here: http://www.gartner.com/technology/media-
13. “After several years of being a highly consolidated market where the top
3 vendor accounted for over 80% of blade revenue, the recent entry of
Cisco has introduced a viable new competitor to the market.”
— IDC: Jed Scaramella IDC Worldwide Quarterly Server Tracker Press Release, May 24,
2011
“The Cisco Unified Computing System is a high-end, high-density, highly
scalable, awesomely powerful network, compute, virtualization, and
management backbone that re-architects the notion of the blade chassis
….”
— Windows IT Pro Tech Ed 2011 Best of Show, May 2011
“According to VARs, Cisco’s UCS is scaring the heck out of all of Cisco’s
data center rivals, even if they put on a good face in public and scoff at
UCS viability.”
— Computer Reseller News, December 15, 2010
“It's a paradigm shift in datacenter infrastructure whose time has come.”
—InfoWorld Technology of the Year Award, June 2010
Cisco Unified Computing System
14. The market has affirmed that Cisco has truly changed the game and is leading an industry transition that was long overdue!
Choosing Cisco as a trusted partner for computing is proving to be the right path for many as the data center evolves.
UCS is a proven, reliable platform for enterprise and cloud computing; the rapid market traction is driven by customers seeking better solutions to IT challenges.
Cisco Unified Computing System
16. The legacy blade solution is a Chassis-Level “Mini-Rack” Design
(3 chassis, rear view: multi-chassis manager, interconnect managers, chassis manager, blade manager (iLO), multiple network and fabric modules, multiple NICs and HBAs, complex and fragile life-cycle management)
• Limited or no Unified I/O
• More hardware to manage
• Larger energy footprint
• More management software and licenses
• Proprietary interconnects (VC)
• More NICs and HBAs per server
• No network policy engine for VMs
• No network QoS
• OS-based agents and CMS required
• Clustered CMS required for management HA
• Must buy switches and management devices for every chassis
• Complexity and cost are amplified as you scale
Cisco Unified Computing System
17. Simple Building Block
Building block: 2 x Fabric Interconnects + chassis
• Uplinks: 2 x Unified Fabric to existing LAN/SAN
• 2 x Fabric Interconnects (embedded management, access layer for multiple UCS chassis)
• 2 x Fabric Extenders per chassis (I/O MUX, CMC)
• 6U blade chassis, one or more (rear and front views)
• Stateless compute blades with CNA, optional HDDs
Cisco Unified Computing System
18. Unified Computing System Architecture
Single management point for the entire domain
• Unified fabric within the domain; scale up to 320 servers
• Stateless compute blades, large-memory blade, VIC
• Single high-bandwidth network access for all chassis
• VN-Link, QoS
• Industry-standard components, open XML API
Cisco Unified Computing System
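As a sketch of what driving that open XML API could look like, here is a minimal Python example that builds a login request and parses the session cookie out of a response. The aaaLogin element and outCookie attribute follow UCS XML API conventions, but the endpoint comment, credentials, and response shown are placeholders.

```python
# Minimal sketch of the UCS Manager open XML API login exchange.
# Credentials and the sample response below are illustrative only.
import xml.etree.ElementTree as ET

def build_login_request(username: str, password: str) -> str:
    """Serialize a UCS-style XML login request."""
    req = ET.Element("aaaLogin", inName=username, inPassword=password)
    return ET.tostring(req, encoding="unicode")

def parse_login_response(xml_text: str) -> str:
    """Extract the session cookie from a login response."""
    root = ET.fromstring(xml_text)
    return root.get("outCookie", "")

request = build_login_request("admin", "secret")
# A management station would POST this to the Fabric Interconnect's
# HTTPS endpoint, then reuse the returned cookie on later requests.
response = '<aaaLogin outCookie="1234/abcd" response="yes"/>'
cookie = parse_login_response(response)
```

Because the whole management model is exposed through this one XML API, an upstream systems-management product only has to integrate against a single interface.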
19. Cisco Unified Computing System
Mgmt, LAN (any IEEE-compliant LAN), SAN A and SAN B (any ANSI T11-compliant SAN)
One logical chassis to manage*:
• LAN connectivity
• SAN networking
• Blade chassis
• Server blades
• Rack servers
• Server identity management
• Monitoring, troubleshooting, etc.
*Architectural limit of 320 servers, with 160 servers supported as of release 1.4(1)
Cisco Unified Computing System
20. UCS External Connectivity
LAN cloud, NAS storage, FC storage
• LAN switch and SAN switch
• UCS Fabric Interconnect: access-layer LAN & SAN
• Unified Fabric (FCoE)
Cisco Unified Computing System
21. Cisco UCS:
A single, logical, expandable blade server chassis
Cisco Unified Computing System
22. System Interconnect Choices (rear)
Uplinks from each FEX trade chassis scale against per-chassis bandwidth:
• Scalability: up to 40 chassis, 20 Gbps per chassis
• Balance: up to 20 chassis, 40 Gbps per chassis
• Bandwidth: up to 10 chassis, 80 Gbps per chassis
Cisco Unified Computing System
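The scale-versus-bandwidth trade-off above can be checked with a little arithmetic. This sketch assumes a 40-port Fabric Interconnect, 10 Gbps per FEX link, and redundant A/B fabrics; those assumptions are mine, not stated on the slide.

```python
# Rough model of the interconnect choices: more FEX uplinks per chassis
# means more bandwidth per chassis but fewer chassis per Fabric Interconnect.
FI_PORTS = 40   # assumed server-facing ports per Fabric Interconnect
LINK_GBPS = 10  # assumed speed of each FEX uplink
FABRICS = 2     # redundant A and B Fabric Interconnects

def interconnect_choice(links_per_fex: int):
    """Return (max chassis, Gbps per chassis) for a cabling choice."""
    max_chassis = FI_PORTS // links_per_fex
    gbps_per_chassis = links_per_fex * FABRICS * LINK_GBPS
    return max_chassis, gbps_per_chassis

print(interconnect_choice(1))  # scalability: (40, 20)
print(interconnect_choice(2))  # balance:     (20, 40)
print(interconnect_choice(4))  # bandwidth:   (10, 80)
```

Under these assumptions, the three cabling options reproduce the 40/20, 20/40, and 10/80 figures on the slide.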
23. UCS Components and Relationships (one single management domain)
• UCS Manager: management resides in the Fabric Interconnect
• UCS 6100 Fabric Interconnect: unified access-layer interconnect and management; 20 or 40 ports plus optional uplink modules
• UCS 2104 I/O Module: inserts into the blade chassis; CMC, port aggregator, I/O MUX, extension of the FI
• UCS 5108 Blade Chassis: blades insert into the chassis and are a logical part of it; up to 40 chassis per environment
• UCS Blade Server: industry-standard components; 2-socket and 4-socket Intel Nehalem and Westmere, DDR3 RAM, SAS, SSD
• UCS Mezzanine Adapters: VIC, Menlo (QLogic and Emulex), Oplin
Cisco Unified Computing System
25. Simplified Management
Minimize errors, reduce risk, lower cost, minimize complexity
Legacy blade architecture:
• Management server plus plug-ins
• Multi-chassis manager
• Multiple switch managers
• Chassis manager
• Blade manager (iLO)
• Multiple network and fabric modules
• Multiple NICs, HBAs
• Complex and fragile life-cycle management
• OS-based agents per blade
Cisco UCS solution: a single embedded device manager
• Embedded device manager for the family of UCS components
• Enables stateless computing via Service Profiles
• Efficient scale: same effort for 1 to 320 blades
• Clustered Fabric Interconnects for HA
Cisco Unified Computing System
26. Real Customer Results
Agility is the Biggest Value in Virtualization
The draw to virtualization is saving money, but after a
year or so, surveys show that adopters
believe the key value is agility.
Thomas Bittman
Gartner
UCS amplifies the value of agility by virtualizing
the compute hardware 100%!
Cisco Unified Computing System
27. Compute as a Service
Software & Hardware Virtualization
• Software-based virtualization: a hypervisor virtualizes the OS and application layer for the VMs above it
• Hardware-state virtualization: Cisco UCS virtualizes the server hardware state 100%
• The hypervisor (or OS) is unaware of the underlying hardware-state abstraction
Cisco Unified Computing System
28. Service Profile: Virtual Server Hardware
Policy-driven virtualized server hardware:
• FW, boot device, MAC, WWN, VLAN, VSAN, UUID, and QoS managed through policies, profiles, and templates
• Dynamic and consistent provisioning: easily deploy in minutes, not days or weeks
• Rapid HW deploy, repair, change = maximum agility
• RBAC, multiple levels of administration
• Consistent server builds = minimized risk and errors
Example profiles (ESX host, database, web server):
• Service Profile: ESX-Host. Network1: esx_prod; Network1 QoS: Gold; MAC: 08:00:69:11:19:EQ; Boot order: SAN, LAN; FW: ESXHostBundle
• Service Profile: DataBase. Network1: DB_vlan1; Network1 QoS: Platinum; MAC: 08:00:69:02:01:FC; Boot order: SAN, LAN; FW: DataBaseSanBundle; WWN:
• Service Profile: WebServer. Network1: www_prod; Network1 QoS: Gold; MAC: 08:00:69:10:78:ED; Boot order: LOCAL; FW: WebServerBundle
Cisco Unified Computing System
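As an illustration of the profile idea, the example profiles on this slide can be modeled as plain data. The field names below are illustrative, not the actual UCS Manager object model.

```python
# A service profile is server identity as data: everything that makes a
# blade "this server" lives in the profile, not in the hardware.
from dataclasses import dataclass

@dataclass
class ServiceProfile:
    name: str
    network: str
    qos: str
    mac: str
    boot_order: tuple
    firmware_bundle: str

# Values taken from the example profiles on the slide.
esx_host = ServiceProfile(
    name="ESX-Host",
    network="esx_prod",
    qos="Gold",
    mac="08:00:69:11:19:EQ",
    boot_order=("SAN", "LAN"),
    firmware_bundle="ESXHostBundle",
)
web = ServiceProfile("WebServer", "www_prod", "Gold",
                     "08:00:69:10:78:ED", ("LOCAL",), "WebServerBundle")
```

Treating identity as data is what makes the template-driven, consistent server builds on the slide possible: stamping out a new server is copying a record, not racking and configuring hardware.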
29. Stateless Computing: UCS Service Profiles
Server hardware state is fully configurable and preserved in software, known as a Service Profile. Service Profiles can then be dynamically assigned to specific blade hardware for runtime.
Configurable HW state examples:
• SAN: RAID settings, disk scrub actions, number of vHBAs, HBA WWN assignments, FC boot parameters, HBA firmware, FC fabric assignments for HBAs, QoS settings, border port assignment per vNIC, NIC transmit/receive rate limiting
• LAN: VLAN assignments for NICs, VLAN tagging config for NICs, number of vNICs, PXE settings, NIC firmware, advanced feature settings
• Server and management: remote KVM IP settings, Call Home behavior, remote KVM firmware, server UUID, serial-over-LAN settings, boot order, IPMI settings, BIOS scrub actions, BIOS firmware, BIOS settings
Cisco Unified Computing System
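A short sketch of how stateless computing plays out in practice: the identity lives in the profile, so moving it from a failed blade to a spare carries the MAC/WWN/UUID and boot settings along. The classes and function here are hypothetical, for illustration only.

```python
# Toy model of profile association: blades are anonymous hardware until a
# Service Profile is applied, and an identity can move between blades.
class Blade:
    def __init__(self, slot: str):
        self.slot = slot
        self.profile = None  # no identity until a profile is associated

def associate(profile: dict, old_blade, new_blade: Blade) -> Blade:
    """Apply a profile to a blade, disassociating it from the old one."""
    if old_blade is not None:
        old_blade.profile = None  # old blade becomes anonymous again
    new_blade.profile = profile
    return new_blade

db_profile = {"name": "DataBase", "boot_order": ["SAN", "LAN"]}
failed = Blade("chassis1/slot3")
spare = Blade("chassis2/slot5")
associate(db_profile, None, failed)    # initial deployment
associate(db_profile, failed, spare)   # repair: identity follows the profile
```

Because the server boots from SAN with the profile's identity, the workload comes back on the spare blade without reconfiguring the LAN or SAN side.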
30. Server Identity Management Comparison
(Diagram: legacy multi-chassis stack vs. UCS unified fabric)
• Access layer: Ethernet plus FC vs. Unified Fabric
• Chassis modules: Ethernet plus FC vs. Unified Fabric
• Adapters: Ethernet plus FC vs. Unified Fabric
• Server blades
Cisco Unified Computing System
31. Unified Computing System
The Right Solution at the Right Time
• Legacy (HP, IBM, Dell): separate management and control, primary network, SAN A, SAN B, and secondary network; server = application; inefficient, complex, high cost, fragile
• Next-gen market direction (UCS): server = resource; efficient, agile, transformative
Cisco Unified Computing System
33. I/O Consolidation: Unified Fabric in UCS
(Universal I/O, mixed workloads, wire once, minimal access points)
• Legacy server access connectivity: separate Ethernet links to the LAN and FC links to SAN A and SAN B for every server
• I/O consolidation with FCoE: a single unified fabric at the server access layer (where most of the savings for I/O consolidation reside), with traditional Ethernet and FC upstream; the pattern repeats for the Nth server
Cisco Unified Computing System
34. VN-Link: VM-Level Network Transparency
Problems:
• VMotion may move VMs across physical ports; policy must follow
• Impossible to view or apply policy to internally switched traffic
• Cannot correlate traffic on physical links coming from multiple VMs (e.g., VLAN 101)
VN-Link (problems solved):
• Extends the network to the VM
• Consistent services
• Coordinated, coherent management
Cisco Unified Computing System
35. Virtual Interface Card (VIC)
• True wire-once architecture: highly dynamic
• Network policy and visibility to the VM (VN-Link in hardware)
• Hypervisor bypass support: increases performance
• Reduces NIC and HBA cards (up to 58 virtual PCI devices)
(Diagram: legacy server switching VM traffic in a hypervisor soft switch vs. a virtualization-optimized adapter presenting virtual devices to the VMs)
Cisco Unified Computing System
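To make the 58-virtual-device figure concrete, here is a toy model of carving one VIC into vNICs and vHBAs; the function and device names are imaginary, not a real adapter API.

```python
# Sketch of presenting one physical mezzanine card to the OS or hypervisor
# as many virtual PCI devices, up to the limit quoted on the slide.
MAX_VIRTUAL_DEVICES = 58

def carve(vnics: int, vhbas: int) -> list:
    """Return the virtual PCI devices a single VIC would present."""
    if vnics + vhbas > MAX_VIRTUAL_DEVICES:
        raise ValueError("exceeds VIC virtual device limit")
    return ([f"vnic{i}" for i in range(vnics)] +
            [f"vhba{i}" for i in range(vhbas)])

devices = carve(vnics=8, vhbas=2)  # one card, ten virtual devices
```

This is why the slide can claim fewer NIC and HBA cards: connectivity that used to need a stack of physical adapters is defined in policy on one card.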
36. Optimizing Memory for Intel EP Processors
Typical memory: the CPU can address only a fixed number of DIMMs (Xeon 5500/5600).
Cisco UCS memory: each DIMM the CPU sees is made of 4 standard DIMMs, so far more memory can be addressed by the CPU.
Typical system (Intel Xeon 5500/5600 CPUs), either:
• 12 DIMMs @ 1066-1333 MHz, max 96 GB
• or 18 DIMMs @ 800 MHz, max 144 GB at lower performance
Cisco UCS (M2 models): max 384 GB per blade at 1066-1333 MHz in all configurations
Benefit: 4x capacity, lower costs, standard DIMMs, CPUs, and OS
Cisco Unified Computing System
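The 4x capacity claim can be sanity-checked with simple arithmetic, assuming 8 GB DIMMs (the slide does not state a DIMM size; that assumption is mine).

```python
# Back-of-the-envelope check of the memory-expansion figures on the slide.
DIMM_GB = 8                          # assumed capacity per standard DIMM
typical_dimms = 12                   # slide: 12 DIMMs @ 1066-1333 MHz
expanded_dimms = typical_dimms * 4   # each logical DIMM backed by 4 real ones

typical_gb = typical_dimms * DIMM_GB    # 96 GB, the "max 96 GB" figure
expanded_gb = expanded_dimms * DIMM_GB  # 384 GB, the "max 384 GB" figure
print(typical_gb, expanded_gb)
```

The expansion multiplies capacity without changing the memory speed the CPU sees, which is the point of the "at 1066-1333 MHz in all configurations" claim.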
37. QoS: Key Element for Multi-Tenancy
(Diagram: LAN, SAN A/B, and management traffic flowing through redundant Fabric Interconnects, Fabric Extenders with x8 links, adapters, and half-slot and full-slot compute blades)
• QoS parameters can be configured at a per-system-class level, or at a per-vNIC level
• All traffic belongs to 1 of 6 system classes; four are user-configurable, while the other two are for FCoE and standard Ethernet
• No packet drops: Priority Flow Control (PFC) uses a per-priority pause mechanism to guarantee no frames are dropped in lossless priorities
• Segregation of resources into pools and organizations with their own policies
• Allows bandwidth allocation for different classes of traffic
No other compute vendors have anything like this…
Cisco Unified Computing System
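The per-class bandwidth allocation described above can be sketched as a weighted share of a 10 Gbps link; the class names and weights below are made up for the example and are not UCS defaults.

```python
# Illustrative weighted bandwidth allocation across traffic classes,
# similar in spirit to the per-system-class QoS described on the slide.
def allocate(classes: dict, link_gbps: float) -> dict:
    """Split link bandwidth proportionally to per-class weights."""
    total = sum(classes.values())
    return {name: link_gbps * w / total for name, w in classes.items()}

weights = {"platinum": 4, "gold": 3, "silver": 2, "best-effort": 1}
shares = allocate(weights, 10.0)  # e.g. platinum gets 4.0 Gbps of 10 Gbps
```

Weighted shares like this let a congested link degrade predictably per tenant class, while PFC separately keeps the lossless (FCoE) class from dropping frames.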
39. Enclosure, Interconnect, & Blades (Front)
• Redundant, hot-swap power supplies and fans
• 1U or 2U Fabric Interconnect
• Half-width server blade: up to eight per enclosure
• Hot-swap SAS drive (optional)
• Full-width server blade: up to four per enclosure; blade types can be mixed
• 6U enclosure with ejector handles
Cisco Unified Computing System
40. Rear View of Enclosure and Interconnect
• 10GigE ports (server or upstream networks)
• Expansion bay (for uplinks to existing GigE or FC networks)
• Redundant, hot-swap fan modules with fan handles
• Redundant Fabric Extenders
Cisco Unified Computing System
43. Adapter Offerings
• Ethernet only: 10 GbE Ethernet; Intel Niantic, Broadcom 57711 (iSCSI offload)
• Standard CNA: existing driver stacks; 10GbE/FCoE with separate FC and Ethernet functions; Emulex
• VIC: VM I/O virtualization and consolidation; 10 Gb FCoE with user-definable vNICs (0, 1, 2, 3, ... 127) on a PCIe x16 bus
Cisco Unified Computing System
~900 UCS customers since FCS June 2010 (based on Q3FY10 numbers), especially related to cloud deployments, i.e. SAVVIS, SunGard, Telstra. Nexus y/y growth rates disclosed in Q3FY10 earnings: Nexus 2000 up 431%, Nexus 5000 up 315%, Nexus 7000 up 277%, UCS 168% q/q revenue growth. “UCS gaining traction as 20% of CIOs have already evaluated the platform, up from 10% in the Jan 2010 survey” -- Morgan Stanley, March 2010, CIO Survey reports
Q2 CY09: NH EP 5500 Xeon. Q1 CY10: Westmere EP 5600 Xeon / NH EX. Q3 CY10: OOW/VMworld. Q4 CY10: continuation. Q1 CY11: Westmere EP 5600 refresh.
Key points here: Simplified cabling. Simplified architectural choices: Scalability = 20 GB/chassis or blade, Balance = 40 GB/chassis or blade, Bandwidth = 80 GB/chassis or blade. Non-blocking, cut-through, lossless connectivity (DCE, FCoE). From the perspective of the IT staff, the “California System” looks like a standard rack of servers, but the wiring of the components is much simpler and easier to manage, track, and deploy. This approach creates a cleaner, simpler model for managing data center assets that stands in sharp contrast to antiquated PC architectures. As a result, IT organizations will find it easier to set and maintain hardware policies. They will also be able to better physically secure the environment. The overall environment supports 63 percent more airflow than traditional servers, which leads to substantially lower heating and cooling costs. Each component of the system can be managed by California Manager to the point where specific power and cooling thresholds can be set. The California System also provides easier access to disks residing on each server blade. Power supplies are hot-swappable. Each California System should have two Fabric Extenders to maximize availability. Taken all together, the California system greatly reduces the points of management compared to traditional blade server environments.
Transcript : So in a nutshell, the pictorial representation of the unified computing system is this. California day one will be the ability for Cisco to deliver 320 compute nodes that are connected over a pair of upstream unified fabric switches that have embedded on them a single management domain for the compute element, the virtualization element and the networking element. So we will be able to deliver the customer stateless servers that have been custom-built using industry-standard x86 processors, industry-standard memory architectures and the ability to support virtualization functions, that virtualization functions day one have been optimized to run with VMware. But we're not stopping there. We're also collaborating with the likes of Microsoft on supporting their virtualization architecture. And the same we are doing with Oracle. The same we are doing with Red Hat from a Linux perspective.
When you take a step back, the ability to dynamically provision server, storage and networking assets is really the Holy Grail of enterprise computing It’s really the inability to dynamically provision these assets that creates all the inflexibility associated with enterprise IT today. But once you change the underlying architecture and, more importantly, the way it is managed, you create the opportunity to fundamentally change the way people think about enterprise computing. That in turn changes the way senior IT leaders can strategically think about IT and their role in the enterprise.
Transcript : So if you look at the picture on the left today, right. Once again not to pick on anybody, but we'll pick on HP. If you take a C-7000 Blade System, it looks something like this in terms of management, on the left hand side. So you have a Blade Manager, and you probably know this as iLO or iLO2, you have a Chassis Manager, which is called Onboard Administrator. You have a Connectivity Manager, which is called Virtual Connect Manager, and then you have a multi-chassis Connectivity Manager. Virtual Connect Enterprise Manager, that manages multiple domains of Virtual Connect. So you have four of these. HP does a pretty decent job of gluing these together with ICE or SIMM or whatever other glue they have. If you actually have a Systems Management product that you're deploying in your datacenter and you have to integrate with it, you will actually have to write to four different APIs. And it's a pain, believe me. With us, we have a single point of management. A single Embedded Device Manager with a single API. And all management is done through that. As a result, if you have an upstream Systems Management product that needs integration with us, all they need to do is write to a single API. And it's also XML, which is very easy to code to. As a result, so what does it all mean for your end customer? Reduced management costs. Believe it or not, deploying Systems Management Software is a time consuming and a costly process. If we can reduce costs there, it's huge cost savings. And the second one is, it's very easy to integrate with existing frameworks and that is really, really important. If we make it really hard to integrate with existing frameworks, it's going to be a problem trying to get a bigger footprint in the datacenter for us.
Contact slide author (M. Sean McGee; seanmcgee@cisco.com) for slide updates or explanation if needed.
The Unified Network Fabric sets the stage for automating large swaths of the enterprise computing experience. In truth, we have used IT to automate every process in existence except the process of IT. Cisco is creating a unified network fabric that substantially reduces the amount of time currently needed to maintain and manually configure systems In so doing, it creates the opportunity for IT organizations to concentrate more activities that add more value to the business as opposed to spending most of their time maintaining systems. And it has the potential to lift staff morale because it allows IT professionals to see themselves as something more than glorified digital maintenance workers.
Here’s what IT organizations deal with today Parallel LAN/SAN Infrastructure Inefficient use of Network Infrastructure 5+ connections per server – higher adapter and cabling costs Adds downstream port costs; cap-ex and op-ex Each connection adds additional points of failure in the fabric Longer lead time for server provisioning Multiple fault domains – complex diagnostics Management complexity – firmware, driver-patching, versioning
With these goals in mind, we’re now going to walk you through the individual components of the California system that Cisco has integrated to create a unified model for data center operations The components leverage industry standard hardware in combination with Cisco advanced management technology that is embedded across the entire system. Most importantly, Cisco is providing a unified approach to managing those assets that will increase collaboration across multiple IT disciplines while also working towards reducing the total cost of managing the data center.
Author’s Original Notes: Overall thoughts: designed for enterprise “lights out” datacenter environments. Similarities to Nexus 5000 technology, but very different (details later). Dual PS: redundant. Simple design: easy to install and replace. Power supply wattage: set the stage for the next deck
Author’s Original Notes: 10G ports – for connecting Chassis’ Expansion Bays (cover the expansion cards shortly) Large Fans – lots of coverage on the rear of the system 2 Fabric Extenders for High Availability and redundancy Dual P
Notes should mention that the 9216i and the
UCS Blade Server: industry-standard architecture. UCS Adapters: choice of multiple Converged Network Adapters and virtual adapters. The “Unified Computing System” is based on a standard set of components that most IT staff are very familiar with. The intelligence for managing the overall system is based on a Pentium-class processor that Cisco has embedded in the fabric interconnect. The UCS Manager software that manages the entire system communicates with firmware embedded in every device in the system. It is important to note that there are three adapters. The first one is a standard 10 Gb Ethernet adapter and the second is a 10 Gb Fibre Channel over Ethernet adapter. The most important one is the third, referred to as Palo. That adapter supports the virtualization of the network connections, which will be discussed more in depth later in the presentation.
Cisco offers a number of adapter options to give IT organizations a choice between standard configurations and new state-of-the-art adapters that support “virtual” connections. The Palo adapter ultimately serves to optimize I/O performance in virtual environments. The single biggest stumbling block in the mainstream adoption of virtualization in production environments has been concern about I/O performance. The ability to provide granular control over I/O performance across any number of virtual links mitigates those performance concerns. We will have three different adapter families and four different adapters. Cost: Intel Oplin-based adapter, supports an FCoE software driver. There is currently an open-source FCoE driver available. Compatibility, 2 adapters: standard CNA architecture modified to fit our mezzanine form factor. Based on Emulex or QLogic FC ASICs (LP11xx and QL2642). Ethernet is a standard Intel 10GE Oplin. These don't require specific CNA drivers. Palo, the virtualized adapter, is explained in detail in a later slide. We don't require the whole California system to use one kind of adapter. Adapters can differ on the double-width blade, but it isn’t recommended.