2. HP ProCurve Data Center Solutions 2008
What’s driving data center transformation?
Customer priorities and expected outcomes
Business pressures
• New Apps & Services – growth
• Integrate acquisitions
• Support business innovation
• Compliance
• Reduce costs and risks
• Mission critical
Data center constraints
• Communications cost structure
• Power and cooling
• Facilities
• Personnel
• Relocations
• Legacy infrastructure
• Workflow
3. Tools to address data center challenges
Legacy Data Center → Next Generation Data Center, via Standardization, Virtualization, and Automation:
• Heterogeneous environment → shared, flexible resources
• Application-specific clusters → pods with “ruthless standards”
• Manual element configuration → fast, reliable deployments
Reduced maintenance costs, better asset utilization, lower power consumption, and higher system uptime – resulting in better business outcomes.
4. Types of DC Networks
“Back Office”:
• typical 3-tier business applications
• diverse application requirements
• these customers buy tools from vendors
High-performance clustering:
• east-west performance at line rate
• in some cases, reliable multicast is very important
• InfiniBand is prevalent today
“Front Office”:
• big on-line web presence
• lots of server load balancing & application acceleration
“Cloud”:
• huge web-based service infrastructure
• think Google, MSN
• totally self-sufficient, cost-driven customers
Each imposes different priorities and demands.
5. 10Gb growth at the edge brings new topology approaches
• Low-cost 10G connectivity will evolve repeatedly over the next 3 years
• 10GBASE-T is not practical yet – power and latency are excessive, and cables are non-standard
• Home runs directly to core switches carry huge optics costs at 10G
  • fiber & transceiver costs
  • core switches – higher $ per port – and many more ports
  • choice between lack of flexibility or low capacity utilization
• Using TOR/pod-based switches greatly reduces 10G costs
  • CX-4 today, SFP+ direct attach next year – 7 m practical, 10 m theoretical passive limit
  • oversubscribe in the pod – reduce the costs of more complex infrastructure
[Diagram: 10G home-run approach vs. 10G pod-based approach. Home runs use expensive fiber cabling to expensive, fixed-capacity core ports (up to $4k more per server!); pods use cheap copper cabling inside the pod, with lower-cost ports and flexible growth.]
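To see where a per-server delta of that size comes from, here is a minimal back-of-the-envelope Python sketch. All prices and the oversubscription ratio are illustrative assumptions for comparison, not quoted figures.

```python
# Illustrative cost model for 10G server connectivity (2008-era assumptions).
# All prices below are hypothetical placeholders for comparison only.

CORE_PORT = 3000      # assumed cost of a 10G port on a core/chassis switch ($)
TOR_PORT = 500        # assumed cost of a 10G port on a TOR/pod switch ($)
OPTICS_PAIR = 1500    # assumed cost of a pair of 10G transceivers + fiber run ($)
COPPER_LINK = 100     # assumed cost of a CX-4 / SFP+ direct-attach copper link ($)
OVERSUB = 4           # pod uplink oversubscription ratio (4 servers share 1 uplink)

def home_run(servers: int) -> int:
    """Every server gets fiber + optics straight to an expensive core port."""
    return servers * (CORE_PORT + OPTICS_PAIR)

def pod_based(servers: int) -> int:
    """Servers use cheap copper to a TOR switch; only the oversubscribed
    uplinks pay for fiber, optics, and core ports."""
    uplinks = servers // OVERSUB
    edge = servers * (TOR_PORT + COPPER_LINK)
    core = uplinks * (CORE_PORT + OPTICS_PAIR)
    return edge + core

n = 40
delta = (home_run(n) - pod_based(n)) / n
print(f"per-server savings with pods: ${delta:,.0f}")  # ~$2,775 under these assumptions
```

Under these assumed prices the pod approach saves roughly $2,800 per server; with pricier optics or higher core-port costs the gap approaches the $4k figure on the slide.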
6. Pod-based design – split the network into separate lifecycles
• Network access & aggregation components on same lifecycle as servers
• De-couple IT processes
• More network bandwidth and energy contained lower in the network
• “Crop rotation” used to replace old with new technologies over time
• power/cooling contained in pods, allowing adoption of future efficiencies
[Diagram: pods 6–9, each replaced on its own cycle.]
7. Coupled with Facilities Design: Configure Once, Allocate on Demand – Division of Labor
- De-couple IT processes (PS – we still need silos ;-))
- Minimize change
- Standardize & automate to the maximum possible extent
The “DNA” approach (a sketch follows below):
- Standard VLANs, addressing, and ACLs at every port
- Server admins assign and manage server context
- The assigned context “activates” the network personality
POD types:
1. Server
   - normal power
   - high power
2. Storage (3B)
   - SAN
   - tape/backup/archive
3. Network Services
4. Utility (floor-mount computing)
5. Legacy
6. Vacant
7. Development/Test
The network substrate is configured the same regardless of future pod usage.
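A minimal Python sketch of the “DNA” idea, assuming hypothetical VLAN numbers, ACL names, and context names: the substrate is identical at every port, and assigning a server context merely selects which pre-configured personality becomes active.

```python
# Minimal sketch of the "DNA" approach: every port is pre-configured with the
# same standard substrate, and the server context assigned later only selects
# which pre-existing personality becomes active. All names are illustrative.

SUBSTRATE = {
    "vlans": [10, 20, 30, 40],          # standard VLANs trunked to every port
    "acls": ["deny-spoofed-sources", "permit-standard"],
    "qos": "standard-dc-profile",
}

PERSONALITIES = {                        # activated by the server admin's context
    "web-dmz":  {"untagged_vlan": 10, "acl": "dmz-in"},
    "app-tier": {"untagged_vlan": 20, "acl": "app-in"},
    "backup":   {"untagged_vlan": 40, "acl": "backup-in"},
}

def activate(port: str, context: str) -> dict:
    """Server admin assigns a context; the network config is already in place."""
    if context not in PERSONALITIES:
        raise ValueError(f"unknown context {context!r}: not a standard product")
    return {"port": port, **SUBSTRATE, **PERSONALITIES[context]}

print(activate("pod7/rack3/port12", "web-dmz"))
```

The point of the design is that the network team never touches the port at allocation time: only the context label changes hands.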
8. Very-Large Layer 2 domains
Value propositions:
• Allow very flexible mapping of virtual LANs
• Allow dynamic plumbing of applications
• Multipathing at L2
• Scalability versus spanning tree
• Simpler configuration and troubleshooting than spanning tree
Potential futures:
• TRILL
• Shortest Path Bridging
• extended meshing approaches
[Diagram: a very-large Layer 2 (VLL2) network.]
9. Fibre Channel / Ethernet Convergence?
FCoE value propositions:
- Replace current NICs and HBAs in servers with fewer consolidated, specialized NICs (the logic goes that 10G has plenty of headroom for consolidation – at least for now)
- Homogenize server I/O to enable more standard server configurations
- Allow greater flexibility to “match-make” any LUN to any server
- Solve Fibre Channel’s interoperability problems via the Ethernet ecosystem
Problems:
1) T11 standards are not finalized, issues remain
2) A “new Ethernet” will be required for storage; convergence will not be as easy as VoIP
3) Network complexity will increase significantly
4) No new entrants are likely to emerge for FC director products
5) Interoperability problems will probably get worse, not better
[Diagram: an FC / Ethernet converged network.]
10. Possible Convergence Stages – Today
[Diagram: servers in the server farm connect over TCP/IP links through Ethernet switches to the data center distribution layer (SLB, firewall, IP data network), and over native FC links through FC switches to the FC SAN and LUN farm.]
11. Step 1? – Standardize NIC types (still dedicated SANs, but using Ethernet)
[Diagram: the data path is unchanged – servers reach the IP data network (SLB, firewall) over TCP/IP links through Ethernet switches – while FCoE switches replace FC switches on the storage side, carrying FCoE SAN links into the FC SAN and LUN farm.]
Value proposition:
1) Homogenize server I/O
2) Homogenize switch types
(This is likely how FCoE will end up, as this is exactly how iSCSI is deployed today.)
12. Step 2? – Homogenize server I/O – universal NICs with TOR splitting
[Diagram: servers with converged NICs attach to converged Ethernet switches, which split the traffic – TCP/IP links toward the IP data network (SLB, firewall) and FCoE SAN links through F_ports/N_ports toward the FC SAN and LUN farm.]
Value prop (beyond step 1):
1) Consolidate server I/O
2) Reduce the number of NICs/HBAs
3) Aggregate server cabling
4) Homogenize server types
Issues:
• complex switch management
• standards not complete
• all NICs must be 10Gb to get the benefit
• it’s still Fibre Channel!
• 2010 or later for volume
• 10Gb won’t be overkill by then (once you need 4 NICs per server, why do it this way?)
13. Step 3? – Ethernet enhances SAN flexibility – managed virtual HBAs “on-a-rope”
[Diagram: servers with converged NICs attach to converged Ethernet switches; Ethernet/FC switches present N_ports into multiple FC SANs, while TCP/IP links carry data traffic toward the IP data network (SLB, firewall) and the LUN farm.]
Value prop (beyond step 2):
• Virtualize any server to any LUN
• …while preserving FC SANs
14. Step 4? – Fully Converged Network? 2015? FC → Ethernet, or Ethernet → FC?
[Diagram: a single converged network carries TCP/IP, FCoE SAN, and native FC links; servers with converged NICs attach via VN_ports, the network exposes VF_ports and VE_ports, and a legacy FC SAN hangs off E_ports alongside the SLB, firewall, and LUN farm.]
Issues:
• It’s still Fibre Channel
• Fibre Channel is not growing
• Different security model
• Different flow model
• Different mgmt model
• Higher cost structure
• Same 2 director vendors
• More interoperability problems
• Complexity of merged networks
15. Meanwhile – iSCSI is real and growing fast today
• iSCSI is growing 30–40% YoY; FC is flat
• HP just acquired LeftHand Networks
  • turns a general-purpose server or virtual machine into an iSCSI storage system
  • offers a compelling feature set in a very low-cost solution
• Very recent growth is driven by VMware
  • the use case is “vanilla storage for VMotion-able servers”
• Ethernet switch requirements:
  • good buffering
  • flow-control capability
  • jumbo frames
  • (emerging) 10G density
• Customers typically keep iSCSI switches separate from the data TCP/IP network
16. Virtualization – where is the network going?
• Both ends of the network are starting to look alike!
• virtual switches are much like access points
• Port-centric edge management model is challenged
• constraining
• labor-intensive
• static
• costly infrastructure
[Diagram: virtual servers at both the data center and the campus edge; VM soft switches as “access points?”]
17. Virtualization & Agility – HP Strategy
- Make forwarding decisions in the network
  - Use simple mechanisms to make sure all traffic can be classified
  - Leverage the “Lite AP” architectural framework
  - Let real switches do ACLs, lookups, QoS, and classification
- Don’t force NICs to be bridges
  - Let NICs focus on the host/IO membrane
- Use standards-based mechanisms to track VMs’ connections (a sketch follows below)
  - Use MAC addresses and 802.1X tokens
  - Out-of-band service, authentication, tracking, and automation approaches
- Actively drive new standards that help solve the problem
  - ProCurve vice-chairs and participates in multiple standards bodies
[Diagram: virtual machines within server farms.]
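A minimal sketch of the out-of-band tracking idea, assuming a hypothetical feed of MAC-learn events from edge switches (the real mechanisms named above are MAC addresses and 802.1X tokens); all names and addresses are illustrative.

```python
# Sketch of out-of-band VM tracking: map MAC-learn events from edge switches
# to VM identity, so the real switch (not the NIC) applies the VM's policy.
from dataclasses import dataclass

@dataclass
class VMRecord:
    vm_name: str
    policy: str          # the ACL/QoS profile the edge switch should apply

REGISTRY = {             # MAC -> VM identity, populated at VM provisioning time
    "00:50:56:aa:bb:01": VMRecord("org_a.webserver1", "dmz-profile"),
    "00:50:56:aa:bb:02": VMRecord("org_a.webserver2", "app-profile"),
}

def on_mac_learned(switch: str, port: int, mac: str) -> None:
    """Called when an edge switch learns a MAC: classify and enforce there."""
    rec = REGISTRY.get(mac)
    if rec is None:
        print(f"{switch}:{port} unknown MAC {mac} -> quarantine VLAN")
        return
    print(f"{switch}:{port} {rec.vm_name} arrived -> apply {rec.policy}")

on_mac_learned("pod7-tor1", 12, "00:50:56:aa:bb:01")
```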
18. DC Network Switch Thermal Design: side-to-side vs. front-to-back cooling
Side-to-side cooled switches:
• draw rising warm and re-circulated hot air from inside the rack
• exhaust hot air inside the rack
• the extra heat is leakage that must be moved
Front-to-back cooled switches:
• draw air directly from the cool aisle
• exhaust air directly into the hot aisle
• no extra heat builds up inside the rack
19. Common Network Automation use cases today
1. Auto-discover the network and capture a detailed audit trail of all device changes (Catalog & Diagramming):
   • snapshot and store device configuration information (see the sketch below)
   • real-time change detection for all activities
   • keystroke-level audit trail
2. Automate changes across thousands of devices (Deploying Large-Scale Changes):
   • mass configuration changes
   • software updates
   • bare-metal provisioning
   • ACL deployments
3. Enforce, audit, and report on compliance (Maintaining Compliance):
   • out-of-the-box reports on ITIL, PCI, HIPAA & more
   • enforce best practices and security standards
   • easily remediate violations
4. Integrated process automation to automate network operations (Process-powered Network Automation):
   • automate multi-step processes across disparate systems
   • automate validation of complex network changes
   • ensure best practices are followed
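A minimal Python sketch of the first use case (snapshot plus change detection); the fetch_running_config() helper is a hypothetical placeholder for whatever transport an automation product actually uses (SSH/CLI scraping, SNMP, etc.).

```python
# Sketch of the "snapshot and change detection" use case, using only the
# standard library. The config fetch is a stand-in, not a real device call.
import datetime
import difflib
import pathlib

def fetch_running_config(device: str) -> str:
    # Placeholder: a real tool would pull this over SSH/CLI or SNMP.
    return f"hostname {device}\nvlan 10,20,30,40\n"

def snapshot(device: str, store: pathlib.Path) -> None:
    """Store a timestamped config copy and report drift since the last one."""
    config = fetch_running_config(device)
    store.mkdir(parents=True, exist_ok=True)
    snaps = sorted(store.glob(f"{device}-*.cfg"))
    if snaps:
        previous = snaps[-1].read_text()
        diff = list(difflib.unified_diff(previous.splitlines(),
                                         config.splitlines(), lineterm=""))
        if diff:
            print(f"CHANGE DETECTED on {device}:")
            print("\n".join(diff))   # this feeds the audit trail
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    (store / f"{device}-{stamp}.cfg").write_text(config)

snapshot("pod7-tor1", pathlib.Path("cfg-archive"))
```

Run on a schedule against every discovered device, this is the core loop behind the catalog and audit-trail use case.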
20. Data Center Network
Network complexity in data centers is exploding…
• many functional types of networks
• virtual servers
• application acceleration
• multiple firewall/NAT layers
• HA / load balancing
• multi-tier applications
• explosion of VLANs
• workload mobility
• compliance
• change management
• high availability
…which intertwines too deeply with server provisioning…
Where does the “server” end and the “network” begin?
21. Also, Physical Issues Exist with Virtual Networking:
[Diagram: the data center network as a web of intelligent switches.]
1) Unlike discrete items like CPU cycles or LUNs, the network is a single shared resource.
2) The network cannot be fully virtualized, due to laws of physics and the fact that it’s shared.
(i.e. bandwidth & connectivity cannot be created out of thin air, and a network that offers infinite configurable bandwidth between all endpoints is not cost-effective)
22. However…
[Diagram: the same network of intelligent switches, carved into pre-allocated virtual chunks.]
- We can “carve up” the network into pre-allocated virtual connections and allocate them as if they were discrete items like servers and LUNs.
- By choosing how many virtual connections to offer where, we can virtualize the network while still obeying the laws of physics.
26. Datacenter Connection Management
1) The network admin sets up policies and places new connections in the inventory.
2) The server admin selects an available connection.
3) The server admin subscribes to the connection.
4) The server is configured according to subscription policies.
5) The new server or VM is registered at L2.
6) L3 registration is enforced.
7) Other network policies are automated via events: subscription, registration, or IP-allocation events feed the UCMDB, compliance checking, fault management, and capacity management, and drive policies for routers, firewalls, load balancers, DLP, IDS, etc.
The network infrastructure is deployed dynamically for each connection, and policy-enforcement responses flow back into the connection inventory. (A sketch of this model follows below.)
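A minimal Python sketch of the connection-inventory model described above; class and connection names are illustrative, and steps 4–7 are indicated only as comments.

```python
# Sketch of the connection inventory: the network admin pre-creates
# connections; server admins only subscribe. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Connection:
    name: str            # e.g. "org_a.webserver.dmz.6"
    vlan: str            # e.g. "Internet VLAN 3"
    server_id: str = ""  # empty while AVAILABLE

class ConnectionInventory:
    def __init__(self) -> None:
        self._pool: dict[str, Connection] = {}

    def add(self, conn: Connection) -> None:         # step 1: network admin
        self._pool[conn.name] = conn

    def available(self) -> list[str]:                # step 2: browse
        return [n for n, c in self._pool.items() if not c.server_id]

    def subscribe(self, name: str, server_id: str) -> Connection:
        conn = self._pool[name]                      # step 3: subscribe
        if conn.server_id:
            raise ValueError(f"{name} already in use by {conn.server_id}")
        conn.server_id = server_id
        # Steps 4-7 would fire here: configure the server, register at L2,
        # enforce L3 registration, and emit events to UCMDB/compliance/etc.
        return conn

inv = ConnectionInventory()
inv.add(Connection("org_a.webserver.dmz.6", "Internet VLAN 3"))
print(inv.available())
print(inv.subscribe("org_a.webserver.dmz.6", "org_a.webserver6"))
```

The division of labor is the point: only add() belongs to the network team; everything the server team does is a subscription against pre-built inventory.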
27. Example: a 3-tier data center network is represented as…
[Diagram: for each of organization “a” and organization “b”: Internet VLANs, Web/App VLANs, App/DB VLANs, and a serial SCSI VLAN (iSCSI or FCoE).]
28. Example: a 3-tier data center network is represented as pools of connections
[Diagram: Zone 1 spans both organizations, with Internet VLANs 1–4, Web/App VLANs 1–4, App/DB VLANs, and a serial SCSI VLAN (iSCSI or FCoE).]
Organization “a” – deployed servers: org_a.webserver1 through org_a.webserver5; future servers: org_a.webserver6 through org_a.webserver8.
Organization “b” – deployed servers: org_b.webserver1 through org_b.webserver6; future servers: org_b.webserver7 and org_b.webserver8.
29. Pools of connections – example inventory status
[Diagram: Zone 1 with Internet VLAN 1, Internet VLAN 2, Internet VLAN 3 (dmz), Internet VLAN 4, Web/App VLAN 1 (app), and Web/App VLAN 2, shared between organizations “a” and “b”.]

Pod 1 Connection      | Status    | Server ID        | VLAN            | IP | MAC | Policy Forms
org_a.webserver.dmz.1 | In Use    | org_a.webserver1 | Internet VLAN 3 |    |     |
org_a.webserver.app.1 | In Use    | org_a.webserver1 | Web/App VLAN 1  |    |     |
org_a.webserver.dmz.2 | In Use    | org_a.webserver2 | Internet VLAN 3 |    |     |
org_a.webserver.app.2 | In Use    | org_a.webserver2 | Web/App VLAN 1  |    |     |
org_a.webserver.dmz.3 | In Use    | org_a.webserver3 | Internet VLAN 3 |    |     |
org_a.webserver.app.3 | In Use    | org_a.webserver3 | Web/App VLAN 1  |    |     |
org_a.webserver.dmz.4 | In Use    | org_a.webserver4 | Internet VLAN 3 |    |     |
org_a.webserver.app.4 | In Use    | org_a.webserver4 | Web/App VLAN 1  |    |     |
org_a.webserver.dmz.5 | In Use    | org_a.webserver5 | Internet VLAN 3 |    |     |
org_a.webserver.app.5 | In Use    | org_a.webserver5 | Web/App VLAN 1  |    |     |
org_a.webserver.dmz.6 | AVAILABLE |                  |                 |    |     |
org_a.webserver.app.6 | AVAILABLE |                  |                 |    |     |
org_a.webserver.dmz.7 | AVAILABLE |                  |                 |    |     |
org_a.webserver.app.7 | AVAILABLE |                  |                 |    |     |
org_a.webserver.dmz.8 | AVAILABLE |                  |                 |    |     |
org_a.webserver.app.8 | AVAILABLE |                  |                 |    |     |
org_b.webserver.dmz.1 | In Use    | org_b.webserver1 | Internet VLAN 4 |    |     |
org_b.webserver.app.1 | In Use    | org_b.webserver1 | Web/App VLAN 2  |    |     |
…
30. Long-term Vision – Virtualize SANs also
[Diagram: an Ethernet fabric connects the server farm to both an iSCSI SAN (iSCSI LUN farm) and an FC SAN (FC LUN farm). Specialized LUN pools include production “C: drives”, VMware VMFS LUNs, virtual tape devices, Oracle DB LUNs, test/dev LUNs, Exchange LUNs, LUNs for NAS heads, and J2EE/SAP LUNs. LUN/server binding happens at subscription time.]
• Physical servers are exposed to resources only upon subscription
• Inventories are adjusted per capacity management
• Physical servers are not confined to any particular SAN
• SANs can be smaller, more specialized, and more manageable, with faster & simpler recovery
31. Evolution to Virtualization
[Diagram: from application silos to shared infrastructure of virtual resources.]
• Application silos: dedicated infrastructure – designed, procured, and built separately for each application (“design-to-order” infrastructure).
• Application stacks: apps configured one at a time on shared infrastructure of virtual resources (“configure-to-order” infrastructure).
32. Evolution to Infrastructure Clouds
[Diagram: from shared virtual resources to pools of pre-configured standard resources.]
• Application stacks on shared infrastructure, configured one app at a time (“configure-to-order” infrastructure).
• Application stacks drawn from pools of pre-configured standard resources (“allocate-to-order” infrastructure).
33. Infrastructure Standards Menu
Each silo provides a portfolio of well-known “products” that are standard in that IT shop:
Server Menu:
- “small web server”
- “medium web server”
- “standard exchange server”
- “small J2EE server”
- “large ESX server”
- “NFS cluster node”
- “standard .Net app server”
- “SAP financials container”
- “large database node”
Network Menu:
- “DMZ connection”
- “App-to-DB connection”
- “Backup network connection”
- “Mgmt network connection”
- “VMotion network connection”
Storage Menu:
- “small VMFS LUN”
- “High-perf DB LUN”
- “J2EE LUN”
- “small Windows C: clone”
- “small NAS LUN”
- “test/dev LUN”
- “Exchange LUN”
34. Infrastructure Standards are Layered
[Diagram: layered composition. A “small web server” standard runs in a VM container (type 2) on a “large ESX server” standard, realized as a BL585 server blade w/Flex10 NIC; network standards (iSCSI, VMotion, archive, DMZ, and AppServer network connections) and storage standards (VMFS LUN) attach at the appropriate layers. Everything is built on pods of blade chassis, servers, storage, and networking.]
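A minimal Python sketch of layered standards, using item names from the menus above; the exact composition of each composite is an illustrative assumption, not a documented bill of materials.

```python
# Sketch of layered standards: a menu item is defined as a composition of
# lower-level standard elements. Compositions below are illustrative.

STANDARDS = {
    "iSCSI network connection":   {"layer": "network"},
    "VMotion network connection": {"layer": "network"},
    "DMZ connection":             {"layer": "network"},
    "VMFS LUN":                   {"layer": "storage"},
    "BL585 blade w/Flex10 NIC":   {"layer": "server"},
}

COMPOSITES = {
    "large ESX server": [
        "BL585 blade w/Flex10 NIC",
        "VMotion network connection",
        "iSCSI network connection",
        "VMFS LUN",
    ],
    "small web server": ["DMZ connection"],   # plus a VM container, etc.
}

def bill_of_standards(product: str) -> list[str]:
    """Expand a menu item into the standard elements each silo must stock."""
    return [f"{STANDARDS[e]['layer']}: {e}" for e in COMPOSITES[product]]

print(bill_of_standards("large ESX server"))
```

Because every composite resolves to stocked standard elements, capacity planning reduces to counting inventory per layer.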
35. HP Vision – Adaptive Infrastructure Clouds
[Diagram: demand (applications such as Small Web Server 1, J2EE Server 1, and J2EE Server 2) draws on supply – a server inventory managed by the server team, a network connection inventory managed by the network team, and a LUN inventory managed by the storage team. A Virtual Connect domain provides a firewall between change domains.]
36. Infrastructure Service Lifecycle
[Diagram: the same inventories (server, network connection, LUN) and Virtual Connect domain as above, serving the applications.]
1. Thin-provision subscriptions – pre-approve “standard” changes
2. Forecast demand
3. Provision or change servers – “subscribe”
4. Allocate & activate subscriptions on demand
(A sketch of these steps follows below.)
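The four steps can be read as a simple state machine over a subscription; a minimal Python sketch follows, with states transcribed from the list above and the linear transitions assumed for illustration.

```python
# Sketch of the four-step service lifecycle as a tiny state machine.
# States mirror the numbered list above; transitions are assumed linear.
from enum import Enum, auto

class SubState(Enum):
    PRE_APPROVED = auto()   # 1. thin-provisioned, standard change pre-approved
    FORECAST = auto()       # 2. counted in the demand forecast
    SUBSCRIBED = auto()     # 3. a server has subscribed
    ACTIVE = auto()         # 4. allocated & activated on demand

TRANSITIONS = {
    SubState.PRE_APPROVED: SubState.FORECAST,
    SubState.FORECAST: SubState.SUBSCRIBED,
    SubState.SUBSCRIBED: SubState.ACTIVE,
}

def advance(state: SubState) -> SubState:
    if state not in TRANSITIONS:
        raise ValueError(f"{state.name} is terminal")
    return TRANSITIONS[state]

s = SubState.PRE_APPROVED
while s is not SubState.ACTIVE:
    s = advance(s)
    print(s.name)
```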
37. Domains are Separated & Simplified
[Diagram: the server change & configuration domain, the network change & configuration domain, and the storage change & configuration domain are kept separate, with the Virtual Connect domain acting as the firewall between them; inventories and applications as above.]
38. Lifecycle of an Application
Requirements → Design → Build → Deploy → Operate → Optimize
39. Lifecycle of Applications Using Infrastructure Clouds (Standard Menus)
The same lifecycle – Requirements → Design → Build → Deploy → Operate → Optimize – now runs against the infrastructure standards menu and the connection inventory:
• browse the menu against requirements
• select the best-fit standard elements
• subscribe to infrastructure from the inventory
• configure specifics on top of the standard infrastructure
• monitor & maintain
• remediate as necessary