The journey to Cloud is not linear. Realistically, most environments will have workloads that continue to run on both physical and virtualized infrastructures for some time. Join Cisco’s Data Centre Experts, as they outline the key technologies transforming the Data Centre, enabling an intelligent infrastructure which will support physical, virtualized and cloud applications as part of Cisco’s Unified Data Centre Architecture.
2. Agenda
• Building a Data Center Baseline: Where we are coming from
• Near term directions and technologies in the DC
• Where are we going in the DC as an industry
• How is Cisco positioning our DC products to “Go where the puck is going to be,
not where it’s been”
4. Baseline of the Legacy DC Infrastructure - Keys
• Infrastructure Deployment and Operational Models
• Structured DC Design for Compute, LAN, Storage, Security, and Facilities
• Services Capacity and Geographical Capabilities
• Business Need to Service Delivery Process and Timing Capabilities
5. Baseline of the Legacy DC Infrastructure
• Linear Deployment of Capacity and Resource Needs
• Straightforward model of resource consumption
• Each Item has its own SW and FW Images, and Configurations
• Each Item also has its own Operations and Maintenance
[Diagram: Corp, Finance, Mktg, Engineering, and HR workloads, each App/OS pair on its own dedicated physical server, with per-department databases and storage]
6. Baseline of the Legacy DC Infrastructure
• Design is Imposed on Every Rack within DC
• Infrastructure designed for easy and repeatable integration
• Layers of Management software holding the system together – integrated with
other software packages
• Rigid and inter-twined models to upgrade and maintain system-level designs
• Multiple tools and points of configuration
• Structured Design for Compute, LAN, SAN, Security, Environmentals, etc.
• Manifested as rack-by-rack or row-by-row capability and limits
[Diagram: layers of management software holding the system together, spanning people, process, and technology]
7. Baseline of the Legacy DC Network Infrastructure
• The Data Centre Switching Design was based
on the hierarchical switching we used
everywhere
Three tiers: Access, Aggregation and Core
L2/L3 boundary at the aggregation
Images on devices, Configuration on devices
Smaller scale, single purpose servers
Dedicated structure cabling built into racks upfront (state)
Add in services and you were done
• What has changed? Almost everything
Sheer Volume and Growth Rates
Fragmentation and DC Space Efficiency
Hypervisor Layer
Cloud and IaaS, PaaS, SaaS
Highly Elastic Consumption
Programmatic Usage Needs
Differentiated Service Needs
[Diagram: three-tier topology: Core (Layer 3) over Aggregation, with services at the L2/L3 boundary, over Layer 2 Access]
8. Baseline of the Legacy DC Infrastructure
• Ongoing Operations Managed at
Points of Intersection
• Items Within Single Domain
Usage Option Set
Training Requirements
Provisioning and Growth
Less Customization
• Between these Domains
Wide Array of Options
Highly Customized – more meetings
Best Practices
Disparate Points of Management
Higher Staff and Training Costs
[Diagram: server, storage, network, and security domains intersecting]
9. Baseline of the Legacy DC Infrastructure
• Today, L3 knowledge is required on services devices
• Different levels of support
• Configuration is done serially
• Define service-providing devices as IP end-points
• Virtually connect contexts as if they are directly connected on LAN segments
[Diagram: client reaching services across a Layer 3 infrastructure]
10. Baseline of the Legacy DC Infrastructure
• Segmentation Between DC Locations
• Often a Cold Site Due to Networking and Storage Restrictions
• Initial Attempts at Active/Active Were to Statically Host Services
[Diagram: DC-1 and DC-2 joined by DCI with asynchronous storage replication; traffic paths for WAN to DC, Campus to DC, and DC return]
11. Baseline of the Legacy DC Infrastructure
• Business Need timing Impacts – Whiteboard to Service Realization
TRADITIONAL INDUSTRY APPROACH – SIMPLIFIED:
Whiteboard design → Produce cut sheets to teams → Identify needs → Provision storage → Identify server class → Identify network needs → Mask LUNs → Identify server instance → Share WWNs → Configure SAN edge → Determine DC placement → Share location → Zone fabric → Configure network edge → Facilities → Stage server → Firmware updates → BIOS policy settings → RAID settings → Image OS → Coordinate ends of cables → Join systems mgmt domain → Update security policy → Install application
13. Current DC Directions, Projects, and Goals - Keys
• Abstracting, Converging, and Virtualizing more of the DC Infrastructure for Deployment
and Operational Benefits
• Easing the Restrictions of Structured Design
• Embracing Service Delivery Independent of Location – Including Campus Needs
• Tighter Coupling of Provisioning and Delivery for Accelerated Deployments
14. Current DC Directions, Projects, and Goals
• Virtualizing the Server and its I/O brings Deployment and Operations Advantages
• Compute, Network, and Storage Virtualization at the Hardware layer is another Enabler
• Each Item still has its own Operations and Maintenance – but converging
• Each Layer handles HA individually
[Diagram: Finance, Mktg, Engineering, HR, and Corp workloads (App/OS) on a mix of physical and virtual machines over a physical and virtual infrastructure service, with storage, a DB service, a queue, and a cloud infrastructure service]
15. Current DC Directions, Projects, and Goals
• Virtualizing the DC Services Layer for Deployment and Operations Advantages
Distributed: manual provisioning; flow engineering to integrate services; limited scaling; rack-wide VM mobility.
Fabric-Based Cloud: policy-based provisioning; services provided everywhere within fabric**; scale physical and virtual/cloud; DC-wide/cross-DC VM mobility.
[Diagram: compute, storage, and services attached to an L2/L3 fabric cloud]
16. Current DC Directions, Projects, and Goals
• Converging the Infrastructure Silos for Deployment and Operations Advantages
• Makes Programmatic I/O Possible without Server Virtualization Need
• The key is how this is accomplished
[Diagram: individual Ethernet NICs (vNICs), storage HBAs (vHBAs), and blade-management channels (BMC connections) on an x86 server's PCIe tree converge onto DCB Ethernet]
17. Current DC Directions, Projects, and Goals
• Increasing Mobility of Services within multiple DCs
L2 domain elasticity: vPC, L2MP/TRILL, OTV LAN extensions
VM-awareness: VN-Link, port profiles, VN-Link notifications
IP localization: LISP
Storage elasticity: FCIP, I/O Acceleration, FCoE, Inter-VSAN routing
Device virtualization: VDCs, VRF enhancements
Compute resources are part of the cloud; location is transparent to the user
18. Current DC Directions, Projects, and Goals
• Movement to Scale-Out HA models vs. Scale-Up
• Scale Up:
Five-nines (99.999%) uptime
Big Iron
Advanced RAS Features
No Outages in Maintenance Activities
Analogy: Large roadway that can only have lane closures – Overbuild excess lanes for this
• Scale Out:
Four-nines (99.99%) uptime
Lower Cost x86
Intermediate RAS Features
Evacuate Portions of Infrastructure (Maintenance Modes)
Analogy: Many smaller roads - Detour traffic and work on sections of roadway incrementally
19. Current DC Directions, Projects, and Goals
• Movement of Campus Desktops into DC
• Combined joint partner solutions with industry leaders
• Cisco Validated Designs & Services to accelerate customer success
[Diagram: Cisco Data Center Business Advantage Framework: clients reach a virtualized data center over the WAN (Cisco WAAS); desktop virtualization software and desktop OSes run on a hypervisor over Unified Network Services (Cisco ACE, Cisco ASA), Unified Computing, and Unified Fabric, with Cisco MDS 9000 family storage holding app data; partner solution elements complete the stack]
20. Current DC Directions, Projects, and Goals
• Movement of Mobile Compute into the DC: a shift from desktop-centric end-user compute to mobile devices and intelligent endpoints – multiple networks, multiple locations
[Chart: device shipments in the millions. Source: IDC]
21. Current DC Directions, Projects, and Goals
• Outsourcing of Application
• Outsourcing of Platform Management
• Outsourcing of Infrastructure Sections
• Moving Applications into Public Cloud
Cost Models vs. Application Re-Writes
Security of Content
• Moving Tiers of Application into Public Cloud
Interconnections and Replicating Services
Private Channels
[Diagram: a company's business units A and B, each with their own services/policies, run web, logic, and DB server tiers plus security in the data center infrastructure; tiers and security are replicated into a cloud provider infrastructure, including shared Unit A&B security]
22. Current DC Directions, Projects, and Goals
• Tight Coupling of Needs to Provisioning – Self Service
• Emerging Capability to Allow Self Service
23. Current DC Directions, Projects, and Goals
• Tight Coupling of Needs to Provisioning – Self Service Typically Mandates VMs
• Bare Metal Model Unchanged – Used for Virtualization Provider Rollouts
• Management and Troubleshooting of Overlays not shown here
• Single Admin with Full Control on Infrastructure
AFTER INFRASTRUCTURE PRE-PROVISIONING ON HYPERVISOR:
Whiteboard design; produce cut sheets, but fewer teams involved at time of need → Identify needs → Identify virtual DC → Identify shared data store → Configure network edge → Deploy VM from OVF → Update security policy → Update virtualized services needs → Image OS → Join systems mgmt domain → Install application
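The reduced flow above is short enough to script end to end. A minimal sketch of such a self-service pipeline (all step and field names here are hypothetical, not any vendor's API):

```python
# Minimal sketch of a self-service provisioning pipeline over a
# pre-provisioned hypervisor environment. Each step records what it did.

def identify_virtual_dc(req):
    req["vdc"] = f"vdc-{req['tenant']}"

def deploy_vm_from_ovf(req):
    req["vm"] = f"{req['app']}-vm01"

def update_security_policy(req):
    req["security"] = "policy-applied"

def install_application(req):
    req["installed"] = True

PIPELINE = [identify_virtual_dc, deploy_vm_from_ovf,
            update_security_policy, install_application]

def provision(tenant, app):
    request = {"tenant": tenant, "app": app}
    for step in PIPELINE:   # run each stage in order; no human hand-offs
        step(request)
    return request
```

Calling `provision("finance", "payroll")` returns a record showing every stage completed without a cut sheet changing hands.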
25. The Data Center Evolution
Physical: one app per server; static environment with manual provisioning; months to provision.
Virtual: one app per VM; mobile environment with dynamic provisioning; days to provision.
Programmable: any app, anywhere; elastic environment with automated provisioning; minutes to provision.
26. Coming Industry Direction inside the DC - Keys
• Creating a Programmatic and Application Controlled Infrastructure – Without overlaying
everything with numerous software stacks
• Eliminating the Restrictions of Structured Design with a Stateless Edge
• Just-in-time capacity expansion with Pre-Integrated Stacks
• Service delivery capabilities and location become an attribute requested by the application itself
• Full Integration of Business Need definition to Provisioning and Delivery
27. Industry Directions inside the Data Center
• Stateless Infrastructures and OPEX Costs
[Chart: worldwide customer spending ($B) on servers, power & cooling, and management/administration; server-related spend (CAPEX + OPEX) shows an overall distribution dominated by high OPEX. Sources: Gartner and Cisco IT, "Data Center Cost Portfolio"; IDC, "New Economic Model for the Datacenter," 2011]
28. Industry Directions inside the Data Center
• Making the Infrastructure Inherently as Stateless as Possible
[Diagram: the same departmental workloads (Finance, Mktg, Engineering, HR, Corp) on physical and virtual machines, now drawing on a pool of shared resources (storage, DB service, queue) behind a cloud infrastructure service]
Pool of shared resources: self-service portal, API-driven services, selective application mgmt
29. Industry Directions inside the Data Center
• Creating a Standardized API to Orchestrate with User Tools
Consolidate assets → Virtualize the environment → Automate service delivery → Standardize operations: increased agility, efficiency, and simplicity; increased cloud readiness.
30. Industry Directions inside the Data Center
• Beginning with a Separation of Control Plane and Data Plane
• Easier Programmatic Control
• API with the Centralized Controller Architecture Common
Abstract extensions for optimized services to API methods
Move to centralized policies and pools of how resources are consumed
• Industry offerings vary in terms of where some middleware would live
Middleware to control advanced ASIC functionality, and other HW capabilities
Support of these API extensions
Expanding these into Open Source living on the Platforms
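The centralized-controller pattern described above – policies held once on a controller and pushed to every element – can be illustrated with a toy sketch (class and method names are invented for illustration, not any product's API):

```python
class Element:
    """A network element that accepts configuration pushed by a controller."""
    def __init__(self, name):
        self.name = name
        self.config = {}

    def apply(self, policy):
        self.config.update(policy)

class Controller:
    """Holds policy centrally; elements no longer carry per-box config."""
    def __init__(self):
        self.elements = []
        self.policy = {}

    def register(self, element):
        self.elements.append(element)
        element.apply(self.policy)      # a new element inherits current policy

    def set_policy(self, **policy):
        self.policy.update(policy)
        for element in self.elements:   # one change propagates everywhere
            element.apply(policy)
```

One `set_policy` call replaces what was previously a per-device configuration exercise, and late-joining elements pick up the same state automatically.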
31. Software Defined Networking (SDN) – What is it?
Many Definitions
• OpenFlow
• Controller
• OpenStack
• Overlays
• Network virtualization
• Automation
• APIs
• Application oriented
• Virtual Services
• Open vSwitch
32. Software Defined Networking (SDN)
What Is Software Defined Network
(SDN)?
“…In the SDN architecture, the control and data
planes are decoupled, network intelligence and
state are logically centralized, and the underlying
network infrastructure is abstracted from the
applications…”
Source: www.opennetworking.org
What is OpenStack?
Open-source software for building public and private clouds; includes Compute (Nova), Networking (Quantum), and Storage (Swift) services.
Source: www.openstack.org
What is Overlay Network?
Overlay network is created on existing network
infrastructure (physical and/or virtual) using a network
protocol. Examples of overlay network protocol are:
GRE, VPLS, OTV, LISP and VXLAN
What Is OpenFlow?
An open protocol that specifies interactions between de-coupled control and data planes.
Note: OpenFlow is not mandatory for SDN
Note: North-bound controller APIs are vendor-specific
Note: Applicable to SDN and non-SDN networks
Note: SDN is not mandatory for network programmability or automation
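Of the overlay protocols listed above, VXLAN is the simplest to show concretely: an 8-byte header carrying a 24-bit VNI is prepended to the inner Ethernet frame and the result is carried over UDP (port 4789). A sketch of the header handling per RFC 7348:

```python
import struct

VXLAN_FLAG_I = 0x08     # "I" bit set: the VNI field is valid
VXLAN_UDP_PORT = 4789   # IANA-assigned destination port for VXLAN

def vxlan_encap(vni, inner_frame):
    """Prepend the 8-byte VXLAN header: 8 flag bits, 24 reserved bits,
    a 24-bit VNI, and 8 more reserved bits."""
    if not 0 <= vni < 1 << 24:
        raise ValueError("VNI must fit in 24 bits")
    header = struct.pack("!B3xI", VXLAN_FLAG_I, vni << 8)
    return header + inner_frame

def vxlan_decap(packet):
    """Return (vni, inner_frame) from a VXLAN-encapsulated payload."""
    if not packet[0] & VXLAN_FLAG_I:
        raise ValueError("VNI not marked valid")
    vni = struct.unpack("!I", packet[4:8])[0] >> 8
    return vni, packet[8:]
```

Because the VNI is 24 bits, an overlay supports about 16 million segments versus the 4096 VLANs of the underlying L2 fabric, which is the scaling argument for overlays made throughout this section.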
33. Software Defined Networking (SDN)
Three models: software API (application control), controller, and overlays.
• Controller over the fabric: policy-based provisioning; scale physical & virtual/cloud; DC-wide/cross-DC VM mobility
• Overlay networks over the fabric: policy-based provisioning; multiple tunnels (visibility?); scaling (overlay disjoint from physical)
• Applications driving the fabric via software API: writing to a single onePK API; infrastructure controlled by applications; wide reach, beyond the data center
[Diagram: each model shown over compute, storage, and services attached to an L2/L3 fabric]
34. What Really Does a Controller Do?
• These are not new
Wireless Controllers to centrally manage Access Points
Controlling Bridge in all FEX architectures
VM Managers – things like Auto-Deploy
• More than just Control Plane/Data Plane Separation
All modern modular devices have that separation in a sense
• Expanding the scope
Span entire DC segments (or inter-DC) at scale
Host the Running Images and Components (FEX, Auto Deploy, etc.)
Host the Device Configurations (FEX, Auto Deploy, WiFi, etc.)
Model-driven imposition: detailed configuration lines are no longer in user space
Administrators now configure the models end to end
APIs allow the end-user processes, portals, etc. to configure the models
APIs for reporting and for subscription to monitoring and events
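Model-driven imposition boils down to diff-and-reconcile: the administrator edits the model, and the controller computes what each device must gain or lose. A toy sketch (the key/value data shapes are invented):

```python
def reconcile(desired, running):
    """Compute what a controller must push so a device's running
    key/value config matches the desired model."""
    to_set = {k: v for k, v in desired.items() if running.get(k) != v}
    to_delete = [k for k in running if k not in desired]
    return to_set, to_delete

def apply_model(model, devices):
    """Impose one desired-state model across every device."""
    for name, running in devices.items():
        to_set, to_delete = reconcile(model, running)
        running.update(to_set)        # push new or changed lines
        for key in to_delete:         # retract lines not in the model
            del running[key]
```

The point of the pattern is that nobody types device-level lines: drift (like a leftover legacy setting) is retracted the next time the model is applied.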
35. Industry Directions inside the Data Center
• One Direction is to Eliminate Control
Plane completely on Element
• White Box Servers
• Merchant Silicon on networking
• “White Box” Network and Storage gear
carry little management
• Goal is to effectively open source code of
devices to users
• Different views by Market Segment
[Diagram: a controller auto-pushes a baseline control OS, device configuration, and reporting methods to an element; the element's internal processing architecture mixes general-purpose and optimized elements, with customer, vendor, and ISV apps and 3rd-party agents running over the platform vendor's OS and middleware]
36. Industry Directions inside the Data Center
• Moving to Direct Control of Policies and
Pooling of the Underlying Infrastructure
• Feature Rich Integration
• Controller to Element Closed in its Nature
• Northbound Methods Open
• UCS Model Fits here Today
[Diagram: a controller tightly coupled to the element hardware; the platform vendor's OS and middleware with 3rd-party agents sits over an internal processing architecture of general-purpose and optimized elements]
37. Industry Directions inside the Data Center
• Underlying Hardware Still Matters – Cannot Rely on Software Libraries/Overlays Only
• White box servers with little management
Do we have precedents here?
One example: Are there advantages in processor families for workloads (VT-x, VT-d, TXT for VM boot), or is cheapest OK?
• "White box" network and storage gear with little management
Same example: Will all networking ASICs lose any value? Will the cheapest device be OK?
• Goal is to effectively open the source code of devices to users
Do we want the ability to open for some customizations, or do we want to fully write our complete control plane?
Key item here – What Optimizations are Needed and how to put in API?
• Different views by Market Segment
Providers will be more capable of developing this IP and its lifecycle – will others want this vs. Off the Shelf?
Example: Do we want to expose the core OS for loading of custom code, or offer API to control?
38. Industry Directions inside the Data Center
• Create “Application Profiles” to be Consumed above the Hypervisor or even OS level
• Needs from DC Fabric for IO and Services Mapped to Policy
Policies Mapped to Application Hosts (PM and/or VM)
Requested directly by these applications via API
Examples: storage services, directory services, web presence, SAML
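An "application profile" of the kind described above is essentially a structured request the application hands to the fabric API. A sketch of what such an object might look like (field names are illustrative, not a published schema):

```python
from dataclasses import dataclass, field

@dataclass
class ApplicationProfile:
    """What an application asks of the fabric, independent of where it lands."""
    name: str
    qos_class: str = "best-effort"
    security_zone: str = "internal"
    services: list = field(default_factory=list)  # e.g. ["storage", "directory"]

    def to_request(self):
        """Render the profile as a policy request an API endpoint could accept."""
        return {
            "profile": self.name,
            "io-policy": {"qos": self.qos_class},
            "security": {"zone": self.security_zone},
            "services": sorted(self.services),
        }
```

Because the request carries no host or rack identifiers, the same profile can be bound to a physical machine or a VM, which is exactly the PM-and/or-VM mapping the slide describes.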
39. Industry Directions inside the Data Center
• Pre-Integrated Stacks for Just In Time Growth and Stability
• An earlier slide talked to management at vendor product intersections
This is a key area and a reason for these models
• APIs include metering and the growing pay-per-use trend
• Integrated Stack Optionally Includes Orchestration (UIM, CIAC, Cloupia, etc.)
• Orchestration over Scaled-Out Pods
40. Industry Directions inside the Data Center
• Ease in Provisioning and Moving Workloads between DCs, and Off Premise to Provider
• Time to monetize: innovate new products and services at an accelerated rate
• Time to develop: real-time bridging of application developers to users/customers
• Apps and apps: buy apps; buy services; leverage from partners; develop
• Scale and scale: users, devices, locations
• More applications – IDG: on average, enterprises will add 46 new apps in 2013
• Services orientation: apps in sync with infrastructure
• Cross-cloud services integration: private and public clouds
• Flexible architecture: scale as needed, (pay as needed)
[Chart axes: reduce cost per business metric; business continuity, DR, security, mission critical]
Prepare for the Unpredictable
41. Industry Directions inside the Data Center
• Scale-Out Service Availability with In
Production Maintenance
• Change windows that can span many days
• Not just VMs but Fabric Service Providers
[Diagram: a client connected to several App/OS instances so portions can be evacuated during maintenance]
42. Industry Directions inside the Data Center
• Whiteboarding a Business Need to Deployment – Making this Automatic
• Tight Coupling of Needs to Provisioning – Self Service for VMs and Bare Metal
• All Provisioning is done on Policy Basis – Before any Rollouts
• Admin expertise is retained with Control of their segment on Infrastructure
AFTER INFRASTRUCTURE PRE-PROVISIONING (Physical and Virtual):
Whiteboard design; produce final design → Identify needs → End user: identify on the basis of security needs (physical DC or cloud preferences) → End user: identify policy-driven network edge, pre-coordinated with network admin → End user: identify storage needs, pre-coordinated with storage admin → End user: deploy PM or VM from audited policy template with security admin → Image OS → Join systems mgmt domain → Install application
44. How Cisco is Meeting These Demands - Keys
• Unification and Virtualization of the Server Infrastructure and DC Edge
• Unification and Virtualization of the DC Fabric
• Unification and Virtualization of the Orchestration of DC Services Layer
• Unification of the Control and Programmatic usage of this Infrastructure
• Easing Deployment with Pre-Integrated Solutions
• Linking these New Infrastructure Capabilities Directly to Emerging Provisioning Models
Software Defined Networking (SDN)
Cisco Open Network Environment (ONE)
OpenDaylight and Open Source
Private Cloud
Public Cloud
InterCloud
OpenStack
45. Unification and Virtualization of the Server
Infrastructure and DC Edge
Today's Server: Fixed Set of Resources – processor, memory, NICs, HBAs, and their configuration, with hypervisor and application bound to the box.
Server Architecture for Virtualization and Cloud: Networked Pools of Computing – the same configuration applied to pooled hardware through an API.
Configuration carried with the server's identity: network interface card (NIC) configuration (MAC address, VLAN, and QoS settings); host bus adapter (HBA) configuration (worldwide names (WWNs), VSANs, and bandwidth constraints); and firmware revisions.
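The stateless-server idea above – identities such as MACs and WWNs live in pools and travel with a profile, not with sheet metal – can be sketched like this (names are illustrative, not the UCS object model):

```python
import itertools

class MacPool:
    """Hands out MAC addresses from a pool rather than burning them into hardware."""
    def __init__(self, prefix="00:25:B5"):
        self._counter = itertools.count(1)
        self._prefix = prefix

    def allocate(self):
        n = next(self._counter)
        return f"{self._prefix}:00:{n >> 8:02X}:{n & 0xFF:02X}"

class ServiceProfile:
    """Identity plus config that can be associated with any compatible blade."""
    def __init__(self, name, mac_pool):
        self.name = name
        self.mac = mac_pool.allocate()   # identity allocated once, kept for life
        self.blade = None

    def associate(self, blade):
        self.blade = blade               # move to new hardware; identity unchanged
```

When a blade fails, the profile is re-associated with a spare and the server keeps its MAC (and, by extension, its LAN and SAN personality), which is what makes the hardware interchangeable.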
46. Unification and Virtualization of the Server
Infrastructure and DC Edge
[Diagram: UCS Manager on the fabric interconnects exposes an XML API and industry-standard APIs; fabric extenders (I/O modules) connect blade and rack form factors – convergence, intelligence, and automation at the compute layer]
48. Cisco UCS and the Software Defined Server
XML API over a Management Information Tree (MIT): Root → Server (BIOS, RAID, I/O) → NIC1 and HBA1 ("front end") → I/O edge ("back end") with an Eth port config for NIC1 and an FC port config for HBA1.
Blade AG: BIOS, RAID, CPLD, boot method, BMC setup, alerting, etc.
NIC AG: # NICs, networks to tie in, QoS and security policy; # HBAs, VSANs to tie in, QoS and security policy, etc.
Port AG: Ethernet port networks, QoS policy, security policy, linkages to server NICs, network segments, etc.
Fabric AG: storage segments, VSAN mappings, F-port trunking, F-port channeling, zoning**, etc.
Other AGs: VMM AG, host agent AG, etc.
49. Cisco UCS is a Software Defined Server
• We start with a data model that includes the existence, identity, and configuration of a server and its various sub-components
A deep model of very fundamental components within servers
• We grow this data model to include upstream I/O needs: the configuration of the upstream device ports connecting to this server
Coupling of the "other end" of the cables that connect these servers to the DC
• We include policies to define groupings of these servers, priorities, security segments, and many others
To offer differentiated services, for example different x86 processor types
• We probe newly added hardware to classify it into service-level tiers by capability
To ease not only customer consumption of services, but also provider capacity growth
• We map these modeled servers and all surrounding component needs to these service-level tiers when the actual services are required
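The MIT idea – every server attribute addressable as a node in one tree and configured through an XML API – can be sketched with stdlib XML handling (the element and attribute names here are invented for illustration, not the actual UCSM schema):

```python
import xml.etree.ElementTree as ET

def build_config(server_dn, nic_name, vlan):
    """Build an XML config request addressing nodes under one tree root."""
    root = ET.Element("configRequest")
    server = ET.SubElement(root, "server", dn=server_dn)
    ET.SubElement(server, "bios", bootMode="uefi")       # sub-component as child node
    ET.SubElement(server, "nic", name=nic_name, vlan=str(vlan))
    return ET.tostring(root, encoding="unicode")

def read_vlan(xml_text, nic_name):
    """Query the same tree the way an API client would."""
    tree = ET.fromstring(xml_text)
    nic = tree.find(f".//nic[@name='{nic_name}']")
    return int(nic.get("vlan"))
```

The design point is that a single document can express the whole server, its sub-components, and its upstream port linkages, so one API call replaces several per-device configuration sessions.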
50. Unification and Virtualization of the DC Fabric
[Diagram: architectures plotted by application capacity/throughput/performance vs. operational flexibility and scalability: legacy architectures (one app per specialized infrastructure), appliance architectures, virtual overlay architectures (management software over commodity server, network, and security), and fabric-based architecture (apps over an integrated intelligent infrastructure fabric)]
51. Unification and Virtualization of the DC Fabric
• Network Function Virtualization (NfV)
• Compute, storage and memory
interconnected by a network fabric
• Creates abstracted pool of resources
within and between data centers
• Integrated and simplified management
• APIs to integrate with any application,
any time
[Diagram: a programmable, secure, scalable data center fabric architecture interconnecting compute and storage, hosting monitoring, networking, end-user, and mission-critical apps]
Characteristics: built-in manageability; policy-based provisioning; DC-wide/cross-DC VM mobility.
"Definition of fabric-based infrastructure: compute, storage, memory and I/O components joined through a fabric interconnect, and the software to configure and manage them" - Gartner
52. Unification and Virtualization of the Orchestration of DC
Services Layer
• Network Services Virtualization (NsV)
Fabric-Based Cloud (integrated fabric and cloud): policy-based provisioning; services provided everywhere within fabric**; scale physical and virtual/cloud; DC-wide/cross-DC VM mobility.
Application Driven (world of many clouds): service-centric provisioning; fabric service pointers for virtual and bare metal (vPath, etc.); flexible – anywhere, anytime; cross-cloud VM mobility.
[Diagram: compute, storage, and services on an L2/L3 fabric that is both programmable and provisionable, hosting monitoring, provisioning, networking, and end-user apps]
53. Unification of the Control and Programmatic usage of
this Infrastructure
Manual: Architect → Design → Where can we put it? → Procure → Install → Configure → Secure → Is it ready?
Automated self-service provisioning: capacity on-demand; policy-based provisioning; built-in governance.
FROM 8 WEEKS TO 15 MINUTES
54. Easing Deployment with Pre-Integrated Solutions
Smart Solutions: a pre-integrated stack of applications, operating system and hypervisor, and management (e.g. Vblock, FlexPod, VXI) with vertical solution focus: enterprise apps, databases, business analytics / big data, virtual desktop, RISC migration; verticals include healthcare, retail, manufacturing, and financial services.
55. Software Defined Networking (SDN) and Cisco
• We have a Software Defined Compute and Network Edge today in UCS
• Open Networking Environment (ONE) to Bring to Fabric
• ONE Programming Kit (ONE PK)
To program the infrastructure
• Physical and Virtual
Compute
LAN Segments
Storage Segments
Security Services
Client Services
• New Models for Controller?
Can the Controller itself move from a HA pair of appliances – to a N-Wise fabric service also? Yes
58. Customer Insights: Network Programmability
• Research/Academia: experimental OpenFlow/SDN components for production networks – network "slicing"
• Massively Scalable Data Center: customize with programmatic APIs to provide deep insight into network traffic – network flow management
• Cloud: automated provisioning and programmable overlay, OpenStack – scalable multi-tenancy
• Service Providers: policy-based control and analytics to optimize and monetize service delivery – agile service delivery
• Enterprise: virtual workloads, VDI, orchestration of security profiles – private cloud automation
Diverse programmability requirements across segments; most requirements are for automation and programmability.
61. Evolution of the Intelligent Network
Preserve What's Working: resiliency; scale and security; rich feature set; operational simplicity.
+ Evolve for Emerging Requirements: programmability; application awareness; business intelligence from the network; business-driven network adaptation.
Evolve the Network for the Next Wave of Application Requirements
62. Open Network Environment (ONE)
[Diagram: ONE connects policy, analytics, and orchestration to the network: program for optimized experience; harvest network intelligence]
64. SDN Models
1 – Programmable APIs: applications (network mgmt, monitoring, …) talk to the device's control plane through vendor-specific APIs (e.g. onePK) alongside CLI, SNMP, NetFlow, …; control and data planes stay on the device.
2a – Classic SDN: the control plane moves to a controller; applications use vendor-specific controller APIs; the controller drives device data planes via OpenFlow, PCEP, or I2RS.
2b – Hybrid "SDN": a controller programs devices that retain their own control planes, via OpenFlow, PCEP, I2RS, and vendor-specific APIs (e.g. onePK).
3 – Overlay Networks: applications drive a virtual-switch overlay via vendor-specific APIs; overlay protocols (e.g. VXLAN) run over the existing physical control and data planes.
OpenStack and network overlays apply to all models (physical/virtual); custom features can be built.
68. Implementing Customer Use Cases – Cisco Approach: Flexibility to Choose, the Power of "AND"
• Approach 1 – APIs: apps use APIs on network devices; tightly-coupled HW & SW; investment protection
• Approach 2 – Controller and OpenFlow: apps on a controller drive devices with OpenFlow and other agents; loosely-coupled HW & SW; new use cases
• Approach 3 – Network overlays, physical and virtual: logical/overlay networks; VM mobility; scalable multi-tenancy
69. Open Network Environment (ONE)
Industry's Most Comprehensive Portfolio: hardware + software, physical + virtual, network + compute.
A multi-layer API across the network: programmatic APIs, controllers and agents, and virtual overlays (with controller), with applications on top.
70. ONE Programming Kit (onePK)
• Applications that YOU create run against any Cisco router or switch through onePK
• A flexible development environment to innovate, extend, automate, customize, enhance, and modify
71. Open Network Environment – Flexibility to Choose
[Chart axes: richness of features vs. ability to span layers]
• Data path: packet classifiers, marking, copy/punt, inject, statistics
• Quantum API: interface descriptions; L2 network provisioning; L3 and IP address mgmt (coming)
• Element: element capabilities, configuration management, interface/ports, events, location information
• Utilities: syslog events and queries, AAA interface, NetFlow events, DHCP events
• Discovery: network element discovery, service discovery, topology discovery
• Developer: debug capabilities, tracing interfaces, management extensions; developer portal, ISVs, training & certification
• Policy: interface policy, feature policy, forwarding policy, flow-action policy
• Routing: protocol change events, RIB table queries
72. OpenDaylight and Cisco
• Extension to Cisco ONE Controller
• Open Source version of this Controller
• Wide Vendor Backing
73. Public Cloud for DC Services
• Greatly Simplified Drawing
• Applications Need to Consume these APIs
74. InterCloud Characteristics
[Diagram: the data center / private cloud extends to a virtual private cloud consuming provider cloud services]
Benefits: Network consistency, security consistency, policy consistency
Operating model: Do-it-yourself or provider-managed service
Use cases: Bursting, DR, upgrade/migration
75. InterCloud with Nexus 1000V
[Diagram: Nexus 1000V InterCloud builds a secure L2 virtual private cloud from the enterprise cloud (private/managed/hosted) into the provider cloud (any hypervisor), alongside other tenants, for a secure hybrid cloud with consistent policy, management & operation; VMs sit behind Nexus 1000V vSwitches with VSG at each end]
76. InterCloud with Nexus 1000V
TRADITIONAL INDUSTRY APPROACH (roughly ten steps): create template from VM image → create VM instance from template → document network and L4-7 policy → shut down app → export VM → convert VM to provider format → start VM in cloud → deploy site-to-site tunnel → re-configure provider security → reconfigure network policies.
CISCO APPROACH: select VM to migrate → select destination cloud → migrate VM.
• Simplified Operations
• Rapid Provisioning
• Accelerated Time-to-Market
77. IT and Service Fulfillment – Brokering and Delivery
• Coordination of the
Business Users
Needs
• Where things are
physically residing
defined within SLA at
service request time
Current: the user and IT each reach PUBLIC (SaaS) and PRIVATE resources separately.
Future: IT as business partner and broker of services, offering a service portfolio (App 1, App 2, App 3) across SaaS, hybrid, and IaaS, and acting as provider of infrastructure.
78. OpenStack and Cisco
• Cisco plug-in: UCS, Nexus, OverDrive; extensions for QoS
• Nicira plug-in: Open vSwitch
• Other plug-ins
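Quantum (later renamed Neutron) exposes networks as REST resources, and a plug-in such as Cisco's maps those calls onto the hardware. A sketch of the JSON body a client would POST to create a network (the payload shape follows the Quantum v2 API; the endpoint URL is illustrative):

```python
import json

QUANTUM_ENDPOINT = "http://controller:9696/v2.0/networks"  # illustrative URL

def network_create_body(name, shared=False):
    """Build the JSON body for a Quantum/Neutron network-create call."""
    return {"network": {"name": name,
                        "admin_state_up": True,
                        "shared": shared}}

def as_json(name, **kwargs):
    """Serialize the request the way an HTTP client would send it."""
    return json.dumps(network_create_body(name, **kwargs))
```

Whichever plug-in is loaded, the client-facing request stays the same; only the back-end realization (UCS/Nexus, Open vSwitch, or other) changes.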
80. Cisco Evolution of the Data Center
Physical (Server Centric) → Virtual (VM Centric) → Programmable (Application Centric):
• Unified Fabric: converge LAN and SAN, physical and virtual
• Unified Fabric + Unified Computing: integrate compute and storage to create pools of resources
• Unified Fabric + Unified Computing + Unified Management (Cisco ONE): open API and programmable interface into the fabric
[Diagram: each stage shows applications over compute and storage, increasingly virtualized (VMs), attached to LAN and SAN]
81. Cisco Evolution of the Data Center
• Physical (Server Centric): Cisco Networking; Unified Fabric with Nexus
• Virtual (VM Centric): Nexus 1000V; Unified Computing System; FabricPath and OTV
• Programmable (Application Centric): Cisco Cloupia and Intelligent Automation; Nexus 1000V InterCloud; Cisco ONE
82. The Evolving DC - Summary Points (Vision->Action)
• Look at Adding Capacity and Consumption while Minimizing Human Involvement
• Create a Standardized and Stateless Infrastructure
• Add Programmatic Capacity to this Infrastructure
One model is extensive software libraries and APIs on low-cost, feature-poor hardware
Another is optimized hardware with these APIs for infrastructure programmability
De-couple the controller from the “Merchant Silicon” idea
• Separation of Control and Data Planes is not Enough
Multiples of Each over a Secure Unified Infrastructure
• Pre-Integrated Stacks that Include the Application are Growing Rapidly
The OS may not always be coupled with a VM Service Container Construct
• Service Location is a Service Level Metric – Not a Roadblock
Virtualized Servers can be another SLA item
• Reduction in the Concept of an Outage Window
Drive to Zero but with full Maintenance Capacity In-Hours
• Elimination of Layers between Business Need definition to Provisioning and Delivery