3. Hitachi & Brocade have partnered to enable their customers' cloud strategies with joint solutions that offer:
- Highest levels of performance, scalability, reliability, and efficiency
- Tested and proven architectures and technology
7. End-to-End Networking for Virtualization
[Diagram] Server connectivity: a VDX 6720 Ethernet fabric (10Gb DCB carrying IP/NAS/FCoE/iSCSI, with NAS/iSCSI to storage) alongside a DCX 8510 / 6510 / 6505 Fibre Channel fabric (8Gb/16Gb FC, back-end FC).
11. Virtual Machine Mobility
CHALLENGES TODAY
- Sphere of mobility limited to a single rack by L3 boundaries and STP
- VM migration can break network/application access
- Mapping services to all physical ports undermines network and security best practices
- Distributed virtual switch consumes server resources
- Limited insight into where VMs are running
12. VM Mobility with VDX 6746 & VCS
AUTOMATIC MIGRATION OF PORT PROFILES (AMPP)
- Distributed intelligence of VM location throughout the VCS fabric
- Granular VLAN, ACL, and QoS policies assigned per MAC ID
- Zero network reconfiguration required when a VM is moved across the VCS fabric
[Diagram] Two VDX 6746 switches in a VCS fabric tracking VM MAC IDs.
13. Virtual Cluster Switching (VCS) Technology
Brocade is strongly promoting the Ethernet fabric, a new network architecture for evolving virtualization/cloud environments:
- Logically flat L2 network: self-aggregating, flexible topology, optimized for large-scale virtualization
- High throughput and performance: eliminates STP, uses multiple paths, automatically selects the shortest path for traffic, and scales bandwidth automatically
- High availability and scalability: self-healing fabric with fault isolation
- Automatic, simple administration: switches recognize one another, can be added or removed without configuration, and several switches are managed as a single chassis
- Coordination with server virtualization: port configuration follows VM migration, with automatic discovery of devices and automatic creation and configuration of port profiles
- Convergence of LAN and SAN: supports conventional Ethernet and the new DCB protocol
14. New 10GE DCB Switch Module
Industry's first Ethernet fabric-enabled embedded 10Gb DCB switch module: the Brocade VDX 6746
- Flexible connectivity options for cloud architectures: enhance hierarchical network architectures, deploy flatter scale-out fabrics, or converge networks
Brocade VDX 6746 supports:
- 24 x 10GE ports (16 internal, 8 external)
- Non-blocking, cut-through, wire-speed architecture
- Dual-speed (1GE/10GE) external ports
- 1Gb RJ-45 connectors and 10Gb twinax copper or optical connectors
- Brocade Virtual Cluster Switching (VCS)
- DCB and multi-hop FCoE
- Automatic Migration of Port Profiles (AMPP)
- Rich set of Layer 2 features
- Hitachi Chassis Management support
15. Single Rack Use Case
[Diagram] A Brocade embedded 10GbE DCB IP switch and a Brocade embedded 8Gbps FC switch connect the chassis to the LAN, FC SAN (A), and FC SAN (B), with top-of-rack VDX 6720s carrying NAS/iSCSI/FCoE traffic.
16. VCS and FCoE Use Cases
- A VCS license allows creation of an Ethernet fabric that scales past two switches
- Creation of a VCS fabric between two embedded switches is free
- An FCoE license enables single- and multi-hop FCoE
[Diagram] Server blades in a CB500 chassis with VDX 6746 embedded switches form a logical VCS cluster (vLAG), uplinked to VDX 6720 top-of-rack switches reaching FCoE storage (single- and multi-hop), NAS, iSCSI DCB, and FC storage.
17. Hitachi and Brocade SAN Solutions
LONG-STANDING OEM PARTNERSHIP
Highly available and scalable solutions providing cloud-optimized performance and unified network management:
- SAN consolidation, virtualization, and converged data center solutions (DCX 8510 family; virtual storage SAN hosting VMs)
- FICON infrastructure (z196 server)
- Business continuity/DR (Brocade DCX 8510 over IP WAN)
- Data-at-rest encryption (BES or FS8-18 encrypting clear text to the LUN)
18. Hitachi Converged Platform for Oracle Database
Pre-tested, integrated open systems for Oracle databases
- High-performance architecture balances servers and storage
- Tailored approach uses best-of-breed components
- Open and flexible support for multiple OS, DB, and VM versions
- Tight Oracle management integration via adapters
- Cost-effective modular solution
[Diagram] Brocade 5300 SAN Switch, dense expansion trays, Hitachi Unified Storage, Hitachi Compute Blade 2000 with X57A1 blades.
HDS property. Not for External Distribution.
20. HNAS Open Compute Block
- Competitive answer to NetApp FlexPod
- Leverages a test bed to test and validate several application use cases
- VMware VDI will be the first application to be tested, with a target completion for HDS' SKO
- GTM focus will be aimed at channel partners
22. Campus Deployment Example
[Diagram] VDI clients over 1GbE connect to ICXs/FCXs in wiring closets on each floor; 10GbE uplinks feed LAN aggregation; in the data center, the HDS HNAS VDI Solution POD* (compute + HNAS + Brocade networking + storage) serves the campus over 1GbE and 10GbE links.
Brocade Data Center Fabrics are already powering a number of Hitachi cloud architectures, such as the Hitachi Converged Data Center Solutions for Microsoft Hyper-V Cloud Fast Track, Exchange 2010, Oracle Databases, and Video Surveillance, as well as Hitachi Infrastructure-as-a-Service Managed Services. Brocade Fibre Channel and Ethernet networking products are also embedded within Hitachi's Compute Blade platforms and perfectly complement Hitachi Data Systems' entire storage portfolio, from File & Content to Block. Please review the whitepaper developed by Enterprise Strategy Group on "Converged Data Center Solutions" at brocade.com/hds.
Brocade Data Center Fabrics are already powering a number of Hitachi cloud architectures, such as the Hitachi Converged Data Center Solution for Microsoft Hyper-V Cloud Fast Track and Exchange 2010, and the Hitachi Infrastructure-as-a-Service Managed Services that Sean Moser mentioned in his video. Brocade Fibre Channel and Ethernet networking technologies are also embedded in Hitachi Content and NAS platforms. Please review the whitepaper developed by Enterprise Strategy Group on "Converged Data Center Solutions" at brocade.com/hds.
So let's look at how a software-defined virtual network, or a software overlay, might work over a classic network. Well, first off, you may have to deal with spanning tree. Why is this bad? Passive 1G links may have been fine in the past, but passive 10G links? What about passive 40G links? Or 100G links? And what about multi-tenancy? If you only have one path, then all your software networks and overlays will be traveling on the same road. Imagine if there were only one freeway to every location, and all the others were shut down "to avoid confusion." This is the nature of the active/passive network used in most data centers today. Lose a device? Now all traffic has to move to the next-best path; routing may need to reconverge, and the failover time might impact user experience. Now replace that device. How much manual configuration is needed on the new device, and how much on the adjacent devices it needs to talk to? How many humans might you need to do all this in parallel? What about an overloaded link? What then? The same old story as 1999: humans go out to the data center, get on the box at each end of the overloaded link, configure LAGs on both sides, add the needed cables, and test and verify functionality. And you need downtime for this event, because traffic on the original link may be disrupted when adding the new links. So before users get relief from a slow connection, they might lose all connections. If your goal was to give users an amazing always-on, dynamic, smooth, and fast network that adapts to changes, you have not given them the best impression of the promise of SDN.
Now this is more like it. TRILL brings you active/active paths: now all your links are active. Every freeway open and clear! And it gets better, because you also have ECMP, Equal-Cost Multi-Pathing. This means that if you have 20 possible paths from one location to another, the network will automatically pick the best 16 paths and load-balance your traffic over them. If you are running this over a Brocade Ethernet fabric, it gets even better. Let's say you connect 4 cables between two Brocade switches. Those switches are going to figure out which ASIC port group each cable is on, and then measure the lengths of those cables. If two links are found to be in the same ASIC group on both switches and are the same length, Brocade Trunking engages, giving you not the flow-based 5-tuple load balancing of ECMP, but frame-based, nearly perfect distribution of traffic. If it finds 4, or 6, or 8? You can end up with a lot of very clean ASIC-based, frame-level weaving. And if you have 16 that are all optimal? You will get 2 Brocade Trunks of 8 links each, and then an ECMP balance between the two trunk groups! ECMP and Brocade Trunking work together perfectly. Oh, and did I mention this is all automatic? No configuration needed! You don't even need to log into the switch! These switches are of the "smarter than the average bear" variety. What about replacing a bad switch? No problem. Wire up the new device, power it up, and you're done. The new device will automatically join the fabric, automatically configure all the trunks and LAGs to give you the best performance, and download any other information needed to be part of the fabric. Configuration? Not really much to do. LAGs? Automatic. Trunks? Automatic. Moving a port profile to a new port because a VM moved? Automatic. Have a book you've been wanting to read but have been too busy taking care of your network? Fabrics are good for book clubs.
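The flow-based load balancing mentioned above can be sketched in a few lines. This is a minimal illustration of the general 5-tuple ECMP idea, not Brocade's actual hash function: every frame of one flow hashes to the same path (so frames are never reordered), while different flows spread across up to 16 equal-cost paths.

```python
# Illustrative sketch of flow-based ECMP (not Brocade's real algorithm):
# hash the 5-tuple so one flow always follows one path, while different
# flows spread across the available equal-cost paths.
import hashlib

MAX_ECMP_PATHS = 16  # per the text, at most 16 equal-cost paths are used

def ecmp_path(src_ip, dst_ip, proto, src_port, dst_port, num_paths):
    """Map a flow's 5-tuple to one of the available equal-cost paths."""
    num_paths = min(num_paths, MAX_ECMP_PATHS)
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_paths

# Two different flows may land on different paths...
a = ecmp_path("10.0.0.1", "10.0.1.9", "tcp", 40001, 80, 20)
b = ecmp_path("10.0.0.2", "10.0.1.9", "tcp", 40002, 80, 20)
# ...but the same flow always hashes to the same path (no reordering).
assert a == ecmp_path("10.0.0.1", "10.0.1.9", "tcp", 40001, 80, 20)
```

Note how 20 candidate paths are capped at 16, matching the behavior described above; Brocade Trunking then does frame-level spraying *within* a trunk group, which no flow hash can match.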
So you need to add 20G of bandwidth to a saturated 10G link? Just plug in the cables. That's all! The switches will figure out the rest for you, and it's so smooth you won't even need downtime. Intelligence, real network intelligence. Things like AMPP give you MAC-based port profiles that are automatically distributed to all switches and follow clients on their own. The ASICs will automatically try to build as many Brocade Trunks as possible. VCS will take care of teaching a new device how to be part of the network. The eNS (Ethernet Name Server) will distribute and sync MAC tables between all the switches so every switch knows where every device is. TRILL and ECMP will give you zero-configuration routing and active/active paths that can use up to 16 paths for that content distribution server. All of this, automagic. So the SDN sitting on top of all this? Smooth and steady. It has all the possible paths to take advantage of, can move flows around with the widest selection of paths, and can distribute flows from multiple users over many different paths. Users should see nearly permanent uptime and feel as if they are the only ones on the network. Fabrics give SDNs a nimble and agile foundation to match the nimble and agile nature of SDNs.
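The eNS behavior described above can be sketched as follows. This is a toy model of the *idea* (one switch learns a MAC, the entry is synced so every fabric member knows where the device lives); the class and method names are hypothetical, not a Brocade API.

```python
# Toy model of the eNS idea: a locally learned MAC entry is distributed
# to every switch in the fabric. Names are illustrative, not Brocade's.
class FabricSwitch:
    def __init__(self, rbridge_id, fabric):
        self.rbridge_id = rbridge_id
        self.mac_table = {}          # MAC -> (rbridge_id, port)
        self.fabric = fabric
        fabric.append(self)          # joining the fabric is automatic

    def learn(self, mac, port):
        """Learn a MAC locally, then sync the entry fabric-wide."""
        entry = (self.rbridge_id, port)
        for switch in self.fabric:   # eNS-style distribution
            switch.mac_table[mac] = entry

fabric = []
s1, s2, s3 = (FabricSwitch(i, fabric) for i in (1, 2, 3))
s1.learn("00:05:1e:aa:bb:cc", port=7)
# Every switch now knows the device hangs off switch 1, port 7.
assert s3.mac_table["00:05:1e:aa:bb:cc"] == (1, 7)
```

Because every switch holds the same table, traffic destined for that MAC can be forwarded toward switch 1 from anywhere in the fabric without flooding.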
Key Points
VM mobility can occur within a cluster of physical servers that are in the same IP subnet and Ethernet VLAN. As described in the review of STP limitations, the sphere of VM migration can be further constrained. The solution for flexible VM mobility is a more scalable and available Layer 2 network with higher network bandwidth utilization.
For a VM to migrate from one server to another, many server attributes must be the same on the origination and destination servers. This extends into the network as well, requiring VLAN, Access Control List (ACL), Quality of Service (QoS), and security profiles to be the same on both the source and destination access switch ports. If switch port configurations differ, either the migration pre-flight will fail or network access for the VM will break. Organizations could map all settings to all network ports, but that would violate most networking and security best practices. The distributed virtual switch in vSphere 4 addresses some of these issues, but at the cost of consuming physical server resources for switching, added complexity in administering network policies at multiple switch tiers, and a lack of consistent security enforcement for VM-to-VM traffic.
And we’re going to connect that VM system to another switch. Because the MAC address is already approved, its presence is accepted within the fabric. Now let’s show what happens when Webserver 1 is unplugged for maintenance. Let’s remember that human error causes 70% of all problems on the network. So the technician working on System 1 goofs. This system now becomes a rogue server. When the technician tries to plug Webserver 1 back in, Switch #4 doesn’t recognize it. In this way, you can keep problems associated with one system from causing problems elsewhere on the network. But when the problem is fixed, you can enjoy the benefits of the Brocade automatic migration port profile.
The Brocade VDX 6746 Switch is a state-of-the-art 10 Gigabit Ethernet embedded switch for the Hitachi CB500 blade server platform. It enables the Hitachi CB500 to support flexible connectivity options for cloud architectures, including lossless Ethernet fabrics. The VDX 6746 gives CB500 customers the choice to enhance their hierarchical network architectures, deploy flatter scale-out fabrics, or converge networks when deploying virtualization and cloud IT infrastructures. It is designed to increase scalability and enhance VM mobility, further simplifying management and significantly reducing operational costs.
- Designed for the Hitachi CB500, the VDX 6746 is a 10 Gigabit Ethernet embedded switch with 16 internal ports and 8 external ports. The embedded design dramatically reduces cabling, power, and cooling requirements compared to external stand-alone switches.
- The wire-speed switch with non-blocking cut-through architecture provides industry-leading performance and ultra-low latency.
- Dual-speed (1Gbps/10Gbps) capable external ports are well suited for data centers migrating to next-generation 10Gbps high-performance architectures.
- Provides flexible connectivity options, including 1Gbps RJ-45 connectors and 10Gbps twinax copper or optical connectors.
- Simplifies network architectures and enables cloud computing by delivering Brocade Virtual Cluster Switching (VCS) technology and enabling Ethernet fabrics. VCS technology deploys scale-out fabrics instead of a hierarchical network to flatten the network design, and manages the entire fabric as a single logical chassis to reduce complexity. Compared to classic Ethernet architectures, Ethernet fabrics allow all paths to be active, increasing network performance, utilization, and resiliency.
- Data Center Bridging and multi-hop FCoE capabilities enable lossless unified storage connectivity, plus storage and LAN traffic convergence, to reduce connectivity cost.
- The Automatic Migration of Port Profiles (AMPP) feature simplifies virtualized server management by enabling seamless Virtual Machine (VM) mobility.
- Offers a rich set of Layer 2 features and can be deployed into classic 1Gb and 10Gb architectures, preserving existing network designs and cabling.
- Integrates with Hitachi Chassis Management software, enabling end-to-end management of the CB500.
Hitachi and Brocade have a long-standing OEM partnership providing highly available and scalable Fibre Channel SAN solutions with unique capabilities. Brocade and Hitachi have installed over 600,000 SAN ports worldwide and over 1,500 directors in mission-critical environments. Brocade's SAN products are a perfect complement to Hitachi's Unified Storage and Virtual Storage Platforms. For example: Brocade's new inter-chassis link technology enables a greater degree of SAN consolidation for large Hitachi SANs through increased port density and bandwidth, and combining Brocade's virtualization-optimized, adapter-to-storage connectivity and management solutions with Hitachi's storage virtualization capabilities enables end-to-end virtualization.
Hitachi Converged Platform for Oracle Database
- High performance: a balanced server and storage platform with higher performance over existing systems. Using SSD I/O cards, the database can scale based on the cards' abilities.
- RAS: enterprise-class reliability, availability, and scalability.
- Open architecture: supports new and legacy Oracle applications (9i, 10g, and 11g), as well as multiple operating system and virtual machine versions.
- Flexibility: deploy Oracle and non-Oracle workloads on the Linux or Windows flavor of choice; small, medium, and large configurations are suggested for ease of ordering.
- Ease of management: tight integration with the Oracle stack via Oracle Enterprise Manager, Recovery Manager, and Virtual Machine adapters from HDS.
- Cost effectiveness: avoid vendor lock-in and pay only for needed performance. (Note: HDS does not resell Oracle software; none is included in this platform.)
Oracle Exadata
- High performance for extreme workloads: ideal for very large databases.
- Unique technology and tight integration: smart flash cache, smart scan, interleaved grid disks, hybrid columnar compression.
- Single point of service and support.
Disadvantages
- Closed, proprietary platform: only Oracle 11g.
- Inflexible: only Oracle Linux.
- Expensive: often pay for more than what is needed.
- Vendor lock-in.
Legacy IP infrastructure is causing performance issues on the front end. This is an IP opportunity for HDS to deliver high-performance Brocade Ethernet products.
Ethernet Fabric
Brocade pioneered the development, architecture, and deployment of network fabric technology in the data center. Brocade's SAN fabric technology is successfully proven in over 90% of Global 1000 data centers. Now Brocade is bringing the same level of innovation to the data center LAN, combining Ethernet and Brocade fabric technology. STP is not necessary, because the Ethernet fabric appears as a single logical switch to connected servers, devices, and the rest of the network. The Ethernet fabric is an advanced multi-path network utilizing an emerging standard called TRILL (Transparent Interconnection of Lots of Links). Unlike STP, with TRILL all paths in the network are active, and traffic is distributed across those equal-cost paths automatically. In this optimized environment, traffic automatically takes the shortest path for minimum latency, without any manual configuration. Events like added, removed, or failed links are not disruptive to the Ethernet fabric and do not require all traffic in the fabric to stop. If a single link fails, traffic is automatically rerouted to other available paths in under a second. Single component failures do not require the entire fabric topology to reconverge, ensuring all traffic is not affected by an isolated issue. The fabric is lossless and low latency. The Ethernet fabric is designed to include advanced Ethernet technology for higher utilization and greater performance, and to be network-convergence ready. With Data Center Bridging (DCB) capabilities built in, the Ethernet fabric is lossless, making it ideal for FCoE and iSCSI storage traffic and enabling LAN and SAN convergence for Tier 2 and 3 applications.
Distributed Intelligence
With VCS, all configuration and end-device information is automatically distributed to each member switch in the fabric. The Ethernet fabric is self-forming.
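The shortest-path and reroute-on-failure behavior described above can be sketched with a plain hop-count search. This is a minimal teaching sketch: real TRILL fabrics compute paths with a link-state protocol (IS-IS), not BFS, and the switch names here are invented.

```python
# Minimal sketch of the fabric behavior: traffic takes the shortest path,
# and when a link fails the path is recomputed over the remaining links.
# Plain BFS on hop count; real TRILL uses a link-state protocol (IS-IS).
from collections import deque

def shortest_path(links, src, dst):
    """Breadth-first search: fewest hops from src to dst, or None."""
    prev, seen = {}, {src}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:                      # walk back to recover the path
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for a, b in links:
            for nxt in ((b,) if a == node else (a,) if b == node else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    prev[nxt] = node
                    queue.append(nxt)
    return None

# A small four-switch fabric with two equal-cost routes from s1 to s4.
links = {("s1", "s2"), ("s2", "s4"), ("s1", "s3"), ("s3", "s4")}
assert shortest_path(links, "s1", "s4") in (["s1", "s2", "s4"], ["s1", "s3", "s4"])
# A link fails: traffic is automatically rerouted over the surviving path.
links.discard(("s2", "s4"))
assert shortest_path(links, "s1", "s4") == ["s1", "s3", "s4"]
```

Only the affected path changes when the link is removed; the rest of the topology (and any traffic not using that link) is untouched, which is the "no full reconvergence" property the text describes.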
When two VCS-enabled switches are connected, the fabric is automatically created and the switches learn the common fabric configuration. The Ethernet fabric does not dictate any specific topology, so it does not restrict over-subscription ratios; this allows the architect to create a topology that best meets application requirements. The fabric is aware of all members, devices, and VMs. When a server connects to the fabric for the first time, all switches in the fabric learn about that server. This allows fabric switches to be added or removed, and physical or virtual servers to be relocated, without the fabric needing to be manually reconfigured. Unlike switch stacking technologies, the Ethernet fabric is masterless: no single switch stores configuration information or controls fabric operations. Distributed intelligence supports a more virtualized access layer. Instead of distributing software switch functionality into the virtualization hypervisor, access layer switching is done in the switch hardware, improving performance, ensuring consistent and correct security policies, and simplifying network operations and management. Automatic Migration of Port Profiles (AMPP) supports VM migration to another physical server, ensuring that the source and destination network ports have the same configuration for the VM. This is a key technology that helps enable Brocade Virtual Access Layer (VAL) capabilities.
Logical Chassis
All switches in an Ethernet fabric are managed as if they were a single logical chassis. To the rest of the network, the fabric looks no different than any other Layer 2 switch; the network sees the fabric as a single switch, whether the fabric contains as few as 48 ports or thousands of ports. The Ethernet fabric is designed to scale to over 1,000 ports per logical chassis. Consequently, VCS removes the need for separate aggregation switches, because the fabric is self-aggregating.
This enables the network architecture to be flattened, dramatically reducing cost and management complexity. Each physical switch in the fabric is managed as if it were a port module in a chassis, which allows for fabric scalability without manual configuration: when you add a port module to a chassis, you do not have to configure that module, and a switch can be added to the Ethernet fabric just as easily. The logical chassis functionality drastically reduces the management burden of small-form-factor edge switches. Instead of managing each top-of-rack switch or blade server chassis switch individually, they are managed as one logical chassis.