



NETWORKING REDEFINED



                                                          eGuide
We’re living in an era of server consolidation, virtualization, green initiatives and cloud computing—initiatives throwing the data center network into a state of flux. Is legacy infrastructure, typically comprising multiple switching tiers running proprietary protocols, capable of handling next-generation, dynamic application demands? Or is it time for a network overhaul built on the concepts of open, virtual switching, unified fabrics and bandwidths of 10 Gigabit Ethernet and beyond? In these articles, Network World examines how the data center network is evolving into a more simplified, open infrastructure.




IN THIS eGUIDE
2  Data Center Derby Heats Up
   Handicapping the crowded field, from the odds-on favorites to the long shots

5  10G Ethernet Shakes Net Design to the Core
   Shift from three- to two-tier architectures driven by need for speed, server virtualization, unified switching fabrics

8  Remaking the Data Center
   Low-latency switches are the foundation for building a unified-fabric data center

13 Standards for Soothing Headaches in the Data Center
   Emerging IEEE specifications aim to address serious management issues raised by the explosion of virtual machines

16 A Bridge to Terabit Ethernet
   With 40/100G Ethernet products on the way, Ethernet experts look ahead to Terabit Ethernet standards and products by 2015

20 Data Center as Ethernet Switch Driver
   How next-generation data center initiatives shape the LAN switching market

22 Networking Resources








  DATA CENTER DERBY HEATS UP
  By Beth Schultz • Network World


Handicapping the crowded field, from the odds-on favorites to the long shots

Network thoroughbred Cisco jumps into the blade server market. Server stallion HP adds security blades to its ProCurve switches. IBM teams up with Brocade. Oracle buys Sun. And everybody courts that prize filly VMware.

In this era of server consolidation and virtualization, green initiatives and cloud computing, the data center is in flux and all the major vendors are jockeying for position, galloping in with new products, strategies and alliances.

“What you see right now is everybody shoring up and getting as many offerings as they can to provide all the hardware in the data center. Cisco, for example, wants to make it so you can be a complete Cisco shop, including all your servers,” says Mitchell Ashley, principal consultant with Converging Networks and a Network World blogger.

Cisco’s blade servers are part of its data center platform, called the Unified Computing System (UCS), which includes storage, network and virtualization resources. Cisco’s platform includes VMware’s vSphere technology and partnerships with BMC Software, EMC, Intel, Microsoft and Oracle.

But Cisco’s entry into the data center fray has kicked up some dust among its longtime server partners HP and IBM, and forced all of the major players to respond in some way. “Cisco has been so successful in the network space, all the other vendors have to take it seriously at the data center level,” says Anne Skamarock, a research director at Focus Consulting.

The resultant flurry of activity has included:

• HP releasing the BladeSystem Matrix, a converged software, server, storage and network platform.
• IBM deepening its relationship with Brocade, deciding to sell Brocade’s Foundry switches and routers under the IBM banner.
• Juniper unveiling Stratus Project, a multiyear undertaking through which it will partner with server, storage and software companies to develop a converged data center fabric.
• Oracle buying Sun for its hardware and software, then grabbing Virtual Iron for its Xen-based hypervisor.

“Everything is pointing to a unified fabric,” says John Turner, director of network and systems at Brandeis University in Waltham, Mass.

“We’re in a transition, and it’s very important not to just buy who you bought from before. This is a great time to evaluate your vendors, ask about long-term road maps and partnerships, see how integrated they are,” says Yankee Group analyst Zeus Kerravala. “I wouldn’t make any decisions hastily if I were in IT.”







This industry shakeup also could provide an opportunity for some long-shot vendors to make a move on the leaders. Kerravala puts Brocade in this category because of its storage and network strengths, Citrix Systems for virtualization, F5 Networks for networking, and Liquid Computing for fabric computing. “These could be the dark horses,” he says.

Turner agrees that opportunities are available for the right vendors. “I’m happy with my Cisco network. I’m thrilled with it. No, I’m wowed by it. But that doesn’t mean there isn’t an opportunity for another vendor to come in, pique my interest, gain my respect and get in here,” Turner says. “This is an opportunity to take a big leap. Companies are going to be doing big refreshes.”

These changing times for IT infrastructure require an open mind, says Philip Buckley-Mellor, a designer with BT Vision, a provider of digital TV service in London. Yet Buckley-Mellor admits he can’t imagine BT Vision’s future data center without HP at the core.

Buckley-Mellor expects most of Vision’s data center operations to run on HP’s latest blades, the Intel Nehalem multicore processor-based G6 servers. The infrastructure will be virtualized using VMware as needed. HP’s Virtual Connect, a BladeSystem management tool, is an imperative.

“The ability to use Virtual Connect to re-patch our resources with networks and storage live, without impacting any other service, without having to send guys out to site, without having the risk of broken fibers, has shaved at least 50%, and potentially 60% to 70%, off the time it takes to deploy a new server or change the configuration of existing servers,” Buckley-Mellor says.

Within another year or so, he expects Vision to move to a Matrix-like orchestrated provisioning system. The HP BladeSystem Matrix packages and integrates servers, networking, storage, software infrastructure and orchestration in a single platform.

“We already have most of the Matrix pieces ... so orchestrating new servers into place is the next logical step,” Buckley-Mellor says.

Place your wagers

Gartner analyst George Weiss says Cisco and HP unified compute platforms run pretty much neck and neck. However, IBM, HP’s traditional blade nemesis in the data center, has more work to do in creating the fabric over which the resources are assembled, he adds.

“IBM can do storage, and the server component in blades, and the networking part through Cisco or Brocade, so from a user perspective, it seems a fairly integrated type of architecture. But it’s not as componentized as what Cisco and HP have,” Weiss says.






“But with Virtual Connect and networking solutions like ProCurve [switches], and virtualization software, virtualization management, blade-based architecture, all of the elements Cisco is delivering are within HP’s grasp and to a large extent HP already delivers. It may not be everything, but there may be things HP delivers that Cisco doesn’t, like a command of storage management,” he explains.

Buckley-Mellor sees one technology area in which Cisco is a step ahead of HP—converged networking, a la Fibre Channel over Ethernet (FCoE). Cisco’s Nexus 7000 data center switch supports this ANSI protocol for converging storage and networking, and the UCS will feature FCoE interconnect switches.

“There are no two ways about it, we’re very interested in converged networking,” Buckley-Mellor says. Still, he’s not too worried. “That technology needs to mature and I’m sure HP will be there with a stable product at the right time for us. In the meantime, Virtual Connect works great and saves me an ocean of time,” he adds.

All this is not to say that Cisco and HP are the only horses in the race for the next-generation data center. But they, as well as companies like IBM and Microsoft—each of which has its own next-generation data center strategy—will have leads because they’ve already got deep customer relationships.

“IT organizations will look to vendors for their strategies and determine how they’ll utilize those capabilities vs. going out and exploring everything on the market and figuring out what new things they’ll try and which they’ll buy,” Ashley says.

Cover your bets

In planning for their next-generation data centers, IT executives should minimize the number of vendors they’ll be working with. At the same time, it’s unrealistic to not consider a multivendor approach from the get-go, says Andreas Antonopoulos, an analyst with Nemertes Research.

“They’ll never be able to reduce everything down to one vendor, so unless they’ve got a multivendor strategy for integration, they’re going to end up with all these distinct islands, and that will limit flexibility,” he says.

He espouses viewing the new data center in terms of orchestration, not integration.

“Because we’ll have these massive dependencies among servers, network and storage, we need to make sure we can run these as systems and not individual elements. We have to be able to coordinate activities, like provisioning and scaling, across the three domains. We have to keep them operating together to achieve business goals,” Antonopoulos says.

From that perspective, a unified compute-network-storage platform makes sense—one way to get orchestration is to have as many resources as possible from a single vendor, he says. “Problem is, you can only achieve that within small islands of IT or at small IT organizations. Once you get to a dozen or more servers, chances are even if you bought them at the same time from the same vendor, they’ll have some differences,” he adds.

Skamarock equates these emerging unified data center platforms to the mainframes of old. “With the mainframe, IT had control over just about every component. That kind of control allows you to do and make assumptions that you can’t when you have a more distributed, multi-vendor environment.”

That means every vendor in this race needs to continue to build partnerships and build out their ecosystems, especially in the management arena. •

Schultz is a longtime IT writer and editor. You can reach her at bschultz5824@gmail.com.






  10G ETHERNET SHAKES NET DESIGN TO THE CORE
  By Jim Duffy • Network World


Shift from three- to two-tier architectures driven by need for speed, server virtualization, unified switching fabrics

The emergence of 10 Gigabit Ethernet, virtualization and unified switching fabrics is ushering in a major shift in data center network design: three-tier switching architectures are being collapsed into two-tier ones.

Higher, non-blocking throughput from 10G Ethernet switches allows users to connect server racks and top-of-rack switches directly to the core network, obviating the need for an aggregation layer. Also, server virtualization is putting more application load on fewer servers due to the ability to decouple applications and operating systems from physical hardware.

More application load on less server hardware requires a higher-performance network.

Moreover, the migration to a unified fabric that converges storage protocols onto Ethernet also requires a very low-latency, lossless architecture that lends itself to a two-tier approach. Storage traffic cannot tolerate the buffering and latency of extra switch hops through a three-tier architecture that includes a layer of aggregation switching, industry experts say.

All of this necessitates a new breed of high-performance, low-latency, non-blocking 10G Ethernet switches now hitting the market. And it won’t be long before these 10G switches are upgraded to 40G and 100G Ethernet switches when those IEEE standards are ratified in mid-2010.

“Over the next few years, the old switching equipment needs to be replaced with faster and more flexible switches,” says Robin Layland of Layland Consulting, an adviser to IT users and vendors. “This time, speed needs to be coupled with lower latency, abandoning spanning tree and support for the new storage protocols. Networking in the data center must evolve to a unified switching fabric.”

A three-tier architecture of access, aggregation and core switches has been common in enterprise networks for the past decade or so. Desktops, printers, servers and LAN-attached devices are connected to access switches, which are then collected into aggregation switches to manage flows and building wiring.

Aggregation switches then connect to core routers/switches that provide routing, connectivity to wide-area network services, segmentation and congestion management. Legacy three-tier architectures naturally have a large Cisco component–specifically, the 10-year-old Catalyst 6500 switch–given the company’s dominance in enterprise and data center switching.

Cisco says a three-tier approach is optimal for segmentation and scale. But the company also supports two-tier architectures should customers demand it.






“We are offering both,” says Cisco Senior Product Manager Thomas Scheibe. “It boils down to what the customer tries to achieve in the network. Each tier adds another two hops, which adds latency; on the flip side it comes down to what domain size you want and how big of a switch fabric you have in your aggregation layer. If the customer wants to have 1,000 10G ports aggregated, you need a two-tier design big enough to do that. If you don’t, you need another tier to do that.”

Blade Network Technologies agrees: “Two-tier vs. three-tier is in large part driven by scale,” says Dan Tuchler, vice president of strategy and product management at Blade Network Technologies, a maker of blade server switches for the data center. “At a certain scale you need to start adding tiers to add aggregation.”

But the latency inherent in a three-tier approach is inadequate for new data center and cloud computing environments that incorporate server virtualization and unified switching fabrics that converge LAN and storage traffic, experts say.

Applications such as storage connectivity, high-performance computing, video, extreme Web 2.0 volumes and the like require unique network attributes, according to Nick Lippis, an adviser to network equipment buyers, suppliers and service providers. Network performance has to be non-blocking, highly reliable and faultless with low and predictable latency for broadcast, multicast and unicast traffic types.

“New applications are demanding predictable performance and latency,” says Jayshree Ullal, CEO of Arista Networks, a privately held maker of low-latency 10G Ethernet top-of-rack switches for the data center. “That’s why the legacy three-tier model doesn’t work because most of the switches are 10:1, 50:1 oversubscribed,” meaning different applications are contending for limited bandwidth, which can degrade response time.

This oversubscription plays a role in the latency of today’s switches in a three-tier data center architecture, which is 50 to 100 microseconds for an application request across the network, Layland says. Cloud and virtualized data center computing with a unified switching fabric requires less than 10 microseconds of latency to function properly, he says.

Part of that requires eliminating the aggregation tier in a data center network, Layland says. But the switches themselves must use less packet buffering and oversubscription, he says.

Most current switches are store-and-forward devices that store data in large buffer queues and then forward it to the destination when it reaches the top of the queue. “The result of all the queues is that it can take 80 microseconds or more to cross a three-tier data center,” he says.

New data centers require cut-through switching–which is not a new concept–to significantly reduce or even eliminate buffering within the switch, Layland says. Cut-through switches can reduce switch-to-switch latency from 15 to 50 microseconds to 2 to 4, he says.

Another factor negating the three-tier approach to data center switching is server virtualization. Adding virtualization to blade or rack-mount servers means that the servers themselves take on the role of access switching in the network.

In some cases virtual switching inside servers takes place in the hypervisor; in others the network fabric is stretched to the rack level using fabric extenders. The result is that the access switching layer has been subsumed into the servers themselves, Lippis notes.

“In this model there is no third tier where traffic has to flow to accommodate server-to-server flows; traffic is either switched at access or in the core at less than 10 microseconds,” he says.
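The 10:1 and 50:1 figures Ullal cites above are oversubscription ratios: the total bandwidth of the downstream ports sharing a switch’s uplinks divided by the uplink bandwidth itself. A minimal sketch, using assumed port counts rather than figures from the article:

# Assumed example: 48 servers at 10G sharing four 10G uplinks toward the core.
def oversubscription(ports, port_gbps, uplinks, uplink_gbps):
    """Ratio of downstream access bandwidth to uplink bandwidth."""
    return (ports * port_gbps) / (uplinks * uplink_gbps)

ratio = oversubscription(ports=48, port_gbps=10, uplinks=4, uplink_gbps=10)
print(f"oversubscribed {ratio:.0f}:1")  # 12:1 -- contention once traffic converges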






Because of increased I/O associated with virtual switching in the server, there is no room for a blocking switch in between the access and the core, says Asaf Somekh, vice president of marketing for Voltaire, a maker of InfiniBand and Ethernet switches for the data center. “It’s problematic to have so many layers.”

FORK IN THE ROAD
Virtualization, inexpensive 10G links and unified Ethernet switching fabrics are catalyzing a migration from three-tier Layer 3 data center switching architectures to flatter two-tier Layer 2 designs that subsume the aggregation layer into the access layer. Proponents say this will decrease cost, optimize operational efficiency, and simplify management. [Diagram: a three-tier design with core, aggregation and access layers alongside a two-tier design with a core layer and a combined access/aggregation layer.]

Another requirement of new data center switches is to eliminate the Ethernet spanning tree algorithm, Layland says. Currently all Layer 2 switches determine the best path from one endpoint to another using the spanning tree algorithm. Only one path is active; the other paths through the fabric to the destination are used only if the best path fails.

The lossless, low-latency requirements of unified fabrics in virtualized data centers require switches that use multiple paths to get traffic to its destination, Layland says. These switches continually monitor potential congestion points and pick the fastest and best path at the time the packet is being sent.

“Spanning tree has worked well since the beginning of Layer 2 networking but the ‘only one path’ [approach] is not good enough in a non-queuing and non-discarding world,” Layland says.

Finally, cost is a key factor in driving two-tier architectures. Ten Gigabit Ethernet ports are inexpensive–about $500, or twice that of Gigabit Ethernet ports, yet with 10 times the bandwidth. Virtualization allows fewer servers to process more applications, thereby eliminating the need to acquire more servers.

And a unified fabric means a server does not need separate adapters and interfaces for LAN and storage traffic. Combining both on the same network can reduce the number and cost of interface adapters by half, Layland notes.

And by eliminating the need for an aggregation layer of switching, there are fewer switches to operate, support, maintain and manage.

“If you have switches with adequate capacity and you’ve got the right ratio of input ports to trunks, you don’t need the aggregation layer,” says Joe Skorupa, a Gartner analyst. “What you’re doing is adding a lot of complexity and a lot of cost, extra heat and harder troubleshooting for marginal value at best.” •
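To picture the single-active-path limitation Layland describes above, here is a minimal Python sketch; the path names and queue depths are invented for illustration. Spanning tree leaves exactly one usable path, while a multipath fabric watches congestion on every path and picks the least loaded one per packet.

# Illustrative only: three equal-cost paths between access switches A and B.
paths = {
    "A-core1-B": {"active_in_spanning_tree": True,  "queued_packets": 40},
    "A-core2-B": {"active_in_spanning_tree": False, "queued_packets": 3},
    "A-core3-B": {"active_in_spanning_tree": False, "queued_packets": 11},
}

# Spanning tree: only the single active path may carry traffic, congested or not.
stp_choice = next(p for p, info in paths.items() if info["active_in_spanning_tree"])

# Multipath fabric: monitor queue depth everywhere and take the emptiest path.
fabric_choice = min(paths, key=lambda p: paths[p]["queued_packets"])

print("spanning tree forwards over:", stp_choice)
print("multipath fabric forwards over:", fabric_choice)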






  REMAKING THE DATA CENTER
  By Robin Layland • Network World


Low-latency switches are the foundation for building a unified-fabric data center

A major transformation is sweeping over data center switching. Over the next few years the old switching equipment needs to be replaced with faster and more flexible switches.

Three factors are driving the transformation: server virtualization, direct connection of Fibre Channel storage to the IP switches and enterprise cloud computing.

They all need speed and higher throughput to succeed, but unlike the past it will take more than just a faster interface. This time speed needs to be coupled with lower latency, abandoning spanning tree and supporting new storage protocols. Without these changes, the dream of a more flexible and lower-cost data center will remain just a dream. Networking in the data center must evolve to a unified switching fabric.

Times are hard, money is tight; can a new unified fabric really be justified? The answer is yes. The cost-savings from supporting server virtualization along with merging the separate IP and storage networks is just too great. Supporting these changes is impossible without the next evolution in switching. The good news is that the switching transformation will take years, not months, so there is still time to plan for the change.

The drivers

The story of how server virtualization can save money is well-known. Running a single application on a server commonly results in utilization in the 10% to 30% range. Virtualization allows multiple applications to run on the server within their own image, allowing utilization to climb into the 70% to 90% range. This cuts the number of physical servers required, saves on power and cooling, and increases operational flexibility.

The storage story is not as well-known, but the savings are as compelling as the virtualization story. Storage has been moving to IP for years, with a significant amount of storage already attached via NAS or iSCSI devices. The cost-savings and flexibility gains are well-known.

The move now is to directly connect Fibre Channel storage to the IP switches, eliminating the separate Fibre Channel storage-area network. Moving Fibre Channel to the IP infrastructure is a cost-saver. The primary way is by reducing the number of adapters on a server. Currently servers need an Ethernet adapter for IP traffic and a separate storage adapter for the Fibre Channel traffic. Guaranteeing high availability means that each adapter needs to be duplicated, resulting in four adapters per server. A unified fabric reduces the number to two since the IP and Fibre Channel or iSCSI traffic share the same adapter. The savings grow since halving the number of adapters reduces the number of switch ports and the amount of cabling. It also reduces operational costs since there is only one network to maintain.
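A minimal sketch of that adapter arithmetic follows; the 100-server count is an assumed example, not a figure from the article. Duplicated Ethernet plus Fibre Channel adapters mean four per server, while a unified fabric needs only a duplicated pair.

servers = 100  # assumed fleet size for illustration

# Separate IP and Fibre Channel networks: (1 Ethernet + 1 FC adapter) x 2 for HA.
adapters_separate = servers * (1 + 1) * 2

# Unified fabric: IP and FCoE/iSCSI share one adapter, still duplicated for HA.
adapters_unified = servers * 1 * 2

saved = adapters_separate - adapters_unified
print(f"separate networks: {adapters_separate} adapters, plus matching switch ports and cables")
print(f"unified fabric:    {adapters_unified} adapters")
print(f"saved:             {saved} ({saved / adapters_separate:.0%})")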






The third reason is internal or enterprise cloud computing. In the past, when a request reached an application, the work stayed within the server/application. Over the years, this way of designing and implementing applications has changed. Increasingly, when a request arrives at the server, the application may only do a small part of the work; it distributes the work to other applications in the data center, making the data center one big internal cloud.

Attaching storage directly to this IP cloud only increases the number of critical flows that pass over the switching cloud. A simple example shows why low latency is a must. If the action took place within the server, then each storage get would only take a nanosecond to a few microseconds to perform. With most of the switches installed in enterprises, the get can take 50 to 100 microseconds to cross the cloud, which, depending on the number of calls, adds significant delays to processing. If a switch discards the packet, the response can be even longer. It becomes critical that the cloud provides very low latency with no dropped packets.

The network and switch problem

Why can’t the current switching infrastructure handle virtualization, storage and cloud computing? Compared with the rest of the network, the current data center switches provide very low latency, discard very few packets and support 10 Gigabit Ethernet interconnects. The problem is that these new challenges need even lower latency, better reliability, higher throughput and support for the Fibre Channel over Ethernet (FCoE) protocol.

The first challenge is latency. The problem with the current switches is that they are based on a store-and-forward architecture. Store-and-forward is generally associated with applications such as e-mail, where the mail server receives the mail, stores it on a disk and then later forwards it to where it needs to go. Store-and-forward is considered very slow. How are layer 2 switches, which are very fast, store-and-forward devices?

Switches have large queues. When a switch receives a packet, it puts it in a queue, and when the message reaches the front of the queue, it is sent. Putting the packet in a queue is a form of store-and-forward. A large queue has been sold as an advantage since it means the switch can handle large bursts of data without discards.

The result of all the queues is that it can take 80 microseconds or more for a large packet to cross a three-tier data center. The math works as follows. It can take 10 microseconds to go from the server to the switch. Each switch-to-switch hop adds 15 microseconds and can add as much as 40 microseconds. For example, assume two servers are at the “far” end of the data center. A packet leaving the requesting server travels to the top-of-rack switch, then the end-of-row switch and onward to the core switch. The hops are then repeated to the destination server. That is four switch-to-switch hops for a minimum of 60 microseconds. Add in the 10 microseconds to reach each server and the total is 80 microseconds. The delay can increase to well over 100 microseconds and becomes a disaster if a switch has to discard the packet, requiring the TCP stack on the sending server to time out and retransmit the packet.
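The author’s 80-microsecond example works out as below; this is just his back-of-the-envelope math restated in Python, not a measurement.

SERVER_TO_SWITCH_USEC = 10  # server to top-of-rack switch, at each end
PER_HOP_USEC_MIN = 15       # minimum per switch-to-switch hop (can reach 40)

# Requesting server -> ToR -> end-of-row -> core -> end-of-row -> ToR -> destination
switch_to_switch_hops = 4

one_way = 2 * SERVER_TO_SWITCH_USEC + switch_to_switch_hops * PER_HOP_USEC_MIN
print(f"one-way trip across a three-tier data center: {one_way} microseconds")  # 80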






Latency of 80 microseconds each way was acceptable in the past when response time was measured in seconds, but with the goal to provide sub-second response time, the microseconds add up. An application that requires a large chunk of data can take a long time to get it when each get can only retrieve 1,564 bytes at a time. A few hundred round trips add up. The impact is not only on response time. The application has to wait for the data, resulting in an increase in the elapsed time it takes to process the transaction. That means that while a server is doing the same amount of work, there is an increase in the number of concurrent tasks, lowering the server’s overall throughput.

The new generation of switches overcomes the large latency of the past by eliminating or significantly reducing queues and speeding up their own processing. The words used to describe it are: lossless transport, non-blocking, low latency, guaranteed delivery, multipath and congestion management. Lossless transport and guaranteed delivery mean they don’t discard packets. Non-blocking means they either don’t queue the packet or have a queue length of one or two.

The first big change in the switches is the design of the way the switch forwards packets. Instead of a store-and-forward design, a cut-through design is generally used, which significantly reduces or eliminates queuing inside the switch. A cut-through design can reduce switch time from 15 to 50 microseconds to two to four microseconds. Cut-through is not new, but it has always been more complex and expensive to implement. It is only now, with the very low-latency requirement, that switch manufacturers can justify spending the money to implement it.

The second big change is abandoning spanning tree within the data center switching fabric. The new generation of switches uses multiple paths through the switching fabric to the destination. They are constantly monitoring potential congestion points, or queuing points, and pick the fastest and best path at the time the packet is being sent. Currently all layer 2 switches determine the “best” path from one endpoint to another using the spanning tree algorithm. Only one path is active; the other paths through the fabric to the destination are used only if the “best” path fails. Spanning tree has worked well since the beginning of layer 2 networking, but the “only one path” approach is not good enough in a non-queuing and non-discarding world.

A current problem with the multi-path approach is that there is no standard on how switches do it. Work is underway within standards groups to correct this problem, but for the early versions each vendor has its own solution. A significant amount of the work falls under a standard referred to as Data Center Bridging (DCB). The reality is that for the immediate future, mixing and matching different vendors’ switches within the data center is not possible. Even when DCB and other standards are finished there will be many interoperability problems to work out, thus a single-vendor solution may be the best strategy.

Speed is still part of the solution. The new switches are built for very dense deployment of 10 Gigabit and prepared for 40/100 Gigabit. The result of all these changes reduces the trip time mentioned from 80 microseconds to less than 10 microseconds, providing the needed latency and throughput to make Fibre Channel and cloud computing practical.
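As a rough illustration of where that sub-10-microsecond figure comes from, the sketch below combines the two changes just described, fewer tiers and cut-through forwarding, using values within the per-switch ranges quoted above. The hop counts and the 3-microsecond figure are assumptions for illustration, not vendor numbers.

STORE_AND_FORWARD_USEC = 15  # low end of the 15-50 microsecond per-switch range
CUT_THROUGH_USEC = 3         # within the quoted 2-4 microsecond range

three_tier_hops = 4          # ToR -> end-of-row -> core -> end-of-row -> ToR
two_tier_hops = 2            # ToR -> core -> ToR

legacy_fabric = three_tier_hops * STORE_AND_FORWARD_USEC
new_fabric = two_tier_hops * CUT_THROUGH_USEC

print(f"three-tier, store-and-forward fabric: ~{legacy_fabric} microseconds")
print(f"two-tier, cut-through fabric:         ~{new_fabric} microseconds")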






Virtualization curve ball

Server virtualization creates additional problems for the current data center switching environment. The first problem is that each physical server has multiple virtual images, each with its own media access control (MAC) address. This causes operational complications and is a real problem if two virtual servers communicate with each other. The easiest answer is to put a soft switch in the VM, which all the VM vendors provide. This allows the server to present a single MAC address to the network switch and perform the functions of a switch for the VMs in the server.

There are several problems with this approach. The soft switch needs to enforce policy and access control lists (ACLs), make sure VLANs are followed and implement security. For example, if one image is compromised, it should not be able to freely communicate with the other images on the server if policy says they should not be talking to each other.

If they were on different physical servers, the network would make sure policy and security procedures were followed. The simple answer is that the group that maintains the server and the soft switch needs to make sure all the network controls are followed and in place. The practical problem with this approach is the coordination required between the two groups and the level of networking knowledge required of the server group. Having the network group maintain the soft switch in the server creates the same set of problems.

Today, the answer is to learn to deal with confusion and develop procedures to make the best of the situation and hope for the best. A variation on this is to use a soft switch from the same vendor as the switches in the network. The idea is that coordination will be easier since the switch vendor built it and has hopefully made the coordination easier. Cisco is offering this approach with VMware.

The third solution is to have all the communications from the virtual servers sent to the network switch. This would simplify the switch in the VM since it would not have to enforce policy, tag packets or worry about security. The network switch would perform all these functions as if the virtual servers were directly connected to it and this was the first hop into the network.

This approach has appeal since it keeps all the well-developed processes in place and restores clear accountability on who does what. The problem is spanning tree does not allow a port to receive a packet and send it back on the same port. The answer is to eliminate the spanning tree restriction of not allowing a message to be sent back over the port it came from.

Spanning tree and virtualization

The second curve ball from virtualization is ensuring that there is enough throughput to and from the server and that the packet takes the best path through the data center. As the number of processors on the physical server keeps increasing, the number of images increases, with the result that increasingly large amounts of data need to be moved in and out of the server. The first answer is to use 10 Gigabit and eventually 40 or 100 Gigabit. This is a good answer but may not be enough since the data center needs to create a very low-latency, non-blocking fabric with multiple paths. Using both adapters attached to different switches allows multiple paths along the entire route, helping to ensure low latency.

Once again spanning tree is the problem. The solution is to eliminate spanning tree, allowing both adapters to be used. The reality is the new generation of layer 2 switches in the data center will act more like routers, implementing their own version of OSPF at layer 2.
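The policy burden placed on the soft switch, described above, amounts to checking every VM-to-VM frame against the same rules the physical network would apply. A minimal sketch with hypothetical VM names and policy, not drawn from any vendor's implementation:

# Hypothetical policy: which (source, destination) VM pairs may exchange traffic.
ALLOWED_FLOWS = {("web-vm", "app-vm"), ("app-vm", "db-vm")}

def soft_switch_delivers(src_vm, dst_vm):
    """A soft switch must drop traffic the ACL forbids, even inside one server."""
    return (src_vm, dst_vm) in ALLOWED_FLOWS

# A compromised web VM trying to reach the database directly is dropped,
# just as an external switch enforcing the same ACL would drop it.
print(soft_switch_delivers("web-vm", "app-vm"))  # True
print(soft_switch_delivers("web-vm", "db-vm"))   # False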






Storage

The last reason new switches are needed is Fibre Channel storage. Switches need to support the ability to run storage traffic over Ethernet/IP, such as NAS, iSCSI or FCoE. Besides adding support for the FCoE protocol, they will also be required to abandon spanning tree and enable greater cross-sectional bandwidth. For example, Fibre Channel requires that both adapters to the server are active and carrying traffic, something the switch’s spanning tree algorithm doesn’t support. Currently the FCoE protocol is not finished and vendors are implementing a draft version. The good news is that it is getting close to finalization.

Current state of the market

How should the coming changes in the data center affect your plan? The first step is to determine how much of your traffic needs very low latency right now. If cloud computing, migrating critical storage or a new low-latency application such as algorithmic stock trading is on the drawing board, then it is best to start the move now to the new architecture. Most enterprises don’t fall in that group yet, but they will in this year or next and thus have time to plan an orderly transformation.

The transformation can also be taken in steps. For example, one first step would be to migrate Fibre Channel storage onto the IP fabric and immediately reduce the number of adapters on each server. This can be accomplished by replacing just the top-of-the-rack switch. The storage traffic flows over the server’s IP adapters to the top-of-the-rack switch, which sends the Fibre Channel traffic directly to the SAN. The core and end-of-row switches do not have to be replaced. The top-of-the-rack switch supports having both IP adapters active for storage traffic only, with spanning tree’s requirement of only one active adapter applying to just the data traffic. Brocade and Cisco currently offer this option.

If low latency is needed, then all the data center switches need to be replaced. Most vendors have not yet implemented the full range of features needed to support the switching environment described here. To understand where a vendor is, it is best to break it down into two parts. The first part is whether the switch can provide very low latency. Many vendors, such as Arista Networks, Brocade, Cisco, Extreme, Force 10 and Voltaire, have switches that can.

The second part is whether the vendor can overcome the spanning tree problem along with support for dual adapters and multiple pathing with congestion monitoring. As is normally the case, vendors are split on whether to wait until standards are finished before providing a solution or provide an implementation based on their best guess of what the standards will look like. Cisco and Arista Networks have jumped in early and provide the most complete solutions. Other vendors are waiting for the standards to be completed in the next year before releasing products.

What if low latency is a future requirement; what is the best plan? Whenever the data center switches are scheduled for replacement, they should be replaced with switches that can support the move to the new architecture and provide very low latency. This means it is very important to understand the vendor’s plans and migration schemes that will move you to the next-generation unified fabric.

Layland is head of Layland Consulting. He can be reached at robin@layland.com.








  STANDARDS FOR SOOTHING HEADACHES
  IN THE DATA CENTER
  By Jim Duffy • Network World


Emerging IEEE specifications aim to address serious management issues raised by the explosion of virtual machines

Cisco, HP and others are waging an epic battle to gain control of the data center, but at the same time they are joining forces to push through new Ethernet standards that could greatly ease management of those increasingly virtualized IT nerve centers.

The IEEE 802.1Qbg and 802.1Qbh specifications are designed to address serious management issues raised by the explosion of virtual machines in data centers that traditionally have been the purview of physical servers and switches. In a nutshell, the emerging standards would offload significant amounts of policy, security and management processing from virtual switches on network interface cards (NICs) and blade servers and put it back onto physical Ethernet switches connecting storage and compute resources.

The IEEE draft standards boast a feature called Virtual Ethernet Port Aggregator (VEPA), an extension to physical and virtual switching designed to eliminate the large number of switching elements that need to be managed in a data center. Adoption of the specs would make management easier for server and network administrators by requiring fewer elements to manage, and fewer instances of element characteristics—such as switch address tables, security and service attribute policies, and configurations—to manage.

“There needed to be a way to communicate between the hypervisor and the network,” says Jon Oltsik, an analyst at Enterprise Strategy Group. “When you start thinking about the complexities associated with running dozens of VMs on a physical server, the sophistication of data center switching has to be there.”

But adding this intelligence to the hypervisor or host would add a significant amount of network processing overhead to the server, Oltsik says. It would also duplicate the task of managing media access control address tables, aligning policies and filters to ports and/or VMs, and so forth.

“If switches already have all this intelligence in them, why would we want to do this in a different place?” Oltsik notes.






This would alleviate the need for virtual switches on blade servers to store and process every feature—such as security, policy and access control lists (ACLs)—resident on the external data center switch.

Diving into IEEE draft standard details

Together, the 802.1Qbg and bh specifications are designed to extend the capabilities of switches and end station NICs in a virtual data center, especially with the proliferation and movement of VMs. Citing data from Gartner, officials involved in the IEEE’s work on bg and bh say 50% of all data center workloads will be virtualized by 2012.

Some of the other vendors involved in the bg and bh work include 3Com, Blade Network Technologies, Brocade, Dell, Extreme Networks, IBM, Intel, Juniper Networks and QLogic. While not the first IEEE specifications to address virtual data centers, bg and bh are amendments to the IEEE 802.1Q specification for virtual LANs and are under the purview of the organization’s 802.1 Data Center Bridging and Interworking task groups.

The bg and bh standards are expected to be ratified around mid-2011, according to those involved in the IEEE effort, but pre-standard products could emerge late this year. Specifically, bg addresses edge virtual bridging: an environment where a physical end station contains multiple virtual end stations participating in a bridged LAN. VEPA allows an external bridge—or switch—to perform inter-VM hairpin forwarding of frames, something standard 802.1Q bridges or switches are not designed to do.

“On a bridge, if the port it needs to send a frame on is the same it came in on, normally a switch will drop that packet,” says Paul Congdon, CTO at HP ProCurve, vice chair of the IEEE 802.1 group and a VEPA author. “But VEPA enables a hairpin mode to allow the frame to be forwarded out the port it came in on. It allows it to turn around and go back.”

VEPA does not modify the Ethernet frame format but only the forwarding behavior of switches, Congdon says.
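The hairpin, or reflective relay, behavior Congdon describes can be pictured in a few lines of code. The toy Python sketch below is illustrative only; the port numbers, VM names and data structures are assumptions made for the example, not anything defined by the draft standards or shipped by a vendor.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    src_vm: str
    dst_vm: str
    ingress_port: int  # switch port the frame arrived on

# Two VMs share one server uplink (port 1); a third VM sits behind port 2.
VM_TO_PORT = {"vm-a": 1, "vm-b": 1, "vm-c": 2}

def forward(frame: Frame, hairpin_enabled: bool) -> Optional[int]:
    """Return the egress port for a frame, or None if the switch drops it."""
    egress = VM_TO_PORT[frame.dst_vm]
    if egress == frame.ingress_port and not hairpin_enabled:
        # A standard 802.1Q bridge never sends a frame back out the port it
        # arrived on, so VM-to-VM traffic sharing one uplink is simply dropped.
        return None
    # In hairpin mode the adjacent switch applies its policies and ACLs, then
    # reflects the frame back down the same port toward the destination VM.
    return egress

vm_a_to_b = Frame(src_vm="vm-a", dst_vm="vm-b", ingress_port=1)
print(forward(vm_a_to_b, hairpin_enabled=False))  # None: dropped by a plain bridge
print(forward(vm_a_to_b, hairpin_enabled=True))   # 1: hairpinned back to the server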
But VEPA by itself was limited in its capabilities. So HP combined its VEPA proposal with Cisco’s VN-Tag proposal for server/switch forwarding, management and administration to support the ability to run multiple virtual switches and multiple VEPAs simultaneously on the endpoint.

This required a channeling scheme for bg, which is based on the VN-Tag specification created by Cisco and VMware to have a policy follow a VM as it moves. This multichannel capability attaches a tag to the frame that identifies which VM the frame came in on.

But another extension was required to allow users to deploy remote switches—instead of those adjacent to the server rack—as the policy controlling switches for the virtual environment. This is where 802.1Qbh comes in: It allows edge virtual bridges to replicate frames over multiple virtual channels to a group of remote ports. This will enable users to cascade ports for flexible network design, and make more efficient use of bandwidth for multicast, broadcast and unicast frames.

The port extension capability of bh lets administrators choose the switch they want to delegate policies, ACLs, filters, QoS and other parameters to VMs. Port extenders will reside in the back of a blade rack or on individual blades and act as a line card of the controlling switch, says Joe Pelissier, technical lead at Cisco.

“It greatly reduces the number of things you have to manage and simplifies management because the controlling switch is doing all of the work,” Pelissier says.






OF LIKE MINDS
Cisco and HP are leading proponents of the IEEE effort despite the fact that Cisco is charging hard into HP’s traditional server territory while HP is ramping up its networking efforts. ...


What’s still missing from bg and bh is a discovery protocol for autoconfiguration, Pelissier says. Some in the 802.1 group are leaning toward using the existing Link Layer Discovery Protocol (LLDP), while others, including Cisco and HP, are inclined to define a new protocol for the task.

“LLDP is limited in the amount of data it can carry and how quickly it can carry that data,” Pelissier says. “We need something that carries data in the range of 10s to 100s of kilobytes and is able to send the data faster rather than one 1,500 byte frame a second. LLDP doesn’t have fragmentation capability either. We want to have the capability to split the data among multiple frames.”
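Pelissier’s objection is easy to sanity-check with rough arithmetic. The short Python calculation below assumes a 100KB payload of port and VM state, a figure taken from the “10s to 100s of kilobytes” range he cites rather than from any draft; it compares a one-frame-per-second, LLDP-style delivery with the raw serialization time of the same payload on a 10G link.

# Back-of-envelope comparison; the 100KB payload size is an assumption.
payload_bytes = 100 * 1024
lldp_frame_bytes = 1500                     # roughly one 1,500-byte frame per second
print(payload_bytes / lldp_frame_bytes)     # ~68 frames, so roughly 68 seconds

link_bps = 10e9                             # the same payload on the wire at 10G
print(payload_bytes * 8 / link_bps * 1e6)   # ~82 microseconds of serialization time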
Cisco, HP say they’re in synch

Cisco and HP are leading proponents of the IEEE effort despite the fact that Cisco is charging hard into HP’s traditional server territory while HP is ramping up its networking efforts in an attempt to gain control of data centers that have been turned on their heads by virtualization technology.

Cisco and HP say their VEPA and VN-Tag/multichannel and port extension proposals are complementary despite reports that they are competing techniques to accomplish the same thing: reducing the number of managed data center elements and defining a clear line of demarcation between NIC, server and switch administrators when monitoring VM communications.

“This isn’t the battle it’s been made out to be,” Pelissier says.

Though Congdon acknowledges he initially proposed VEPA as an alternative to Cisco’s VN-Tag technique, the two together present “a nice layered architecture that builds upon one another where virtual switches and VEPA form the lowest layer of implementation, and you can move all the way to more complex solutions such as Cisco’s VN-Tag.”

And the proposals seem to have broad industry support.

“We do believe this is the right way to go,” says Dhritiman Dasgupta, senior manager of data center marketing at Juniper. “This is putting networking where it belongs, which is on networking devices. The network needs to know what’s going on.”•








  A BRIDGE TO TERABIT ETHERNET
  By Jim Duffy • Network World


With 40/100G Ethernet products on the way, Ethernet experts look ahead to Terabit Ethernet standards and products by 2015

IT managers who are getting started with--or even pushing the limits of--10 Gigabit Ethernet in their LANs and data centers don’t have to wait for higher-speed connectivity.

Pre-standard 40 Gigabit and 100 Gigabit Ethernet products--server network interface cards, switch uplinks and switches--have hit the market. And standards-compliant products are expected to ship in the second half of this year, not long after the expected June ratification of the 802.3ba standard.

The IEEE, which began work on the standard in late 2006, is expected to define two different speeds of Ethernet for two different applications: 40G for server connectivity and 100G for core switching.

Despite the global economic slowdown, global revenue for 10G fixed Ethernet switches doubled in 2008, according to Infonetics. And there is pent-up demand for 40 Gigabit and 100 Gigabit Ethernet, says John D’Ambrosia, chair of the 802.3ba task force in the IEEE and a senior research scientist at Force10 Networks.

“There are a number of people already who are using link aggregation to try and create pipes of that capacity,” he says. “It’s not the cleanest way to do things ... [but] people already need that capacity.”
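Part of what makes aggregation “not the cleanest way” is that link-aggregation groups balance traffic per flow, so any single flow is still capped at the speed of one member link. The Python sketch below is a simplified model of that behavior; the hash and the flow fields are illustrative stand-ins, not any vendor’s actual load-balancing algorithm.

# Toy model of per-flow hashing across a 4 x 10G link-aggregation group.
members = ["10g-0", "10g-1", "10g-2", "10g-3"]

def pick_member(src_ip: str, dst_ip: str, dst_port: int) -> str:
    # Real switches hash similar header fields; the exact hash is vendor-specific.
    return members[hash((src_ip, dst_ip, dst_port)) % len(members)]

# A single large flow always hashes to the same member, so it tops out at 10G:
print(pick_member("10.0.0.5", "10.0.1.9", 3260))
print(pick_member("10.0.0.5", "10.0.1.9", 3260))

# Many distinct flows spread across the bundle and can approach 40G in aggregate:
print(sorted({pick_member("10.0.0.5", f"10.0.1.{i}", 3260) for i in range(32)}))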
D’Ambrosia says even though 40/100G Ethernet products haven’t arrived yet, he’s already thinking ahead to Terabit Ethernet standards and products by 2015. “We are going to see a call for a higher speed much sooner than we saw the call for this generation” of 10/40/100G Ethernet, he says.

According to the 802.3ba task force, bandwidth requirements for computing and core networking applications are growing at different rates, necessitating the definition of two distinct data rates for the next generation of Ethernet. Servers, high-performance computing clusters, blade servers, storage-area networks and network-attached storage all currently make use of 1G and 10G Ethernet, with 10G growing significantly in 2007 and 2008.

I/O bandwidth projections for server and computing applications, including server traffic aggregation, indicate that there will be a significant market potential for a 40G Ethernet interface, according to the task force. Ethernet at 40G will provide approximately the same cost balance between the LAN and the attached stations as 10G Ethernet, the task force believes.






Core networking applications have demonstrated the need for bandwidth beyond existing capabilities and beyond the projected bandwidth requirements for computing applications. Switching, routing, and aggregation in data centers, Internet exchanges and service provider peering points, and high-bandwidth applications such as video on demand and high-performance computing, need a 100 Gigabit Ethernet interface, according to the task force.

“Initial applications (of 40/100G Ethernet) are already showing up, in stacking and highly aggregated LAN links, but the port counts are low,” says George Zimmerman, CTO of SolarFlare, a maker of Ethernet physical layer devices.

Zimmerman says 10G is just now taking off in the access layer of large networks and will eventually move to the client side, creating the need for 40/100G in the distribution layer and the network core. He says the application of 100 Gigabit Ethernet in the core is imminent, and is about two years away in the distribution layer. “Both will be driven by and drive 10G adoption in the access and client end of the network, where today the numbers are still much smaller than the potential,” he says.

Spec designed for seamless upgrades

The 802.3ba specification will conform to the full-duplex operating mode of the IEEE 802.3 Media Access Control (MAC) layer, according to the task force. As was the case in previous 802.3 amendments, new physical layers specific to either 40Gbps or 100Gbps operation will be defined.

By employing the existing 802.3 MAC protocol, 802.3ba is intended to maintain full compatibility with the installed base of Ethernet nodes, the task force says. The spec is also expected to use “proven and familiar media,” including optical fiber, backplanes and copper cabling, and preserve existing network architecture, management and software, in an effort to keep design, installation and maintenance costs at a minimum.

With initial interoperability testing commencing in late 2009, public demonstrations will emerge in 2010, and certification testing will start once the standard is ratified, says Brad Booth, chair of the Ethernet Alliance.

The specification and formation of the 40/100G task force did not come without some controversy, however. Participants in the Higher Speed Study Group (HSSG) within the IEEE were divided on whether to include 40G Ethernet as part of their charter or stay the course with 100 Gigabit Ethernet. After about a month though, the HSSG agreed to work on a single standard that encompassed both 40G and 100G.

“In a sense, we were a little bit late with this,” D’Ambrosia says. “By our own projections, the need for 100G was in the 2010 timeframe. We should have been done with the 100G [spec] probably in the 2007-08 timeframe, at the latest. We actually started it late, which is going to make the push for terabit seem early by comparison. But when we look at the data forecasts that we’re seeing, it looks to be on cue.”

Driving demand for 40/100G Ethernet are the same drivers currently stoking 10G: data center virtualization and storage, and high-definition videoconferencing and medical imaging. Some vendors are building 40/100G Ethernet capabilities into their products now.

Vendors prepare for 100 Gigabit Ethernet

Cisco’s Nexus 7000 data center switch, which debuted in early 2009, is designed for future delivery of 40/100G Ethernet.

“We have a little more headroom, which isn’t bad to have when you look at future Ethernet speed transitions coming in the market,” says Doug Gourlay, senior director of data center marketing and product management at Cisco.






TIME AND TIME AGAIN
Latency is a bigger issue than most people anticipate. ... As you aggregate traffic into 10G ports, just the smallest difference in the clocks between ports can cause high latency and packet loss. At 40G, it’s an order of magnitude more important than it is for 10G and Gig.
—Tim Jefferson, general manager, Spirent

“We’re pretty early advocates of the 100G effort in the IEEE. [But] the earliest you’ll see products from any company that are credible deliveries and reasonably priced: second half of 2010 onward for 40/100G,” he adds.

Verizon Business offers 10G Ethernet LAN and Ethernet Virtual Private Line services to customers in 100 U.S. metro markets. Verizon Business also offers “10G-capable” Ethernet Private Line services. The carrier has 40G Ethernet services on its five-year road map but no specific deployment dates, says Jeff Schwartz, Group Manager, Global Ethernet Product Marketing. Instead, Verizon Business has more 10G Ethernet access services on tap.

“We want to get to 100G,” Schwartz says. “40G may be an intermediary step.” Once Verizon Business moves its backbone architecture toward 40/100G, products and services will be following, he says.

Spirent Communications, a maker of Ethernet testing gear, offers 40G Ethernet testing modules, with 100 Gigabit Ethernet modules planned for release in early 2010, says Tim Jefferson, general manager of the converged core solutions group at Spirent. Jefferson says one of the caveats that users should be aware of as they migrate from 10G to 40/100G Ethernet is the need to ensure precise clocking synchronization between systems--especially between equipment from different vendors. Imprecise clocking between systems at 40/100G--even at 10G--can increase latency and packet loss, Jefferson says.

“This latency issue is a bigger issue than most people anticipate,” he says. “At 10G, especially at high densities, the specs allow for a little variance for clocks. As you aggregate traffic into 10G ports, just the smallest difference in the clocks between ports can cause high latency and packet loss. At 40G, it’s an order of magnitude more important than it is for 10G and Gig.

“This is a critical requirement in data centers today because a lot of the innovations going on with Ethernet and a lot of the demand for all these changes in data centers are meant to address lower latencies,” Jefferson adds.
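The order-of-magnitude point is visible in frame serialization times alone: each speed step shrinks the per-frame time budget, so the same absolute clock offset between ports eats a proportionally larger share of it. The short calculation below uses a 1,500-byte frame purely for illustration.

# Time on the wire for one 1,500-byte frame at each Ethernet speed.
frame_bits = 1500 * 8
for name, bps in [("1G", 1e9), ("10G", 10e9), ("40G", 40e9)]:
    print(name, round(frame_bits / bps * 1e6, 2), "microseconds")
# 1G: 12.0, 10G: 1.2, 40G: 0.3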
Cabling challenges

Another challenge is readying the cabling infrastructure for 40/100G, experts say. Ensuring the appropriate grade and length of fiber is essential to smooth, seamless operation, they say.

“The big consideration is, what’s a customer’s cabling installation going to look like and what they’re looking for to be able to handle that,” Booth says. “They are probably going to need to have a parallel fiber capability.”

“The recommendations we’re making to customers on their physical plant today are designed to take them from 1G to 10G; 10G to a unified fabric; and then address future 40G,” Cisco’s Gourlay says.






The proposed physical interfaces (PHY) for 40G Ethernet include a range to cover distances inside the data center up to 100 meters, to accommodate a range of server form factors, including blade, rack and pedestal, according to the Ethernet Alliance. The 100 Gigabit Ethernet rate will include distances and media appropriate for data center, as well as service provider interconnection for intra-office and inter-office applications, according to the organization.

The proposed PHYs are 1 meter backplane, 10 meter copper and 100 meter multimode fiber for 40G Ethernet; and 10 meter copper, 100 meter multimode fiber, and 10 kilometer and 40 kilometer single-mode fiber for 100 Gigabit Ethernet.•








  DATA CENTER AS ETHERNET SWITCH DRIVER
  By Jim Duffy • Network World


How next-generation data center initiatives shape the LAN switching market

2010 promises to be an interesting year in the enterprise LAN switching market.

With the exception of Avaya, next-generation data center initiatives are driving the LAN switching market and its consolidation. And they are all intended to compete more intensely with Cisco, which owns 70% of the Ethernet switching market but still has an insatiable appetite for growth.

“Big data center vendors are driving LAN switching decisions, and purchases,” says Zeus Kerravala, an analyst with The Yankee Group. “Where innovation’s been needed is in the data center.”

“Innovation is being driven in the data center,” says Steve Schuchart of Current Analysis. The drive to automate the data center is making the all-in-one buy from a single large vendor more attractive to customers, he says.

Indeed, the LAN switching market is no longer “Cisco and the Seven Dwarves”–the seven companies all vying for that 25% to 30% share Cisco doesn’t own. The LAN switching market is now steered by Cisco, IBM, HP and Dell, and perhaps Brocade–data center networking, server and storage stalwarts looking to take their customers to the next-generation infrastructure of unified fabrics, virtualization, and the like.

Data center deployments of 10G Ethernet are helping to drive the market, according to Dell’Oro Group. The firm expects the global Ethernet switching market to grow modestly in 2010, to $16.3 billion from $15.6 billion in 2009. This is down considerably though from the $19.3 billion market in 2008, Dell’Oro notes.

And pricing pressure is expected to increase, according to a Nov. 19, 2009, Goldman Sachs survey of 100 IT executives on IT spending. With its 3Com buy, HP can now offer a core data center switch in addition to the enterprise switches it sells at roughly half the price of comparable Cisco products, the survey notes. And with Juniper ramping up its IBM and Dell OEM channels, Cisco’s market share will be squeezed if profit margins are to be maintained, the survey suggests.

Another carrot for Juniper and its high-performance networking direction will be buying patterns. The Goldman Sachs survey found that most respondents base their purchase on price performance over architectural road map.

Where does all this jockeying among the top tier leave Extreme, Enterasys, Force10 and the rest of the pack? They’ve always claimed price/performance advances over Cisco but never gained any meaningful market share.






IS THE GLASS HALF FULL?
The Dell’Oro Group expects the global Ethernet switching market to grow modestly in 2010, to $16.3 billion from $15.6 billion in 2009. This is down considerably though from the $19.3 billion market in 2008.


And in terms of marriages, Enterasys united with Siemens Enterprise Communications to go squarely after the secure wired/wireless unified communications opportunity. Force10 is merging with Turin Networks, a provider of wireless backhaul, Carrier Ethernet and converged access systems for service providers. Force10 seems to be gravitating more and more to the carrier cloud, but is still a high-performance data center play–though one that was left behind by the data center systems mainstays.

That leaves Extreme Networks virtually alone in LAN switching. The company has been extending its product line for data center-specific applications, such as virtualization and 10G Ethernet. But analysts say they will have little relevance beyond Extreme’s installed base.

“What problem is Extreme solving that nobody else is?” Kerravala asks. “There just isn’t a differentiator compelling enough.”

Extreme begs to differ. “Extreme Networks delivers a network that requires fewer resources to operate and acquire while offering unique capabilities to scale for future requirements and changing demands,” says Chief Marketing Officer Paul Hooper. “We achieve this through the delivery of a consistent Ethernet portfolio, stretching from the edge of the network to the core, all powered by a single OS, ExtremeXOS. Extreme’s network platform also enables organizations to migrate their data centers from physical to virtual to cloud networks. The benefit is that enterprises can smoothly transition from separate to converged networks and carriers can adopt pure Ethernet-based services.”

Switching may not be a differentiator for Avaya either, after the Nortel deal. Due to the price-sensitive and hotly competitive nature of the LAN switching business, Kerravala believes Avaya will look to part with its acquired Nortel data networking products. Avaya says it will issue a Nortel/Avaya product road map 30 days after the deal’s close.

“The best place for Nortel data is in HP/3Com or Brocade, a company looking to expand its customer base,” he says.

The best place for everyone else is with a major OEM partner, according to Current Analysis’ Schuchart. And if they haven’t had much success selling on price/performance, perhaps they should play the architectural road map card.

“For companies that don’t have a deal or are not wholly owned by a compute vendor, next year’s going to be tough sailing for them,” Schuchart says. “There’s also a fair amount of room out there for companies who have best-of-breed products, although in a data center moving towards virtualized automation, the standalone providers are going to have a harder time.”•


                                                                                          21 of 22


                                   Data Center Derby        10G Ethernet Shakes        Remaking the         Soothing Data Center          A Bridge to             Data Center as
                                                                                                                                                                                            Resources
                                       Heats Up                 Net Design              Data Center             Headaches               Terabit Ethernet       Ethernet Switch Driver
Hp ProCurve Networking Jan2010

Mais conteúdo relacionado

Destaque

To Be Served, or To Serve
To Be Served, or To ServeTo Be Served, or To Serve
To Be Served, or To ServeDon McClain
 
How NOT To Introduce Yourself
How NOT To Introduce YourselfHow NOT To Introduce Yourself
How NOT To Introduce YourselfBernard Marr
 
Motivation and social adjustment
Motivation and social adjustmentMotivation and social adjustment
Motivation and social adjustmentAraullo University
 
Social networking PPT
Social networking PPTSocial networking PPT
Social networking PPTvarun0912
 
The 15 Most Common Body Language Mistakes
The 15 Most Common Body Language MistakesThe 15 Most Common Body Language Mistakes
The 15 Most Common Body Language MistakesBernard Marr
 
Social Networking Presentation
Social Networking PresentationSocial Networking Presentation
Social Networking PresentationAnusorn Kansap
 

Destaque (9)

Introducing Yourself
Introducing YourselfIntroducing Yourself
Introducing Yourself
 
Wisdom Circles Presentation09
Wisdom Circles Presentation09Wisdom Circles Presentation09
Wisdom Circles Presentation09
 
To Be Served, or To Serve
To Be Served, or To ServeTo Be Served, or To Serve
To Be Served, or To Serve
 
How NOT To Introduce Yourself
How NOT To Introduce YourselfHow NOT To Introduce Yourself
How NOT To Introduce Yourself
 
Motivation and social adjustment
Motivation and social adjustmentMotivation and social adjustment
Motivation and social adjustment
 
Circles IX
Circles IXCircles IX
Circles IX
 
Social networking PPT
Social networking PPTSocial networking PPT
Social networking PPT
 
The 15 Most Common Body Language Mistakes
The 15 Most Common Body Language MistakesThe 15 Most Common Body Language Mistakes
The 15 Most Common Body Language Mistakes
 
Social Networking Presentation
Social Networking PresentationSocial Networking Presentation
Social Networking Presentation
 

Hp ProCurve Networking Jan2010

  • 1. Sponsored by NETWORKING REDEFINED eGuide We’re living in an era of server consolidation, virtualization, green initiatives and cloud computing—initiatives throwing the data center network into a state of flux. Is legacy infrastructure, typically comprising multiple switching tiers running proprietary protocols, capable of handling next-generation, dynamic application demands? Or is time for a network overhaul built on the concepts of open, virtual switching, unified fabrics and bandwidths of 10 Gigabit Ethernet and beyond? In these articles, Network World examines how the data center network is evolving into a more simplified, open infrastructure. IN THIS eGUIDE 2 Data Center 5 10G Ethernet 8 Remaking the 13 Standards 16 A Bridge to 20 Data Center 22 Networking Derby Heats Up Shakes Net Design Data Center for Soothing Terabit Ethernet as Ethernet Resources Handicapping the crowd- to the Core Low-latency switches are Headaches in the With 40/100G Ethernet Switch Driver ed field, from the odds-on Shift from three- to two- the foundation for build- Data Center products on the way, How next-generation data favorites to the long shots tier architectures driven ing a unified-fabric data Emerging IEEE specifica- Ethernet experts look center initiatives shape by need for speed, server center tions aim to address ahead to Terabit Ethernet the LAN switching market virtualization, unified serious management issues standards and products switching fabrics raised by the explosion of by 2015 virtual machines Data Center Derby 10G Ethernet Shakes Remaking the Soothing Data Center A Bridge to Data Center as Resources Heats Up Net Design Data Center Headaches Terabit Ethernet Ethernet Switch Driver
  • 2. NETWORKING REDEFINED Sponsored by DATA CENTER DERBY HEATS UP By Beth Schultz • Network World •HP releasing the BladeSystem Matrix, a converged Handicapping the crowded field, from the odds-on software, server, storage and network platform. favorites to the long shots •IBM deepening its relationship with Brocade, deciding to sell Brocade’s Foundry switches and routers under Network thoroughbred Cisco jumps into the blade server Cisco’s blade servers are part of its data center plat- the IBM banner. market. Server stallion HP adds security blades to its Pro- form, called the Unified Computing System (UCS), which •Juniper unveiling Stratus Project, a multiyear under- Curve switches. IBM teams up with Brocade. Oracle buys includes storage, network and virtualization resources. taking through which it will partner with server, stor- Sun. And everybody courts that prize filly VMware. Cisco’s platform includes VMware’s vSphere technology age and software companies to develop a converged In this era of server consolidation and virtualization, and partnerships with BMC Software, EMC, Intel, Micro- data center fabric. green initiatives and cloud computing, the data center is soft and Oracle. •Oracle buying Sun for its hardware and software, then in flux and all the major vendors are jockeying for position, But Cisco’s entry into the data center fray has kicked grabbing Virtual Iron for its Xen-based hypervisor. galloping in with new products, strategies and alliances. up some dust among its longtime server partners HP and “What you see right now is everybody shoring up and IBM, and forced all of the major players to respond in “Everything is pointing to a unified fabric,” says John getting as many offerings as they can to provide all the some way. “Cisco has been so successful in the network Turner, director of network and systems at Brandeis Univer- hardware in the data center. Cisco, for example, wants to space, all the other vendors have to take it seriously at sity in Waltham, Mass. make it so you can be a complete Cisco shop, including the data center level,’’ says Anne Skamarock, a research “We’re in a transition, and it’s very important not to just all your servers,” says Mitchell Ashley, principal consultant director at Focus Consulting. buy who you bought from before. This is a great time to evalu- with Converging Networks and a Network World blogger. The resultant flurry of activity has included: ate your vendors, ask about long-term road maps and part- 2 of 22 Data Center Derby 10G Ethernet Shakes Remaking the Soothing Data Center A Bridge to Data Center as Resources Heats Up Net Design Data Center Headaches Terabit Ethernet Ethernet Switch Driver
  • 3. NETWORKING REDEFINED Sponsored by “This is a great time to evaluate your vendors, ask about long-term road maps THE DOOR IS and partnerships, see how integrated they are. I wouldn’t make any decisions ALWAYS OPEN hastily if I were in IT.” — Zeus Kerravala, analyst, Yankee Group nerships, see how integrated they are,” says Yankee Group mind, says Philip Buckley-Mellor, a designer with BT Vision, trix-like orchestrated provisioning system. The HP BladeSystem analyst Zeus Kerravala. “I wouldn’t make any decisions hastily a provider of digital TV service in London. Yet Buckley-Mellor Matrix packages and integrates servers, networking, storage, if I were in IT.” admits he can’t imagine BT Vision’s future data center with- software infrastructure and orchestration in a single platform. This industry shakeup also could provide an opportunity for out HP at the core. “We already have most of the Matrix pieces ... so orches- some long-shot vendors to make a move on the leaders. Kerrav- Buckley-Mellor expects most of Vision’s data center opera- trating new servers into place is the next logical step,” Buck- ala puts Brocade in this category because of its storage and net- tions to run on HP’s latest blades, the Intel Nehalem multicore ley-Mellor says. work strengths, Citrix Systems for virtualization, F5 Networks for processor-based G6 servers. The infrastructure will be virtualized networking, and Liquid Computing for fabric computing. “These using VMware as needed. HP’s Virtual Connect, a BladeSystem Place your wagers could be the dark horses,” he says. management tool, is an imperative. Gartner analyst George Weiss says Cisco and HP unified Turner agrees that opportunities are available for the right “The ability to use Virtual Connect to re-patch our re- compute platforms run pretty much neck and neck. How- vendors. “I’m happy with my Cisco network. I’m thrilled with sources with networks and storage live, without impacting ever, IBM, HP’s traditional blade nemesis in the data center, it. No, I’m wowed by it. But that doesn’t mean there isn’t an any other service, without having to send guys out to site, has more work to do in creating the fabric over which the opportunity for another vendor to come in, pique my inter- without having the risk of broken fibers, has shaved at least resources are assembled, he adds. est, gain my respect and get in here,” Turner says. “This is an 50%, and potentially 60% to 70%, off the time it takes to “IBM can do storage, and the server component in opportunity to take a big leap. Companies are going to be deploy a new server or change the configuration of existing blades, and the networking part through Cisco or Bro- doing big refreshes.” servers,” Buckley-Mellor says. cade, so from a user perspective, it seems a fairly inte- These changing times for IT infrastructure require an open Within another year or so, he expects Vision to move to a Ma- grated type of architecture. But it’s not as componentized 3 of 22 Data Center Derby 10G Ethernet Shakes Remaking the Soothing Data Center A Bridge to Data Center as Resources Heats Up Net Design Data Center Headaches Terabit Ethernet Ethernet Switch Driver
  • 4. NETWORKING REDEFINED Sponsored by as what Cisco and HP have,” Weiss says. which have their own next-generation data center strate- have to be able to coordinate activities, like provisioning “But with Virtual Connect and networking solutions like gies—will have leads because they’ve already got deep cus- and scaling, across the three domains. We have to keep ProCurve [switches], and virtualization software, virtualiza- tomer relationships. them operating together to achieve business goals,” Anto- tion management, blade-based architecture, all of the ele- “IT organizations will look to vendors for their strategies and nopoulos says. ments Cisco is delivering are within HP’s grasp and to a large determine how they’ll utilize those capabilities vs. going out and From that perspective, a unified compute-network-stor- extent HP already delivers. It may not be everything, but, exploring everything on the market and figuring out what new age platform makes sense—one way to get orchestration is there may be things HP delivers that Cisco doesn’t, like a things they’ll try and which they’ll buy,” Ashley says. to have as many resources as possible from a single ven- command of storage management,” he explains. dor, he says. “Problem is, you can only achieve that within Buckley-Mellor sees one technology area in which Cisco Cover your bets small islands of IT or at small IT organizations. Once you get is a step ahead of HP—converged networking, a la Fibre In planning for their next-generation data centers, IT executives to a dozen or more servers, chances are even if you bought Channel over Ethernet (FCoE). Cisco’s Nexus 7000 data should minimize the number of vendors they’ll be working with. them at the same time from the same vendor, they’ll have center switch supports this ANSI protocol for converging At the same time, it’s unrealistic to not consider a multivendor some differences,” he adds. storage and networking and the UCS will feature FCoE in- approach from the get-go, says Andreas Antonopoulos, an ana- Skamarock equates these emerging unified data center terconnect switches. lyst with Nemertes Research. platforms to the mainframes of old. “With the mainframe, IT “There are no two ways about it, we’re very interested in “They’ll never be able to reduce everything down to one ven- had control over just about every component. That kind of con- converged networking,” Buckley-Mellor says. Still, he’s not dor, so unless they’ve got a multivendor strategy for integration, trol allows you to do and make assumptions that you can’t too worried. “That technology needs to mature and I’m sure they’re going to end up with all these distinct islands, and that when you have a more distributed, multi-vendor environment.” HP will be there with a stable product at the right time for us. will limit flexibility,” he says. That means every vendor in this race needs to contin- In the meantime, Virtual Connect works great and saves me He espouses viewing the new data center in terms of orches- ue to build partnerships and build out their ecosystems, an ocean of time,” he adds. tration, not integration. especially in the management arena.• All this is not to say that Cisco and HP are the only horses “Because we’ll have these massive dependencies among in the race for the next-generation data center. But they, servers, network and storage, we need to make sure we Schultz is a longtime IT writer and editor. 
You can reach her at as well as companies like IBM and Microsoft—each of can run these as systems and not individual elements. We bschultz5824@gmail.com 4 of 22 Data Center Derby 10G Ethernet Shakes Remaking the Soothing Data Center A Bridge to Data Center as Resources Heats Up Net Design Data Center Headaches Terabit Ethernet Ethernet Switch Driver
  • 5. NETWORKING REDEFINED Sponsored by 10G ETHERNET SHAKES NET DESIGN TO THE CORE By Jim Duffy • Network World and support for the new storage protocols. Networking in Shift from three- to two-tier architectures driven by need the data center must evolve to a unified switching fabric.” for speed, server virtualization, unified switching fabrics A three-tier architecture of access, aggregation and core switches has been common in enterprise networks The emergence of 10 Gigabit Ethernet, virtualization and tency, lossless architecture that lends itself to a two-tier ap- for the past decade or so. Desktops, printers, servers and unified switching fabrics is ushering in a major shift in proach. Storage traffic cannot tolerate the buffering and laten- LAN-attached devices are connected to access switches, data center network design: three-tier switching architec- cy of extra switch hops through a three-tier architecture that which are then collected into aggregation switches to tures are being collapsed into two-tier ones. includes a layer of aggregation switching, industry experts say. manage flows and building wiring. Higher, non-blocking throughput from 10G Ethernet All of this necessitates a new breed of high-performance, Aggregation switches then connect to core routers/ switches allows users to connect server racks and top-of-rack low-latency, non-blocking 10G Ethernet switches now hitting switches that provide routing, connectivity to wide-area switches directly to the core network, obviating the need for an the market. And it won’t be long before these 10G switches network services, segmentation and congestion manage- aggregation layer. Also, server virtualization is putting more ap- are upgraded to 40G and 100G Ethernet switches when ment. Legacy three-tier architectures naturally have a plication load on fewer servers due to the ability to decouple those IEEE standards are ratified in mid-2010. large Cisco component–specifically, the 10-year-old Cata- applications and operating systems from physical hardware. “Over the next few years, the old switching equipment lyst 6500 switch–given the company’s dominance in en- More application load on less server hardware requires needs to be replaced with faster and more flexible switch- terprise and data center switching. a higher-performance network. es,” says Robin Layland of Layland Consulting, an adviser Cisco says a three-tier approach is optimal for segmen- Moreover, the migration to a unified fabric that converges to IT users and vendors. “This time, speed needs to be tation and scale. But the company also supports two-tier storage protocols onto Ethernet also requires a very low-la- coupled with lower latency, abandoning spanning tree architectures should customers demand it. 5 of 22 Data Center Derby 10G Ethernet Shakes Remaking the Soothing Data Center A Bridge to Data Center as Resources Heats Up Net Design Data Center Headaches Terabit Ethernet Ethernet Switch Driver
  • 6. NETWORKING REDEFINED Sponsored by “We are offering both,” says Senior Product Manager service providers. Network performance has to be non-block- “The result of all the queues is that it can take 80 micro- Thomas Scheibe. “It boils down to what the customer ing, highly reliable and faultless with low and predictable la- seconds or more to cross a three-tier data center,” he says. tries to achieve in the network. Each tier adds another two tency for broadcast, multicast and unicast traffic types. New data centers require cut-through switching–which hops, which adds latency; on the flipside it comes down “New applications are demanding predictable perfor- is not a new concept–to significantly reduce or even elimi- to what domain size you want and how big of a switch mance and latency,” says Jayshree Ullal, CEO of Arista Net- nate buffering within the switch, Layland says. Cut-through fabric you have in your aggregation layer. If the customer works, a privately held maker of low-latency 10G Ethernet switches can reduce switch-to-switch latency from 15 to wants to have 1,000 10G ports aggregated, you need a top-of-rack switches for the data center. “That’s why the 50 microseconds to 2 to 4, he says. two-tier design big enough to do that. If you don’t, you legacy three-tier model doesn’t work because most of the Another factor negating the three-tier approach to data need another tier to do that.” switches are 10:1, 50:1 oversubscribed,” meaning different center switching is server virtualization. Adding virtualization Blade Network Technology agrees: “Two-tier vs. three- applications are contending for limited bandwidth which to blade or rack-mount servers means that the servers them- tier is in large part driven by scale,” says Dan Tuchler, vice can degrade response time. selves take on the role of access switching in the network. president of strategy and product management at Blade This oversubscription plays a role in the latency of today’s Virtual switches inside servers takes place in a hypervi- Network Technologies, a maker of blade server switches switches in a three-tier data center architecture, which is 50 sor and in other cases the network fabric is stretched to for the data center. “At a certain scale you need to start to 100 microseconds for an application request across the the rack level using fabric extenders. The result is that the adding tiers to add aggregation.” network, Layland says. Cloud and virtualized data center access switching layer has been subsumed into the serv- But the latency inherent in a three-tier approach is inade- computing with a unified switching fabric requires less than ers themselves, Lippis notes. quate for new data center and cloud computing environments 10 microseconds of latency to function properly, he says. “In this model there is no third tier where traffic has that incorporate server virtualization and unified switching Part of that requires eliminating the aggregation tier in a to flow to accommodate server-to-server flows; traffic is fabrics that converge LAN and storage traffic, experts say. data center network, Layland says. But the switches themselves either switched at access or in the core at less than 10 Applications such as storage connectivity, high-perfor- must use less packet buffering and oversubscription, he says. microseconds,” he says. 
mance computing, video, extreme Web 2.0 volumes and the Most current switches are store-and-forward devices Because of increased I/O associated with virtual switching like require unique network attributes, according to Nick Lip- that store data in large buffer queues and then forward it in the server there is no room for a blocking switch in between pis, an adviser to network equipment buyers, suppliers and to the destination when it reaches the top of the queue. the access and the core, says Asaf Somekh, vice president 6 of 22 Data Center Derby 10G Ethernet Shakes Remaking the Soothing Data Center A Bridge to Data Center as Resources Heats Up Net Design Data Center Headaches Terabit Ethernet Ethernet Switch Driver
  • 7. NETWORKING REDEFINED Sponsored by of marketing for Voltaire, a maker of Infiniband and Ether- net switches for the data center. “It’s problematic to have so FORK IN THE ROAD many layers.” Virtualization, inexpensive 10G links and unified Ethernet switching fabrics are catalyzing a migration from three-tier Another requirement of new data center switches is to Layer 3 data center switching architectures to flatter two-tier Layer 2 designs that subsume the aggregation layer into the access layer. Proponents say this will decrease cost, optimize operational efficiency, and simplify management. eliminate the Ethernet spanning tree algorithm, Layland says. Currently all Layer 2 switches determine the best path from Three tier Two tier one endpoint to another using the spanning tree algorithm. Core Core Only one path is active, the other paths through the fabric to the destination are only used if the best path fails. The Aggregation lossless, low-latency requirements of unified fabrics in virtu- alized data centers requires switches using multiple paths Access/ to get traffic to its destination, Layland says. These switches Aggregation continually monitor potential congestion points and pick the Access fastest and best path at the time the packet is being sent. “Spanning tree has worked well since the beginning of Layer 2 networking but the ‘only one path’ [approach] is not good enough in a non-queuing and non-discarding world,” to acquire more servers. maintain and manage. Layland says. And a unified fabric means a server does not need sepa- “If you have switches with adequate capacity and Finally, cost is a key factor in driving two-tier architec- rate adapters and interfaces for LAN and storage traffic. you’ve got the right ratio of input ports to trunks, you don’t tures. Ten Gigabit Ethernet ports are inexpensive–about Combining both on the same network can reduce the num- need the aggregation layer,” says Joe Skorupa, a Gartner $500, or twice that of Gigabit Ethernet ports yet with 10 ber and cost of interface adapters by half, Layland notes. analyst. “What you’re doing is adding a lot of complexity times the bandwidth. Virtualization allows fewer servers to And by eliminating the need for an aggregation layer of and a lot of cost, extra heat and harder troubleshooting process more applications, thereby eliminating the need switching, there are fewer switches to operate, support, for marginal value at best.” • 7 of 22 Data Center Derby 10G Ethernet Shakes Remaking the Soothing Data Center A Bridge to Data Center as Resources Heats Up Net Design Data Center Headaches Terabit Ethernet Ethernet Switch Driver
  • 8. NETWORKING REDEFINED Sponsored by REMAKING THE DATA CENTER By Robin Layland • Network World are as compelling as the virtualization story. Storage has Low-latency switches are the foundation for building a unified-fabric data center been moving to IP for years, with a significant amount of storage already attached via NAS or iSCSI devices. The A major transformation is sweeping over data center switch- porting server virtualization along with merging the separate cost-savings and flexibility gains are well-known. ing. Over the next few years the old switching equipment IP and storage networks is just too great. Supporting these The move now is to directly connect Fibre Channel stor- needs to be replaced with faster and more flexible switches. changes is impossible without the next evolution in switching. age to the IP switches, eliminating the separate Fibre Chan- Three factors are driving the transformation: server vir- The good news is that the switching transformation will take nel storage-area network. Moving Fibre Channel to the IP tualization, direct connection of Fibre Channel storage to years, not months, so there is still time to plan for the change. infrastructure is a cost-saver. The primary way is by reducing the IP switching and enterprise cloud computing. the number of adapters on a server. Currently servers need They all need speed and higher throughput to succeed but The drivers an Ethernet adapter for IP traffic and a separate storage unlike the past it will take more than just a faster interface. The story of how server virtualization can save money is well- adapter for the Fibre Channel traffic. Guaranteeing high This time speed needs to be coupled with lower latency, aban- known. Running a single application on a server commonly availability means that each adapter needs to be duplicated, doning spanning tree and supporting new storage protocols. results in utilization in the 10% to 30% range. Virtualization resulting in four adapters per server. A unified fabric reduces Without these changes, the dream of a more flexible and allows multiple applications to run on the server within their the number to two since the IP and Fibre Channel or iSCSI lower-cost data center will remain just a dream. Networking own image, allowing utilization to climb into the 70% to 90% traffic share the same adapter. The savings grow since halv- in the data center must evolve to a unified switching fabric. range. This cuts the number of physical servers required; saves ing the number of adapters reduces the number of switch Times are hard, money is tight; can a new unified-fabric re- on power and cooling and increases operational flexibility. ports and the amount of cabling. It also reduces operational ally be justified? The answer is yes. The cost-savings from sup- The storage story is not as well-known, but the savings costs since there is only one network to maintain. 8 of 22 Data Center Derby 10G Ethernet Shakes Remaking the Soothing Data Center A Bridge to Data Center as Resources Heats Up Net Design Data Center Headaches Terabit Ethernet Ethernet Switch Driver
The third reason is internal or enterprise cloud computing. In the past, when a request reached an application, the work stayed within that server and application. Over the years, this way of designing and implementing applications has changed. Increasingly, when a request arrives at the server, the application may do only a small part of the work; it distributes the rest to other applications in the data center, making the data center one big internal cloud.

Attaching storage directly to this IP cloud only increases the number of critical flows that pass over the switching cloud. A simple example shows why low latency is a must. If the action took place within the server, each storage get would take only a nanosecond to a few microseconds to perform. With most of the switches installed in enterprises, the get can take 50 to 100 microseconds to cross the cloud, which, depending on the number of calls, adds significant delays to processing. If a switch discards the packet, the response can be even longer. It becomes critical that the cloud provide very low latency with no dropped packets.

The network and switch problem
Why can't the current switching infrastructure handle virtualization, storage and cloud computing? Compared with the rest of the network, the current data center switches provide very low latency, discard very few packets and support 10 Gigabit Ethernet interconnects. The problem is that these new challenges need even lower latency, better reliability, higher throughput and support for the Fibre Channel over Ethernet (FCoE) protocol.

The first challenge is latency. The problem with the current switches is that they are based on a store-and-forward architecture. Store-and-forward is generally associated with applications such as e-mail, where the mail server receives the mail, stores it on a disk and then later forwards it to where it needs to go. Store-and-forward is considered very slow. How are Layer 2 switches, which are very fast, store-and-forward devices? Switches have large queues. When a switch receives a packet, it puts it in a queue, and when the packet reaches the front of the queue, it is sent. Putting the packet in a queue is a form of store-and-forward. A large queue has been sold as an advantage since it means the switch can handle large bursts of data without discards.

The result of all the queues is that it can take 80 microseconds or more for a large packet to cross a three-tier data center. The math works as follows. It can take 10 microseconds to go from the server to the switch. Each switch-to-switch hop adds 15 microseconds and can add as much as 40 microseconds. For example, assume two servers are at the "far" ends of the data center. A packet leaving the requesting server travels to the top-of-rack switch, then the end-of-row switch and onward to the core switch. The hops are then repeated down to the destination server. That is four switch-to-switch hops for a minimum of 60 microseconds. Add in the 10 microseconds to reach each server and the total is 80 microseconds. The delay can increase to well over 100 microseconds, and it becomes a disaster if a switch has to discard the packet, requiring the TCP stack on the sending server to time out and retransmit it.

Latency of 80 microseconds each way was acceptable in the past when response time was measured in seconds, but with the goal of providing sub-second response time, the microseconds add up. An application that requires a large chunk of data can take a long time to get it when each get can retrieve only 1,564 bytes at a time. A few hundred round trips add up. The impact is not only on response time. The application has to wait for the data, resulting in an increase in the elapsed time it takes to process the transaction. That means that while a server is doing the same amount of work, there is an increase in the number of concurrent tasks, lowering the server's overall throughput.
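The hop arithmetic above is easy to reproduce. The following is a minimal sketch in Python using the per-hop figures cited in the article (10 microseconds for the server link, roughly 15 microseconds per store-and-forward hop); the cut-through and two-tier values are assumptions added for comparison, not measurements.

def one_way_latency_us(switch_hops, per_hop_us, server_link_us=10.0):
    # Server uplink at each end plus the switch-to-switch hops in between.
    return 2 * server_link_us + switch_hops * per_hop_us

# Three-tier path: ToR -> end-of-row -> core -> end-of-row -> ToR
# is four switch-to-switch hops.
legacy = one_way_latency_us(switch_hops=4, per_hop_us=15)       # 80 microseconds
cut_through = one_way_latency_us(switch_hops=4, per_hop_us=3)   # 32 microseconds
flattened = one_way_latency_us(switch_hops=2, per_hop_us=3)     # 26 microseconds

print(f"Three-tier, store-and-forward: {legacy:.0f} us")
print(f"Three-tier, cut-through:       {cut_through:.0f} us")
print(f"Two-tier, cut-through:         {flattened:.0f} us")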
The new generation of switches overcomes the large latency of the past by eliminating or significantly reducing queues and speeding up their own processing. The words used to describe it are lossless transport, non-blocking, low latency, guaranteed delivery, multipath and congestion management. Lossless transport and guaranteed delivery mean they don't discard packets. Non-blocking means they either don't queue the packet or have a queue length of one or two.

The first big change in the new switches is the way they forward packets. Instead of a store-and-forward design, a cut-through design is generally used, which significantly reduces or eliminates queuing inside the switch. A cut-through design can reduce switch time from 15 to 50 microseconds to two to four microseconds. Cut-through is not new, but it has always been more complex and expensive to implement. It is only now, with the very low-latency requirement, that switch manufacturers can justify spending the money to implement it.

The second big change is abandoning spanning tree within the data center switching fabric. The new generation of switches uses multiple paths through the switching fabric to the destination. They constantly monitor potential congestion points, or queuing points, and pick the fastest and best path at the time the packet is being sent. Currently all Layer 2 switches determine the "best" path from one endpoint to another using the spanning tree algorithm. Only one path is active; the other paths through the fabric to the destination are used only if the "best" path fails. Spanning tree has worked well since the beginning of Layer 2 networking, but "only one path" is not good enough in a non-queuing and non-discarding world.

A current problem with the multipath approach is that there is no standard for how switches do it. Work is underway within standards groups to correct this, but for the early versions each vendor has its own solution. A significant amount of the work falls under a standard referred to as Data Center Bridging (DCB). The reality is that for the immediate future, mixing and matching different vendors' switches within the data center is not possible. Even when DCB and other standards are finished there will be many interoperability problems to work out, so a single-vendor solution may be the best strategy.

Speed is still part of the solution. The new switches are built for very dense deployments of 10 Gigabit and are prepared for 40/100 Gigabit. The result of all these changes reduces the trip time mentioned above from 80 microseconds to less than 10 microseconds, providing the latency and throughput needed to make Fibre Channel and cloud computing practical.
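The per-packet path choice described above, monitoring congestion and picking the fastest path at send time instead of relying on a single spanning-tree path, can be pictured with a toy selection function. This is an illustrative sketch only; the data structures and tie-breaking rule are assumptions, not any vendor's or the DCB standards' actual algorithm.

from dataclasses import dataclass

@dataclass
class FabricPath:
    name: str
    hops: int
    queued_frames: int  # current congestion estimate along the path

def pick_path(paths):
    # Prefer the least-congested path, breaking ties on hop count,
    # re-evaluated for every frame rather than fixed once by spanning tree.
    return min(paths, key=lambda p: (p.queued_frames, p.hops))

paths = [
    FabricPath("via-spine-1", hops=2, queued_frames=12),
    FabricPath("via-spine-2", hops=2, queued_frames=0),
    FabricPath("via-spine-3", hops=3, queued_frames=1),
]
print(pick_path(paths).name)  # via-spine-2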
Virtualization curve ball
Server virtualization creates additional problems for the current data center switching environment. The first problem is that each physical server has multiple virtual images, each with its own media access control (MAC) address. This causes operational complications and is a real problem if two virtual servers communicate with each other. The easiest answer is to put a soft switch in the VM, which all the VM vendors provide. This allows the server to present a single MAC address to the network switch and perform the functions of a switch for the VMs in the server.

There are several problems with this approach. The soft switch needs to enforce policy and access control lists (ACLs), make sure VLANs are followed and implement security. For example, if one image is compromised, it should not be able to freely communicate with the other images on the server if policy says they should not be talking to each other. If they were on different physical servers, the network would make sure policy and security procedures were followed. The simple answer is that the group that maintains the server and the soft switch needs to make sure all the network controls are followed and in place. The practical problem with this approach is the coordination required between the two groups and the level of networking knowledge required of the server group. Having the network group maintain the soft switch in the server creates the same set of problems.

Today, the answer is to learn to deal with the confusion, develop procedures to make the best of the situation and hope for the best. A variation on this is to use a soft switch from the same vendor as the switches in the network. The idea is that coordination will be easier since the switch vendor built it and has hopefully made the coordination easier. Cisco is offering this approach with VMware.

The third solution is to have all the communications from the virtual server sent to the network switch. This would simplify the switch in the VM since it would not have to enforce policy, tag packets or worry about security. The network switch would perform all these functions as if the virtual servers were directly connected to it and this was the first hop into the network. This approach has appeal since it keeps all the well-developed processes in place and restores clear accountability for who does what. The problem is that spanning tree does not allow a port to receive a packet and send it back on the same port. The answer is to eliminate the spanning tree restriction against sending a message back over the port it came from.

Spanning tree and virtualization
The second curve ball from virtualization is ensuring that there is enough throughput to and from the server and that the packet takes the best path through the data center. As the number of processors on the physical server keeps increasing, the number of images increases, with the result that increasingly large amounts of data need to be moved in and out of the server. The first answer is to use 10 Gigabit and eventually 40 or 100 Gigabit. This is a good answer but may not be enough, since the data center needs to create a very low-latency, non-blocking fabric with multiple paths. Using both adapters, attached to different switches, allows multiple paths along the entire route, helping to ensure low latency.

Once again spanning tree is the problem. The solution is to eliminate spanning tree, allowing both adapters to be used. The reality is that the new generation of Layer 2 switches in the data center will act more like routers, implementing their own version of OSPF at Layer 2.
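The policy and ACL enforcement described above, whether it lives in the soft switch or is handed off to the external switch, boils down to a check on every VM-to-VM frame. The sketch below is purely illustrative; the VLAN names and policy table are invented, not drawn from any product.

ALLOWED = {
    # (source VLAN, destination VLAN) pairs that policy permits
    ("web", "app"),
    ("app", "db"),
}

def permit(src_vlan, dst_vlan):
    # Allow traffic within a VLAN or between explicitly permitted pairs.
    return src_vlan == dst_vlan or (src_vlan, dst_vlan) in ALLOWED

# Two VMs sharing one physical host get the same enforcement the network
# would have applied had they been on different servers.
print(permit("web", "app"))  # True
print(permit("web", "db"))   # False: a compromised web VM cannot reach the database tier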
Storage
The last reason new switches are needed is Fibre Channel storage. Switches need to support the ability to run storage traffic over Ethernet/IP, such as NAS, iSCSI or FCoE. Besides adding support for the FCoE protocol, they will also be required to abandon spanning tree and enable greater cross-sectional bandwidth. For example, Fibre Channel requires that both adapters to the server be active and carrying traffic, something the switch's spanning tree algorithm doesn't support. Currently the FCoE protocol is not finished and vendors are implementing a draft version. The good news is that it is getting close to finalization.

Current state of the market
How should the coming changes in the data center affect your plan? The first step is to determine how much of your traffic needs very low latency right now. If cloud computing, migrating critical storage or a new low-latency application such as algorithmic stock trading is on the drawing board, then it is best to start the move to the new architecture now. Most enterprises don't fall in that group yet, but they will in this year or next and thus have time to plan an orderly transformation.

The transformation can also be taken in steps. For example, one first step would be to migrate Fibre Channel storage onto the IP fabric and immediately reduce the number of adapters on each server. This can be accomplished by replacing just the top-of-the-rack switch. The storage traffic flows over the server's IP adapters to the top-of-the-rack switch, which sends the Fibre Channel traffic directly to the SAN. The core and end-of-row switches do not have to be replaced. The top-of-the-rack switch supports having both IP adapters active for storage traffic only, with spanning tree's requirement of only one active adapter applying to just the data traffic. Brocade and Cisco currently offer this option.

If low latency is needed, then all the data center switches need to be replaced. Most vendors have not yet implemented the full range of features needed to support the switching environment described here. To understand where a vendor is, it is best to break it down into two parts. The first part is whether the switch can provide very low latency. Many vendors, such as Arista Networks, Brocade, Cisco, Extreme, Force10 and Voltaire, have switches that can. The second part is whether the vendor can overcome the spanning tree problem along with support for dual adapters and multiple pathing with congestion monitoring. As is normally the case, vendors are split on whether to wait until standards are finished before providing a solution or to provide an implementation based on their best guess of what the standards will look like. Cisco and Arista Networks have jumped in early and provide the most complete solutions. Other vendors are waiting for the standards to be completed in the next year before releasing products.

What if low latency is only a future requirement? What is the best plan? Whenever the data center switches are scheduled for replacement, they should be replaced with switches that can support the move to the new architecture and provide very low latency. This means it is very important to understand the vendor's plans and the migration schemes that will move you to the next-generation unified fabric.

Layland is head of Layland Consulting. He can be reached at robin@layland.com.
STANDARDS FOR SOOTHING HEADACHES IN THE DATA CENTER
By Jim Duffy • Network World

Emerging IEEE specifications aim to address serious management issues raised by the explosion of virtual machines

Cisco, HP and others are waging an epic battle to gain control of the data center, but at the same time they are joining forces to push through new Ethernet standards that could greatly ease management of those increasingly virtualized IT nerve centers.

The IEEE 802.1Qbg and 802.1Qbh specifications are designed to address serious management issues raised by the explosion of virtual machines in data centers that traditionally have been the purview of physical servers and switches. In a nutshell, the emerging standards would offload significant amounts of policy, security and management processing from virtual switches on network interface cards (NIC) and blade servers and put it back onto physical Ethernet switches connecting storage and compute resources.

The IEEE draft standards boast a feature called Virtual Ethernet Port Aggregation (VEPA), an extension to physical and virtual switching designed to eliminate the large number of switching elements that need to be managed in a data center. Adoption of the specs would make management easier for server and network administrators by requiring fewer elements to manage, and fewer instances of element characteristics, such as switch address tables, security and service attribute policies, and configurations, to manage.

"There needed to be a way to communicate between the hypervisor and the network," says Jon Oltsik, an analyst at Enterprise Strategy Group. "When you start thinking about the complexities associated with running dozens of VMs on a physical server the sophistication of data center switching has to be there."

But adding this intelligence to the hypervisor or host would add a significant amount of network processing overhead to the server, Oltsik says. It would also duplicate the task of managing media access control address tables, aligning policies and filters to ports and/or VMs, and so forth.

"If switches already have all this intelligence in them, why would we want to do this in a different place?" Oltsik notes.
VEPA does its part by allowing a physical end station to collaborate with an external switch to provide bridging support between multiple virtual end stations and VMs, and external networks. This would alleviate the need for virtual switches on blade servers to store and process every feature, such as security, policy and access control lists (ACLs), resident on the external data center switch.

Diving into IEEE draft standard details
Together, the 802.1Qbg and bh specifications are designed to extend the capabilities of switches and end-station NICs in a virtual data center, especially with the proliferation and movement of VMs. Citing data from Gartner, officials involved in the IEEE's work on bg and bh say 50% of all data center workloads will be virtualized by 2012.

Some of the other vendors involved in the bg and bh work include 3Com, Blade Network Technologies, Brocade, Dell, Extreme Networks, IBM, Intel, Juniper Networks and QLogic. While not the first IEEE specifications to address virtual data centers, bg and bh are amendments to the IEEE 802.1Q specification for virtual LANs and are under the purview of the organization's 802.1 Data Center Bridging and Interworking task groups.

The bg and bh standards are expected to be ratified around mid-2011, according to those involved in the IEEE effort, but pre-standard products could emerge late this year. Specifically, bg addresses edge virtual bridging: an environment where a physical end station contains multiple virtual end stations participating in a bridged LAN. VEPA allows an external bridge, or switch, to perform inter-VM hairpin forwarding of frames, something standard 802.1Q bridges or switches are not designed to do.

"On a bridge, if the port it needs to send a frame on is the same it came in on, normally a switch will drop that packet," says Paul Congdon, CTO at HP ProCurve, vice chair of the IEEE 802.1 group and a VEPA author. "But VEPA enables a hairpin mode to allow the frame to be forwarded out the port it came in on. It allows it to turn around and go back."
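Congdon's hairpin mode can be pictured as a one-line change to the reflection check a bridge applies before forwarding. The sketch below is a deliberately simplified illustration of that idea, not an implementation of the 802.1Qbg draft; the port numbers are invented.

def forward(in_port, out_port, hairpin_enabled):
    if out_port != in_port:
        return f"forward out port {out_port}"
    # The destination sits behind the same port the frame arrived on,
    # for example two VMs behind one server uplink.
    if hairpin_enabled:
        return f"hairpin back out port {in_port}"
    return "drop (standard 802.1Q reflection check)"

# VM A and VM B share the uplink on port 5.
print(forward(in_port=5, out_port=5, hairpin_enabled=False))  # drop
print(forward(in_port=5, out_port=5, hairpin_enabled=True))   # hairpin back out port 5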
VEPA does not modify the Ethernet frame format but only the forwarding behavior of switches, Congdon says. But VEPA by itself was limited in its capabilities, so HP combined its VEPA proposal with Cisco's VN-Tag proposal for server/switch forwarding, management and administration to support the ability to run multiple virtual switches and multiple VEPAs simultaneously on the endpoint.

This required a channeling scheme for bg, which is based on the VN-Tag specification created by Cisco and VMware to have a policy follow a VM as it moves. This multichannel capability attaches a tag to the frame that identifies which VM the frame came in on.

But another extension was required to allow users to deploy remote switches, instead of those adjacent to the server rack, as the policy-controlling switches for the virtual environment. This is where 802.1Qbh comes in: It allows edge virtual bridges to replicate frames over multiple virtual channels to a group of remote ports. This will enable users to cascade ports for flexible network design, and make more efficient use of bandwidth for multicast, broadcast and unicast frames.

The port extension capability of bh lets administrators choose the switch to which they want to delegate policies, ACLs, filters, QoS and other parameters for VMs. Port extenders will reside in the back of a blade rack or on individual blades and act as a line card of the controlling switch, says Joe Pelissier, technical lead at Cisco.

"It greatly reduces the number of things you have to manage and simplifies management because the controlling switch is doing all of the work," Pelissier says.
What's still missing from bg and bh is a discovery protocol for autoconfiguration, Pelissier says. Some in the 802.1 group are leaning toward using the existing Link Layer Discovery Protocol (LLDP), while others, including Cisco and HP, are inclined to define a new protocol for the task.

"LLDP is limited in the amount of data it can carry and how quickly it can carry that data," Pelissier says. "We need something that carries data in the range of 10s to 100s of kilobytes and is able to send the data faster rather than one 1,500-byte frame a second. LLDP doesn't have fragmentation capability either. We want to have the capability to split the data among multiple frames."

Cisco, HP say they're in sync
Cisco and HP are leading proponents of the IEEE effort despite the fact that Cisco is charging hard into HP's traditional server territory while HP is ramping up its networking efforts in an attempt to gain control of data centers that have been turned on their heads by virtualization technology.

Cisco and HP say their VEPA and VN-Tag/multichannel and port extension proposals are complementary despite reports that they are competing techniques to accomplish the same thing: reducing the number of managed data center elements and defining a clear line of demarcation between NIC, server and switch administrators when monitoring VM communications.

"This isn't the battle it's been made out to be," Pelissier says.

Though Congdon acknowledges he initially proposed VEPA as an alternative to Cisco's VN-Tag technique, he says the two together present "a nice layered architecture that builds upon one another, where virtual switches and VEPA form the lowest layer of implementation, and you can move all the way to more complex solutions such as Cisco's VN-Tag."

And the proposals seem to have broad industry support. "We do believe this is the right way to go," says Dhritiman Dasgupta, senior manager of data center marketing at Juniper. "This is putting networking where it belongs, which is on networking devices. The network needs to know what's going on."
A BRIDGE TO TERABIT ETHERNET
By Jim Duffy • Network World

With 40/100G Ethernet products on the way, Ethernet experts look ahead to Terabit Ethernet standards and products by 2015

IT managers who are getting started with 10 Gigabit Ethernet in their LANs and data centers, or even pushing its limits, don't have to wait for higher-speed connectivity. Pre-standard 40 Gigabit and 100 Gigabit Ethernet products, including server network interface cards, switch uplinks and switches, have hit the market. And standards-compliant products are expected to ship in the second half of this year, not long after the expected June ratification of the 802.3ba standard.

The IEEE, which began work on the standard in late 2006, is expected to define two different speeds of Ethernet for two different applications: 40G for server connectivity and 100G for core switching.

Despite the global economic slowdown, global revenue for 10G fixed Ethernet switches doubled in 2008, according to Infonetics. And there is pent-up demand for 40 Gigabit and 100 Gigabit Ethernet, says John D'Ambrosia, chair of the 802.3ba task force in the IEEE and a senior research scientist at Force10 Networks.

"There are a number of people already who are using link aggregation to try and create pipes of that capacity," he says. "It's not the cleanest way to do things ... [but] people already need that capacity."

D'Ambrosia says even though 40/100G Ethernet products haven't arrived yet, he's already thinking ahead to Terabit Ethernet standards and products by 2015. "We are going to see a call for a higher speed much sooner than we saw the call for this generation" of 10/40/100G Ethernet, he says.

According to the 802.3ba task force, bandwidth requirements for computing and core networking applications are growing at different rates, necessitating the definition of two distinct data rates for the next generation of Ethernet. Servers, high-performance computing clusters, blade servers, storage-area networks and network-attached storage all currently make use of 1G and 10G Ethernet, with 10G growing significantly in 2007 and 2008.

I/O bandwidth projections for server and computing applications, including server traffic aggregation, indicate that there will be a significant market potential for a 40G Ethernet interface, according to the task force. Ethernet at 40G will provide approximately the same cost balance between the LAN and the attached stations as 10G Ethernet, the task force believes.

Core networking applications have demonstrated the need for bandwidth beyond existing capabilities and beyond the projected bandwidth requirements for computing applications.
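D'Ambrosia's point about link aggregation being "not the cleanest way" to build a 40G pipe comes down to how a LAG spreads traffic: each flow is hashed onto one member link, so the bundle offers 40G in aggregate but any single flow is still capped at 10G. The sketch below illustrates that behavior with an invented hash and link count; it is not a specific switch's LAG implementation.

import hashlib

LAG_MEMBERS = 4          # four 10G links bundled together
LINK_CAPACITY_GBPS = 10

def member_for_flow(src, dst, src_port, dst_port):
    # Pick a member link from the flow's 4-tuple, as a typical LAG hash would.
    key = f"{src}:{dst}:{src_port}:{dst_port}".encode()
    return int(hashlib.sha256(key).hexdigest(), 16) % LAG_MEMBERS

# Every packet of this one flow lands on the same member link...
link = member_for_flow("10.0.0.1", "10.0.0.2", 49152, 2049)
print(f"flow pinned to member {link}, capped at {LINK_CAPACITY_GBPS}Gbps")
# ...so the flow tops out at 10G even though the bundle is nominally 40G.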
Switching, routing and aggregation in data centers, Internet exchanges and service provider peering points, and high-bandwidth applications such as video on demand and high-performance computing need a 100 Gigabit Ethernet interface, according to the task force.

"Initial applications (of 40/100G Ethernet) are already showing up, in stacking and highly aggregated LAN links, but the port counts are low," says George Zimmerman, CTO of SolarFlare, a maker of Ethernet physical layer devices.

Zimmerman says 10G is just now taking off in the access layer of large networks and will eventually move to the client side, creating the need for 40/100G in the distribution layer and the network core. He says the application of 100 Gigabit Ethernet in the core is imminent, and is about two years away in the distribution layer. "Both will be driven by and drive 10G adoption in the access and client end of the network, where today the numbers are still much smaller than the potential," he says.

Spec designed for seamless upgrades
The 802.3ba specification will conform to the full-duplex operating mode of the IEEE 802.3 Media Access Control (MAC) layer, according to the task force. As was the case in previous 802.3 amendments, new physical layers specific to either 40Gbps or 100Gbps operation will be defined.

By employing the existing 802.3 MAC protocol, 802.3ba is intended to maintain full compatibility with the installed base of Ethernet nodes, the task force says. The spec is also expected to use "proven and familiar media," including optical fiber, backplanes and copper cabling, and to preserve existing network architecture, management and software, in an effort to keep design, installation and maintenance costs to a minimum.

With initial interoperability testing commencing in late 2009, public demonstrations will emerge in 2010, and certification testing will start once the standard is ratified, says Brad Booth, chair of the Ethernet Alliance.

The specification and formation of the 40/100G task force did not come without some controversy, however. Participants in the Higher Speed Study Group (HSSG) within the IEEE were divided on whether to include 40G Ethernet as part of their charter or stay the course with 100 Gigabit Ethernet. After about a month, though, the HSSG agreed to work on a single standard that encompassed both 40G and 100G.

"In a sense, we were a little bit late with this," D'Ambrosia says. "By our own projections, the need for 100G was in the 2010 timeframe. We should have been done with the 100G [spec] probably in the 2007-08 timeframe, at the latest. We actually started it late, which is going to make the push for terabit seem early by comparison. But when we look at the data forecasts that we're seeing, it looks to be on cue."

Driving demand for 40/100G Ethernet are the same drivers currently stoking 10G: data center virtualization and storage, and high-definition videoconferencing and medical imaging. Some vendors are building 40/100G Ethernet capabilities into their products now.

Vendors prepare for 100 Gigabit Ethernet
Cisco's Nexus 7000 data center switch, which debuted in early 2009, is designed for future delivery of 40/100G Ethernet.

"We have a little more headroom, which isn't bad to have when you look at future Ethernet speed transitions coming in the market," says Doug Gourlay, senior director of data center marketing and product management at Cisco. "We're pretty early advocates of the 100G effort in the IEEE.
"[But] the earliest you'll see products from any company that are credible deliveries and reasonably priced: second half of 2010 onward for 40/100G," he adds.

Verizon Business offers 10G Ethernet LAN and Ethernet Virtual Private Line services to customers in 100 U.S. metro markets. Verizon Business also offers "10G-capable" Ethernet Private Line services. The carrier has 40G Ethernet services on its five-year road map but no specific deployment dates, says Jeff Schwartz, group manager of global Ethernet product marketing. Instead, Verizon Business has more 10G Ethernet access services on tap.

"We want to get to 100G," Schwartz says. "40G may be an intermediary step." Once Verizon Business moves its backbone architecture toward 40/100G, products and services will follow, he says.

Spirent Communications, a maker of Ethernet testing gear, offers 40G Ethernet testing modules, with 100 Gigabit Ethernet modules planned for release in early 2010, says Tim Jefferson, general manager of the converged core solutions group at Spirent. Jefferson says one of the caveats users should be aware of as they migrate from 10G to 40/100G Ethernet is the need to ensure precise clocking synchronization between systems, especially between equipment from different vendors. Imprecise clocking between systems at 40/100G, and even at 10G, can increase latency and packet loss, Jefferson says.

"This latency issue is a bigger issue than most people anticipate," he says. "At 10G, especially at high densities, the specs allow for a little variance for clocks. As you aggregate traffic into 10G ports, just the smallest difference in the clocks between ports can cause high latency and packet loss. At 40G, it's an order of magnitude more important than it is for 10G and Gig.

"This is a critical requirement in data centers today because a lot of the innovations going on with Ethernet and a lot of the demand for all these changes in data centers are meant to address lower latencies," Jefferson adds.

Cabling challenges
Another challenge is readying the cabling infrastructure for 40/100G, experts say. Ensuring the appropriate grade and length of fiber is essential to smooth, seamless operation.

"The big consideration is, what's a customer's cabling installation going to look like and what are they looking for to be able to handle that," Booth says. "They are probably going to need to have a parallel fiber capability."

"The recommendations we're making to customers on their physical plant today are designed to take them from 1G to 10G; 10G to a unified fabric; and then address future 40G," Cisco's Gourlay says.
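Jefferson's clocking caveat can be put in rough numbers: if a sending port's clock runs slightly fast relative to the port draining it, the surplus bits pile up in a buffer until it overflows and frames are dropped. The sketch below is a back-of-the-envelope illustration; the line rates, parts-per-million offset and buffer size are assumed values, not figures from Spirent or the 802.3 specification.

def seconds_to_overflow(line_rate_bps, offset_ppm, buffer_bytes):
    # Surplus arrival rate created by the clock offset, in bits per second,
    # assuming sustained line-rate traffic with no idle gaps to absorb it.
    excess_bps = line_rate_bps * (offset_ppm / 1_000_000)
    return (buffer_bytes * 8) / excess_bps

# 100 ppm offset into 1 MB of packet buffer
print(f"10G: {seconds_to_overflow(10e9, 100, 1_000_000):.1f} s to overflow")
print(f"40G: {seconds_to_overflow(40e9, 100, 1_000_000):.1f} s to overflow")
# The same offset consumes the same buffer four times as fast at 40G.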
The proposed physical interfaces (PHY) for 40G Ethernet include a range to cover distances inside the data center and up to 100 meters, to accommodate a range of server form factors, including blade, rack and pedestal, according to the Ethernet Alliance. The 100 Gigabit Ethernet rate will include distances and media appropriate for data center, as well as service provider interconnection for intra-office and inter-office applications, according to the organization.

The proposed PHYs for 40G Ethernet are 1 meter backplane, 10 meter copper and 100 meter multimode fiber; and 10 meter copper, 100 meter multimode, and 10 kilometer and 40 kilometer single-mode fiber for 100 Gigabit Ethernet.
DATA CENTER AS ETHERNET SWITCH DRIVER
By Jim Duffy • Network World

How next-generation data center initiatives shape the LAN switching market

2010 promises to be an interesting year in the enterprise LAN switching market. With the exception of Avaya, next-generation data center initiatives are driving the LAN switching market and its consolidation. And they are all intended to compete more intensely with Cisco, which owns 70% of the Ethernet switching market but still has an insatiable appetite for growth.

"Big data center vendors are driving LAN switching decisions, and purchases," says Zeus Kerravala, an analyst with The Yankee Group. "Where innovation's been needed is in the data center."

"Innovation is being driven in the data center," says Steve Schuchart of Current Analysis. The drive to automate the data center is making the all-in-one buy from a single large vendor more attractive to customers, he says.

Indeed, the LAN switching market is no longer "Cisco and the Seven Dwarves," with seven companies all vying for the 25% to 30% share Cisco doesn't own. The LAN switching market is now steered by Cisco, IBM, HP and Dell, and perhaps Brocade: data center networking, server and storage stalwarts looking to take their customers to the next-generation infrastructure of unified fabrics, virtualization and the like.

Data center deployments of 10G Ethernet are helping to drive the market, according to Dell'Oro Group. The firm expects the global Ethernet switching market to grow modestly in 2010, to $16.3 billion from $15.6 billion in 2009. This is down considerably, though, from the $19.3 billion market in 2008, Dell'Oro notes.

And pricing pressure is expected to increase, according to a Nov. 19, 2009, Goldman Sachs survey of 100 IT executives on IT spending. With its 3Com buy, HP can now offer a core data center switch in addition to the enterprise switches it sells at roughly half the price of comparable Cisco products, the survey notes. And with Juniper ramping up its IBM and Dell OEM channels, Cisco's market share will be squeezed if profit margins are to be maintained, the survey suggests.

Another carrot for Juniper and its high-performance networking direction will be buying patterns. The Goldman Sachs survey found that most respondents base their purchases on price/performance over architectural road map.

Where does all this jockeying among the top tier leave Extreme, Enterasys, Force10 and the rest of the pack? They've always claimed price/performance advances over Cisco but never gained any meaningful market share.
And in terms of marriages, Enterasys united with Siemens Enterprise Communications to go squarely after the secure wired/wireless unified communications opportunity.

Force10 is merging with Turin Networks, a provider of wireless backhaul, Carrier Ethernet and converged access systems for service providers. Force10 seems to be gravitating more and more to the carrier cloud, but it is still a high-performance data center play, though one that was left behind by the data center systems mainstays.

That leaves Extreme Networks virtually alone in LAN switching. The company has been extending its product line for data center-specific applications, such as virtualization and 10G Ethernet. But analysts say it will have little relevance beyond Extreme's installed base.

"What problem is Extreme solving that nobody else is?" Kerravala asks. "There just isn't a differentiator compelling enough."

Extreme begs to differ. "Extreme Networks delivers a network that requires fewer resources to operate and acquire while offering unique capabilities to scale for future requirements and changing demands," says Chief Marketing Officer Paul Hooper. "We achieve this through the delivery of a consistent Ethernet portfolio, stretching from the edge of the network to the core, all powered by a single OS, ExtremeXOS. Extreme's network platform also enables organizations to migrate their data centers from physical to virtual to cloud networks. The benefit is that enterprises can smoothly transition from separate to converged networks and carriers can adopt pure Ethernet-based services."

Switching may not be a differentiator for Avaya either, after the Nortel deal. Due to the price-sensitive and hotly competitive nature of the LAN switching business, Kerravala believes Avaya will look to part with its acquired Nortel data networking products. Avaya says it will issue a Nortel/Avaya product road map 30 days after the deal's close.

"The best place for Nortel data is in HP/3Com or Brocade, a company looking to expand its customer base," he says.

The best place for everyone else is with a major OEM partner, according to Current Analysis' Schuchart. And if they haven't had much success selling on price/performance, perhaps they should play the architectural road map card.

"For companies that don't have a deal or are not wholly owned by a compute vendor, next year's going to be tough sailing for them," Schuchart says. "There's also a fair amount of room out there for companies who have best-of-breed products, although in a data center moving towards virtualized automation, the standalone providers are going to have a harder time."