{DESCRIPTION} This screen displays the course title, course number, and the author's name and title. It also contains images of Data Center Networking products (left to right): Cisco Nexus 5010/5020 switch modules; IBM b-type r-series 4-slot, 8-slot, and 16-slot Ethernet switches; and IBM j-type m-series 2-slot, 6-slot, and 11-slot Ethernet IP routers. {TRANSCRIPT} Welcome to Introducing Data Center Networking. This is module number XTW01 - Topic 6.
{DESCRIPTION} This screen displays the topic objectives. {TRANSCRIPT} After completing this course, you will be able to: Describe the Data Center Networking environment Explain the concept of Fibre Channel over Ethernet List the four major enhancements in Converged Enhanced Ethernet List the three major types of IBM DCN Products
{DESCRIPTION} This screen displays the topic agenda. {TRANSCRIPT} The agenda for this module is to: Describe the Data Center network environment by discussing the evolution of networking prior to modern Data Centers. Note the challenges of the Data Center Environment. Discuss Fibre Channel over Ethernet at a conceptual level. List some of the enhancements being made to Ethernet in what is known as Converged Enhanced Ethernet. Review the major components of the IBM Data Center Networking product line.
{DESCRIPTION} This screen displays a basic Ethernet topology featuring four desktops interconnected with a coax cable. {TRANSCRIPT} In order to better appreciate the concept of Data Center Networking (DCN), let’s first review how Ethernet networking has progressed through the years. Ethernet began as Network Interface Cards (NICs) in workstations interconnected with coax cabling. The devices used a transmission rate of 10 Mbps. The network was very simple but provided the ability for one workstation to communicate with other devices attached to the network. Later, devices such as Ethernet hubs were introduced, and twisted pair cabling began to replace coax cabling, making cabling much easier to manage. Devices still operated at 10 Mbps.
{DESCRIPTION} This screen illustrates an Ethernet bridge/switch topology featuring eight desktops connected to a bridge or switch, which is connected to an application/file server. {TRANSCRIPT} As the number of devices on the network increased, devices like bridges and switches were developed and added, which enhanced performance by isolating network collisions. Servers were added as a common network resource providing application and file services for all workstations. It was common during this era for an individual department or team to have its own server or group of servers.
{DESCRIPTION} This screen illustrates a secure Ethernet server room topology featuring two groups of eight desktops, each connected to a bridge or switch, which in turn connects to a router; the router connects to another bridge or switch serving a disk storage server and an application/file server. {TRANSCRIPT} As network usage continued to grow and the need for communications between department networks emerged, network routers were added, allowing communications to span the entire user environment. As server processing and storage capacity increased, servers were gradually moved into more secure rooms that provided better cooling and power capabilities. Server management became more closely monitored and controlled. These rooms were commonly called server rooms or server farms. These server farms were really the infant form of what we refer to today as Data Centers. Network devices such as routers and switches also made their way into the server rooms as the number of servers increased. Network speeds also took a step forward, now able to communicate at 100 Mbps.
{DESCRIPTION} This screen illustrates a Top of Rack Ethernet switch topology featuring two rows of racked servers (each populated with sixteen servers) with four Ethernet switches placed at the top of the racks. The switches are connected to two end-of-row switches, which are connected to a router and the user network. {TRANSCRIPT} The use of servers expanded, and therefore the number of servers in use rapidly increased. The physical space for housing servers became a concern, and servers shifted from free-standing or floor models to rack-mountable servers. Ethernet switches were added in the racks to provide network connectivity to the entire rack of servers. These are known as Top of Rack (TOR) switches. Each row of racks would have a larger capacity switch, the End of Row (EOR) switch, to which all of the Top of Rack switches would connect.
{DESCRIPTION} This screen illustrates an FC SAN topology featuring two rows of racked servers (each populated with sixteen servers) with four Ethernet switches placed at the top of the racks and four Fibre Channel switches installed between the servers to support SAN storage. The switches are connected to two end-of-row Ethernet switch modules, which are connected to a router and the user network. {TRANSCRIPT} As storage requirements increased, Storage Area Networks (SANs) were introduced. Instead of the data being stored on disk drives attached internally (or sometimes externally) to each server, data storage was now provided by large storage systems attached to the SAN, and the servers used Host Bus Adapters (HBAs) to communicate with the storage systems. The SAN was typically composed of Fibre Channel switches in each rack, with those switches connected to a larger Fibre Channel switch (or switches) providing access to the needed storage.
{DESCRIPTION} This screen illustrates a redundant switch topology featuring two rows of racked servers (each populated with sixteen servers) with eight Ethernet switches placed at the top of the racks and eight Fibre Channel switches installed between the servers to support SAN storage. The switches are connected to four end-of-row Ethernet switch modules, which are connected to two routers and the user network. {TRANSCRIPT} As users’ access to the data provided by the servers became more and more important, redundancy was implemented in both the Ethernet network and the SAN. Redundant NICs and HBAs were installed in the servers. Redundant sets of Ethernet switches and Fibre Channel switches were also implemented to try to avoid a single device failure causing a server outage for the users.
{DESCRIPTION} This screen displays two topologies. The first illustrates a user workgroup topology featuring eight desktops connected to a bridge or switch, which is connected to an application/file server. The second illustrates a Data Center Network topology that consists of a home, an office, and a mobile or remote office. {TRANSCRIPT} Let’s now consider the changes that have taken place in the user portion of the network. User networks are becoming more diverse. Users are no longer restricted to the simple departmental workgroup network we mentioned earlier. Users now work from home, mobile devices, hotel rooms, and other remote locations.
{DESCRIPTION} This screen illustrates a Data Center Network topology featuring two rows of racked servers (each populated with sixteen servers) with eight Ethernet switches placed at the top of the racks and eight Fibre Channel switches installed between the servers to support SAN storage. The switches are connected to four end-of-row Ethernet switch modules, which are connected to two routers and the user network. {TRANSCRIPT} As Data Centers have grown in size and number over the last several years, many challenges have been realized. Space in the data center has become a premium as data centers have been consolidated into fewer but larger facilities. These larger facilities are under pressure to reduce power and cooling requirements while maximizing space utilization and keeping management costs to a minimum. At the same time, server virtualization, while reducing the number of servers, has increased the volume of traffic for each physical server, placing additional requirements on the performance of the Ethernet switches. Network speeds have increased from 100 Mbps to 1 Gbps and even 10 Gbps, with standards for 40 Gb and 100 Gb Ethernet being developed. Networks in the data center must be highly available, easily adaptable, and scalable to meet the requirements that new applications may impose on the network. Many factors in today’s environment are pushing the consolidation of Data Centers, servers, and infrastructure, but they can be summarized simply: occupy less space, consume less power, produce less heat, simplify management, and reduce Total Cost of Ownership. Server consolidation and server virtualization have been addressing these challenges in the server area of the data center. IBM Data Center Networking products offer solutions to these challenges in the networking portion of the data center.
{DESCRIPTION} This screen displays a single server featuring an Ethernet NIC and a Fibre Channel adapter card. {TRANSCRIPT} One of the emerging technologies in Data Center Networking is Fibre Channel over Ethernet (FCoE), sometimes referred to as Fibre Channel over Converged Enhanced Ethernet (FCoCEE). In a traditional SAN-attached server environment, the server would have an Ethernet NIC for IP data traffic and a separate Fibre Channel (FC) HBA for FC storage traffic.
{DESCRIPTION} This screen displays a single server featuring a Converged Network Adapter card. {TRANSCRIPT} With FCoE in a Converged Enhanced Ethernet environment, a Converged Network Adapter (CNA) carries both IP data traffic and FC storage traffic over the same connection. The CNA provides the function of an Ethernet NIC and an FC HBA.
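The core idea of FCoE is simply that a complete Fibre Channel frame rides inside an ordinary Ethernet frame, identified by the FCoE EtherType 0x8906. The following Python fragment is a simplified illustration of that encapsulation, not the real frame layout: the SOF/EOF delimiters, version bits, padding, and Ethernet FCS are omitted, and the MAC addresses are made up.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType value assigned to FCoE

def fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap an (already-built) Fibre Channel frame in an Ethernet frame.

    Simplified sketch: real FCoE also carries SOF/EOF delimiters,
    a version field, padding, and the Ethernet FCS, all omitted here.
    """
    header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return header + fc_frame

# Hypothetical addresses and payload, purely for illustration:
frame = fcoe_frame(b"\x0e\xfc\x00\x00\x00\x01",
                   b"\x00\x11\x22\x33\x44\x55",
                   b"FC-FRAME-BYTES")
print(frame[12:14].hex())  # EtherType field
```

Because the FC frame travels intact, the converged switch at the SAN boundary can strip the Ethernet wrapper and forward a native FC frame, which is what makes the CNA look like an HBA to the storage network.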
{DESCRIPTION} This screen lists the four Ethernet protocol enhancements. {TRANSCRIPT} Native Fibre Channel has flow control mechanisms that are more robust than the mechanisms typically found in traditional Ethernet. Because Ethernet lacks some of this key functionality, the Ethernet standards are being enhanced to incorporate suitable replacements. Traditional Ethernet was designed to be a best-effort type of network. Frames may get dropped or arrive out of order when the network devices are busy. Higher layer protocols such as TCP/IP have the responsibility for handling events such as a lost frame or frames not arriving in the correct order. Having a protocol such as TCP/IP provide these types of functions adds overhead and impacts performance. We might say that traditional Ethernet is a lossy network, and enhancements are needed to make it a lossless network in order to support protocols such as FCoE that do not have the overhead of higher layer protocols such as TCP/IP. There are four standards that describe the enhancements in Converged Enhanced Ethernet. Priority-based Flow Control (IEEE 802.1Qbb) provides for flow control to be performed on a priority basis. This enables only the traffic that needs to be throttled to be impacted, instead of all of the traffic on the port. Enhanced Transmission Selection (IEEE 802.1Qaz) allows for the allocation of bandwidth among traffic classes. Congestion Notification (IEEE 802.1Qau) provides for end-to-end congestion notification to limit transmission rates and avoid frame loss. Data Center Bridging Capability Exchange (IEEE 802.1Qaz) is a discovery protocol in which network devices can exchange their support for capabilities such as the three prior enhancements.
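To make the Enhanced Transmission Selection idea concrete, the sketch below divides a link's bandwidth among traffic classes in proportion to configured weights. The class names and percentages are hypothetical, and real ETS hardware also redistributes bandwidth that a class is not using; this is only the basic weighted split.

```python
def ets_allocate(link_bw_gbps: float, groups: dict) -> dict:
    """Split link bandwidth among traffic-class groups by weight,
    the basic idea behind Enhanced Transmission Selection
    (IEEE 802.1Qaz). `groups` maps a class name to its weight."""
    total = sum(groups.values())
    return {name: link_bw_gbps * w / total for name, w in groups.items()}

# Hypothetical split of a 10 Gbps converged link:
shares = ets_allocate(10, {"FCoE storage": 60, "IP data": 30, "management": 10})
print(shares)  # {'FCoE storage': 6.0, 'IP data': 3.0, 'management': 1.0}
```

Combined with Priority-based Flow Control pausing only the storage priority when its queue fills, this is what lets loss-sensitive FCoE traffic share a port with ordinary best-effort IP traffic.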
{DESCRIPTION} This screen illustrates a converged Data Center Network topology featuring two rows of racked servers (each populated with sixteen servers) with eight Ethernet switches placed at the top of the racks and two converged switch modules to support SAN storage. The switches are connected to four end-of-row Ethernet switch modules, which are connected to two routers and the user network. {TRANSCRIPT} In a converged Data Center Network utilizing FCoE, the number of switches and the amount of cabling are reduced. In the illustration, Ethernet NICs and FC HBAs have been replaced with Converged Network Adapters, the Fibre Channel switches have been removed from the racks, and the storage traffic now shares the Converged Enhanced Ethernet infrastructure with the IP data traffic. There is less hardware to manage and maintain, fewer switches to power and cool, and less rack space occupied. Converged FCoE switches connecting to the SAN provide the means for the storage traffic to be converted from FCoE to FC and from FC to FCoE formats. IBM DCN products can also provide Power over Ethernet (PoE) for devices that can utilize this option as a power source. This reduces the number of power supply devices by using power provided by the PoE port of the switch.
{DESCRIPTION} This screen provides active HTML links. {TRANSCRIPT} This slide provides the web links to the IBM DCN products. In the slides that follow, products will be referenced that may or may not be available at the time you view this module, and new products and features may have been announced that are not included in this module. Please refer to the information contained at these links for the latest products and features.
{DESCRIPTION} This screen displays images of the IBM e-series Ethernet switches (left to right): J48E, J08E, and J16E. {TRANSCRIPT} This slide shows some of the j-type switch products offered. J-type switches run Juniper Networks’ JUNOS operating system. The J48E (pictured on the left) offers a feature known as Virtual Chassis, in which multiple switches can be configured to operate as a single virtual switch. This makes management much easier, as only the single virtual switch is managed rather than each individual switch. The other switches, the J08E and J16E, are chassis form factors with 8 and 16 card slots available, providing high scalability and port density.
{DESCRIPTION} This screen displays images of the IBM m-series 2-, 6-, and 11-slot Ethernet IP routers. {TRANSCRIPT} This slide shows some of the j-type router products offered. J-type routers run Juniper Networks’ JUNOS operating system. The j-type routers provide high performance IP routing in a variety of chassis sizes. Shown in this picture are the 2-, 6-, and 11-slot j-type routers.
{DESCRIPTION} This screen displays IBM DCN Data Center and Campus b-type (Brocade) switches presented in two columns (top to bottom): IBM r-series 4-, 8-, and 16-slot Ethernet switches; IBM Converged Switch B32 with 24x 10 Gb FCoE and 8x 8 Gbps FC ports; IBM x-series 10 GbE/1 GbE switch module; IBM c-series 24-, 48-, and 50-port copper/fiber Ethernet switches; IBM m-series 4-, 8-, 16-, and 32-slot Ethernet IP routers; IBM g-series 48-port fixed and stackable Ethernet PoE edge switches; and IBM s-series 8- and 16-slot Ethernet edge/distribution switches. {TRANSCRIPT} This slide summarizes IBM’s b-type (Brocade) switch and router products. IBM m-series IP routers provide high performance, multi-service, and Multi-Protocol Label Switching (MPLS) capabilities for the data center core and border layers. IBM r-series switches offer high GbE and 10 GbE port density, scalability, and performance for the data center End-of-Row and aggregation layers. The IBM Converged Switch B32 delivers FCoE and FC performance and port density for data center Top-of-Rack CNA server access. IBM x-series switches offer high 10 GbE/1 GbE dual-speed port density and high performance for the data center Top-of-Rack server access and aggregation layers. IBM c-series switches, in 48- and 50-port models, provide high performance multi-service capabilities (m-series firmware) for the data center Top-of-Rack GbE server access layer. IBM DCN complementary enterprise campus Power-over-Ethernet converged edge models include: g-series switches with high density PoE stackable models for the converged PoE edge (wiring closets), and s-series switches with high density PoE chassis for the converged edge and distribution layers.
{DESCRIPTION} This screen displays images of the Cisco Nexus 5010 and 5020 switch modules. {TRANSCRIPT} This slide shows two Cisco models of the IBM DCN product line. The Cisco Nexus 5000 switches (models 5010 and 5020 shown) provide ports supporting Converged Enhanced Ethernet and 4 Gb Fibre Channel.
{DESCRIPTION} This screen displays the topic objectives. {TRANSCRIPT} Having completed this course, you should be able to: Describe the Data Center Networking environment Explain the concept of Fibre Channel over Ethernet List the four major enhancements in Converged Enhanced Ethernet List the three major types of IBM DCN Products
{DESCRIPTION} This screen displays terms and acronyms. {TRANSCRIPT} This slide presents a glossary of acronyms and terms used in this module.
{DESCRIPTION} This screen displays IBM trademarks. {TRANSCRIPT} The following are trademarks of the International Business Machines Corporation in the United States, other countries or both.
{DESCRIPTION} This screen displays Thank You! {TRANSCRIPT} Thank You!!