Term Paper: Virtual Networking in Virtualized Data Centers and Cloud Computing
Jim Mukerjee, Dec 14, 2009
Throughout the evolution of computing, the methodology, shape and location of
computing facilities have constantly changed mainly as a result of new technologies, and
often also due to shifts in market demand. In the last six decades, computing has
mutated from mainframes to minicomputers to personal computers to hand-held
devices and smart-phones. With each shift, the underlying architecture of computing
has become more distributed. However, now computing is taking on yet another new
shape. It is becoming more centralized again as most of the principal computing
components and functionalities are moving back into the data centers. The new data
centers are designed to be linked reliably over vast grids of ubiquitous networks
resembling a “cloud”, or a collection of clouds. Technical innovation at an
unprecedented pace has fundamentally changed how markets work. All of these
concepts have inspired the next generation of network computing, called Cloud
Computing.
The latest mutation in computing follows naturally from the combination of cheaper
and more powerful processors with faster and more ubiquitous networking
technologies. In effect, the basic components of computing are getting standardized and
commoditized, resulting in the evolution of bigger and more diverse systems, just as we
observe in the process of evolution in nature. This transformation permits computing to
be disaggregated into discrete offerings, termed “services”. As a result, data centers are
becoming factories which produce computing services on an industrial scale, hence the
sobriquet Cloud Computing Services. In simple terms, cloud computing is the use of
information technology as a “service” over the network. Some common examples of
consumer services are web-based e-mail, web-based applications such as social-media
applications, online search, online word-processing, online spreadsheets, online
shopping, online data storage, and Web 2.0 “mash-up” applications. Among other
prominent commercial examples are Software-as-a-Service (SaaS), Infrastructure-as-a-
Service (IaaS), and Platform-as-a-Service (PaaS) [1, 2, 3].
Cloud computing resembles the proven trend of business outsourcing, as both provide
the benefit of leveraging the expertise of others and being cost efficient. But, cloud
computing has additional benefits of flexibility, scalability, elasticity, and reliability.
Amazon Web Services (AWS), Google AppEngine, and Microsoft Azure are prominent
examples among cloud computing service vendors. Market demand for cloud
computing services is substantial: Figure 1 shows IDC’s forecast [4] of worldwide cloud
computing services revenue increasing from $17B in 2009 to $44B in 2013. This
represents growth from 5% of total IT revenue in 2009 to 10% in 2013, a ~26%
compound annual growth rate. But the strategic imperative is the impact of cloud
services on “net new growth” in the IT market, estimated at 27% of the new growth in
IT revenue in 2013. As Geoffrey Moore pointed out in “Crossing the Chasm”, cloud
computing is at the stage of the technology adoption cycle where total revenue may
appear relatively small but the market adoption curve is steep.
Figure 1: Worldwide IT Cloud Computing Services Revenue Forecast, 2009-2013, IDC
As cloud computing moves beyond the hype into mainstream adoption, the inevitable
question from users is “what is cloud computing”? Since it is still in its infancy, there are
diverse opinions and definitions of cloud computing. For the purpose of this paper, we
will use the following short, generic definition by Gartner [5]: “Cloud computing is a
style of computing where massively scalable, and elastic, IT-related capabilities are
provided “as a service” to external customers using Internet technologies”. What is
really new is the combination of an acquisition model based on purchasing services, a
business model based on pay-for-use, an access model based on access over the
Internet on any device, and a technical model based on scalable, dynamic, multi-tenant
and sharable computing infrastructure, implemented in Virtualized Data Centers.
The cloud computing industry represents a large ecosystem of many models, vendors,
and market niches. The National Institute of Standards and Technology, Information
Technology Laboratory [6] has attempted to classify the various cloud computing
approaches in terms of essential characteristics, delivery models, and deployment
models.
Essential Characteristics:
On-demand self-service: A consumer can unilaterally provision computing
capabilities, such as server time and network storage, as needed automatically
without requiring human interaction with each service’s provider.
Ubiquitous network access: Capabilities are available over the network and
accessed through standard mechanisms that promote use by heterogeneous
thick or thin client platforms (e.g., mobile phones, laptops, and PDAs).
Location independent resource pooling: The service provider’s computing
resources are pooled to serve all consumers using a multi-tenant model, with
different physical and virtual resources dynamically assigned and reassigned
according to consumer demand. Examples of resources include storage,
processing, memory, network bandwidth, and virtual machines.
Rapid elasticity: Capabilities can be rapidly and elastically provisioned to
quickly scale up and rapidly released to quickly scale down. To the consumer,
the capabilities available for provisioning often appear to be infinite and can be
purchased in any quantity at any time.
Measured Service: Cloud systems automatically control and optimize use of
resources by using a measuring capability appropriate for the type of service
(e.g., storage, processing, bandwidth, and active user accounts).
Delivery Models:
Cloud Software as a Service (SaaS): The capability provided to the consumer is
to use the provider’s applications running on a cloud infrastructure and
accessible from various client devices through a thin client interface such as a
Web browser (e.g., web-based email).
Cloud Platform as a Service (PaaS): The capability provided to the consumer is
to deploy onto the cloud infrastructure consumer-created applications using
programming languages and tools supported by the provider (e.g., java, python,
.Net).
Cloud Infrastructure as a Service (IaaS): The capability provided to the
consumer is to provision processing, storage, networks, and other fundamental
computing resources where the consumer is able to deploy and run arbitrary
software, which can include operating systems and applications.
Deployment Models:
Private cloud: The cloud infrastructure is operated solely for an organization. It
may be managed by the organization or a third party and may exist on premise
or off premise.
Public cloud: The cloud infrastructure is made available to the general public or
a large industry group and is owned by an organization selling cloud services.
Hybrid cloud: The cloud infrastructure is a composition of two or more clouds
(private or public) that remain unique entities but are bound together by
standardized or proprietary technology that enables data and application
portability (e.g., cloud bursting).
Figure 2 is a graphical representation of internetworking of cloud computing
deployment models.
Figure 2: Schematic diagram of Cloud Computing Deployment Models
To understand the technology needed to industrialize data centers, it helps to look at
the history of electricity. It was only after the widespread deployment of the “rotary
converter”, a device that let otherwise incompatible power systems exchange current,
that different power plants and generators could be assembled into a universal grid [7].
Similarly, in computing, a technology called “virtualization” now allows physically
separate computer systems, and other computing resources, to act as one large
computing power plant, or a grid of computing plants, from which cloud computing
services can be delivered on demand.
Virtualization is not new. The origins of virtualization go back to the 1960s, when IBM
developed the technology so that its customers could make better use of their
mainframes [8]. It lingered in obscurity until VMware applied the technology to
commodity computers in today’s data centers.
Virtualization is a proven software technology that is rapidly transforming the IT
landscape and fundamentally changing the way that people compute. Today’s powerful
computer hardware was designed to run a single operating system and a single
application. This leaves most machines vastly underutilized. Virtualization enables
running multiple virtual machines (VM) on a single physical machine, sharing the
resources of that single computer across multiple environments. Different VMs can run
different operating systems and multiple applications on the same physical computer,
thereby separating the computing workload from the underlying hardware [9]. Once
computers have become disembodied, all sorts of possibilities open up. VMs can be
generated in minutes and moved around while running, perhaps to consolidate on one
physical server to save energy and floor space. A mirrored VM can take over should the
original one fail, and VMs can even be prepackaged and sold as “virtual appliances”.
VMware virtualization works by inserting a thin layer of software directly on the
computer hardware or on a host operating system. This contains a virtual machine
monitor or “hypervisor” that allocates hardware resources dynamically and
transparently. Multiple operating systems run concurrently on a single physical
computer and share hardware resources with each other. By encapsulating an entire
machine, including CPU, memory, operating system, and network devices, a VM is
completely compatible with all standard operating systems, applications, and device
drivers. Each VM contains a complete system, eliminating potential conflicts. Several
operating systems and applications can run at the same time on a single computer,
with each having access to the resources it needs when it needs them. Figure 3 exhibits
a schematic depiction of a VM.
Figure 3: Conceptual diagram of a Virtual Machine
Virtualizing a single physical computer is just the beginning. An entire virtual
infrastructure can be built scaling across hundreds of interconnected physical
computers, storage devices, and networking connections throughout the data centers.
Such virtual infrastructures, called Virtualized Data Centers (VDC), serve as the building
blocks for building Private, Public and Hybrid Clouds to deliver cloud computing services.
There is no need to assign servers, storage, or network bandwidth permanently to each
application. Instead, hardware resources are dynamically allocated when and where
they’re needed within the cloud. The highest-priority applications always have the
necessary resources, without wasting money on excess hardware that would be used
only at peak times. By connecting a private cloud to a public cloud, a hybrid cloud can
be created, providing the flexibility, availability and scalability the business needs to
succeed.
What is a Virtualized Data Center?
A virtual infrastructure enables sharing of physical resources of multiple machines
across the entire infrastructure [10]. Server virtualization enables sharing the resources
of a single physical computer across multiple VMs for maximum efficiency. Resources
are shared across multiple VMs and applications. Business needs are the driving force
behind dynamically mapping the physical resources of the infrastructure to
applications—even as those needs evolve and change. Servers along with network and
storage are aggregated into a unified pool of IT resources that can be utilized by the
applications when and where they’re needed. This resource optimization drives greater
flexibility in the organization and results in lower capital and operational expenditures.
Figure 4 below exhibits a schematic diagram of a VDC.
Figure 4: Schematic diagram of a Virtualized Data Center
A virtual infrastructure consists of the following components:
Bare-metal hypervisors to enable full virtualization of each computer.
Virtual infrastructure services such as resource management and consolidated
backup to optimize available resources among VMs.
Automation solutions that provide special capabilities to optimize a particular IT
process such as provisioning or disaster recovery.
By decoupling the software environment from its underlying hardware infrastructure,
multiple servers, storage devices, and networks can be aggregated into shared pools of
IT resources. These resources can then be utilized dynamically, securely and reliably by
applications as and when needed. This pioneering approach lets customers use building
blocks of inexpensive industry-standard servers to build a self-optimizing data center
and deliver high levels of utilization, availability, automation and flexibility.
Therefore, the key technological enhancements enabling cloud computing are based on
two foundation cornerstones: Virtualization and Networking. Paul Maritz, CEO of
VMware, [11, 12] and Padmasree Warrior, CTO of Cisco, [13] have articulated succinctly
how their respective companies contribute their expertise to make cloud computing a
reality. The reduction of complexity by the “genius of encapsulation” makes
virtualization the main ingredient that provides the flexibility, scalability, elasticity, and
reliability required for internal and external clouds to operate efficiently. Intelligent
networking plays a vital role to address the need for efficient, reliable, and secure data
communications within a federation of internal clouds, external clouds, and existing IT
infrastructures, and also provide solutions to the concerns related to security,
interoperability, and SLA compliance. Increasing demand for cloud computing services
generates growing volumes of data communication traffic, which requires the
simultaneous “multiplexing” of a multitude of enterprise and consumer users; this is
necessary to improve the utilization rates of large virtual data centers and to benefit
from the economies of scale that make cloud computing economically viable [1].
Virtual Switch, Virtual Networking, and Virtualized Distribution Switch
We have discussed the fundamental market requirements for virtualization of key
components of data centers, and the increasingly vital role that networking technology
plays, in the new designs and efficient operation of Virtualized Data Centers (VDC). So,
this section describes the effects of virtualization on networking, and the advent of
Virtualized Distribution Switch (VDS) technology, which is the main focus of this paper
and relevant for this course. To illustrate virtual networking concepts, design and
implementation by VMware and Cisco are provided as real-world examples. The effect
of virtualization on servers and storage devices are also very significant, but are not
elaborated in this paper, except to the extent that they create the need for virtual
networking for effective data communications in a VDC.
Server virtualization allows multiple Operating System (OS) images to transparently
share the same physical server and I/O devices, by creating multiple Virtual Machines
(VMs), described previously. As a consequence, it introduces the need to support local
switching between different VMs within the same server, thus pushing the access layer
of the network further away from its original location and invalidating the practice that
each network access port corresponds to a single physical server running a single OS
image.
Server virtualization also invalidates a second practice that the nature of the
relationship between an OS image and the network is static. By abstracting hardware
from software, virtualization effectively enables OS images to become mobile, which
means that a VM can be moved from one physical server to another within the data
center or even across multiple data centers. This move can take place within the same
access switch or to another access switch in the same or a different data center. The
consequences of this new level of mobility may extend beyond just the access layer of
the network. In addition, some of the services deployed in the aggregation layer may
need to be modified to support VM mobility. In terms of pure
Layer 2 switching and connectivity, mobility of VMs poses fairly stringent requirements
on the underlying network infrastructure, especially at the access layer. It requires that
both the source and destination hosts be part of the same set of Layer 2 domains
(VLANs). Therefore, all switch ports of a particular virtualization cluster must be
configured uniformly as trunk ports that allow traffic from any of the VLANs used by the
cluster’s VMs. Further, as VMs move from one physical server to another, it is also
desirable that all the network policies defined in the network for the VM (for example,
ACLs) be consistently applied, no matter what the location of the VM in the network.
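The Layer 2 requirement above reduces to a simple pre-migration check: every VLAN used by the VM must be allowed on the trunk ports serving both the source and destination hosts. A minimal sketch in Python (the function and its arguments are illustrative, not part of any VMware API):

```python
def can_migrate(vm_vlans: set, src_trunk: set, dst_trunk: set) -> bool:
    """A VM can move only if every VLAN it uses is trunked on both the
    source and destination access ports (same set of Layer 2 domains)."""
    return vm_vlans <= src_trunk and vm_vlans <= dst_trunk

# Example: VM on VLANs 10 and 20; destination trunk lacks VLAN 20,
# so the move would silently break connectivity for that VLAN.
print(can_migrate({10, 20}, {10, 20, 30}, {10, 30}))  # False
```

This is why the uniform trunk-port configuration across a virtualization cluster matters: it makes the check above pass trivially for any host pair in the cluster.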
The easiest and most straightforward way to network VMs is to implement a standalone
software switch as part of the hypervisor. This is the methodology VMware adopted in
developing the Virtual Switch (vSwitch) in ESX Server 3 hypervisor [14]. Each virtual
network interface card (vNIC) logically connects a VM to the vSwitch and allows the VM
to send and receive traffic through that interface. Virtual switches allow virtual
machines on the same ESX Server host to communicate with each other using the same
protocols that would be used over physical switches, without the need for additional
networking hardware. If two vNICs attached to the same vSwitch need to communicate
with each other, the vSwitch will perform the Layer 2 switching function directly,
without any need to send traffic to the physical network. ESX Server vSwitches also
support VLANs that are compatible with standard VLAN implementations from other
vendors. A VM can be configured with one or more virtual Ethernet adapters, each of
which has its own IP address and MAC address. As a result, VMs have the same
properties as physical machines from a networking standpoint. Figure 6 below depicts
the concept of vSwitch and the use of vNICs to connect VMs to the external network
through physical Ethernet adapters.
Figure 6: Virtual switches in ESX Server 3 connect VMs and the service console to
external networks
There are three types of virtual Ethernet adapters available for VMs in VMware vSphere:
1. vmxnet is a paravirtualized device that works only if VMware Tools is installed
in the guest operating system.
2. vlance emulates the AMD Lance PCNet32 Ethernet adapter. It is compatible
with most 32-bit guest operating systems and can be used without VMware
Tools.
3. e1000 emulates the Intel E1000 Ethernet adapter and is used in either 64-bit or
32-bit virtual machines.
There are two other virtual adapters that are available through VMware technology:
4. vswif is a paravirtualized device similar to vmxnet that is used by the VMware
ESX service console.
5. vmknic is a device in the VMkernel that is used by the TCP/IP stack to serve NFS
and software iSCSI clients.
Virtual switches are the key networking components in VMware Infrastructure 3 (VI 3),
and provide the underpinning of Virtual Networking. Up to 248 vSwitches can be
created on each ESX Server 3 host. A vSwitch is configured at run time from a collection
of small functional units in a customized manner. Some of the key functional units are:
• The core Layer 2 forwarding engine, which looks up each frame’s destination MAC
when it arrives, forwards the frame to one or more ports for transmission, and
avoids unnecessary deliveries (in other words, it is not a hub).
• VLAN tagging, stripping, and filtering units.
• Layer 2 security, checksum, and segmentation offload units.
• Support for VLAN segmentation at the port level, which means a port can be
configured either with access to a single VLAN (an access port) or with access to
multiple VLANs (a trunk port).
When the vSwitch is built at run time, ESX Server 3 loads only those components it
needs. It installs and runs only what is actually needed to support the specific physical
and virtual Ethernet adapter types used in the configuration. This way the system pays
the lowest possible cost in complexity and demands on system performance. An
additional benefit of the modular design is that VMware and third-party developers can
easily incorporate modules to enhance the system in the future. In addition, an
administrator can manage many configuration options for the switch as a whole and for
individual ports using the Virtual Infrastructure (VI) Client.
Following are important features of virtual switches:
Virtual ports: The ports on a virtual switch provide logical connection points
among virtual devices and between virtual and physical devices. Each virtual
switch can have up to 1,016 virtual ports, with a limit of 4,096 ports on all
virtual switches on a host. The virtual ports provide a rich control channel for
communication with the virtual Ethernet adapters attached to them.
Uplink ports: Uplink ports are associated with physical adapters, providing a
connection between the virtual network and the physical networks. They
connect to physical adapters when they are initialized by a device driver or
when the teaming policies for virtual switches are reconfigured. Virtual Ethernet
adapters (vNIC) connect to virtual ports when the VM is initialized, or the VM is
connected, or the VM is migrated using VMware VMotion. A vNIC adapter
updates the vSwitch port with MAC filtering information when it is initialized or
when it changes.
Port groups: Port groups make it possible to specify that a given VM should
have a particular type of connectivity on every host, and they contain enough
configuration information to provide persistent and consistent network access
for virtual Ethernet adapters. Some of the information contained in a port group
includes vSwitch name, VLANIDs and policies for tagging and filtering, the
teaming policy and traffic shaping parameters. This is all the information
needed for a switch port.
Uplinks: Uplinks are the physical Ethernet adapters that serve as bridges
between the virtual and physical network. The virtual ports connected to them
are called uplink ports. A host may have up to 32 uplinks.
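A port group, as described above, is essentially a named bundle of every setting a switch port needs, so a VM gets identical connectivity on whichever host it runs on. A sketch of such a record (field names are assumptions for illustration, not VMware's):

```python
from dataclasses import dataclass, field

@dataclass
class PortGroup:
    """All the information needed to configure a switch port consistently
    across hosts: vSwitch name, VLAN ID, teaming and shaping policies."""
    name: str
    vswitch: str                       # vSwitch this port group lives on
    vlan_id: int = 0                   # 0 = untagged; 1-4094 = VST tagging
    teaming_policy: str = "port_id"    # uplink selection policy
    traffic_shaping: dict = field(default_factory=dict)

pg = PortGroup(name="Production", vswitch="vSwitch0", vlan_id=10)
print(pg.vlan_id)  # 10
```

Because a vNIC attaches to a port group rather than to a raw port, moving the VM to another host that defines the same port group preserves its connectivity.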
Other important characteristics of virtual switches are:
Virtual switches do not learn from the network to populate their forwarding
tables. This helps to minimize denial-of-service attacks.
Virtual switches make private copies of frame data used to make forwarding or
filtering decisions. This ensures the guest operating systems cannot access
sensitive data once the frame is passed onto the virtual switch.
VMware technology ensures that frames are contained within the appropriate
VLAN on a vSwitch: 1) the VLAN data is carried outside the frame as it passes
through the vSwitch, and 2) there is no dynamic trunking support that could open
up isolation leaks and make the data vulnerable to attack.
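The first characteristic above, a forwarding table populated by explicit vNIC registration rather than learned from traffic, can be illustrated with a toy model (a conceptual sketch, not VMware code):

```python
class ToyVSwitch:
    """The forwarding table is filled only when a vNIC registers its MAC
    (as happens at VM power-on or connect), never learned from frames."""
    def __init__(self):
        self.table = {}            # MAC address -> virtual port

    def register_vnic(self, mac: str, port: int):
        self.table[mac] = port     # explicit registration, not learning

    def forward(self, dst_mac: str):
        # A known destination goes to its registered port; anything else
        # exits via the uplink. The switch never updates its table from
        # observed traffic, which closes off table-poisoning attacks.
        return self.table.get(dst_mac, "uplink")

sw = ToyVSwitch()
sw.register_vnic("00:50:56:aa:bb:01", port=3)
print(sw.forward("00:50:56:aa:bb:01"))  # 3
print(sw.forward("00:50:56:ff:ff:ff"))  # uplink
```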
In many ways, the ESX Server vSwitch is similar to physical Ethernet Access Switch in a
traditional layered networking architecture.
In some notable ways, vSwitch is different from a physical Ethernet switch, namely:
vSwitches do not require a spanning tree protocol, because VMware Infrastructure
3 enforces a single-tier networking topology.
There’s no way to interconnect multiple vSwitches. Network traffic cannot flow
directly from one virtual switch to another within the same host. vSwitches
provide all the ports required in one switch.
There’s no need to cascade vSwitches, and no way to connect them incorrectly;
because vSwitches do not share physical Ethernet adapters, leaks between
switches do not occur.
Each vSwitch is isolated and has its own forwarding table, so every destination the
switch looks up can match only ports on the same virtual switch where the frame
originated. This feature improves security, making it difficult for hackers to break
vSwitch isolation.
Creating and Using Virtual Networks with Virtual Switches
Creating VLANs, logical groupings of switch ports, enables communication between
the stations as if they were on the same physical LAN. Technically, each VLAN is simply
a broadcast domain, configured through software. If a machine is moved to another
location, it can remain on the same VLAN broadcast domain without hardware
reconfiguration. Whereas traditional bridged LANs have only one broadcast domain,
VLAN networks may have multiple virtual broadcast domains within the boundary of a
bridged LAN.
The benefits of VLANs include flexible network partition and configuration, performance
improvement and cost savings.
Flexibility: Because VLANs partition the network based on logical groupings
instead of physical topology, users can be moved to new locations without
reconfiguration. This provides more flexibility and time savings.
Performance improvement: In a traditional network, frames reach all hosts
within the network. This affects performance when you have a large number of
end users. Segmenting broadcast traffic into port groupings helps preserve
network bandwidth and save processor time.
Cost savings: Typically, routers are needed to partition LANs into multiple
broadcast domains. VLANs eliminate this need, reducing hardware costs.
VLAN Tagging
To support VLANs in Virtual Network infrastructure, the virtual or physical network must
tag the Ethernet frames with 802.1Q tags using virtual switch tagging (VST), virtual
machine guest tagging (VGT), or external switch tagging (EST). VST mode is the most
common configuration, where one port group is provisioned on a vSwitch for each
VLAN, and the virtual Ethernet adapter is attached to the port group instead of the
switch directly. The port group tags outbound frames, removes tags for inbound frames,
and ensures frames on one VLAN don’t leak into another VLAN.
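The VST tagging just described can be made concrete at the byte level: an 802.1Q tag is four bytes inserted after the source MAC, the 0x8100 TPID followed by a TCI whose low 12 bits carry the VLAN ID. A sketch of the insert/strip operations the port group performs:

```python
import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag after the destination and source MACs (12 bytes)."""
    tci = (priority << 13) | (vlan_id & 0x0FFF)
    return frame[:12] + struct.pack("!HH", TPID, tci) + frame[12:]

def strip_frame(tagged: bytes):
    """Remove the tag, returning (vlan_id, original frame)."""
    tpid, tci = struct.unpack("!HH", tagged[12:16])
    assert tpid == TPID, "not an 802.1Q frame"
    return tci & 0x0FFF, tagged[:12] + tagged[16:]

# A minimal frame: zeroed dst+src MACs, IPv4 EtherType, dummy payload.
frame = bytes(12) + b"\x08\x00" + b"payload"
vlan, original = strip_frame(tag_frame(frame, vlan_id=10))
print(vlan, original == frame)  # 10 True
```

The port group applies tag_frame on the outbound path and strip_frame on the inbound path, so guests never see the tag and frames cannot cross into another VLAN.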
NIC Teaming
NIC Teaming is a feature of VI3 that allows a single virtual switch to be connected to
multiple physical Ethernet adapters. A team can share traffic loads between physical
and virtual networks and provide passive failover in case of an outage. NIC teaming
policies are set at the port group level.
Benefits of NIC teaming include load balancing and failover:
Load balancing: Load balancing allows spreading network traffic from VMs on a
vSwitch across two or more physical Ethernet adapters, providing higher
throughput. NIC teaming offers different options for load balancing, including
route based load balancing on the originating vSwitch port ID, on the source
MAC hash, or on the IP hash.
Failover: Either Link status or Beacon Probing can be used for failover detection.
Link Status relies solely on the link status of the network adapter. Failures such
as cable pulls and physical switch power failures are detected, but configuration
errors are not. The Beacon Probing method sends out beacon probes to detect
upstream network connection failures. This method detects many of the failure
types not detected by link status alone. By default, NIC teaming applies a
fail-back policy, whereby physical Ethernet adapters are returned to active duty
immediately when they recover, returning standby adapters to standby duty.
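The load-balancing options above share one mechanism: hash a per-flow key to an uplink index, and only the key differs between policies. A sketch (the hash function and policy names here are illustrative, not VMware's exact algorithms):

```python
import zlib

def pick_uplink(uplinks: list, policy: str, *, port_id: int = 0,
                src_mac: bytes = b"", src_ip: bytes = b"", dst_ip: bytes = b"") -> int:
    """Map a flow to one of N physical adapters by the configured policy."""
    if policy == "port_id":        # originating vSwitch port ID
        key = port_id.to_bytes(4, "big")
    elif policy == "src_mac":      # source MAC hash
        key = src_mac
    elif policy == "ip_hash":      # source/destination IP hash
        key = src_ip + dst_ip
    else:
        raise ValueError(policy)
    return zlib.crc32(key) % len(uplinks)

uplinks = ["vmnic0", "vmnic1"]
i = pick_uplink(uplinks, "port_id", port_id=7)
print(uplinks[i])  # a given port always maps to the same uplink
```

Because the mapping is deterministic, each flow stays on one adapter (avoiding reordering) while different flows spread across the team.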
Layer 2 Security
vSwitches can enforce security policies at Layer 2 by disabling promiscuous
mode by default, locking down MAC address changes, and blocking forged
transmits. These features prevent VMs from impersonating other nodes on the
network.
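These policies amount to simple per-port checks. The forged-transmit rule, for instance, drops any frame whose source MAC differs from the vNIC's assigned MAC. A sketch with hypothetical names (not VMware's API):

```python
class PortSecurityPolicy:
    """Per-port Layer 2 security settings, all restrictive by default."""
    def __init__(self, promiscuous=False, mac_changes=False, forged_transmits=False):
        self.promiscuous = promiscuous            # default: disabled
        self.mac_changes = mac_changes            # default: locked down
        self.forged_transmits = forged_transmits  # default: blocked

    def allow_transmit(self, frame_src_mac: str, vnic_mac: str) -> bool:
        """Drop frames claiming a source MAC the vNIC does not own."""
        return self.forged_transmits or frame_src_mac == vnic_mac

    def allow_mac_change(self, new_mac: str, assigned_mac: str) -> bool:
        """Refuse guest attempts to change the effective MAC address."""
        return self.mac_changes or new_mac == assigned_mac

policy = PortSecurityPolicy()
print(policy.allow_transmit("00:50:56:de:ad:00", "00:50:56:aa:bb:01"))  # False
```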
Virtual Network Management
In VI3 the Virtual Network can be managed centrally using VirtualCenter (vCenter),
which provides tools for building and maintaining the Virtual Network Infrastructure.
vCenter can be used to add, delete, and modify vSwitches and to configure port groups
with VLANs and NIC teaming. The roles feature of vCenter can be used to assign the
permissions a network administrator needs to manage the Virtual Network.
The effect of virtualizing the network is manifested in moving the network access layer
up into the ESX server host. This creates a change in the demarcation of management
roles between the network administrator and the server administrator. The server
administrator now assumes the network management role using the VI3 client and
centralized vCenter management facilities [15].
Figures 6 and 7 represent the evolution from traditional Physical Networking to Virtual
Networking, resulting in new designs and implementations of Virtualized Data Centers.
Figure 6: Schematic diagram of Physical Networking before Virtualization
Figure 7: Schematic diagram of Virtual Networking using Virtual Switches & VMs
Virtualized Distributed Switch
The appeal of enhanced flexibility and improved efficiency that virtualization provides
has created a massive adoption and scale out of virtual networking, resulting in ever
more deployments of VMs and virtual switches. The use of clusters and vMotion to
move VMs around requires the network configuration of source and destination hosts
to be the same to avoid any dropped sessions. Combining this with the needs of several
departments and groups, each with different requirements, creates a large
administrative burden of adding hosts with customized networking requirements. This
quickly translates into a burgeoning need for simpler network management, flexibility in
virtual network administration responsibility, and increased features and functionalities.
To address these emerging virtual networking requirements, VMware and Cisco [15, 16,
17] have introduced the concept of the Virtualized Distributed Switch (VDS), which is
implemented as the vNetwork Distributed Switch (vDS) by VMware. In essence, vDS
provides a simplified and unified virtual network management framework. Also, vCenter
now provides an abstracted, resource-centric, centralized view of the entire networking
architecture.
The vDS simplifies and enhances the provisioning, administration and monitoring of
virtual machine networking by:
Moving away from host-level network configuration.
Having statistics and policies follow the VM, simplifying debugging and
troubleshooting and enabling enhanced security.
Using distributed port groups, which are configured like port groups on
standard switches but extend across multiple hosts. This simplifies the
configuration of VMs across multiple hosts and facilitates easy setup for
VMotion.
Building a foundation for networking resource pools, so the network can be
viewed as a clustered resource.
Figure 8 below demonstrates the transition from a Standard vSwitch to a
vNetwork Distributed Switch (vDS).
Figure 8: A schematic transition from Standard vSwitch to vNetwork Distributed Switch
This is accomplished by abstracting and moving the “control plane” of individual
vSwitches, and aggregating them in the vCenter with a centralized view of all
management tasks from a data center management perspective. In other words, vDS
spans across all hosts in a data center and network management is simplified by
centralization of management tasks. However, the “data plane” remains in individual
vSwitches to carry out the core Layer 2 forwarding and NIC teaming functions. When
using vMotion to migrate from one host to another, the “port state” of the VM moves
with the VM to the new host location. This attribute permits the introduction of stateful
third-party, or VMware, functionalities, such as IDS, IPS, firewalls, and other
security-related features into an environment that exploits migration through vMotion. Figure 9
demonstrates this concept.
Figure 9: Schematic depiction of the separation of “control plane” & “data plane” in vDS
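The split just described can be sketched in a few lines of Python. This is an illustrative model only, not VMware code: the names (ControlPlane, DataPlane, vmotion, and so on) are invented for the sketch. The point it demonstrates is that port configuration and statistics are defined once, centrally, and travel with the VM when it migrates.

```python
class DataPlane:
    """Per-host forwarding component: the role each vSwitch keeps,
    performing Layer 2 forwarding and NIC teaming locally."""
    def __init__(self):
        self.ports = {}  # VM name -> port state (config, policies, counters)

class ControlPlane:
    """Centralized management component: the role vCenter takes on,
    holding one view of networking across all hosts."""
    def __init__(self):
        self.hosts = {}

    def add_host(self, name):
        self.hosts[name] = DataPlane()

    def connect_vm(self, vm, host, vlan):
        # The port is defined once, centrally, instead of per host.
        self.hosts[host].ports[vm] = {"vlan": vlan, "rx": 0, "tx": 0}

    def vmotion(self, vm, src, dst):
        # The VM's port state moves with it, so policies and
        # statistics survive the migration intact.
        self.hosts[dst].ports[vm] = self.hosts[src].ports.pop(vm)

vds = ControlPlane()
vds.add_host("esx-01")
vds.add_host("esx-02")
vds.connect_vm("web-vm", "esx-01", vlan=100)
vds.hosts["esx-01"].ports["web-vm"]["rx"] = 42   # traffic counted on host 1
vds.vmotion("web-vm", "esx-01", "esx-02")        # counters and VLAN follow
```

After the migration, the centralized view still shows the same VLAN assignment and counters for the VM, now attached to the second host.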
VMware and Cisco Collaboration on the Distributed Virtual Switch and the Cisco Nexus
1000V Switch
In 2009 VMware and Cisco jointly developed the concept of a Distributed Virtual Switch
(DVS), which essentially decouples the “control” and “data” planes of the embedded
switch and allows multiple, independent vSwitches (data planes) to be managed by a
centralized management system (control plane). VMware has branded its
implementation of DVS as the vNetwork Distributed Switch (vDS) and the control plane
component is implemented within VMware vCenter. This approach effectively allows
virtual machine administrators to move away from host-level network configuration and
manage network connectivity at the VMware ESX cluster level, which was described
earlier.
Cisco is using the DVS framework, and the VMware vNetwork third-party vSwitch
extension API of VMware vDS, to deliver a portfolio of networking solutions that can
operate directly within the distributed ESX hypervisor layer and offer a feature set and
operational model that are familiar and consistent with other Cisco networking products
[17]. This approach provides an end-to-end network solution to meet the new
requirements created by server virtualization. Specifically, it introduces a new set of
features and capabilities that enable VM interfaces to be individually identified,
configured, monitored, migrated, and diagnosed in a way that is consistent with the
current network operation models. These features are collectively referred to as Cisco
Virtual Network Link (VN-Link). The term literally indicates the creation of a logical link
between a vNIC on a VM and a Cisco switch (enabled for VN-Link). This connection is the
logical equivalent of using a cable to connect a NIC with a network port of an access-
layer switch. In addition, by providing VM-aware network and storage services to a
virtualized infrastructure, VN-Link improves data center flexibility, increases application
availability, strengthens security and compliance, and simplifies management. In short,
VN-Link expands the scope and the benefits engendered by a server virtualization
strategy.
The effort has resulted in the introduction of Cisco Nexus 1000V Series Switches,
featuring the Cisco NX-OS Software data center operating system [18]. The Cisco Nexus
1000V Series extends the virtual networking feature set to a level consistent with
physical Cisco switches and brings advanced data center networking, security, and
operating capabilities to the Virtualized Data Center environment. It provides
end-to-end physical and virtual network provisioning, monitoring, and administration with
VM-level granularity using common and existing network tools and interfaces. The Cisco
Nexus 1000V Series transparently integrates with VMware vCenter management facility
to provide a centralized, consistent VM provisioning workflow.
A technical description of the inner components of the Cisco Nexus 1000V product will help
in understanding how the integration between VMware and Cisco technologies was
implemented. A switch enabled for VN-Link operates on the concept of virtual Ethernet
(vEth) interfaces. These virtual interfaces are provisioned dynamically, based on network
policies stored in the switch, during VM provisioning operations carried out by the
hypervisor management layer (VMware vCenter). They then maintain the network
configuration attributes, security settings, and statistics of a given virtual interface across
migrations of VMs. vEth interfaces are the virtual equivalent of physical network access
ports. A switch enabled for VN-Link can implement several vEth interfaces per physical
port, and it creates a mapping between each vEth interface and the corresponding vNIC
on the VM, as shown in Figure 10 below. An important benefit of vEth interfaces is that
they can follow vNICs when VMs migrate from one physical server to another using
vMotion, while maintaining the port configuration and port state. By virtualizing the
network access port with vEth interfaces, VN-Link effectively enables transparent
mobility of VMs across different physical servers and different physical access-layer
switches, which provides the flexibility, agility, and efficiency of VDCs.
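As a toy illustration of the mapping just described, a VN-Link switch can be modeled as a table of vEth interfaces keyed by vNIC rather than by physical port. The names below (VnLinkSwitch, bind, migrate) are hypothetical, not a Cisco API; the sketch only shows that the vEth, with its configuration, stays bound to the vNIC even when the physical attachment changes.

```python
class VnLinkSwitch:
    """Toy model: several vEth interfaces can share one physical port,
    and each vEth stays bound to its vNIC, not to the port."""
    def __init__(self):
        self.veth = {}  # vNIC id -> {"port": physical port, "cfg": settings}

    def bind(self, vnic, phys_port, cfg):
        # Create the vEth for this vNIC behind a given physical port.
        self.veth[vnic] = {"port": phys_port, "cfg": cfg}

    def migrate(self, vnic, new_phys_port):
        # vMotion: only the physical attachment changes; the vEth and
        # everything configured on it follow the vNIC.
        self.veth[vnic]["port"] = new_phys_port

sw = VnLinkSwitch()
sw.bind("vm1-vnic0", phys_port="eth1", cfg={"vlan": 200})
sw.bind("vm2-vnic0", phys_port="eth1", cfg={"vlan": 300})  # shared uplink
sw.migrate("vm1-vnic0", new_phys_port="eth2")              # VM moved hosts
```

Two vEth interfaces share the uplink eth1 before the migration; afterward vm1's vEth hangs off eth2 with its VLAN configuration intact.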
Port profiles are a collection of interface configuration commands that can be
dynamically applied at either physical or virtual interfaces. Any changes to a given port
profile are propagated immediately to all ports that have been associated with it. A port
profile can define a sophisticated collection of attributes such as VLAN, private VLAN
(PVLAN), ACL, port security, NetFlow collection, rate limiting, QoS marking, and even
remote-port mirroring (through Encapsulated Remote SPAN (ERSPAN)) for advanced,
per-VM troubleshooting. Port profiles are tightly integrated with the management layer
for the VM, in VMware vCenter, and enable simplified management of the virtual
infrastructure. To facilitate integration with the VM management layer, Cisco VN-Link
switches can push the catalog of port profiles into VMware vCenter, where they are
represented as distinct port groups. This integration allows VM administrators to choose
among a menu of profiles as they create VMs. When a VM is powered on or off, its
corresponding profiles are used to dynamically configure the vEth in the VN-Link switch.
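The propagation behavior of port profiles can be sketched as a simple publish pattern. The Python below is a hypothetical illustration (PortProfile and its methods are not Cisco's API); it shows the one property that matters here: a single central edit reaches every associated port immediately.

```python
class PortProfile:
    """A named bundle of interface settings; editing the profile
    immediately updates every port associated with it."""
    def __init__(self, name, **attrs):
        self.name = name
        self.attrs = dict(attrs)
        self.ports = []

    def attach(self, port):
        # Associating a port applies the profile's current settings.
        self.ports.append(port)
        port.update(self.attrs)

    def set(self, key, value):
        # One central change propagates to all associated ports.
        self.attrs[key] = value
        for port in self.ports:
            port[key] = value

web = PortProfile("WebServers", vlan=100, qos="gold")
veth1, veth2 = {}, {}
web.attach(veth1)
web.attach(veth2)
web.set("acl", "permit-http")  # every associated port picks this up
```

This mirrors how a port profile pushed into vCenter appears as a port group: the administrator picks a profile once, and later edits to the profile reconfigure every port that uses it.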
VN-Link can be implemented in two ways:
1. As a Cisco DVS running entirely in software within the ESX hypervisor layer
(Cisco Nexus 1000V Series)
2. With a new class of devices that support network interface virtualization (NIV)
and eliminate the need for software-based switching.
Figure 10: Schematic diagram of Virtual and Physical Network Constructs in a
VN-Link Enabled Switch (Cisco Nexus 1000V Series Switches)
Deploying VN-Link in Existing Networks with the Cisco Nexus 1000V Series
The Cisco Nexus 1000V Series consists of two main types of components that can
emulate a 66-slot modular Ethernet switch with redundant supervisor functions. Figure
11 below exhibits Cisco Nexus 1000V Series distributed switching software running
inside the VMware ESX hypervisor.
- Virtual Ethernet module (VEM), the "data plane": This lightweight software
component runs inside the hypervisor. It enables advanced networking and
security features, performs switching between directly attached VMs, and
provides uplink capabilities to the rest of the network. Each hypervisor is
embedded with one VEM, which is the functional equivalent of VMware's
vSwitch.
- Virtual supervisor module (VSM), the "control plane": This standalone, external,
physical or virtual appliance is responsible for the configuration, management,
monitoring, and diagnostics of the overall Cisco Nexus 1000V Series system (that
is, the combination of the VSM itself and all the VEMs it controls) as well as the
integration with VMware vCenter. A single VSM can manage up to 64 VEMs.
VSMs can be deployed in an active-standby pair, helping ensure high
availability.
Figure 11: Cisco Nexus 1000V Series Distributed Switching Architecture
In the Cisco Nexus 1000V Series, traffic between virtual machines is switched locally at
each instance of a VEM. Each VEM is also responsible for interconnecting the local
virtual machines with the rest of the network through the upstream access-layer
network switch (blade, top-of-rack, end-of-row, etc.). The VSM is responsible for
running the control plane protocols and configuring the state of each VEM accordingly,
but it never takes part in the actual forwarding of packets.
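The division of labor in this paragraph reduces to a small forwarding decision, sketched below. This is an illustrative model with invented names, not Cisco code: a VEM switches locally when the destination VM is attached to it and otherwise hands the frame to the upstream access-layer switch, while the VSM only distributes configuration and never forwards.

```python
def vem_forward(dst_vm, local_vms):
    """A single VEM's forwarding decision: local switching when the
    destination VM is attached to this VEM; otherwise the frame goes
    out the uplink to the upstream access-layer switch."""
    return "switch-locally" if dst_vm in local_vms else "send-to-uplink"

# The VEM on host A has vm1 and vm2 attached; vm3 lives on another host.
local_vms = {"vm1", "vm2"}
same_host = vem_forward("vm2", local_vms)   # VM-to-VM on one VEM
other_host = vem_forward("vm3", local_vms)  # must traverse the uplink
```

Because each VEM makes this decision independently, VM-to-VM traffic on one host never needs to hairpin through the physical network, and the VSM stays entirely out of the data path.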
Deploying VN-Link with Network Interface Virtualization
In addition to the distributed virtual switch model, which requires a tight integration
between the hypervisor, its management layer, and the virtual networking components
and implements switching in software within the hypervisor, Cisco has developed a
hardware approach based on the concept of network interface virtualization (NIV). NIV
completely removes any switching function from the hypervisor and locates it in a
hardware network switch physically independent of the server [17].
However, NIV still requires a component on the host, called the interface virtualizer,
that can be implemented either in software within the hypervisor or in hardware within
an interface virtualizer–capable adapter. The purpose of the interface virtualizer is
twofold:
- For traffic going from the server to the network, the interface virtualizer identifies
the source vNIC and explicitly tags each packet generated by that vNIC with a
unique tag, known as a virtual network tag (VNTag).
- For traffic received from the network, the interface virtualizer removes the VNTag
and directs the packet to the specified vNIC.
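The two directions can be sketched in Python. This is a schematic of the behavior only: the real VNTag is a binary Ethernet tag with several fields, which this toy dictionary does not model, and the function names are invented for the sketch.

```python
def tag_egress(frame, src_vnic, vif_of):
    """Server-to-network: the interface virtualizer tags the frame with
    the VNTag identifying the VIF that corresponds to the source vNIC."""
    return {"vntag": vif_of[src_vnic], **frame}

def untag_ingress(tagged, vnic_of):
    """Network-to-server: strip the VNTag and deliver the payload to the
    vNIC bound to that VIF. No VM-to-VM switching happens here; the VIS
    upstream has already made the forwarding decision."""
    vif = tagged["vntag"]
    frame = {k: v for k, v in tagged.items() if k != "vntag"}
    return vnic_of[vif], frame

vif_of = {"vm1-vnic0": 7}   # vNIC -> VIF number on the VIS
vnic_of = {7: "vm1-vnic0"}  # inverse mapping used on ingress
out = tag_egress({"dst": "vm9"}, "vm1-vnic0", vif_of)
nic, frame = untag_ingress(out, vnic_of)
```

Since every frame crossing the shared physical link carries a VNTag, the VIS can tell the multiplexed VMs apart and apply per-VIF policy, even though they all share one cable.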
The interface virtualizer never performs any local switching between VMs. The
switching process is completely decoupled from the hypervisor, which makes the
networking of VMs equivalent to the networking of physical servers. Switching is always
performed by the network switch to which the interface virtualizer connects,
which in this case is called the virtual interface switch (VIS) to indicate its capability not
only to switch between physical ports, but also between virtual interfaces (VIFs)
corresponding to vNICs that are remote from the switch. In other words, each vNIC in a
VM will correspond to a VIF in the VIS, as shown in Figure 12, and any switching or policy
enforcement function will be performed within the VIS and not in the hypervisor. The
VIS can be any kind of access-layer switch in the network (a blade, top-of-rack, or end-
of-row switch) as long as it supports NIV.
Figure 12: Architectural Elements of the NIV Model
An important consequence of the NIV model is that the VIS cannot be just any IEEE
802.1D-compliant Ethernet switch: it must implement extensions to support the
newly defined satellite relationships. These extensions are link-local and must be
implemented both in the switch and in the interface virtualizer. Without such
extensions, the portions of traffic belonging to different VMs cannot be identified
because the VMs are multiplexed over a single physical link.
In addition, a VIS must be able to forward a frame back out the same
inbound port on which it was received, which violates the IEEE 802.1D standard that
defines the operation of Layer 2 Ethernet switches. The restriction was originally
introduced in the standard to avoid the creation of loops in Layer 2 topologies while
enabling relatively simple hardware implementations of Layer 2 forwarding engines. The
technology that is currently available for implementing forwarding engines allows much
more sophisticated algorithms, and thus the restriction no longer needs to be imposed.
Nonetheless, the capability of a network switch to send packets back on the same
interface from which they were received still requires the proper level of
standardization. Cisco defined a protocol, VNTag, which has been submitted to the IEEE
802.3 task force for standardization.
NIV represents innovation at Layer 2 that is designed for deployment within the VN-Link
operating framework. Specifically, it offers the same mechanisms as the Cisco Nexus
1000V Series: port profiles, vEth interfaces, support for virtual machine mobility, a
consistent network deployment and operating model, and integration with VMware
vCenter management.
To conclude the technical section: the introduction of blade server
architectures and server virtualization in VDCs has invalidated several design,
operational, and diagnostic assumptions of traditional data center networks. Server
virtualization allows multiple OS images to transparently share the same physical server
and I/O devices. As a consequence, it introduces the need to support local switching
between different VMs within the same physical server. Cisco and VMware have
collaborated to define a set of APIs that enable transparent integration of third-party
networking capabilities within the VMware Virtual Infrastructure.
Cisco has been the first networking vendor to take advantage of such capabilities to
deliver VN-Link, a portfolio of networking solutions that can operate directly within the
distributed ESX hypervisor layer and offer a feature set and operational model that are
familiar and consistent with other Cisco networking products. This approach provides an
end-to-end virtual networking solution to the new requirements created by server
virtualization in virtualized data centers.
Examples of Prominent Commercial Computing Services Enabled by Cloud Computing
Cloud computing services enabled by virtualized data centers are becoming more
prevalent in both the enterprise and consumer markets, as new customers discover the
cost benefits and convenience of using computing services over the Internet. The most
prominent example is that of the wide acceptance of Software-as-a-Service (SaaS)
offerings, pioneered by Salesforce.com, which has inspired software development
Platform-as-a-Service (PaaS) and computing Infrastructure-as-a-Service (IaaS). Among
several prominent commercial cloud computing service providers are:
Amazon Web Services (AWS) [19] provides a flexible, cost-effective, scalable, and
easy-to-use Elastic Compute Cloud (EC2) platform for businesses of all sizes, and hosts
large public datasets on the Simple Storage Service (S3). In addition to building new
applications on AWS, companies can begin to move existing SOA-based solutions to the
cloud by migrating discrete components of their legacy applications. Larger companies
typically run a hybrid model where pieces of the application run in their data center and
other portions run in the cloud.
Google App Engine [20] offers a domain-specific platform targeted exclusively at
traditional web applications. It also provides automatic scaling and high-availability
mechanisms, key characteristics of cloud computing, and a proprietary MegaStore data
store for App Engine applications. In the web-platform category, AWS leads in market
share, while Force.com (from Salesforce.com) provides a more complete development
platform. Google Apps [21] empowers enterprises and consumers with online
applications such as Gmail, Google Docs, and Google Sites, which are free for consumer
and education customers and $50/user/year for business customers.
Microsoft Windows Azure [22] provides an application development platform as a cloud
computing service. It integrates three parts, the Compute service, the Storage service,
and the Fabric, which work together to give Windows developers a bridge for producing
cloud-based applications.
Common examples of consumer services are web-based e-mail, social-media
applications (e.g., Facebook, Twitter, MySpace, YouTube, Flickr), online search, word
processing, spreadsheets, calendars, online shopping, online data storage, and Web 2.0
web-service development platforms that let users create their own "mash-up"
applications with tools like Microsoft's Popfly, Yahoo! Pipes, and Iceberg.
The significant advances made in cloud computing are not without shortcomings,
which stand as obstacles to its continued growth. These challenges also provide
opportunities for future investigation on the way to the promise of true Utility
Computing. A list of challenges [1] is given below:
1. Cloud computing service is not yet a fungible commodity in the same manner as
public utilities, such as electricity, natural gas, and water.
2. High availability of cloud computing service.
3. Vendor and data lock-in.
4. Data confidentiality and ability to audit.
5. Performance unpredictability.
6. Arbitrarily scalable storage.
7. Debugging of large-scale distributed systems.
8. Software licensing model to fit cloud computing.
9. International regulations related to data ownership, and data transfer, across
national borders [7].
The rise of cloud computing is more than the arrival of just another platform. It will
undoubtedly transform the IT industry, but it will also change the way people work and
companies operate.
List of References
[1] Armbrust, M., Fox, A., et al., Above the Clouds: A Berkeley View of Cloud
Computing, Electrical Engineering & Computer Sciences, Tech Report No. UCB/EECS-
2009-28, Univ. of California, Berkeley, February 2009.
[2] Dikaiakos, M.D., Pallis, G., Katsaros, D. Mehra, P., and Vakali, A., Guest Editors’
Introduction: Cloud Computing, IEEE, 12 (5), September, 2009.
[3] Carolan, J., and Gaede, S., Introduction to Cloud Computing Architecture White
Paper, 1st Edition, Sun Microsystems Inc., June 2009. Available online from:
<http://www-cdn.sun.com/featured-articles/CloudComputing.pdf>
[4] Gens, F., IDC’s New IT Cloud Services Forecast: 2009-2013, IDC exchange,
October 5, 2009. Available online from: <http://blogs.idc.com/ie/>
[5] Cearley, D.W., and Smith, D.M., Key Attributes Distinguish Cloud Computing
Services, March 10, 2009, Gartner (ID: G00166207)
[6] Mell, P., and Grance, T., The NIST Definition of Cloud Computing, Version 15,
October 7, 2009. Available online from: <http://csrc.nist.gov/groups/SNS/cloud-
computing/cloud-def-v15.doc>
[7] Siegele, L., Let it Rise, A Special Report on Corporate IT, The Economist, October
25, 2008.
[8] IBM Dynamic Infrastructure, Available online from <http://www-
03.ibm.com/systems/dynamicinfrastructure.html>
[9] Virtualization Basics, VMware Website. Available online from:
<http://www.vmware.com/virtualization/>
[10] Reduce Costs with a Virtual Infrastructure, VMware Website. Available online
from: <http://www.vmware.com/virtualization/virtual-infrastructure.html>
[11] Maritz, P., Benefits of VMware Virtualization, Video of keynote speech at
VMWorld 2009, September 1, 2009. Available online from:
<http://www.vmware.com/virtualization/>
[12] Maritz, P., Make the Most of Cloud Computing in the Private Cloud and as a
Service with Public Clouds. Video available online from:
<http://www.vmware.com/solutions/cloud-services/>
[13] Warrior, P., Cisco Cloud Vision, Video of keynote speech at CiscoLive 2009.
Available online from: <http://www.cisco.com/en/US/netsol/ns976/index.html>
[14] VMware Virtual Networking Concepts White Paper. Available online from:
<http://www.vmware.com/files/pdf/virtual_networking_concepts.pdf>
[15] Brunsdon, G., Networking with VMware Infrastructure, VMware Technical Track
Webinar, March 10, 2009.
[16] What’s New in VMware vSphere 4: Virtual Networking White Paper. Available
online from: <http://www.vmware.com/files/pdf/vsphere-whatsnew-networking-
wp.pdf>
[17] Cisco VN-Link: Virtualization-Aware Networking, A Technical Primer. Available
online from: <http://www.cisco.com/en/US/products/ps9902/prod-white-papers-
list.html>
[18] Cisco Nexus 1000V Series Switches At-A-Glance Website. Available online from:
<http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/ns892/ns8
94/at_a_glance_c45-492852.pdf>
[19] Amazon Web Services (AWS) White Paper, December, 2009. Available online
from: <http://aws.amazon.com/what-is-aws/>
[20] Google AppEngine Website. Available online from: <http://code.google.com/>
[21] Google Apps Website. Available online from:
<http://www.google.com/apps/intl/en/business/index.html>
[22] Chappell, D., Introducing Windows Azure White Paper, David Chappell &
Associates, March 2009.