Xen Cloud Platform (XCP) is an open-source virtualization platform derived from Citrix's XenServer. It provides a complete virtualization stack, including automation, resource pooling, and event management. XenServer has seen significant adoption, with over 1 million downloads and use by over 50% of Fortune 500 companies. While many organizations run VMware for critical workloads, they often deploy Citrix or Microsoft alongside it for non-critical systems, where lower prices yield cost savings.
2. What is XCP?
XCP = Xen Cloud Platform
Open-source release of Citrix’s XenServer
Announced in 2009
Built from XenServer until XCP 1.5
XenServer 6.1 built from XCP 1.6
Datacenter and cloud-ready API
Complete virtualization stack
Automation
Resource pooling
Event management
4. Leveraging Multiple Hypervisors
Source: Info-Tech Research Group; N = 71
How many server virtualization vendors are you using?
• 1 vendor: 58%
• 2 vendors: 31%
• 3 vendors: 7%
• 4 vendors: 3%
• 5 vendors: 1%
What pair of vendors are you using?
• VMware/Citrix: 41%
• VMware/Microsoft: 32%
• VMware/Oracle: 9%
• Microsoft/Oracle: 9%
• Oracle/Red Hat: 4%
• Microsoft/Red Hat: 5%
What vendor are you using?
• VMware: 71%
• Microsoft: 20%
• Citrix: 7%
• Red Hat: 2%
The Benefits
• Many organizations leverage a combination of VMware, for advanced management of critical workloads and apps, and Citrix or Microsoft for cost savings in non-critical systems.
• Microsoft can also bring high performance for Microsoft apps like Exchange or SharePoint.
• Citrix XenServer is often utilized to support Citrix’s XenDesktop.
The Challenges
• When possible, ensure one of your solutions can manage the other for day-to-day management tasks like live migration & P2V.
• Microsoft & Citrix can manage VMware and each other.
• VMware is beginning to offer management of Microsoft VMs.
5. A more open Xen, a stronger XenServer
• Xen is a Linux Foundation Collaborative Project
ᵒhttp://xenproject.org
• Supported by industry pillars
ᵒAmazon, Cisco, Google, Intel
• Why the Linux Foundation?
ᵒProvide a trusted and neutral governance model
• What about XenServer?
ᵒXenServer will see accelerated growth
ᵒXenServer continues to power XenDesktop, CloudPlatform and NetScaler
6. What’s so Great About Xen?
• It’s robust
ᵒNative 64-bit hypervisor
ᵒRuns on bare metal
ᵒDirectly leverages CPU hardware for virtualization
• It’s widely-deployed
ᵒTens of thousands of organizations have deployed Xen
• It’s advanced
ᵒOptimized for hardware-assisted virtualization and paravirtualization
• It’s trusted
ᵒOpen, resilient Xen security framework
• It’s part of mainline Linux
8. Understanding the Domain 0 Component
Domain 0 is a compact specialized Linux VM that manages the network and
storage I/O of all guest VMs … and isn’t the XenServer hypervisor
9. Understanding the Linux VM Component
Linux VMs include paravirtualized kernels and drivers, and Xen is part of
Mainline Linux 3.0
10. Understanding the Windows VM Component
Windows VMs use paravirtualized drivers to access storage and network
resources through Domain 0
13. Management Architecture Comparison
“The Other Guys”: traditional management architecture with a single backend management server.
Xen Cloud Platform: distributed management architecture with a clustered management layer.
16. XenServer Pool
• Migrates VM disks from any storage type to any other storage type
ᵒLocal, DAS, iSCSI, FC
• Supports cross-pool migration
ᵒRequires compatible CPUs
• Encrypted migration model
• Specify management interface for optimal performance
Live Storage XenMotion
[Diagram: a live virtual machine and its VDI(s) on the XenServer hypervisor]
18. Memory Overcommit
• Feature name: Dynamic Memory Control
• Ability to over-commit RAM resources
• VMs operate in a compressed or balanced mode within a set range
• Allows memory settings to be adjusted while the VM is running
• Can increase the number of VMs per host
19. High Availability
• Automatically monitors hosts and VMs
• Easily configured within XenCenter
• Relies on shared storage
ᵒiSCSI, NFS, HBA
• Reports failure capacity for DR planning purposes
20. Cost Effective VM Densities
• Supporting VMs with up to:
ᵒ16 vCPUs per VM
ᵒ128 GB memory per VM
• Supporting hosts with up to:
ᵒ1 TB physical RAM
ᵒ160 logical processors
• Yielding up to 150 desktop images per host
• Cisco Validated Design for XenServer on UCS
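The stated maximums can be turned into a rough density estimate. The sketch below is illustrative arithmetic only; the function name, the 4:1 vCPU overcommit ratio, and the sample desktop VM sizing are assumptions, not XenServer configuration limits.

```python
def max_vm_density(host_ram_gb, host_logical_cpus,
                   vm_ram_gb, vm_vcpus, vcpu_overcommit=4):
    """Estimate how many identical VMs fit on a host, bounded by RAM and
    vCPU capacity. vcpu_overcommit is an assumed vCPU:pCPU ratio; real
    limits depend on workload and the platform's configuration maximums."""
    by_ram = host_ram_gb // vm_ram_gb
    by_cpu = (host_logical_cpus * vcpu_overcommit) // vm_vcpus
    return int(min(by_ram, by_cpu))

# A host at the stated maximums (1 TB RAM, 160 logical processors)
# running hypothetical 4 GB / 2 vCPU desktop VMs:
print(max_vm_density(1024, 160, 4, 2))  # RAM allows 256, vCPUs allow 320 -> 256
```

With RAM as the binding constraint here, the estimate lands comfortably above the 150-desktops-per-host figure quoted on the slide.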
21. Distributed Virtual Network Switching
• Virtual Switch
ᵒOpen source: www.openvswitch.org
ᵒProvides a rich layer 2 feature set
ᵒCross host internal networks
ᵒRich traffic monitoring options
ᵒOpen vSwitch 1.4 compliant
• DVS Controller
ᵒVirtual appliance
ᵒWeb-based GUI
ᵒCan manage multiple pools
ᵒCan exist within pool it manages
22. Switch Policies and Live Migration
[Diagram: per-VM switch policies travel with the VMs during live migration between hosts]
• Linux VM1: allow all traffic
• Linux VM2: allow SSH on eth0; allow HTTP on eth1
• Windows VM: allow RDP and deny HTTP
• SAP VM: allow only SAP traffic; RSPAN to VLAN 26
24. vSphere 5.1 and XCP 1.6 Quick Comparison
Feature                                 XCP            vSphere Edition
Hypervisor high availability            Yes            Standard
NetFlow                                 Yes            Enterprise Plus
Centralized network management          Yes            Enterprise Plus
Distributed virtual network switching   Yes            Enterprise Plus with Cisco Nexus 1000v
Storage live migration                  Yes            Standard
Serial port aggregation                 Not available  Standard
Optimized for desktop workloads         Yes            Desktop Edition (repackaged Enterprise Plus)
Licensing                               Free           Processor-based
25. Getting involved with XCP
• Download it and use it
• http://lists.xen.org/xen-api
• https://github.com/xen-org
• https://launchpad.net/xcp
Welcome to the XenServer Technical Presentation. In this presentation we’ll be covering many of the core features of XenServer, and we’ll have the option of diving a bit deeper in areas which you may be interested in.
More and more organizations are choosing to host different workloads on different hypervisors, enabling not only better overall performance of their environment but also better use of their budget. Over 40% of companies in a recent Info-Tech study said they were using two or more server virtualization vendors within their datacenter, with almost half of these using Citrix and VMware together. The major challenge of this model is day-to-day management tasks, such as live migration, which you ideally want to complete through one management console. Currently both Citrix and Microsoft can manage each other's VMs as well as VMware's, and VMware is beginning to offer management of Microsoft VMs.
Since XenServer is based on the open-source Xen project, it's important to understand how Xen itself works. Xen is a bare-metal hypervisor which directly leverages virtualization features present in most CPUs from Intel and AMD since approximately 2007. These CPUs all feature the Intel VT-x or AMD-V extensions, which allow virtual guests to run without performance-robbing emulation. When Xen was first developed, the success of VMware ESX was largely based on a series of highly optimized emulation routines. Those routines were needed to address shortcomings in the original x86 instruction set which created obstacles to running multiple general-purpose "protected mode" operating systems, such as Windows 2000, in parallel. With Xen, and XenServer, those obstacles were overcome through use of both the hardware virtualization extensions and paravirtualization. Paravirtualization is a concept in which either the operating system, or specific drivers, are modified to become "virtualization aware". Linux itself can optionally run paravirtualized, while Windows requires both hardware assistance and paravirtualized drivers to run at maximum potential on a hypervisor. These advances served to spur early adoption of Xen-based platforms, whose performance outstripped ESX in many critical applications. Eventually VMware released ESXi to leverage hardware assistance and paravirtualization, but it wasn't until 2011 and vSphere 5 that ESXi became the only hypervisor for vSphere.
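On Linux, the hardware virtualization support described above is advertised in the CPU flags, so checking for it is straightforward. This is a minimal sketch; the function name is hypothetical, and it only inspects the `vmx` (Intel VT-x) and `svm` (AMD-V) flags as exposed in `/proc/cpuinfo`.

```python
def hw_virt_support(cpuinfo_text):
    """Report which hardware virtualization extension the CPU advertises,
    based on the 'flags' line of /proc/cpuinfo: 'vmx' for Intel VT-x,
    'svm' for AMD-V, or None if neither is present."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "Intel VT-x"
            if "svm" in flags:
                return "AMD-V"
            return None
    return None

# On a real Linux host you would read the actual file:
#   support = hw_virt_support(open("/proc/cpuinfo").read())
sample = "flags\t\t: fpu vme de pse msr vmx sse2"
print(hw_virt_support(sample))  # Intel VT-x
```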
This slide shows a blowup of the Xen virtualization engine and the virtualization stack "Domain 0", alongside a Windows and a Linux virtual machine. The green arrows show memory and CPU access, which goes through the Xen engine down to the hardware; in many cases Xen will get out of the way of the virtual machine and allow it to go right to the hardware. Xen is a thin layer of software that runs directly on top of the hardware, only around 50,000 lines of code. The other lines show the path of I/O traffic on the server: storage and network I/O connect through a high-performance memory bus in Xen to the Domain 0 environment, where the requests are sent through standard Linux device drivers to the hardware below.
Domain 0 is a Linux VM with higher priority access to the hardware than the guest operating systems. Domain 0 manages the network and storage I/O of all guest VMs, and because it uses Linux device drivers, a broad range of physical devices is supported.
Linux VMs include paravirtualized kernels and drivers. Storage and network resources are accessed through Domain 0, while CPU and memory are accessed through Xen to the hardware. See http://wiki.xen.org/wiki/Mainline_Linux_Kernel_Configs
Windows VMs use paravirtualized drivers to access storage and network resources through Domain 0. XenServer is designed to utilize the virtualization capabilities of Intel VT and AMD-V enabled processors. Hardware virtualization enables high-performance virtualization of the Windows kernel without using legacy emulation technology.
Since all these use cases depend on a solid data center platform, let’s start by exploring the features critical to successful enterprise virtualization
Successful datacenter solutions require an easy-to-use management solution, and XenServer is no different. For XenServer this management solution is called XenCenter. If you're familiar with vCenter for vSphere, you'll see a number of common themes. XenCenter is the management console for all XenServer operations, and while there is a powerful CLI and API for XenServer, the vast majority of customers perform daily management tasks from within XenCenter. These tasks range from starting and stopping VMs and managing the core infrastructure such as storage and networks, through to configuring advanced features such as HA, workload placement and alerting. This single pane of glass also allows administrators to directly access the consoles of the virtual machines themselves. As you would expect, there is a fairly granular set of permissions which can be applied, and I'll cover that topic in just a little bit.
What differentiates Live Storage Migration from Live VM Migration is that with Live Storage Migration the storage used for the virtual disks is moved from one storage location to another, while the VM itself may not change virtualization hosts. In XenServer, Live VM Migration is branded XenMotion, and logically Live Storage Migration became Storage XenMotion. With Storage XenMotion, live migration occurs using a shared-nothing architecture, which effectively means that other than a reliable network connection between source and destination, no other elements of the virtualization infrastructure need be common. This means that with Storage XenMotion you can support a large number of storage agility tasks, all from within XenCenter. For example:
• Upgrade a storage array
• Provide tiered storage arrays
• Upgrade a pool with VMs on local storage
• Rebalance VMs between XenServer pools, or CloudStack clusters
One of the key problems facing virtualization admins is the introduction of newer servers into older resource pools. There are several ways vendors have chosen to solve this problem. They can either "downgrade" the cluster to a known level (say Pentium Pro or Core 2), disallow mixed CPU pools, or level the pool to the lowest common feature set. The core issue when selecting the correct solution is to understand how workloads actually leverage the CPU of the host. When a guest has direct access to the CPU (in other words, there is no emulation shim in place), that guest also has the ability to interrogate the CPU for its capabilities. Once those capabilities are known, the guest can optimize its execution to leverage the most advanced features it finds and thus maximize its performance. The downside is that if the guest is migrated to a host which lacks a given CPU feature, the guest is likely to crash in a spectacular way. Vendors which define a specific processor architecture as the "base" are effectively deciding that feature set in advance, hooking the CPUID instruction and returning that base set of features. The net result could be performance well below that possible with the "least capable" processor in the pool. XenServer takes a different approach: it looks at the feature-set capabilities of each CPU and leverages the FlexMigration instruction set within the CPU to create a feature mask. The idea is to ensure that only the specific features present in the newer processor are disabled, and that the resource pool runs at its maximum potential. This model ensures that live migrations are completely safe regardless of the processor architectures, so long as the processors come from the same vendor.
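The masking idea above boils down to an intersection: the pool exposes only the feature bits every host shares. The sketch below illustrates that with hypothetical 8-bit feature words; real CPUID feature sets are larger and applied per-register, and the function name is an assumption.

```python
def pool_feature_mask(host_feature_sets):
    """Compute the CPU feature set a mixed pool can safely expose: the
    intersection (bitwise AND) of every host's feature bits. Features
    present only on newer hosts are masked out; common features stay."""
    mask = ~0
    for features in host_feature_sets:
        mask &= features
    return mask

# Hypothetical 8-bit feature words for three host generations:
old_host = 0b00011111
mid_host = 0b00111111
new_host = 0b11111111
print(bin(pool_feature_mask([old_host, mid_host, new_host])))  # 0b11111
```

The result matches the oldest host's capabilities, which is exactly why guests can migrate anywhere in the pool without tripping over a missing instruction.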
The ability to overcommit memory in a hypervisor was born at a time when the ability to overcommit a CPU far outpaced the ability to populate physical memory in a server in a cost-effective manner. The end objective of overcommitting memory is to increase the quantity of VMs which a given host can run. This led to multiple ways of extracting more memory from a virtualization host than was physically present. The four most common approaches are commonly referred to as "transparent page sharing", "memory ballooning", "page swap" and "memory compression". While each has the potential to solve part of the problem, using multiple solutions often yielded the best outcome. Transparent page sharing seeks to share the 4k memory pages used by an operating system to store its read-only code. Memory ballooning introduces a "memory balloon" which appears to consume some of the system memory and effectively shares it between multiple virtual machines. "Page swap" is nothing more than placing memory pages which haven't been accessed recently on a disk storage system, and "memory compression" seeks to compress the memory (either swapped or in memory) with a goal of creating additional free memory from commonalities in memory between virtual machines. Since this technology has been an evolutionary attempt to solve a specific problem, it stands to reason that several of the approaches offer minimal value in today's environment. For example, transparent page sharing assumes that the read-only memory pages in an operating system are common across VMs, but the reality is that the combination of large memory pages and memory page randomization and tainting has rendered the benefits of transparent page sharing largely ineffective. The same holds true for page swapping, whose performance overhead often far exceeds the benefit. This means that the only truly effective solutions today are memory ballooning and memory compression.
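To see why memory compression can reclaim space at all, it helps to remember that guest memory pages are often highly repetitive (zeroed or pattern-filled). A minimal illustration using Python's standard zlib, with a hypothetical all-zero 4 KB "page":

```python
import zlib

# A mostly-empty 4 KB guest memory page compresses dramatically;
# the difference is the memory a compression-based overcommit scheme
# can hand back to other VMs.
page = b"\x00" * 4096
compressed = zlib.compress(page)
print(len(page), len(compressed))  # 4096 vs. a few dozen bytes
```

Real hypervisor memory compressors use much faster algorithms than zlib and operate on live page contents, but the space-for-CPU trade-off is the same.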
XenServer currently implements a memory ballooning solution under the feature name "Dynamic Memory Control". DMC leverages a balloon driver within the XenServer tools to present the guest with a known quantity of memory at system startup, and then modifies the amount of free memory seen by the guest in the event the host experiences memory pressure. It's important to present the operating system with a known fixed memory value at system startup, as that's when the operating system defines key parameters such as cache values.
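The administrator's role in DMC is to set a dynamic range; under pressure, the balloon driver moves the guest's memory target within it. The sketch below is an illustrative model of that clamping behavior, not XenServer's actual ballooning policy; the function name and the "fair share" input are assumptions.

```python
def dmc_target(dynamic_min_mb, dynamic_max_mb, host_free_share_mb):
    """Pick a guest memory target under a ballooning scheme: move the guest
    toward its share of host memory, but never outside the administrator-set
    [dynamic_min, dynamic_max] range."""
    return max(dynamic_min_mb, min(dynamic_max_mb, host_free_share_mb))

# Guest configured for a 2-4 GB dynamic range. Host pressure leaves only a
# 1.5 GB share, so the guest is compressed to its 2 GB floor:
print(dmc_target(2048, 4096, 1536))  # 2048
# With ample memory the guest balloons up to its 4 GB ceiling:
print(dmc_target(2048, 4096, 8192))  # 4096
```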
As today's hosts get more powerful, they are often tasked with hosting increasing numbers of virtual machines. Only a few years ago, server consolidation efforts were generating consolidation ratios of 4:1 or even 8:1; today's faster processors, coupled with greater memory densities, can easily support over a 20:1 consolidation ratio without significantly overcommitting CPUs. This creates significant risk of application failure in the event of a single host failure. High availability within XenServer protects your investment in virtualization by ensuring critical resources are automatically restarted in the event of a host failure. There are multiple restart options, allowing you to precisely define what "critical" means in your environment.
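The "failure capacity" that XenServer HA reports answers a simple capacity question: how many hosts can fail before the survivors can no longer restart every protected VM? The sketch below is a simplified, RAM-only model with identical hosts; XenServer's real planner also accounts for per-VM placement and overheads, and the function name is hypothetical.

```python
import math

def ha_failure_capacity(num_hosts, host_ram_mb, total_vm_ram_mb):
    """Simplified HA failure-capacity model: with identical hosts, the pool
    tolerates n host failures if the remaining hosts can still hold every
    protected VM's memory."""
    hosts_needed = math.ceil(total_vm_ram_mb / host_ram_mb)
    return max(0, num_hosts - hosts_needed)

# 8 hosts with 256 GB each, running protected VMs totalling 1 TB of RAM:
# 1 TB needs at least 4 hosts, so up to 4 host failures can be absorbed.
print(ha_failure_capacity(8, 256 * 1024, 1024 * 1024))  # 4
```

A reported capacity of 0 is the DR-planning red flag: a single host failure would leave some protected VMs without a home.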
When desktop virtualization is the target workload, the correct hypervisor solution is one which not only provides a high-performance platform, with features designed to lower overall deployment costs and address critical use cases, but which also offers flexibility in VM and host configurations while still delivering a cost-effective VM density. Since this is a classic case where the use case matters, take a look at the Cisco Validated Design for XenDesktop on UCS with XenServer: http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/Virtualization/ucs_xd_xenserver_ntap.pdf
It is through the use of SR-IOV and other cloud optimizations that the NetScaler SDX platform is able to provide its level of throughput, scalability and tenant isolation. The NetScaler SDX is a hardware Application Delivery Controller capable of sustained throughput over 50 Gbps, all powered by a stock cloud-optimized XenServer 6 hypervisor.
One of the most obvious comparisons is between vSphere and XenServer. A few years ago vSphere was the clear technical leader, but today the gap has closed considerably, and there are clear differences in overall strategy and market potential. Key areas where XenServer had lagged, for example storage live migration or advanced network switching, are either being addressed or have already been addressed. Of course there will always be features which XenServer is unlikely to implement, such as serial port aggregation, or platforms it's unlikely to support, such as legacy Windows operating systems, but for the majority of virtualization tasks both platforms are compelling solutions.