XenServer 6.1 Technical Overview
September 2012
What is XenServer?
What’s so Great About Xen?

• It’s robust
 ᵒNative 64-bit hypervisor
 ᵒRuns on bare metal
 ᵒDirectly leverages CPU hardware for virtualization
• It’s widely deployed
 ᵒTens of thousands of organizations have deployed Xen
• It’s advanced
 ᵒOptimized for hardware-assisted virtualization and paravirtualization
• It’s trusted
 ᵒOpen, resilient Xen security framework
• It’s part of mainline Linux

Understanding Architectural Components

The Xen hypervisor and control domain (dom0) manage physical server
 resources among virtual machines




Understanding the Domain 0 Component

Domain 0 is a compact, specialized Linux VM that manages the network and
storage I/O of all guest VMs. Dom0 is not itself the XenServer hypervisor.




Understanding the Linux VM Component

Linux VMs include paravirtualized kernels and drivers, and Xen is part of
mainline Linux 3.0




Understanding the Windows VM Component

Windows VMs use paravirtualized drivers to access storage and network
resources through Domain 0




XenServer Meets All Virtualization Needs
• Enterprise Data Center
 ᵒHigh performance, resilient virtualization platform
 ᵒSimple deployment and management model
 ᵒHost-based licensing to control CAPEX

• Desktop Virtualization
 ᵒOptimized for high-performance desktop workloads
 ᵒStorage optimizations to control VDI CAPEX

• Cloud Infrastructure
 ᵒPlatform for IaaS and Cloud Service Providers
 ᵒPowers the NetScaler SDX platform
 ᵒFully supports Software Defined Networking

Enterprise Data Center Virtualization
XenCenter – Simple XenServer Management

• A single pane of glass for management
• Manage XenServer hosts
 ᵒStart/stop VMs
• Manage XenServer resource pools
 ᵒShared storage
 ᵒShared networking
• Configure advanced features
 ᵒHA, WLB, reporting, alerting
• Configure updates
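
Everything XenCenter does is driven through XenServer's XAPI management interface, so the same operations can be scripted. Below is a minimal sketch using the XenAPI Python bindings shipped with the XenServer SDK; the host address and credentials are placeholders:

    import XenAPI

    session = XenAPI.Session("https://xenserver.example.com")  # placeholder host
    session.xenapi.login_with_password("root", "password")     # placeholder creds
    try:
        for vm_ref in session.xenapi.VM.get_all():
            record = session.xenapi.VM.get_record(vm_ref)
            # Skip templates and the control domain; list only real guests.
            if not record["is_a_template"] and not record["is_control_domain"]:
                print(record["name_label"], record["power_state"])
    finally:
        session.xenapi.session.logout()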

Management Architecture Comparison
• “The Other Guys”: a traditional management architecture with a single backend management server
• Citrix XenServer: a distributed management architecture with a clustered management layer

Role-Based Administration

• Provides user roles with varying permissions
  • Pool Admin
  • Pool Operator
  • VM Power Admin
  • VM Admin
  • VM Operator
  • Read-only
• Roles are defined within a resource pool
• Roles are assigned to Active Directory users and groups
• Audit logging via Workload Reports
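
Role assignments can also be made programmatically through XAPI's subject and role classes. A hedged sketch; the AD group name is illustrative and the other_config key should be verified against the SDK:

    import XenAPI

    session = XenAPI.Session("https://xenserver.example.com")  # placeholder host
    session.xenapi.login_with_password("root", "password")     # placeholder creds

    # Look up the built-in "VM Operator" role, then grant it to an AD group
    # that was previously added to the pool as a subject.
    role = session.xenapi.role.get_by_name_label("vm-operator")[0]
    for subject in session.xenapi.subject.get_all():
        cfg = session.xenapi.subject.get_other_config(subject)
        if cfg.get("subject-name") == "EXAMPLE\\vdi-operators":  # illustrative group
            session.xenapi.subject.add_to_roles(subject, role)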


XenMotion Live VM Migration




• Live migration of running VMs between hosts in a resource pool
• Requires storage shared between the source and destination hosts

More about XenMotion
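
A hedged sketch of the equivalent XAPI call; VM.pool_migrate live-migrates a running VM within its pool (VM and host names are placeholders):

    import XenAPI

    session = XenAPI.Session("https://xenserver.example.com")  # placeholder host
    session.xenapi.login_with_password("root", "password")     # placeholder creds

    vm = session.xenapi.VM.get_by_name_label("web01")[0]
    dest = session.xenapi.host.get_by_name_label("xenserver2")[0]
    # "live": "true" keeps the VM running while its memory is streamed across.
    session.xenapi.VM.pool_migrate(vm, dest, {"live": "true"})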


Live Storage XenMotion

• Migrates a live VM's disks (VDIs) from any storage type to any other storage type
 ᵒLocal, DAS, iSCSI, FC
• Supports cross-pool migration
 ᵒRequires compatible CPUs
• Encrypted migration model
• Specify the management interface for optimal performance

More about Storage XenMotion
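
Cross-pool Storage XenMotion is exposed through the VM.migrate_send and host.migrate_receive XAPI calls introduced in 6.1. The sketch below is an assumption-laden outline; pool addresses, names, and the exact parameter shapes should be checked against the 6.1 SDK documentation:

    import XenAPI

    # Two sessions: the source pool and the destination pool (placeholders).
    src = XenAPI.Session("https://pool-a.example.com")
    src.xenapi.login_with_password("root", "password")
    dst = XenAPI.Session("https://pool-b.example.com")
    dst.xenapi.login_with_password("root", "password")

    vm = src.xenapi.VM.get_by_name_label("web01")[0]
    host = dst.xenapi.host.get_by_name_label("xenserver-b1")[0]
    net = dst.xenapi.network.get_all()[0]  # network carrying the migration traffic

    # The destination hands back an opaque token describing where to send the VM.
    token = dst.xenapi.host.migrate_receive(host, net, {})

    # Map each of the VM's disks onto a target SR in the destination pool.
    target_sr = dst.xenapi.SR.get_by_name_label("Local storage")[0]
    vdi_map = {src.xenapi.VBD.get_VDI(vbd): target_sr
               for vbd in src.xenapi.VM.get_VBDs(vm)
               if src.xenapi.VBD.get_type(vbd) == "Disk"}

    src.xenapi.VM.migrate_send(vm, token, True, vdi_map, {}, {})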


Heterogeneous Resource Pools

• Safe live migrations across mixed-processor pools
• CPU feature masking trims newer CPUs down to the feature set they share with older CPUs (e.g. Features 1–4), so a VM started on XenServer 1 (older CPU) can migrate to XenServer 2 (newer CPU) and back
Memory Overcommit

• Feature name: Dynamic Memory Control
• Ability to over-commit RAM resources
• VMs operate in a compressed or balanced mode within a set range
• Memory settings can be adjusted while the VM is running
• Can increase the number of VMs per host
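
A hedged sketch of adjusting a VM's dynamic memory range on the fly via XAPI; the values are examples, and XAPI passes 64-bit sizes as strings:

    import XenAPI

    session = XenAPI.Session("https://xenserver.example.com")  # placeholder host
    session.xenapi.login_with_password("root", "password")     # placeholder creds

    GiB = 1024 ** 3
    vm = session.xenapi.VM.get_by_name_label("web01")[0]
    # Let the balloon driver float this VM between 1 GiB and 2 GiB of RAM.
    session.xenapi.VM.set_memory_dynamic_range(vm, str(1 * GiB), str(2 * GiB))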


Virtual Appliances (vApp)

• Support for “vApps”, or Virtual Appliances
 ᵒOVF definition of a Virtual Appliance
• A vApp contains one or more Virtual Machines
• Enables grouping of VMs, which can be utilized by:
 ᵒXenCenter
 ᵒIntegrated Site Recovery
 ᵒAppliance Import and Export
 ᵒHA
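
vApps map onto XAPI's VM_appliance class. A hedged sketch, assuming the record fields below are sufficient; all names are placeholders:

    import XenAPI

    session = XenAPI.Session("https://xenserver.example.com")  # placeholder host
    session.xenapi.login_with_password("root", "password")     # placeholder creds

    # Create the appliance, then attach member VMs to it.
    app = session.xenapi.VM_appliance.create(
        {"name_label": "three-tier-app", "name_description": "web + db tiers"})
    for name in ("web01", "db01"):
        vm = session.xenapi.VM.get_by_name_label(name)[0]
        session.xenapi.VM.set_appliance(vm, app)

    session.xenapi.VM_appliance.start(app, False)  # False = start running, not paused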


Virtual Machine Protection and Recovery

• Policy-based snapshotting and archiving
• Separate scheduling options for snapshot and archive
 ᵒSnapshot-only, or snapshot and archive
• Policy configuration
 ᵒAdd multiple VMs to a policy
 ᵒSearch filter available
 ᵒA VM can belong to only one policy
 ᵒXenCenter or CLI



High Availability in XenServer

• Automatically monitors hosts and VMs
• Easily configured within XenCenter
• Relies on shared storage
 ᵒiSCSI, NFS, HBA
• Reports failure capacity for DR planning purposes

More about HA
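
A hedged sketch of turning HA on programmatically: XAPI's pool.enable_ha takes a list of heartbeat SRs, and the failure tolerance is set on the pool. The SR name and tolerance value are placeholders:

    import XenAPI

    session = XenAPI.Session("https://xenserver.example.com")  # placeholder host
    session.xenapi.login_with_password("root", "password")     # placeholder creds

    # Use a shared SR for the storage heartbeat, then declare how many
    # host failures the HA plan must tolerate.
    sr = session.xenapi.SR.get_by_name_label("iSCSI shared SR")[0]
    session.xenapi.pool.enable_ha([sr], {})
    pool = session.xenapi.pool.get_all()[0]
    session.xenapi.pool.set_ha_host_failures_to_tolerate(pool, "1")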


Advanced Data Center Automation
Optimizing Storage – Integrated StorageLink

Virtualization can hinder the linkage between servers and storage, turning expensive storage systems into little more than “dumb disks.”

Citrix StorageLink™ technology lets your virtual servers fully leverage all the power of existing storage systems.

More about StorageLink
Workload Placement Services

• Feature name: Workload Balancing
• Automated guest start-up and
  management based on defined
  policy
• Guests automatically migrate from
  one host to another based on
  resource usage
• Power-on/off hosts as needed
• Report on utilization of pool
  resources – by VM, by host, etc.
More about WLB

Integrated Site Recovery

• Supports LVM SRs only in this release
• Replication/mirroring setup outside
  scope of solution
 ᵒFollow vendor instructions
 ᵒBreaking of replication/mirror also manual
• Works with every iSCSI and FC
  array on HCL
• Supports active-active DR



                                                    More about Site Recovery


Delegated Web Based Administration

• Enables:
 • IT delegation for administrators
 • VM level administration for end users
• Support for multiple pools
• Active Directory enabled
• XenVNC and RDP console access




Live Memory Snapshot and Rollback

• Live VM snapshot and revert
 ᵒBoth memory and disk state are
  captured
 ᵒOptional quiesce via the VSS provider (Windows guests)
 ᵒOne-click revert
• Snapshot branches
 ᵒSupport for parallel subsequent
  checkpoints based on a previous
  common snapshot
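
A hedged sketch of the corresponding XAPI calls: VM.checkpoint captures disk plus memory state, and VM.revert rolls back. Names are placeholders:

    import XenAPI

    session = XenAPI.Session("https://xenserver.example.com")  # placeholder host
    session.xenapi.login_with_password("root", "password")     # placeholder creds

    vm = session.xenapi.VM.get_by_name_label("web01")[0]
    # checkpoint = disk + memory; plain VM.snapshot would capture disk only.
    snap = session.xenapi.VM.checkpoint(vm, "before-patching")
    # ... later, one call rolls the VM back to the captured state:
    session.xenapi.VM.revert(snap)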




Desktop Optimized XenServer
Supporting High Performance Graphics

• Feature name: GPU pass-through
• Enables high-end graphics in VDI
  deployments with HDX 3D Pro
• Optimal CAD application support
  with XenDesktop
• More powerful than RemoteFX,
  virtual GPUs, or other general
  purpose graphics solutions




Benefits of GPU Pass-through

Without GPU pass-through, each user requires their own blade PC. With GPU pass-through, multiple GPU cards in a single XenServer host are shared across users, cutting hardware costs by up to 75%.

More about GPU Pass-Through
Controlling Shared Storage Costs – IntelliCache

• Caching of XenDesktop 5 images
• Leverages local storage
• Reduce IOPS on shared storage
• Supported since XenServer 5.6 SP2




IntelliCache Fundamentals
1. Master image is created through XenDesktop MCS on NFS-based shared storage
2. A VM is configured to use the master image
3. The VM using the master image is started
4. XenServer creates a read-cache object on local storage
5. Reads in the VM are served from the local cache
6. Additional reads go to the SAN when required
7. Writes happen in a per-VM VHD child
8. The local “write” cache is deleted when the VM is shut down or restarted
9. Additional VMs use the same read cache
Cost Effective VM Densities
• Supporting VMs with up to:
 ᵒ16 vCPU per VM
 ᵒ128GB Memory per VM
• Supporting XenServer hosts with up to:
 ᵒ1TB Physical RAM
 ᵒ160 logical processors
• Yielding up to 150 Desktop images per host


• Included at no cost with all XenDesktop purchases


• Cisco Validated Design for XenDesktop on UCS

Cloud Optimized XenServer
Distributed Virtual Network Switching

• Virtual Switch
 ᵒOpen source: www.openvswitch.org
 ᵒProvides a rich layer 2 feature set
 ᵒCross-host internal networks
 ᵒRich traffic monitoring options
 ᵒOpen vSwitch 1.4 compliant
• DVS Controller
 ᵒVirtual appliance
 ᵒWeb-based GUI
 ᵒCan manage multiple pools
 ᵒCan exist within the pool it manages
Switch Policies and Live Migration
Example per-VM policies, which travel with VMs during live migration:

• Windows VM: allow all traffic
• SAP VM: allow only SAP traffic; RSPAN to VLAN 26
• Linux VM: allow SSH on eth0; allow HTTP on eth1
• Linux VM1: allow all traffic
• Linux VM2: allow SSH on eth0; allow HTTP on eth1
• Windows VM: allow RDP and deny HTTP

More about DVSC
Single Root IO Virtualization (SR-IOV)
• PCI specification for direct I/O access
 ᵒHardware supports multiple PCI IDs
 ᵒPresents multiple virtual NICs from a single physical NIC
• Virtual NICs are presented directly into guests via a VF driver, bypassing the dom0 vSwitch
 ᵒMinimizes hypervisor overhead in high-performance networks
• Not without downsides
 ᵒRequires specialized hardware
 ᵒCannot participate in the DVS
 ᵒDoes not support live migration
 ᵒLimited number of virtual NICs

More about SR-IOV
NetScaler SDX – Powered by XenServer

• Complete tenant isolation
• Complete independence
• Partitions within instances
• Optimized network: 50+ Gbps
• Runs default XenServer 6




System Center Integration
Support for SCVMM

• SCVMM communicates with CIMOM
  in XenServer which communicates
  with XAPI
• Requires SCVMM 2012
• Very easy to set up
 ᵒDelivered as Integration Suite
  Supplemental Pack
 ᵒAdd Resource Pool or host
• Secure communication using
  certificates


Support for SCOM

• Monitor XenServer hosts through System
  Center Operations Manager
• Support for SCOM 2007 R2 and higher
• Part of Integration Suite Supplemental Pack
• Monitor various host information (the host is monitored as a Linux host)
 ᵒMemory usage
 ᵒProcess information
 ᵒHealth status




XenServer Editions
Summary of Key Features and Packages
• Platinum adds:
 ᵒIntegrated disaster recovery management
 ᵒProvisioning services for physical and virtual workloads

• Enterprise adds:
 ᵒDynamic Workload Balancing and Power Management
 ᵒWeb Management Console with Delegated Admin
 ᵒMonitoring pack for Systems Center Ops Manager

• Advanced adds:
 ᵒHigh Availability
 ᵒDynamic Memory Control
 ᵒShared-nothing live storage migration

• Free:
 ᵒResource pooling with shared storage
 ᵒCentralized management console
 ᵒNo performance restrictions

vSphere 5.1 and XenServer 6.1 Quick Comparison
Feature                                   XenServer Edition   vSphere Edition
Hypervisor high availability              Advanced            Standard
NetFlow                                   Advanced            Enterprise Plus
Centralized network management            Free                Enterprise Plus
Distributed virtual network switching     Advanced            Enterprise Plus with Cisco Nexus 1000v
Storage live migration                    Advanced            Standard
Serial port aggregation                   Not available       Standard
Network-based resource scheduling         Enterprise          Not available
Disk-I/O-based resource scheduling        Enterprise          Not available
Optimized for desktop workloads           Yes                 Desktop Edition is repackaged Enterprise Plus
Licensing                                 Host-based          Processor-based

XenServer 6.1 – Product Edition Feature Matrix
Feature                                             Free   Advanced       Enterprise     Platinum
64-bit Xen Hypervisor                                ✓       ✓              ✓              ✓
Active Directory Integration                         ✓       ✓              ✓              ✓
VM Conversion Utilities                              ✓       ✓              ✓              ✓
Live VM Migration with XenMotion™                    ✓       ✓              ✓              ✓
Multi-Server Management with XenCenter               ✓       ✓              ✓              ✓
Management Integration with Systems Center VMM       ✓       ✓              ✓              ✓
Automated VM Protection and Recovery                         ✓              ✓              ✓
Live Storage Migration with Storage XenMotion™               ✓              ✓              ✓
Distributed Virtual Switching                                ✓              ✓              ✓
Dynamic Memory Control                                       ✓              ✓              ✓
High Availability                                            ✓              ✓              ✓
Performance Reporting and Alerting                           ✓              ✓              ✓
Mixed Resource Pools with CPU Masking                        ✓              ✓              ✓
Dynamic Workload Balancing and Power Management                             ✓              ✓
GPU Pass-Through for Desktop Graphics Processing                            ✓              ✓
IntelliCache™ for XenDesktop Storage Optimization                           ✓              ✓
Live Memory Snapshot and Revert                                             ✓              ✓
Provisioning Services for Virtual Servers                                   ✓              ✓
Role-Based Administration and Audit Trail                                   ✓              ✓
StorageLink™ Advanced Storage Management                                    ✓              ✓
Monitoring Pack for Systems Center Ops Manager                              ✓              ✓
Web Management Console with Delegated Admin                                 ✓              ✓
Provisioning Services for Physical Servers                                                 ✓
Site Recovery                                                                              ✓
Price                                               Free   $1000/server   $2500/server   $5000/server
Subscription Advantage
Citrix Subscription Advantage entitles customers to upgrade to the latest software version of their product at no additional charge. Support is not included.

Renewal Categories
• Current (active memberships): Renewal SRP
• Reinstatement (memberships expired 1 through 365 days): Renewal SRP + pro-rated renewal for time expired + 20% fee
• Recovery (memberships expired more than 365 days): Recovery SRP

Edition                  Renewal SRP          Recovery SRP
XenServer Platinum       $675.00 per SVR      $2,800.00 per SVR
XenServer Enterprise     $325.00 per SVR      $1,400.00 per SVR
XenServer Advanced       $130.00 per SVR      $560.00 per SVR

Support Options
XenServer Support Options: Premier Support
• Cost: 7% of license cost (SRP)
• Product coverage: XenServer Advanced, Enterprise and Platinum
• Coverage hours: 24x7x365
• Incidents: unlimited
• Named contacts: unlimited
• Type of access: phone/web/email

Add-on Service Options
Software or Hardware TRM               200 hours / unlimited incidents / 1 region    $40,000
Additional TRM hours                   100 hours                                     $20,000
Fully Dedicated TRM                    1600 hours / unlimited incidents / 1 region   $325,000
On-site Days                           On-site technical support service             $2,000 per day
Assigned Escalation                    200 hours / 1 region (must have TRM)          $16,000
Fully Dedicated Assigned Escalation    1600 hours                                    $480,000

It’s Your Budget … Spend it Wisely

• Single Vendor: vendor lock-in is great for the vendor; beware product lifecycles and tool-set changes
• ROI Can Be Manipulated: ROI calculators always show the vendor author as best; use your own numbers
• Understand the Support Model: over-buying is costly, so get what you need; support call priority varies with tiered models
• Use the Correct Tool: some projects have requirements best suited to a specific tool; understand the deployment and licensing impact
• Leverage Costly Features as Required: blanket purchases benefit only the vendor; charge feature requirements back to the project

Work better. Live better.
GPU Pass-through Details
How GPU Pass-through Works

• Identical GPUs in a host auto-create a GPU group
• A GPU group can be assigned to a set of VMs; each VM attaches to a GPU at VM boot time
• When all GPUs in a group are in use, additional VMs requiring GPUs will not start
• GPU and non-GPU VMs can (and should) be mixed on a host
• GPU groups are recognized within a pool
 ᵒIf Servers 1, 2 and 3 each have GPU type 1, then VMs requiring GPU type 1 can be started on any of those servers
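
GPU groups surface in XAPI as the GPU_group class, and a pass-through attachment as a VGPU object. A hedged sketch, with the VM name, group choice, and device slot as placeholders:

    import XenAPI

    session = XenAPI.Session("https://xenserver.example.com")  # placeholder host
    session.xenapi.login_with_password("root", "password")     # placeholder creds

    vm = session.xenapi.VM.get_by_name_label("cad-desktop-01")[0]
    group = session.xenapi.GPU_group.get_all()[0]  # e.g. the auto-created group
    session.xenapi.VGPU.create(vm, group, "0", {})
    # The VM binds to a physical GPU from this group at boot time; if none
    # is free, the VM will not start (as described above).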



GPU Pass-through HCL is Server Specific

• Server
 ᵒHP ProLiant WS460c G6 Workstation series*
 ᵒIBM System x3650 M3
 ᵒDell Precision R5500
• GPU (1-4 per host)
 ᵒNVIDIA Quadro 2000, 4000, 5000, 6000
 ᵒNVIDIA Tesla M2070-Q
• Support for Windows guests only
• Important: Combinations of servers +
  GPUs must be tested as a pair


Limitations of GPU Pass-through

• GPU pass-through binds the VM to the host for the duration of the session
 ᵒRestricts XenMotion and WLB
• Multiple GPU types can exist in a single server
 ᵒE.g. high-performance and mid-performance GPUs
• VNC will be disabled, so RDP is required
• Fully supported for XenDesktop; best effort for other Windows workloads
 ᵒNot supported for Linux guests
• The HCL is very important




IntelliCache Details
Enabling IntelliCache on XenServer Hosts

• IntelliCache requires local EXT3 storage, selected during XenServer installation
• If this option is selected during installation, the host is automatically enabled for IntelliCache
• Manual steps are in the Administrator's Guide (see also the sketch below)
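
For hosts installed without the option, IntelliCache can be switched on afterwards. A hedged sketch of the XAPI route; the host must be disabled and have no running VMs, and the names are placeholders:

    import XenAPI

    session = XenAPI.Session("https://xenserver.example.com")  # placeholder host
    session.xenapi.login_with_password("root", "password")     # placeholder creds

    host = session.xenapi.host.get_by_name_label("xenserver1")[0]
    sr = session.xenapi.SR.get_by_name_label("Local storage")[0]
    session.xenapi.host.disable(host)
    session.xenapi.host.enable_local_storage_caching(host, sr)
    session.xenapi.host.enable(host)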




Enabling IntelliCache in XenDesktop

• http://support.citrix.com/article/CTX129052
• Use IntelliCache checkbox when
  adding a host in Desktop Studio
• Supported from XenDesktop 5 FP1




[Chart: IOPS – 1000 Users – No IntelliCache. NFS read and write ops sampled over roughly 45 minutes; y-axis scaled to 18,000 ops.]

[Chart: IOPS – 1000 Users – Cold Cache Boot. NFS read and write ops sampled over roughly 41 minutes; y-axis scaled to 3,000 ops.]

[Chart: IOPS – 1000 Users – Hot Cache Boot. NFS read and write ops sampled over roughly 45 minutes; y-axis scaled to 35 ops.]
Limitations of IntelliCache

• Best results are achieved with local SSD drives
 ᵒSAS and SATA are supported, but spinning disks are slower
• XenMotion and WLB restrictions (pooled images)
• Best-practice local space sizing (see the worked example below)
 ᵒAssumes 50% cache usage per user plus a daily log-off
 ᵒ[size of master image] + [users per server] × [size of master image] × 0.5
 ᵒThe cache disk may vary according to the VM lifecycle definition (reboot cycle)
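
A quick worked instance of that sizing rule, with illustrative numbers (a 20 GB master image and 100 users per server):

    # Illustrative only: local cache sizing per the rule above.
    master_gb = 20          # size of the master image (GB)
    users = 100             # users per server
    cache_gb = master_gb + users * master_gb * 0.5
    print(cache_gb)         # 1020.0 GB of local cache space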




IntelliCache Conclusions

• Dramatic reduction of I/O for pooled desktops
• Significant reduction of I/O for assigned desktops
 ᵒIOPS are still needed for write traffic
 ᵒLocal write cache benefits
• Storage investment is much lower, and more appropriate
• Overall TCO improvement of 15–30%
• Continued evolution of features to yield better performance and TCO




Workload Balancing Details
Components

• Workload Balancing components
 ᵒData Collection Manager service
 ᵒAnalysis Engine service
 ᵒWeb Service Host
 ᵒData Store
 ᵒXenServer
 ᵒXenCenter

The Data Collection Manager service gathers metrics from each resource pool into the data store, the Analysis Engine service evaluates them, and the Web Service Host delivers the resulting recommendations back to XenCenter.
Placement Strategies

• Maximize Performance
 ᵒDefault setting
 ᵒSpread workload evenly across all
  physical hosts in a resource pool
 ᵒThe goal is to minimize
  CPU, memory, and network pressure for
  all hosts
• Maximize Density
 ᵒFit as many virtual machines as
  possible onto a physical host
 ᵒThe goal is to minimize the number of
  physical hosts that must be online

Critical Thresholds

• Components included in WLB
  evaluation:
 ᵒCPU
 ᵒMemory
 ᵒNetwork Read
 ᵒNetwork Write
 ᵒDisk Read
 ᵒDisk Write
• An optimization recommendation is triggered when a threshold is reached


Reports

• Pool Health
 ᵒShows aggregated resource usage for a pool. Helps you evaluate the effectiveness of
  your optimization thresholds
• Pool Health History
 ᵒDisplays resource usage for a pool over time. Helps you evaluate the effectiveness of
  your optimization thresholds
• Host Health History
 ᵒSimilar to Pool Health History but filtered by a specific host
• Optimization Performance History
 ᵒShows resource usage before and after executing optimization recommendations



Reports

• Virtual Machine Motion History
 ᵒProvides information about how many times virtual machines moved on a resource
  pool, including the name of the virtual machine that moved, number of times it
  moved, and physical hosts affected
• Optimization Performance History
 ᵒShows resource usage before and after accepting and executing optimization
  recommendations
• Virtual Machine Performance History
 ᵒDisplays key performance metrics for all virtual machines that operated on a host during
  the specified timeframe



Workload Chargeback Reports

• Billing codes and costs
• Resources to be charged
• Exportable data




Workload Balancing Virtual Appliance

• Ready-to-use WLB Virtual Appliance
• Up and running with WLB in minutes
  rather than hours
• Small footprint, Linux Virtual
  Appliance
 ᵒ~150 MB




Installation

• Download Virtual Appliance
• Import Virtual Appliance
• Start Virtual Appliance
• Initial setup steps
 ᵒDefine steps
• Enable WLB in XenCenter (or via XAPI; see the sketch below)
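
A hedged sketch of pointing a pool at the WLB appliance through XAPI's pool.initialize_wlb call; the appliance address and all credentials are placeholders:

    import XenAPI

    session = XenAPI.Session("https://xenserver.example.com")  # placeholder host
    session.xenapi.login_with_password("root", "password")     # placeholder creds

    session.xenapi.pool.initialize_wlb(
        "wlb.example.com:8012",   # WLB appliance address
        "wlbuser", "wlbpass",     # credentials WLB accepts from XenServer
        "root", "password")       # credentials WLB uses to reach the pool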




Integrated Site Recovery
Details
Integrated Site Recovery

• Replaces StorageLink Gateway Site
  Recovery
• Decoupled from StorageLink adapters
• Supports LVM SRs only in this release
• Replication/mirroring setup outside
  scope of solution
 ᵒFollow vendor instructions
 ᵒBreaking of replication/mirror also manual
• Works with every iSCSI and FC array on
  HCL
• Supports active-active DR

Feature Set

• Integrated in XenServer and XenCenter
• Support failover and failback
• Supports grouping and startup order through vApp functionality
• Failover pre-checks
 ᵒPower state of the source VM
 ᵒDuplicate VMs on target pool
 ᵒSR connectivity
• Ability to start VMs paused (e.g. for dry-run)




How it Works

• Depends on “Portable SR” technology
 ᵒDifferent from the metadata backup/restore functionality
• Creates a logical volume on the SR during setup
• The logical volume contains
 ᵒSR metadata
 ᵒVDI metadata for all VDIs stored on the SR
• The metadata is read during failover via sr-probe
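
The on-SR metadata volume is written once database replication is enabled for the SR. A hedged sketch of the XAPI call involved; the SR name is a placeholder and the exact workflow should be confirmed against the Administrator's Guide:

    import XenAPI

    session = XenAPI.Session("https://xenserver.example.com")  # placeholder host
    session.xenapi.login_with_password("root", "password")     # placeholder creds

    # Write pool/VM metadata onto the replicated SR so the DR site can
    # recover it with an sr-probe after the mirror is broken.
    sr = session.xenapi.SR.get_by_name_label("Replicated LVM-over-iSCSI SR")[0]
    session.xenapi.SR.enable_database_replication(sr)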




Integrated Site Recovery - Screenshots




Distributed Virtual Switch
Details
Terminology

• OpenFlow
 ᵒAn open standard that separates the control and data paths for switching devices
• OpenFlow switch
 ᵒCould be physical or virtual
 ᵒIncludes packet processing and remote configuration/control support via OpenFlow
• Open vSwitch
 ᵒAn OSS Linux-based implementation of an OpenFlow virtual switch
 ᵒMaintained at www.openvswitch.org
• vSwitch Controller
 ᵒA commercial implementation of an OpenFlow controller
 ᵒProvides integration with XenServer pools


Core Distributed Switch Objectives

• Extend network management to virtual networks
• Provide network monitoring using standard protocols
• Define network policies on virtual objects
• Support multi-tenant virtual data centers
• Provide cross host private networking without VLANs
• Answer to VMware VDS and Cisco Nexus 1000v




Understanding Policies

• Access control
 ᵒBasic Layer 3 firewall rules
 ᵒDefinable by pool/network/VM
 ᵒInheritance controls
• QoS
 ᵒRate limits to control bandwidth
• RSPAN
 ᵒTransparent monitoring of VM-level traffic

What is NetFlow?

• Layer 3 monitoring protocol
• UDP/SCTP based
• Broadly adopted solution
• Implemented in three parts
 ᵒExporter (DVS)
 ᵒCollector
 ᵒAnalyzer
• DVSC is NetFlow v5 based
 ᵒEnabled at pool level



Performance Monitoring

• Enabled via NetFlow
• Dashboard
 ᵒThroughput
 ᵒPacket flow
 ᵒConnection flow
• Flow Statistics
 ᵒSlice and dice reports
 ᵒSee top VM traffic
 ᵒData goes back 1 week




Bonus Features

• Jumbo Frames
• Cross Server Private Networks
• LACP
• 4 NIC bonds




High Availability Details
Protecting Workloads

• Not just for mission-critical applications anymore
• Helps manage VM density issues
• The "virtual" definition of HA is a little different from the physical one
• A low-cost, low-complexity option to restart machines in case of failure
High Availability Operation

• Pool-wide settings
• Failure capacity: the number of host failures the HA plan can absorb
• Uses network and storage heartbeats to verify servers
VM Protection Options

• Restart Priority
 ᵒDo not restart
 ᵒRestart if possible
 ᵒRestart
• Start Order
 ᵒDefines a sequence and delay to ensure applications run correctly
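
A hedged sketch of setting these options through XAPI; the priority values follow the 6.x API, and the VM name and numbers are placeholders:

    import XenAPI

    session = XenAPI.Session("https://xenserver.example.com")  # placeholder host
    session.xenapi.login_with_password("root", "password")     # placeholder creds

    vm = session.xenapi.VM.get_by_name_label("db01")[0]
    session.xenapi.VM.set_ha_restart_priority(vm, "restart")
    session.xenapi.VM.set_order(vm, "1")         # lower numbers start first
    session.xenapi.VM.set_start_delay(vm, "60")  # seconds before the next group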




HA Design – Hot Spares

Simple Design
 ᵒSimilar to a hot spare in a disk array
 ᵒGuaranteed available
 ᵒInefficient: idle resources
Failure Planning
 ᵒIf surviving hosts are fully loaded, VMs will be forced to start on the spare
 ᵒCould lead to restart delays due to resource plugs
 ᵒCould lead to performance issues if the spare is the pool master
 ᵒIf using WLB, the spare must be excluded from rebalancing




HA Design – Distributed Capacity

Efficient Design
 ᵒAll hosts utilized
 ᵒWLB can ensure optimal performance
Failure Planning
 ᵒImpacted VMs automatically placed for best fit
 ᵒRunning VMs undisturbed
 ᵒProvides efficient guaranteed availability




 © 2012 Citrix | Confidential – Do Not Distribute
HA Design – Impact of Dynamic Memory




Enhances Failure Planning
 ᵒDefine a reduced memory level that still meets SLAs
 ᵒOn restart, some VMs may “squeeze” their memory
 ᵒIncreases host efficiency



 © 2012 Citrix | Confidential – Do Not Distribute
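A sketch of setting such a reduced range via the XenAPI Python bindings; the host, credentials, and the "app01" label are placeholders, and memory values are bytes passed as strings by the bindings.

```python
# Sketch: shrinking a VM's dynamic memory range so it can "squeeze" on
# restart after a host failure.
import XenAPI

GiB = 1024 ** 3
session = XenAPI.Session("https://pool-master.example.com")  # placeholder
session.xenapi.login_with_password("root", "password")
try:
    vm = session.xenapi.VM.get_by_name_label("app01")[0]     # hypothetical VM
    # Allow the VM to run with as little as 2 GiB (the SLA floor) while it
    # may balloon up to 4 GiB when the host has headroom
    session.xenapi.VM.set_memory_dynamic_range(vm, str(2 * GiB), str(4 * GiB))
finally:
    session.xenapi.session.logout()
```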
HA Design - Preventing Single Point of Failure

• HA recovery may create single points of failure
• WLB host exclusion minimizes impact




 © 2012 Citrix | Confidential – Do Not Distribute
HA Enhancements in XenServer 6

• HA over NFS
• HA with Application Packages
 ᵒDefine multi-VM services
 ᵒDefine VM startup order and delays
 ᵒApplication packages can be defined from
  running VMs

• Auto-start VMs are removed
 ᵒUsage conflicted with HA failure planning
 ᵒCreated situations where the expected host
  recovery wasn’t achieved



 © 2012 Citrix | Confidential – Do Not Distribute
High Availability – No Excuses

• Shared storage is the hardest part of setup
 ᵒA simple wizard can have HA defined in minutes
 ᵒMinimally invasive technology
• Protects your important workloads
 ᵒReduces on-call support incidents
 ᵒAddresses VM density risks
 ᵒNo performance, workload, or configuration penalties
• Compatible with resilient application designs
• Fault tolerant options exist through ecosystem



 © 2012 Citrix | Confidential – Do Not Distribute
StorageLink Details
Leverage Array Technologies

• No file system overlay
• Use best-of-breed technologies
 ᵒThin Provisioning
 ᵒDeduplication
 ᵒCloning
 ᵒSnapshotting
 ᵒMirroring
• Maximize array performance

[Figure: traditional approach – a hypervisor filesystem layers snapshotting, provisioning, and cloning on top of the array; Citrix StorageLink – VMs use the array OS’s native snapshotting, provisioning, and cloning]

 © 2012 Citrix | Confidential – Do Not Distribute
No StorageLink – Inefficient LUN Usage

1 TB of storage capacity; the customer requests a 600 GB LUN up front, leaving 400 GB free on the array.

 Today:    600 GB LUN provisioned; 400 GB free on the array
 4 weeks:  customer adds 5 VMs with 50 GB disks each – 250 GB used inside the LUN
 8 weeks:  5 more VMs with 50 GB each – 500 GB used inside the LUN
 12 weeks: 5 more VMs with 50 GB each no longer fit in the 600 GB LUN, so the
           customer requests new storage capacity while 400 GB still sits idle
           on the array

© 2012 Citrix | Confidential – Do Not Distribute
With StorageLink – Maximize Array Utilization

The same 1 TB of storage capacity, but LUNs are provisioned per virtual disk on demand.

 Today:    no LUNs provisioned yet – 1 TB free
 4 weeks:  customer adds 5 VMs with 50 GB each – 5 × 50 GB LUNs, 750 GB free
 8 weeks:  5 more VMs – 10 × 50 GB LUNs, 500 GB free
 12 weeks: 5 more VMs – 15 × 50 GB LUNs, 250 GB still free for further growth

© 2012 Citrix | Confidential – Do Not Distribute
StorageLink – Efficient Snapshot Management

Without StorageLink, VM snapshot capacity is limited to the free space in the LUN; with StorageLink, it is limited only by the free space in the storage pool.

 No StorageLink:   a 600 GB LUN holding 5 × 50 GB disks – snapshot capacity
                   is the 350 GB left inside the LUN, even though more space
                   is free on the array
 With StorageLink: 5 × 50 GB LUNs – snapshot capacity is the full 750 GB
                   free in the storage pool

© 2012 Citrix | Confidential – Do Not Distribute
Integrated StorageLink Architecture

 XenServer Host
   XAPI Daemon
     SMAPI
       ├─ LVM, NFS, NetApp, …  (native SR drivers)
       └─ CSLG Bridge
            └─ EQL, NTAP, SMI-S, …  (array adapters)

© 2012 Citrix | Confidential – Do Not Distribute
SR-IOV Details
Network Performance for GbE with PV drivers




• XenServer PV drivers can sustain peak throughput on GbE
 ᵒHowever, total throughput is limited to about 2.9 Gb/s
• But XenServer uses significantly more CPU cycles than native Linux
 ᵒFewer cycles are left for the application
 ᵒOn 10GbE networks, CPU saturation in dom0 prevents achieving line rate
• Need to reduce the I/O virtualization overhead in XenServer networking

 © 2012 Citrix | Confidential – Do Not Distribute
I/O Virtualization Overview – Hardware Solution

• VMDq (Virtual Machine Device Queue)
 ᵒSeparate Rx/Tx queue pairs on the NIC for each VM, with a software
  “switch” (network only)
• Direct I/O (VT-d)
 ᵒImproved I/O performance through direct assignment of an I/O device to an
  HVM or PV workload (the VM exclusively owns the device)
• SR-IOV (Single Root I/O Virtualization)
 ᵒChanges to I/O device silicon to support multiple PCI device IDs, so one
  I/O device can support multiple directly assigned guests (one device,
  multiple Virtual Functions); requires VT-d



 © 2012 Citrix | Confidential – Do Not Distribute
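For context, on recent Linux kernels (3.8 and later) Virtual Functions can be instantiated through the generic sysfs interface sketched below; dom0 kernels of the XenServer 6.x era typically enabled VFs through driver module parameters instead (e.g. the Intel ixgbe "max_vfs" option). The PCI address is a placeholder.

```python
# Sketch: enabling SR-IOV Virtual Functions on a NIC via sysfs (Linux 3.8+).
from pathlib import Path

dev = Path("/sys/bus/pci/devices/0000:03:00.0")   # physical NIC (placeholder)

total = int((dev / "sriov_totalvfs").read_text()) # VFs the silicon supports
print(f"device supports up to {total} VFs")

# Ask the driver to instantiate 4 VFs; each appears as a new PCI function
# that can be assigned directly to a guest (VT-d required)
(dev / "sriov_numvfs").write_text("4")

for vf in sorted(dev.glob("virtfn*")):
    print(vf.name, "->", vf.resolve().name)       # e.g. virtfn0 -> 0000:03:10.0
```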
Where Does SR-IOV Fit In?

Technique            Efficiency  Hardware Abstraction                     Applicability          Scalability
Emulation            Low         Very high                                All device classes     High
Para-virtualization  Medium      High – requires installing paravirtual   Block, network         High
                                 drivers on the guest
Acceleration (VMDq)  High        Medium – transparent to apps; may        Network only,          Medium (for
                                 require device-specific accelerators     hypervisor dependent   accelerated interfaces)
PCI Pass-through     High        Low – explicit device plug/unplug;       All devices            Low
                                 device-specific drivers

SR-IOV addresses this gap: pass-through efficiency with better scalability.

 © 2012 Citrix | Confidential – Do Not Distribute
XenServer Solarflare SR-IOV Implementation

[Figure: in a typical SR-IOV implementation, each guest VM loads a VF driver and talks to the NIC directly, bypassing dom0 and the vSwitch. In the XenServer & Solarflare model, guests keep the standard netfront driver plus a plug-in driver that binds a VF, while the netback driver and vSwitch in dom0 remain in the path.]

Typical SR-IOV: improved performance, but loss of services and management (e.g. live migration)
XS & Solarflare model: improved performance AND full use of services and management

© 2012 Citrix | Confidential – Do Not Distribute
XenMotion in Detail
XenMotion – Live VM Migration

• Requires systems with compatible CPUs
 ᵒMust be from the same manufacturer
 ᵒCan be different speeds
 ᵒMust support feature masking, or be of similar type (e.g. 3450 and 3430)

• Minimal downtime
 ᵒGenerally sub-200 ms; mostly due to network switches

• Requires shared storage
 ᵒVM state moves between hosts; the underlying disks remain in their existing location



 © 2012 Citrix | Confidential – Do Not Distribute
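A quick compatibility check can be scripted against the pool. A sketch assuming the host.cpu_info map exposes "vendor" and "features" keys, as the XenServer 6.x API does; the address and credentials are placeholders.

```python
# Sketch: checking XenMotion CPU compatibility across all hosts in a pool.
import XenAPI

session = XenAPI.Session("https://pool-master.example.com")  # placeholder
session.xenapi.login_with_password("root", "password")
try:
    infos = [session.xenapi.host.get_cpu_info(h)
             for h in session.xenapi.host.get_all()]
    vendors = {i["vendor"] for i in infos}
    features = {i["features"] for i in infos}
    if len(vendors) > 1:
        print("mixed CPU vendors – XenMotion between them is not supported")
    elif len(features) > 1:
        print("same vendor, different feature sets – feature masking needed")
    else:
        print("hosts are XenMotion-compatible")
finally:
    session.xenapi.session.logout()
```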
Detailed XenMotion Example
Pre-Copy Migration: Round 1
• Systems verify correct storage and network setup on the destination server
• VM resources are reserved on the destination server




[Figure: VM running on the source server; reserved capacity on the destination server]




© 2012 Citrix | Confidential – Do Not Distribute
Pre-Copy Migration: Round 1
• While the source VM is still running, XenServer copies its memory image to the destination server
• XenServer keeps track of any memory changes made during this process




   © 2012 Citrix | Confidential – Do Not Distribute
Pre-Copy Migration: Round 1
• After the first pass, most of the memory image has been copied to the destination server
• Any memory changes made during the initial copy are tracked




© 2012 Citrix | Confidential – Do Not Distribute
Pre-Copy Migration: Round 2
• XenServer now makes another pass, copying over the memory that changed during the first round




© 2012 Citrix | Confidential – Do Not Distribute
Pre-Copy Migration: Round 2
• Xen still tracks any changes made during the second memory copy
• The second copy moves much less data
• There is also less time for memory changes to occur




© 2012 Citrix | Confidential – Do Not Distribute
Pre-Copy Migration
• Xen keeps performing successive memory copies until only minimal
  differences remain between source and destination




© 2012 Citrix | Confidential – Do Not Distribute
XenMotion: Final
• The source VM is paused and the last bit of memory and machine state is copied over
• The master unlocks the storage from the source system and locks it to the destination system
• The destination VM is unpaused and attached to its storage and network resources
• The source VM’s resources are cleared




© 2012 Citrix | Confidential – Do Not Distribute
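The rounds above amount to a simple loop. An illustrative sketch of the pre-copy algorithm in Python (not XenServer code; send and get_dirty_pages stand in for the real transport and dirty-page tracking):

```python
# Sketch of iterative pre-copy migration: copy all pages, then repeatedly
# re-copy only the pages dirtied during the previous round, until the
# remaining dirty set is small enough for the brief stop-and-copy pause.

def precopy_migrate(all_pages, get_dirty_pages, send, threshold=64,
                    max_rounds=10):
    send(all_pages)                      # round 1: full copy, VM keeps running
    for _ in range(max_rounds):
        dirty = get_dirty_pages()        # pages written since the last round
        if len(dirty) <= threshold:      # small enough for the final pause
            break
        send(dirty)                      # each round normally moves less data
    # Stop-and-copy: pause the VM, send the final dirty pages plus CPU and
    # device state, switch the storage lock, resume on the destination
    send(get_dirty_pages())
```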
Storage XenMotion
Live Storage XenMotion
Upgrading VMs from Local to Shared Storage


[Figure: a live VM’s VDI(s) are moved from local storage on a XenServer host to an FC, iSCSI, or NFS SAN within the XenServer pool]



 © 2012 Citrix | Confidential – Do Not Distribute
Live Storage XenMotion
Moving VMs within a Pool with local-only storage


[Figure: a live VM and its VDI(s) move between two hosts in the same XenServer pool, from one host’s local storage to the other’s]



 © 2012 Citrix | Confidential – Do Not Distribute
Live Storage XenMotion
Moving or rebalancing VMs between Pools (Local → SAN)

[Figure: a live VM’s VDI(s) move from local storage on a host in XenServer Pool 1 to an FC, iSCSI, or NFS SAN in XenServer Pool 2]



 © 2012 Citrix | Confidential – Do Not Distribute
Live Storage XenMotion
Moving or rebalancing VMs between Pools (Local → Local)

[Figure: a live VM’s VDI(s) move from local storage on a host in XenServer Pool 1 to local storage on a host in XenServer Pool 2]



 © 2012 Citrix | Confidential – Do Not Distribute
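In the XenServer 6.1 API, cross-pool Storage XenMotion is driven by host.migrate_receive on the destination and VM.migrate_send on the source. A rough sketch with placeholder addresses, credentials, and labels; real code would also skip CD drives and map every disk, not just the first.

```python
# Sketch: cross-pool Storage XenMotion via the XenServer 6.1 XenAPI.
import XenAPI

src = XenAPI.Session("https://source-pool-master.example.com")   # placeholder
src.xenapi.login_with_password("root", "password")
dst = XenAPI.Session("https://dest-pool-master.example.com")     # placeholder
dst.xenapi.login_with_password("root", "password")
try:
    vm = src.xenapi.VM.get_by_name_label("web01")[0]             # hypothetical VM
    host = dst.xenapi.host.get_all()[0]
    net = dst.xenapi.network.get_by_name_label(
        "Pool-wide network associated with eth0")[0]
    # The destination issues an opaque token describing where to receive
    token = dst.xenapi.host.migrate_receive(host, net, {})
    # Map each source VDI to a destination SR (here: just the first disk)
    dest_sr = dst.xenapi.SR.get_by_name_label("Local storage")[0]
    vdi = src.xenapi.VBD.get_VDI(src.xenapi.VM.get_VBDs(vm)[0])
    src.xenapi.VM.migrate_send(vm, token, True, {vdi: dest_sr}, {}, {})
finally:
    src.xenapi.session.logout()
    dst.xenapi.session.logout()
```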
VHD Benefits

• Many SRs implement VDIs as VHD trees
• VHDs are a copy-on-write format for storing virtual disks
• VDIs are the leaves of VHD trees
• Interesting VDI operation: snapshot (implemented as VHD “cloning”)


  Before snapshot:  A (RW)

  After snapshot:   parent (RO)
                    ├─ B (RO)
                    └─ A (RW)
• A: Original VDI
• B: Snapshot VDI
 © 2012 Citrix | Confidential – Do Not Distribute
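A VDI snapshot is a single XenAPI call; the SR implements it as the VHD clone shown above. A minimal sketch with placeholder host, credentials, and VDI label:

```python
# Sketch: taking a VDI-level snapshot through the XenAPI.
import XenAPI

session = XenAPI.Session("https://xenserver.example.com")  # placeholder
session.xenapi.login_with_password("root", "password")
try:
    vdi_a = session.xenapi.VDI.get_by_name_label("web01-disk0")[0]  # original VDI (A)
    vdi_b = session.xenapi.VDI.snapshot(vdi_a, {})                  # snapshot VDI (B)
    print("snapshot:", session.xenapi.VDI.get_uuid(vdi_b))
finally:
    session.xenapi.session.logout()
```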
VDI Mirroring Flow

[Figure: the source VM’s active VDI is mirrored to the destination (new writes go to both sides) while existing content is copied over from the root of the VHD tree; legend: no color = empty, gradient = live]
  © 2012 Citrix | Confidential – Do Not Distribute
Benefits of VDI Mirroring

• Optimization: start with the most similar VDI
 ᵒAnother VDI with the fewest differing blocks
 ᵒOnly transfer the blocks that differ
• New VDI field: a Content ID for each VDI
 ᵒAn easy way to confirm that different VDIs have identical content
 ᵒPreserved across VDI copies; refreshed after a VDI is attached read-write
• The worst case is a full copy (common in server virtualization)
• The best case occurs with VM “gold images” (e.g. XenDesktop)




 © 2012 Citrix | Confidential – Do Not Distribute
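To see why the most-similar starting point matters, here is an illustrative (non-XenServer) sketch that compares two disk images block by block and reports only the blocks that would need transferring. It assumes both images are the same size.

```python
# Sketch: find which fixed-size blocks differ between a base image and a
# new image, so only those blocks need to cross the wire.
import hashlib

BLOCK = 2 * 1024 * 1024  # 2 MiB, matching the VHD block size

def block_hashes(path):
    """Yield a digest per fixed-size block of the file."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(BLOCK)
            if not chunk:
                break
            yield hashlib.sha1(chunk).digest()

def differing_blocks(base_image, new_image):
    """Indices of blocks to transfer on top of the base (same-size images)."""
    return [i for i, (a, b) in enumerate(zip(block_hashes(base_image),
                                             block_hashes(new_image)))
            if a != b]
```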
Work better. Live better.

Mais conteúdo relacionado

Mais procurados

XenServer, Hyper-V, and ESXi - Architecture, API, and Coding
XenServer, Hyper-V, and ESXi -  Architecture, API, and CodingXenServer, Hyper-V, and ESXi -  Architecture, API, and Coding
XenServer, Hyper-V, and ESXi - Architecture, API, and Coding_Humair_Ahmed_
 
xen server 5.6, provisioning server 5.6 — технические детали и планы на будущее
xen server 5.6, provisioning server 5.6 — технические детали и планы на будущееxen server 5.6, provisioning server 5.6 — технические детали и планы на будущее
xen server 5.6, provisioning server 5.6 — технические детали и планы на будущееDenis Gundarev
 
VMware vSphere 5 seminar
VMware vSphere 5 seminarVMware vSphere 5 seminar
VMware vSphere 5 seminarMarkiting_be
 
VMware vSphere 5.1 Overview
VMware vSphere 5.1 OverviewVMware vSphere 5.1 Overview
VMware vSphere 5.1 OverviewESXLab
 
Top Troubleshooting Tips and Techniques for Citrix XenServer Deployments
Top Troubleshooting Tips and Techniques for Citrix XenServer DeploymentsTop Troubleshooting Tips and Techniques for Citrix XenServer Deployments
Top Troubleshooting Tips and Techniques for Citrix XenServer DeploymentsDavid McGeough
 
VMware Advance Troubleshooting Workshop - Day 5
VMware Advance Troubleshooting Workshop - Day 5VMware Advance Troubleshooting Workshop - Day 5
VMware Advance Troubleshooting Workshop - Day 5Vepsun Technologies
 
Mythbusting goes virtual What's new in vSphere 5.1
Mythbusting goes virtual   What's new in vSphere 5.1Mythbusting goes virtual   What's new in vSphere 5.1
Mythbusting goes virtual What's new in vSphere 5.1Eric Sloof
 
Hyper-V vs. vSphere: Understanding the Differences
Hyper-V vs. vSphere: Understanding the DifferencesHyper-V vs. vSphere: Understanding the Differences
Hyper-V vs. vSphere: Understanding the DifferencesSolarWinds
 
What’s New in vCloud Director 5.1?
What’s New in vCloud Director 5.1?What’s New in vCloud Director 5.1?
What’s New in vCloud Director 5.1?Eric Sloof
 
Open source hypervisors in cloud
Open source hypervisors in cloudOpen source hypervisors in cloud
Open source hypervisors in cloudChetna Purohit
 
Xen server 6.1 technical sales presentation
Xen server 6.1 technical sales presentationXen server 6.1 technical sales presentation
Xen server 6.1 technical sales presentationsolarisyougood
 
Hyper-V Best Practices & Tips and Tricks
Hyper-V Best Practices & Tips and TricksHyper-V Best Practices & Tips and Tricks
Hyper-V Best Practices & Tips and TricksAmit Gatenyo
 
VMware Advance Troubleshooting Workshop - Day 3
VMware Advance Troubleshooting Workshop - Day 3VMware Advance Troubleshooting Workshop - Day 3
VMware Advance Troubleshooting Workshop - Day 3Vepsun Technologies
 
Selecting the correct hypervisor for CloudStack 4.5
Selecting the correct hypervisor for CloudStack 4.5Selecting the correct hypervisor for CloudStack 4.5
Selecting the correct hypervisor for CloudStack 4.5Tim Mackey
 
VMWARE VS MS-HYPER-V
VMWARE VS MS-HYPER-VVMWARE VS MS-HYPER-V
VMWARE VS MS-HYPER-VDavid Ramirez
 
How to Optimize Microsoft Hyper-V Failover Cluster and Double Performance
How to Optimize Microsoft Hyper-V Failover Cluster and Double PerformanceHow to Optimize Microsoft Hyper-V Failover Cluster and Double Performance
How to Optimize Microsoft Hyper-V Failover Cluster and Double PerformanceStarWind Software
 
Hypervisors and Virtualization - VMware, Hyper-V, XenServer, and KVM
Hypervisors and Virtualization - VMware, Hyper-V, XenServer, and KVMHypervisors and Virtualization - VMware, Hyper-V, XenServer, and KVM
Hypervisors and Virtualization - VMware, Hyper-V, XenServer, and KVMvwchu
 
Virtualization 101 - DeepDive
Virtualization 101 - DeepDiveVirtualization 101 - DeepDive
Virtualization 101 - DeepDiveAmit Agarwal
 
VMware vSphere Networking deep dive
VMware vSphere Networking deep diveVMware vSphere Networking deep dive
VMware vSphere Networking deep diveSanjeev Kumar
 

Mais procurados (20)

XenServer, Hyper-V, and ESXi - Architecture, API, and Coding
XenServer, Hyper-V, and ESXi -  Architecture, API, and CodingXenServer, Hyper-V, and ESXi -  Architecture, API, and Coding
XenServer, Hyper-V, and ESXi - Architecture, API, and Coding
 
xen server 5.6, provisioning server 5.6 — технические детали и планы на будущее
xen server 5.6, provisioning server 5.6 — технические детали и планы на будущееxen server 5.6, provisioning server 5.6 — технические детали и планы на будущее
xen server 5.6, provisioning server 5.6 — технические детали и планы на будущее
 
VMware vSphere 5 seminar
VMware vSphere 5 seminarVMware vSphere 5 seminar
VMware vSphere 5 seminar
 
VMware vSphere 5.1 Overview
VMware vSphere 5.1 OverviewVMware vSphere 5.1 Overview
VMware vSphere 5.1 Overview
 
Top Troubleshooting Tips and Techniques for Citrix XenServer Deployments
Top Troubleshooting Tips and Techniques for Citrix XenServer DeploymentsTop Troubleshooting Tips and Techniques for Citrix XenServer Deployments
Top Troubleshooting Tips and Techniques for Citrix XenServer Deployments
 
VMware Advance Troubleshooting Workshop - Day 5
VMware Advance Troubleshooting Workshop - Day 5VMware Advance Troubleshooting Workshop - Day 5
VMware Advance Troubleshooting Workshop - Day 5
 
Mythbusting goes virtual What's new in vSphere 5.1
Mythbusting goes virtual   What's new in vSphere 5.1Mythbusting goes virtual   What's new in vSphere 5.1
Mythbusting goes virtual What's new in vSphere 5.1
 
Hyper-V vs. vSphere: Understanding the Differences
Hyper-V vs. vSphere: Understanding the DifferencesHyper-V vs. vSphere: Understanding the Differences
Hyper-V vs. vSphere: Understanding the Differences
 
What’s New in vCloud Director 5.1?
What’s New in vCloud Director 5.1?What’s New in vCloud Director 5.1?
What’s New in vCloud Director 5.1?
 
Open source hypervisors in cloud
Open source hypervisors in cloudOpen source hypervisors in cloud
Open source hypervisors in cloud
 
Xen server 6.1 technical sales presentation
Xen server 6.1 technical sales presentationXen server 6.1 technical sales presentation
Xen server 6.1 technical sales presentation
 
Hyper-V Best Practices & Tips and Tricks
Hyper-V Best Practices & Tips and TricksHyper-V Best Practices & Tips and Tricks
Hyper-V Best Practices & Tips and Tricks
 
VMware Advance Troubleshooting Workshop - Day 3
VMware Advance Troubleshooting Workshop - Day 3VMware Advance Troubleshooting Workshop - Day 3
VMware Advance Troubleshooting Workshop - Day 3
 
Improvements in Failover Clustering in Windows Server 2012
Improvements in Failover Clustering in Windows Server 2012Improvements in Failover Clustering in Windows Server 2012
Improvements in Failover Clustering in Windows Server 2012
 
Selecting the correct hypervisor for CloudStack 4.5
Selecting the correct hypervisor for CloudStack 4.5Selecting the correct hypervisor for CloudStack 4.5
Selecting the correct hypervisor for CloudStack 4.5
 
VMWARE VS MS-HYPER-V
VMWARE VS MS-HYPER-VVMWARE VS MS-HYPER-V
VMWARE VS MS-HYPER-V
 
How to Optimize Microsoft Hyper-V Failover Cluster and Double Performance
How to Optimize Microsoft Hyper-V Failover Cluster and Double PerformanceHow to Optimize Microsoft Hyper-V Failover Cluster and Double Performance
How to Optimize Microsoft Hyper-V Failover Cluster and Double Performance
 
Hypervisors and Virtualization - VMware, Hyper-V, XenServer, and KVM
Hypervisors and Virtualization - VMware, Hyper-V, XenServer, and KVMHypervisors and Virtualization - VMware, Hyper-V, XenServer, and KVM
Hypervisors and Virtualization - VMware, Hyper-V, XenServer, and KVM
 
Virtualization 101 - DeepDive
Virtualization 101 - DeepDiveVirtualization 101 - DeepDive
Virtualization 101 - DeepDive
 
VMware vSphere Networking deep dive
VMware vSphere Networking deep diveVMware vSphere Networking deep dive
VMware vSphere Networking deep dive
 

Semelhante a Xen server 6.1 technical sales presentation

Christian ferver xen server_6.1_overview
Christian ferver xen server_6.1_overviewChristian ferver xen server_6.1_overview
Christian ferver xen server_6.1_overviewDigicomp Academy AG
 
Christian ferber xen server_6.1_storagexenmotion
Christian ferber xen server_6.1_storagexenmotionChristian ferber xen server_6.1_storagexenmotion
Christian ferber xen server_6.1_storagexenmotionDigicomp Academy AG
 
XenServer 5.5 - Czy można zaoszczędzić na wirtualizacji serwerów? Darmowy Xen...
XenServer 5.5 - Czy można zaoszczędzić na wirtualizacji serwerów? Darmowy Xen...XenServer 5.5 - Czy można zaoszczędzić na wirtualizacji serwerów? Darmowy Xen...
XenServer 5.5 - Czy można zaoszczędzić na wirtualizacji serwerów? Darmowy Xen...Peter Ocasek
 
V mware v sphere advanced administration
V mware v sphere advanced administrationV mware v sphere advanced administration
V mware v sphere advanced administrationbestip
 
8 christian ferber xen_server_6_news
8 christian ferber xen_server_6_news8 christian ferber xen_server_6_news
8 christian ferber xen_server_6_newsDigicomp Academy AG
 
V mware v sphere boot camp
V mware v sphere boot campV mware v sphere boot camp
V mware v sphere boot campbestip
 
Sdc 2012-how-can-hypervisors-leverage-advanced-storage-features-v7.6(20-9-2012)
Sdc 2012-how-can-hypervisors-leverage-advanced-storage-features-v7.6(20-9-2012)Sdc 2012-how-can-hypervisors-leverage-advanced-storage-features-v7.6(20-9-2012)
Sdc 2012-how-can-hypervisors-leverage-advanced-storage-features-v7.6(20-9-2012)Abhijeet Kulkarni
 
20 christian ferber xen_server_6_workshop
20 christian ferber xen_server_6_workshop20 christian ferber xen_server_6_workshop
20 christian ferber xen_server_6_workshopDigicomp Academy AG
 
XenServer Virtualization In Cloud Environments
XenServer Virtualization In Cloud EnvironmentsXenServer Virtualization In Cloud Environments
XenServer Virtualization In Cloud EnvironmentsTim Mackey
 
Xen Cloud Platform by Tim Mackey
Xen Cloud Platform by Tim MackeyXen Cloud Platform by Tim Mackey
Xen Cloud Platform by Tim Mackeybuildacloud
 
CloudStack Architecture Future
CloudStack Architecture FutureCloudStack Architecture Future
CloudStack Architecture FutureKimihiko Kitase
 
IT Camp - Vision Solutions Presentation
IT Camp - Vision Solutions PresentationIT Camp - Vision Solutions Presentation
IT Camp - Vision Solutions PresentationHarold Wong
 
Cloud platform technical sales presentation
Cloud platform technical sales presentationCloud platform technical sales presentation
Cloud platform technical sales presentationNuno Alves
 
Scvmm 2012 (maarten wijsman)
Scvmm 2012 (maarten wijsman)Scvmm 2012 (maarten wijsman)
Scvmm 2012 (maarten wijsman)hypervnu
 
MIVA Small Business Conference 2006
MIVA Small Business Conference 2006MIVA Small Business Conference 2006
MIVA Small Business Conference 2006webhostingguy
 

Semelhante a Xen server 6.1 technical sales presentation (20)

Christian ferver xen server_6.1_overview
Christian ferver xen server_6.1_overviewChristian ferver xen server_6.1_overview
Christian ferver xen server_6.1_overview
 
Christian ferber xen server_6.1_storagexenmotion
Christian ferber xen server_6.1_storagexenmotionChristian ferber xen server_6.1_storagexenmotion
Christian ferber xen server_6.1_storagexenmotion
 
Citrix Xs Update For Dataplex Nov 09
Citrix   Xs Update For Dataplex   Nov 09Citrix   Xs Update For Dataplex   Nov 09
Citrix Xs Update For Dataplex Nov 09
 
XenServer 5.5 - Czy można zaoszczędzić na wirtualizacji serwerów? Darmowy Xen...
XenServer 5.5 - Czy można zaoszczędzić na wirtualizacji serwerów? Darmowy Xen...XenServer 5.5 - Czy można zaoszczędzić na wirtualizacji serwerów? Darmowy Xen...
XenServer 5.5 - Czy można zaoszczędzić na wirtualizacji serwerów? Darmowy Xen...
 
V mware v sphere advanced administration
V mware v sphere advanced administrationV mware v sphere advanced administration
V mware v sphere advanced administration
 
8 christian ferber xen_server_6_news
8 christian ferber xen_server_6_news8 christian ferber xen_server_6_news
8 christian ferber xen_server_6_news
 
V mware v sphere boot camp
V mware v sphere boot campV mware v sphere boot camp
V mware v sphere boot camp
 
VMworld2011 Recap
VMworld2011 RecapVMworld2011 Recap
VMworld2011 Recap
 
Sdc 2012-how-can-hypervisors-leverage-advanced-storage-features-v7.6(20-9-2012)
Sdc 2012-how-can-hypervisors-leverage-advanced-storage-features-v7.6(20-9-2012)Sdc 2012-how-can-hypervisors-leverage-advanced-storage-features-v7.6(20-9-2012)
Sdc 2012-how-can-hypervisors-leverage-advanced-storage-features-v7.6(20-9-2012)
 
20 christian ferber xen_server_6_workshop
20 christian ferber xen_server_6_workshop20 christian ferber xen_server_6_workshop
20 christian ferber xen_server_6_workshop
 
XenServer Virtualization In Cloud Environments
XenServer Virtualization In Cloud EnvironmentsXenServer Virtualization In Cloud Environments
XenServer Virtualization In Cloud Environments
 
Xen Cloud Platform by Tim Mackey
Xen Cloud Platform by Tim MackeyXen Cloud Platform by Tim Mackey
Xen Cloud Platform by Tim Mackey
 
CloudStack Architecture Future
CloudStack Architecture FutureCloudStack Architecture Future
CloudStack Architecture Future
 
IT Camp - Vision Solutions Presentation
IT Camp - Vision Solutions PresentationIT Camp - Vision Solutions Presentation
IT Camp - Vision Solutions Presentation
 
Vm6 v mex
Vm6 v mexVm6 v mex
Vm6 v mex
 
Cloud platform technical sales presentation
Cloud platform technical sales presentationCloud platform technical sales presentation
Cloud platform technical sales presentation
 
CloudStack Architecture
CloudStack ArchitectureCloudStack Architecture
CloudStack Architecture
 
Scvmm 2012 (maarten wijsman)
Scvmm 2012 (maarten wijsman)Scvmm 2012 (maarten wijsman)
Scvmm 2012 (maarten wijsman)
 
MIVA Small Business Conference 2006
MIVA Small Business Conference 2006MIVA Small Business Conference 2006
MIVA Small Business Conference 2006
 
Virtualization Smackdown
Virtualization SmackdownVirtualization Smackdown
Virtualization Smackdown
 

Mais de Nuno Alves

E g innovations overview
E g innovations overviewE g innovations overview
E g innovations overviewNuno Alves
 
Citrix virtual desktop handbook (7x)
Citrix virtual desktop handbook (7x)Citrix virtual desktop handbook (7x)
Citrix virtual desktop handbook (7x)Nuno Alves
 
Citrix XenServer Design: Designing XenServer Network Configurations
Citrix XenServer Design:  Designing XenServer Network  ConfigurationsCitrix XenServer Design:  Designing XenServer Network  Configurations
Citrix XenServer Design: Designing XenServer Network ConfigurationsNuno Alves
 
Deploying the XenMobile 8.5 Solution
Deploying the XenMobile 8.5 SolutionDeploying the XenMobile 8.5 Solution
Deploying the XenMobile 8.5 SolutionNuno Alves
 
Cloudbridge video delivery
Cloudbridge video deliveryCloudbridge video delivery
Cloudbridge video deliveryNuno Alves
 
XenApp 6.5 - Event Log Messages
XenApp 6.5 - Event Log MessagesXenApp 6.5 - Event Log Messages
XenApp 6.5 - Event Log MessagesNuno Alves
 
Citrix cloud platform 4.2 data sheet
Citrix cloud platform 4.2 data sheetCitrix cloud platform 4.2 data sheet
Citrix cloud platform 4.2 data sheetNuno Alves
 
Cloud portal business manager product overview
Cloud portal business manager product overviewCloud portal business manager product overview
Cloud portal business manager product overviewNuno Alves
 
Reference architecture dir and es - final
Reference architecture   dir and es - finalReference architecture   dir and es - final
Reference architecture dir and es - finalNuno Alves
 
Provisioning server high_availability_considerations2
Provisioning server high_availability_considerations2Provisioning server high_availability_considerations2
Provisioning server high_availability_considerations2Nuno Alves
 
Xd planning guide - storage best practices
Xd   planning guide - storage best practicesXd   planning guide - storage best practices
Xd planning guide - storage best practicesNuno Alves
 
Introduction to storage technologies
Introduction to storage technologiesIntroduction to storage technologies
Introduction to storage technologiesNuno Alves
 
Xen server storage Overview
Xen server storage OverviewXen server storage Overview
Xen server storage OverviewNuno Alves
 
XenDesktop 7 Blueprint
XenDesktop 7 BlueprintXenDesktop 7 Blueprint
XenDesktop 7 BlueprintNuno Alves
 
Citrix virtual desktop handbook (5 x)
Citrix virtual desktop handbook (5 x)Citrix virtual desktop handbook (5 x)
Citrix virtual desktop handbook (5 x)Nuno Alves
 
New eBook! Citrix howto build an all star app desktop virtualization team
New eBook! Citrix howto build an all star app desktop virtualization teamNew eBook! Citrix howto build an all star app desktop virtualization team
New eBook! Citrix howto build an all star app desktop virtualization teamNuno Alves
 
Wp intelli cache_reduction_iops_xd5.6_fp1_xs6.1
Wp intelli cache_reduction_iops_xd5.6_fp1_xs6.1Wp intelli cache_reduction_iops_xd5.6_fp1_xs6.1
Wp intelli cache_reduction_iops_xd5.6_fp1_xs6.1Nuno Alves
 
Citrix Store front planning guide
Citrix Store front planning guideCitrix Store front planning guide
Citrix Store front planning guideNuno Alves
 
Microsoft by the Numbers
Microsoft by the NumbersMicrosoft by the Numbers
Microsoft by the NumbersNuno Alves
 
NetScaler Deployment Guide for XenDesktop7
NetScaler Deployment Guide for XenDesktop7NetScaler Deployment Guide for XenDesktop7
NetScaler Deployment Guide for XenDesktop7Nuno Alves
 

Mais de Nuno Alves (20)

E g innovations overview
E g innovations overviewE g innovations overview
E g innovations overview
 
Citrix virtual desktop handbook (7x)
Citrix virtual desktop handbook (7x)Citrix virtual desktop handbook (7x)
Citrix virtual desktop handbook (7x)
 
Citrix XenServer Design: Designing XenServer Network Configurations
Citrix XenServer Design:  Designing XenServer Network  ConfigurationsCitrix XenServer Design:  Designing XenServer Network  Configurations
Citrix XenServer Design: Designing XenServer Network Configurations
 
Deploying the XenMobile 8.5 Solution
Deploying the XenMobile 8.5 SolutionDeploying the XenMobile 8.5 Solution
Deploying the XenMobile 8.5 Solution
 
Cloudbridge video delivery
Cloudbridge video deliveryCloudbridge video delivery
Cloudbridge video delivery
 
XenApp 6.5 - Event Log Messages
XenApp 6.5 - Event Log MessagesXenApp 6.5 - Event Log Messages
XenApp 6.5 - Event Log Messages
 
Citrix cloud platform 4.2 data sheet
Citrix cloud platform 4.2 data sheetCitrix cloud platform 4.2 data sheet
Citrix cloud platform 4.2 data sheet
 
Cloud portal business manager product overview
Cloud portal business manager product overviewCloud portal business manager product overview
Cloud portal business manager product overview
 
Reference architecture dir and es - final
Reference architecture   dir and es - finalReference architecture   dir and es - final
Reference architecture dir and es - final
 
Provisioning server high_availability_considerations2
Provisioning server high_availability_considerations2Provisioning server high_availability_considerations2
Provisioning server high_availability_considerations2
 
Xd planning guide - storage best practices
Xd   planning guide - storage best practicesXd   planning guide - storage best practices
Xd planning guide - storage best practices
 
Introduction to storage technologies
Introduction to storage technologiesIntroduction to storage technologies
Introduction to storage technologies
 
Xen server storage Overview
Xen server storage OverviewXen server storage Overview
Xen server storage Overview
 
XenDesktop 7 Blueprint
XenDesktop 7 BlueprintXenDesktop 7 Blueprint
XenDesktop 7 Blueprint
 
Citrix virtual desktop handbook (5 x)
Citrix virtual desktop handbook (5 x)Citrix virtual desktop handbook (5 x)
Citrix virtual desktop handbook (5 x)
 
New eBook! Citrix howto build an all star app desktop virtualization team
New eBook! Citrix howto build an all star app desktop virtualization teamNew eBook! Citrix howto build an all star app desktop virtualization team
New eBook! Citrix howto build an all star app desktop virtualization team
 
Wp intelli cache_reduction_iops_xd5.6_fp1_xs6.1
Wp intelli cache_reduction_iops_xd5.6_fp1_xs6.1Wp intelli cache_reduction_iops_xd5.6_fp1_xs6.1
Wp intelli cache_reduction_iops_xd5.6_fp1_xs6.1
 
Citrix Store front planning guide
Citrix Store front planning guideCitrix Store front planning guide
Citrix Store front planning guide
 
Microsoft by the Numbers
Microsoft by the NumbersMicrosoft by the Numbers
Microsoft by the Numbers
 
NetScaler Deployment Guide for XenDesktop7
NetScaler Deployment Guide for XenDesktop7NetScaler Deployment Guide for XenDesktop7
NetScaler Deployment Guide for XenDesktop7
 

Último

Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxLoriGlavin3
 
A Framework for Development in the AI Age
A Framework for Development in the AI AgeA Framework for Development in the AI Age
A Framework for Development in the AI AgeCprime
 
Testing tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesTesting tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesKari Kakkonen
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxLoriGlavin3
 
Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsRavi Sanghani
 
Scale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterScale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterMydbops
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPathCommunity
 
Emixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native developmentEmixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native developmentPim van der Noll
 
Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Hiroshi SHIBATA
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxLoriGlavin3
 
QCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesQCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesBernd Ruecker
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfLoriGlavin3
 
Data governance with Unity Catalog Presentation
Data governance with Unity Catalog PresentationData governance with Unity Catalog Presentation
Data governance with Unity Catalog PresentationKnoldus Inc.
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsPixlogix Infotech
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
Glenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security ObservabilityGlenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security Observabilityitnewsafrica
 
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...Wes McKinney
 

Último (20)

Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
 
A Framework for Development in the AI Age
A Framework for Development in the AI AgeA Framework for Development in the AI Age
A Framework for Development in the AI Age
 
Testing tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examplesTesting tools and AI - ideas what to try with some tool examples
Testing tools and AI - ideas what to try with some tool examples
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
 
Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and Insights
 
Scale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL RouterScale your database traffic with Read & Write split using MySQL Router
Scale your database traffic with Read & Write split using MySQL Router
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to Hero
 
Emixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native developmentEmixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native development
 
Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
 
QCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesQCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architectures
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdf
 
Data governance with Unity Catalog Presentation
Data governance with Unity Catalog PresentationData governance with Unity Catalog Presentation
Data governance with Unity Catalog Presentation
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and Cons
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
Glenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security ObservabilityGlenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security Observability
 
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
 

Xen server 6.1 technical sales presentation

  • 1. XenServer 6.1 Technical Overview September 2012
  • 3. What’s so Great About Xen? • It’s robust ᵒNative 64-bit hypervisor ᵒRuns on bare metal ᵒDirectly leverages CPU hardware for virtualization • It’s widely-deployed ᵒTens of thousands of organizations have deployed Xen • It’s advanced ᵒOptimized for hardware-assisted virtualization and paravirtualization • It’s trusted ᵒOpen, resilient Xen security framework • It’s part of mainline Linux © 2012 Citrix | Confidential – Do Not Distribute
  • 4. Understanding Architectural Components The Xen hypervisor and control domain (dom0) manage physical server resources among virtual machines © 2012 Citrix | Confidential – Do Not Distribute
  • 5. Understanding the Domain 0 Component Domain 0 is a compact specialized Linux VM that manages the network and storage I/O of all guest VMs … and isn’t the XenServer hypervisor © 2012 Citrix | Confidential – Do Not Distribute
  • 6. Understanding the Linux VM Component Linux VMs include paravirtualized kernels and drivers, and Xen is part of Mainline Linux 3.0 © 2012 Citrix | Confidential – Do Not Distribute
  • 7. Understanding the Windows VM Component Windows VMs use paravirtualized drivers to access storage and network resources through Domain 0 © 2012 Citrix | Confidential – Do Not Distribute
  • 8. XenServer Meets All Virtualization Needs • High performance, resilient virtualization platform Enterprise • Simple deployment and management model Data Center • Host based licensing to control CAPEX Desktop • Optimized for high performance desktop workloads Virtualization • Storage optimizations to control VDI CAPEX Cloud • Platform for IaaS and Cloud Service Providers • Powers the NetScaler SDX platform Infrastructure • Fully supports Software Defined Networking © 2012 Citrix | Confidential – Do Not Distribute
  • 10. XenCenter – Simple XenServer Management • Single pane of management glass • Manage XenServer hosts ᵒ Start/Stop VMs • Manage XenServer resource pools ᵒ Shared storage ᵒ Shared networking • Configure advanced features ᵒ HA, WLB, Reporting, Alerting • Configure updates © 2012 Citrix | Confidential – Do Not Distribute
  • 11. Management Architecture Comparison “The Other Guys” Citrix XenServer Traditional Management Distributed Architecture Management Architecture Single backend management server Clustered management layer © 2012 Citrix | Confidential – Do Not Distribute
  • 12. Role-Based Administration • Provide user roles with varying permissions • Pool Admin • Pool Operator • VM Power Admin • VM Admin • VM Operator • Read-only • Roles are defined within a Resource Pool • Assigned to Active Directory users, groups • Audit logging via Workload Reports © 2012 Citrix | Confidential – Do Not Distribute
  • 13. XenMotion Live VM Migration Shared Storage More about XenMotion © 2012 Citrix | Confidential – Do Not Distribute
  • 14. Live Storage XenMotion Live • Migrates VM disks from any Virtual Machine storage type to any other storage type ᵒLocal, DAS, iSCSI, FC XenServer Hypervisor • Supports cross pool migration ᵒRequires compatible CPUs VDI(s) • Encrypted Migration model • Specify management interface for optimal performance XenServer Pool More about Storage XenMotion © 2012 Citrix | Confidential – Do Not Distribute
  • 15. Heterogeneous Resource Pools Virtual Machine Safe Live Migrations Mixed Processor Pools Feature Feature Feature Feature Feature Feature Feature Feature 1 2 3 4 1 2 3 4 Older CPU Newer CPU XenServer 1 XenServer 2 © 2012 Citrix | Confidential – Do Not Distribute
  • 16. Memory Overcommit • Feature name: Dynamic Memory Control • Ability to over-commit RAM resources • VMs operate in a compressed or balanced mode within set range • Allow memory settings to be adjusted while VM is running • Can increase number of VMs per host © 2012 Citrix | Confidential – Do Not Distribute
  • 17. Virtual Appliances (vApp) • Support for “vApps” or Virtual Appliances ᵒOVF definition of Virtual Appliance • vApp contains one or more Virtual Machines • Enables grouping of VMs which can be utilized by ᵒXenCenter ᵒIntegrated Site Recovery ᵒAppliance Import and Export ᵒHA © 2012 Citrix | Confidential – Do Not Distribute
  • 18. Virtual Machine Protection and Recovery • Policy based snapshotting and archiving • Separate scheduling options for snapshot and archive ᵒSnapshot-only or Snapshot and Archive • Policy Configuration ᵒAdd multiple VMs to policy ᵒSearch filter available ᵒVM can only belong to 1 policy ᵒXenCenter or CLI © 2012 Citrix | Confidential – Do Not Distribute
  • 19. High Availability in XenServer • Automatically monitors hosts and VMs • Easily configured within XenCenter • Relies on Shared Storage ᵒiSCSI, NFS, HBA • Reports failure capacity for DR planning purposes More about HA © 2012 Citrix | Confidential – Do Not Distribute
  • 21. Optimizing Storage – Integrated StorageLink Virtualization can hinder the linkage XenServer between servers and storage, turning Hosts expensive storage systems into little more than “dumb disks” Citrix StorageLink™ technology lets your XenServer StorageLink virtual servers fully leverage all the Hosts Storage power of existing storage systems More about StorageLink © 2012 Citrix | Confidential – Do Not Distribute
  • 22. Workload Placement Services • Feature name: Workload Balancing • Automated guest start-up and management based on defined policy • Guests automatically migrate from one host to another based on resource usage • Power-on/off hosts as needed • Report on utilization of pool resources – by VM, by host, etc. More about WLB © 2012 Citrix | Confidential – Do Not Distribute
  • 23. Integrated Site Recovery • Supports LVM SRs • Replication/mirroring setup outside scope of solution ᵒFollow vendor instructions ᵒBreaking of replication/mirror also manual • Works with every iSCSI and FC array on HCL • Supports active-active DR More about Site Recovery © 2012 Citrix | Confidential – Do Not Distribute
  • 24. Delegated Web Based Administration • Enables: • IT delegation for administrators • VM level administration for end users • Support for multiple pools • Active Directory enabled • XenVNC and RDP console access © 2012 Citrix | Confidential – Do Not Distribute
• 25. Live Memory Snapshot and Rollback • Live VM snapshot and revert ᵒBoth memory and disk state are captured ᵒOptional quiesce via VSS provider (Windows guests) ᵒOne-click revert • Snapshot branches ᵒSupport for parallel subsequent checkpoints based on a previous common snapshot © 2012 Citrix | Confidential – Do Not Distribute
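The disk+memory variant is a “checkpoint” on the CLI; a minimal sketch with placeholder names:

```bash
# Take a live memory+disk snapshot of a running VM...
xe vm-checkpoint vm="app01" new-name-label="before-patch"

# ...and later roll the VM back to it (one-click revert in XenCenter).
SNAP=$(xe snapshot-list name-label="before-patch" --minimal)
xe snapshot-revert snapshot-uuid=$SNAP
```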
  • 27. Supporting High Performance Graphics • Feature name: GPU pass-through • Enables high-end graphics in VDI deployments with HDX 3D Pro • Optimal CAD application support with XenDesktop • More powerful than RemoteFX, virtual GPUs, or other general purpose graphics solutions © 2012 Citrix | Confidential – Do Not Distribute
• 28. Benefits of GPU Pass-through • Without GPU pass-through, each user requires their own blade PC • With GPU pass-through, multiple GPU cards in one XenServer host cut hardware costs by up to 75% More about GPU Pass Through © 2012 Citrix | Confidential – Do Not Distribute
  • 29. Controlling Shared Storage Costs – IntelliCache • Caching of XenDesktop 5 images • Leverages local storage • Reduce IOPS on shared storage • Supported since XenServer 5.6 SP2 © 2012 Citrix | Confidential – Do Not Distribute
• 30. IntelliCache Fundamentals 1. Master Image created through XenDesktop MCS 2. VM is configured to use the Master Image 3. VM using the Master Image is started 4. XenServer creates a read cache object on local storage 5. Reads in the VM are done from the local cache 6. Additional reads are done from the SAN (NFS based storage) when required 7. Writes happen in a VHD child per VM 8. Local “write” cache is deleted when the VM is shut down/restarted 9. Additional VMs will use the same read cache © 2012 Citrix | Confidential – Do Not Distribute
  • 31. Cost Effective VM Densities • Supporting VMs with up to: ᵒ16 vCPU per VM ᵒ128GB Memory per VM • Supporting XenServer hosts with up to: ᵒ1TB Physical RAM ᵒ160 logical processors • Yielding up to 150 Desktop images per host • Included at no cost with all XenDesktop purchases • Cisco Validated Design for XenDesktop on UCS © 2012 Citrix | Confidential – Do Not Distribute
• 33. Distributed Virtual Network Switching • Virtual Switch ᵒOpen source: www.openvswitch.org ᵒProvides a rich layer 2 feature set ᵒCross-host internal networks ᵒRich traffic monitoring options ᵒOVS 1.4 compliant • DVS Controller ᵒVirtual appliance ᵒWeb-based GUI ᵒCan manage multiple pools ᵒCan exist within the pool it manages © 2012 Citrix | Confidential – Do Not Distribute
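Because the vSwitch is stock Open vSwitch, it can be inspected from dom0 with the usual tooling; a minimal sketch (`xenbr0` is the conventional XenServer bridge for NIC 0):

```bash
ovs-vsctl list-br              # bridges XenServer created per network
ovs-vsctl show                 # ports/interfaces attached to each bridge
ovs-ofctl dump-flows xenbr0    # OpenFlow flow table on one bridge
```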
• 34. Switch Policies and Live Migration • Per-VM policies follow VMs as they migrate between hosts, for example: ᵒWindows VM: allow RDP and deny HTTP ᵒSAP VM: allow only SAP traffic, RSPAN to VLAN 26 ᵒLinux VM1: allow all traffic ᵒLinux VM2: allow SSH on eth0, allow HTTP on eth1 More about DVSC © 2012 Citrix | Confidential – Do Not Distribute
• 35. Single Root IO Virtualization (SR-IOV) • PCI specification for direct IO access ᵒHardware supports multiple PCI IDs ᵒPresents multiple virtual NICs from a single physical NIC • Virtual NICs are presented directly into guests via a VF driver ᵒMinimizes hypervisor (dom0 vSwitch) overhead in high performance networks • Not without downsides ᵒRequires specialized hardware ᵒCannot participate in DVS ᵒDoes not support live migration ᵒLimited number of virtual NICs More about SRIOV © 2012 Citrix | Confidential – Do Not Distribute
  • 36. NetScaler SDX – Powered by XenServer • Complete tenant isolation • Complete independence • Partitions within instances • Optimized network: 50+ Gbps • Runs default XenServer 6 © 2012 Citrix | Confidential – Do Not Distribute
  • 38. Support for SCVMM • SCVMM communicates with CIMOM in XenServer which communicates with XAPI • Requires SCVMM 2012 • Very easy to setup ᵒDelivered as Integration Suite Supplemental Pack ᵒAdd Resource Pool or host • Secure communication using certificates © 2012 Citrix | Confidential – Do Not Distribute
  • 39. Support for SCOM • Monitor XenServer hosts through System Center Operations Manager • Support for SCOM 2007 R2 and higher • Part of Integration Suite Supplemental Pack • Monitor various host information (considered Linux host) ᵒMemory usage ᵒProcess information ᵒHealth status © 2012 Citrix | Confidential – Do Not Distribute
  • 41. Summary of Key Features and Packages • Integrated disaster recovery management • Provisioning services for physical and virtual workloads • Dynamic Workload Balancing and Power Management • Web Management Console with Delegated Admin • Monitoring pack for Systems Center Ops Manager • High Availability • Dynamic Memory Control • Shared nothing live storage migration • Resource pooling with shared storage • Centralized management console • No performance restrictions © 2012 Citrix | Confidential – Do Not Distribute
• 42. vSphere 5.1 and XenServer 6.1 Quick Comparison (feature: XenServer edition / vSphere edition)
ᵒHypervisor high availability: Advanced / Standard
ᵒNetFlow: Advanced / Enterprise Plus
ᵒCentralized network management: Free / Enterprise Plus
ᵒDistributed virtual network switching: Advanced / Enterprise Plus with Cisco Nexus 1000v
ᵒStorage live migration: Advanced / Standard
ᵒSerial port aggregation: Not Available / Standard
ᵒNetwork based resource scheduling: Enterprise / Not Available
ᵒDisk IO based resource scheduling: Enterprise / Not Available
ᵒOptimized for desktop workloads: Yes / Desktop Edition is repackaged Enterprise Plus
ᵒLicensing: Host based / Processor based
© 2012 Citrix | Confidential – Do Not Distribute
• 43. XenServer 6.1 – Product Edition Feature Matrix
ᵒAll editions (Free, Advanced, Enterprise, Platinum): 64-bit Xen Hypervisor; Active Directory Integration; VM Conversion Utilities; Live VM Migration with XenMotion™; Multi-Server Management with XenCenter; Management Integration with Systems Center VMM
ᵒAdvanced and above: Automated VM Protection and Recovery; Live Storage Migration with Storage XenMotion™; Distributed Virtual Switching; Dynamic Memory Control; High Availability; Performance Reporting and Alerting; Mixed Resource Pools with CPU Masking
ᵒEnterprise and above: Dynamic Workload Balancing and Power Management; GPU Pass-Through for Desktop Graphics Processing; IntelliCache™ for XenDesktop Storage Optimization; Live Memory Snapshot and Revert; Provisioning Services for Virtual Servers; Role-Based Administration and Audit Trail; StorageLink™ Advanced Storage Management; Monitoring Pack for Systems Center Ops Manager; Web Management Console with Delegated Admin
ᵒPlatinum only: Provisioning Services for Physical Servers; Site Recovery
ᵒPrice: Free / $1000/server / $2500/server / $5000/server
© 2012 Citrix | Confidential – Do Not Distribute
• 44. Subscription Advantage • Citrix Subscription Advantage entitles customers to upgrade to the latest software version of their product at no additional charge. Support not included. • Renewal categories ᵒCurrent (active memberships): Renewal SRP ᵒReinstatement (memberships expired 1 through 365 days): Renewal SRP + pro-rated renewal for time expired + 20% fee ᵒRecovery (memberships expired more than 365 days): Recovery SRP • Pricing (Renewal SRP / Recovery SRP, per server) ᵒXenServer Platinum: $675.00 / $2,800.00 ᵒXenServer Enterprise: $325.00 / $1,400.00 ᵒXenServer Advanced: $130.00 / $560.00 © 2012 Citrix | Confidential – Do Not Distribute
• 45. Support Options • Premier Support ᵒCost: 7% of license cost (SRP) ᵒProduct coverage: XenServer Advanced, Enterprise and Platinum ᵒCoverage hours: 24x7x365 ᵒIncidents: unlimited ᵒNamed contacts: unlimited ᵒType of access: phone/web/email • Add-on service options ᵒSoftware or Hardware TRM (200 hours/unlimited incidents/1 region): $40,000 ᵒAdditional TRM hours (100 hours): $20,000 ᵒFully Dedicated TRM (1600 hours/unlimited incidents/1 region): $325,000 ᵒOn-site days (on-site technical support service): $2,000 per day ᵒAssigned Escalation (200 hours/1 region, must have TRM): $16,000 ᵒFully Dedicated Assigned Escalation (1600 hours): $480,000 © 2012 Citrix | Confidential – Do Not Distribute
• 46. It’s Your Budget … Spend it Wisely • Single Vendor ᵒVendor lock-in is great for the vendor ᵒBeware product lifecycles and tool set changes • ROI Can be Manipulated ᵒROI calculators always show the vendor author as best ᵒUse your own numbers • Understand Support Model ᵒOver buying is costly; get what you need ᵒSupport call priority with tiered models • Use Correct Tool ᵒSome projects have requirements best suited to a specific tool ᵒUnderstand deployment and licensing impact • Leverage Costly Features as Required ᵒBlanket purchases benefit only the vendor ᵒChargeback to project for feature requirements © 2012 Citrix | Confidential – Do Not Distribute
  • 47. Work better. Live better.
• 49. How GPU Pass-through Works • Identical GPUs in a host auto-create a GPU group • A GPU group can be assigned to a set of VMs – each VM will attach to a GPU at VM boot time • When all GPUs in a group are in use, additional VMs requiring GPUs will not start • GPU and non-GPU VMs can (and should) be mixed on a host • GPU groups are recognized within a pool ᵒIf Servers 1, 2, 3 each have GPU type 1, then VMs requiring GPU type 1 can be started on any of those servers © 2012 Citrix | Confidential – Do Not Distribute
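A minimal CLI sketch of wiring a VM to a GPU group; in 6.x the pass-through attachment is expressed as a vGPU object against the group, and the group and VM names used here are placeholders:

```bash
xe gpu-group-list                                    # auto-created groups
GRP=$(xe gpu-group-list name-label="Quadro 4000 group" --minimal)
VM=$(xe vm-list name-label="cad01" --minimal)
xe vgpu-create vm-uuid=$VM gpu-group-uuid=$GRP       # attaches at next boot
```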
  • 50. GPU Pass-through HCL is Server Specific • Server ᵒHP ProLiant WS460c G6 Workstation series* ᵒIBM System x3650 M3 ᵒDell Precision R5500 • GPU (1-4 per host) ᵒNVIDIA Quadro 2000, 4000, 5000, 6000 ᵒNVIDIA Tesla M2070-Q • Support for Windows guests only • Important: Combinations of servers + GPUs must be tested as a pair © 2012 Citrix | Confidential – Do Not Distribute
• 51. Limitations of GPU Pass-through • GPU pass-through binds the VM to the host for the duration of the session ᵒRestricts XenMotion and WLB • Multiple GPU types can exist in a single server ᵒE.g. high performance and mid performance GPUs • VNC will be disabled, so RDP is required • Fully supported for XenDesktop, best effort for other Windows workloads ᵒNot supported for Linux guests • HCL is very important © 2012 Citrix | Confidential – Do Not Distribute
  • 53. Enabling IntelliCache on XenServer Hosts • IntelliCache requires local EXT3 storage, to be selected during XenServer installation • If this is selected during installation the host is automatically enabled for IntelliCache • Manual steps in Admin guide © 2012 Citrix | Confidential – Do Not Distribute
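For the manual route, a minimal sketch of the admin-guide flow, assuming the local EXT SR already exists; host and SR names are placeholders, and the host must be disabled (no running VMs) while caching is switched on:

```bash
HOST=$(xe host-list name-label="xs01" --minimal)     # placeholder host
SR=$(xe sr-list name-label="Local storage" --minimal)  # local EXT SR

xe host-disable uuid=$HOST
xe host-enable-local-storage-caching uuid=$HOST sr-uuid=$SR
xe host-enable uuid=$HOST
```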
  • 54. Enabling IntelliCache in XenDesktop • http://support.citrix.com/ article/CTX129052 • Use IntelliCache checkbox when adding a host in Desktop Studio • Supported from XenDesktop 5 FP1 © 2012 Citrix | Confidential – Do Not Distribute
• 55. IOPS – 1000 Users – No IntelliCache [chart: NFS Read Ops and NFS Write Ops over a ~46 minute boot storm; total NFS Ops peak near 18,000] © 2012 Citrix | Confidential – Do Not Distribute
• 56. IOPS – 1000 Users – Cold Cache Boot [chart: NFS Read Ops and NFS Write Ops over a ~41 minute boot storm; total NFS Ops peak near 3,000] © 2012 Citrix | Confidential – Do Not Distribute
• 57. IOPS – 1000 Users – Hot Cache Boot [chart: NFS Read Ops and NFS Write Ops over a ~45 minute boot storm; total NFS Ops peak near 35] © 2012 Citrix | Confidential – Do Not Distribute
• 58. Limitations of IntelliCache • Best results achieved with local SSD drives ᵒSAS and SATA supported, but spindled disks are slower • XenMotion and WLB restrictions (pooled images) • Best practice local space sizing ᵒAssumes 50% cache usage per user and daily log off ᵒ[real size of master image] + [# users per server] × [size of master image] × 0.5 ᵒCache disk may vary according to VM lifecycle definition (reboot cycle) © 2012 Citrix | Confidential – Do Not Distribute
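Plugging hypothetical numbers into that rule of thumb — say a 25 GB master image and 100 users per host:

```bash
# [real size of master image] + [users per server] * [size of master image] * 0.5
MASTER_GB=25    # hypothetical master image size
USERS=100       # hypothetical users per host
echo "$MASTER_GB + $USERS * $MASTER_GB * 0.5" | bc   # -> 1275 (GB to budget)
```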
  • 59. IntelliCache Conclusions • Dramatic reduction of I/O for pooled desktops • Significant reduction of I/O for assigned desktops ᵒStill need IOPS for write traffic ᵒLocal write cache benefits • Storage investment much lower – and more appropriate • Overall TCO 15 – 30 % improvement • Continued evolution of features to yield better performance and TCO © 2012 Citrix | Confidential – Do Not Distribute
• 61. Components • Workload Balancing components ᵒData Collection Manager service ᵒAnalysis Engine service ᵒWeb Service Host ᵒData Store ᵒXenServer ᵒXenCenter • (Diagram: resource pools feed the Data Collection Manager service; the Analysis Engine service works against the Data Store; recommendations return to XenCenter through the Web Service Host) © 2012 Citrix | Confidential – Do Not Distribute
  • 62. Placement Strategies • Maximize Performance ᵒDefault setting ᵒSpread workload evenly across all physical hosts in a resource pool ᵒThe goal is to minimize CPU, memory, and network pressure for all hosts • Maximize Density ᵒFit as many virtual machines as possible onto a physical host ᵒThe goal is to minimize the number of physical hosts that must be online © 2012 Citrix | Confidential – Do Not Distribute
• 63. Critical Thresholds • Components included in the WLB evaluation: ᵒCPU ᵒMemory ᵒNetwork Read ᵒNetwork Write ᵒDisk Read ᵒDisk Write • An optimization recommendation is triggered when a threshold is reached © 2012 Citrix | Confidential – Do Not Distribute
  • 64. Reports • Pool Health ᵒShows aggregated resource usage for a pool. Helps you evaluate the effectiveness of your optimization thresholds • Pool Health History ᵒDisplays resource usage for a pool over time. Helps you evaluate the effectiveness of your optimization thresholds • Host Health History ᵒSimilar to Pool Health History but filtered by a specific host • Optimization Performance History ᵒShows resource usage before and after executing optimization recommendations © 2012 Citrix | Confidential – Do Not Distribute
• 65. Reports • Virtual Machine Motion History ᵒProvides information about how many times virtual machines moved on a resource pool, including the name of the virtual machine that moved, number of times it moved, and physical hosts affected • Optimization Performance History ᵒShows resource usage before and after accepting optimization recommendations • Virtual Machine Performance History ᵒDisplays key performance metrics for all virtual machines that operated on a host during the specified timeframe © 2012 Citrix | Confidential – Do Not Distribute
  • 66. Workload Chargeback Reports • Billing codes and costs • Resources to be charged • Exportable data © 2012 Citrix | Confidential – Do Not Distribute
• 67. Workload Balancing Virtual Appliance • Ready-to-use WLB virtual appliance • Up and running with WLB in minutes rather than hours • Small footprint Linux virtual appliance ᵒ~150 MB © 2012 Citrix | Confidential – Do Not Distribute
  • 68. Installation • Download Virtual Appliance • Import Virtual Appliance • Start Virtual Appliance • Initial setup steps ᵒDefine steps • Enable WLB in XenCenter © 2012 Citrix | Confidential – Do Not Distribute
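The XenCenter step has a CLI equivalent; a minimal sketch, with the appliance address and all credentials as placeholders:

```bash
# Point the pool at the WLB appliance (default port 8012), then enable it.
xe pool-initialize-wlb wlb_url=192.0.2.50:8012 \
    wlb_username=wlbuser wlb_password=wlbpass \
    xenserver_username=root xenserver_password=secret
xe pool-param-set uuid=$(xe pool-list --minimal) wlb-enabled=true
```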
  • 70. Integrated Site Recovery • Replaces StorageLink Gateway Site Recovery • Decoupled from StorageLink adapters • Supports LVM SRs only in this release • Replication/mirroring setup outside scope of solution ᵒFollow vendor instructions ᵒBreaking of replication/mirror also manual • Works with every iSCSI and FC array on HCL • Supports active-active DR © 2012 Citrix | Confidential – Do Not Distribute
• 71. Feature Set • Integrated in XenServer and XenCenter • Supports failover and failback • Supports grouping and startup order through vApp functionality • Failover pre-checks ᵒPower state of source VM ᵒDuplicate VMs on target pool ᵒSR connectivity • Ability to start VMs paused (e.g. for a dry run) © 2012 Citrix | Confidential – Do Not Distribute
• 72. How it Works • Depends on “Portable SR” technology ᵒDifferent from the metadata backup/restore functionality • Creates a logical volume on the SR during setup • The logical volume contains ᵒSR metadata information ᵒVDI metadata information for all VDIs stored on the SR • Metadata information is read during failover via sr-probe © 2012 Citrix | Confidential – Do Not Distribute
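That metadata read rides on an ordinary SR probe; a minimal sketch of probing a replicated iSCSI LUN at the recovery site (target details are placeholders):

```bash
# The XML returned by the probe identifies the SR (and hence the VM/VDI
# metadata volume) present on the replicated LUN.
xe sr-probe type=lvmoiscsi \
    device-config:target=198.51.100.9 \
    device-config:targetIQN=iqn.2012-09.com.example:dr-lun
```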
  • 73. Integrated Site Recovery - Screenshots © 2012 Citrix | Confidential – Do Not Distribute
• 75. Terminology • OpenFlow ᵒAn open standard that separates the control and data paths for switching devices • OpenFlow switch ᵒCould be physical or virtual ᵒIncludes packet processing and remote configuration/control support via OpenFlow • Open vSwitch ᵒAn OSS Linux-based implementation of an OpenFlow virtual switch ᵒMaintained at www.openvswitch.org • vSwitch Controller ᵒA commercial implementation of an OpenFlow controller ᵒProvides integration with XenServer pools © 2012 Citrix | Confidential – Do Not Distribute
  • 76. Core Distributed Switch Objectives • Extend network management to virtual networks • Provide network monitoring using standard protocols • Define network policies on virtual objects • Support multi-tenant virtual data centers • Provide cross host private networking without VLANs • Answer to VMware VDS and Cisco Nexus 1000v © 2012 Citrix | Confidential – Do Not Distribute
• 77. Understanding Policies • Access control ᵒBasic Layer 3 firewall rules ᵒDefinable by pool/network/VM ᵒInheritance controls © 2012 Citrix | Confidential – Do Not Distribute
• 78. Understanding Policies • Access control • QoS ᵒRate limits to control bandwidth © 2012 Citrix | Confidential – Do Not Distribute
• 79. Understanding Policies • Access control • QoS • RSPAN ᵒTransparent monitoring of VM level traffic © 2012 Citrix | Confidential – Do Not Distribute
  • 80. What is NetFlow? • Layer 3 monitoring protocol • UDP/SCTP based • Broadly adopted solution • Implemented in three parts ᵒExporter (DVS) ᵒCollector ᵒAnalyzer • DVSC is NetFlow v5 based ᵒEnabled at pool level © 2012 Citrix | Confidential – Do Not Distribute
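DVSC normally configures the exporter for you, but at the Open vSwitch layer a NetFlow v5 export looks roughly like this (collector address and timeout are placeholders):

```bash
# Attach a NetFlow exporter to bridge xenbr0, sending records to a
# collector at 192.0.2.99:2055 with a 60-second active timeout.
ovs-vsctl -- set Bridge xenbr0 netflow=@nf \
    -- --id=@nf create NetFlow targets=\"192.0.2.99:2055\" active-timeout=60
```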
  • 81. Performance Monitoring • Enabled via NetFlow • Dashboard ᵒThroughput ᵒPacket flow ᵒConnection flow • Flow Statistics ᵒSlice and dice reports ᵒSee top VM traffic ᵒData goes back 1 week © 2012 Citrix | Confidential – Do Not Distribute
• 82. Bonus Features • Jumbo Frames • Cross Server Private Networks • LACP • 4 NIC bonds © 2012 Citrix | Confidential – Do Not Distribute
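A minimal sketch of two of these from the CLI (the network name and PIF UUIDs are placeholders; 6.1 also accepts a bond mode parameter for LACP):

```bash
# Jumbo frames: create a network with a 9000-byte MTU.
NET=$(xe network-create name-label="storage-net" MTU=9000)

# Bond four NICs onto that network (replace the placeholder PIF UUIDs).
xe bond-create network-uuid=$NET pif-uuids=<pif1>,<pif2>,<pif3>,<pif4>
```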
  • 84. Protecting Workloads • Not just for mission critical applications anymore • Helps manage VM density issues • "Virtual" definition of HA a little different than physical • Low cost / complexity option to restart machines in case of failure © 2012 Citrix | Confidential – Do Not Distribute
  • 85. High Availability Operation • Pool-wide settings • Failure capacity – number of hosts to carry out HA Plan • Uses network and storage heartbeat to verify servers © 2012 Citrix | Confidential – Do Not Distribute
  • 86. VM Protection Options • Restart Priority ᵒDo not restart ᵒRestart if possible ᵒRestart • Start Order ᵒDefines a sequence and delay to ensure applications run correctly © 2012 Citrix | Confidential – Do Not Distribute
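These options map onto per-VM parameters; a minimal sketch (VM name, order and delay values are placeholders):

```bash
# "Restart" priority, first in the start order, 30 s before the next group.
# ("Restart if possible" corresponds to the best-effort setting.)
VM=$(xe vm-list name-label="db01" --minimal)
xe vm-param-set uuid=$VM ha-restart-priority=restart order=1 start-delay=30
```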
• 87. HA Design – Hot Spares • Simple design ᵒSimilar to a hot spare in a disk array ᵒGuaranteed available ᵒInefficient – idle resources • Failure planning ᵒIf surviving hosts are fully loaded, VMs will be forced to start on the spare ᵒCould lead to restart delays due to resource plugs ᵒCould lead to performance issues if the spare is the pool master ᵒIf using WLB, need to exclude the spare from rebalancing © 2012 Citrix | Confidential – Do Not Distribute
  • 88. HA Design – Distributed Capacity Efficient Design ᵒAll hosts utilized ᵒWLB can ensure optimal performance Failure Planning ᵒImpacted VMs automatically placed for best fit ᵒRunning VMs undisturbed ᵒProvides efficient guaranteed availability © 2012 Citrix | Confidential – Do Not Distribute
  • 89. HA Design – Impact of Dynamic Memory Enhances Failure Planning ᵒDefine reduced memory which meets SLA ᵒOn restart, some VMs may “squeeze” their memory ᵒIncreases host efficiency © 2012 Citrix | Confidential – Do Not Distribute
  • 90. HA Design - Preventing Single Point of Failure • HA recovery may create single points of failure • WLB host exclusion minimizes impact © 2012 Citrix | Confidential – Do Not Distribute
  • 91. HA Enhancements in XenServer 6 • HA over NFS • HA with Application Packages ᵒDefine multi-VM services ᵒDefine VM startup order and delays ᵒApplication packages can be defined from running VMs • Auto-Start VMs are removed ᵒUsage conflicted with HA failure planning ᵒCreated situations when perceived host recovery wasn’t met © 2012 Citrix | Confidential – Do Not Distribute
  • 92. High Availability – No Excuses • Shared storage the hardest part of setup ᵒSimple wizard can have HA defined in minutes ᵒMinimally invasive technology • Protects your important workloads ᵒReduce on-call support incidents ᵒAddresses VM density risks ᵒNo performance, workload, configuration penalties • Compatible with resilient application designs • Fault tolerant options exist through ecosystem © 2012 Citrix | Confidential – Do Not Distribute
• 94. Leverage Array Technologies • No file system overlay ᵒThe traditional approach layers hypervisor filesystem snapshotting, provisioning and cloning on top of the array OS; Citrix StorageLink lets the array OS provide them natively • Use best-of-breed technologies ᵒThin provisioning ᵒDeduplication ᵒCloning ᵒSnapshotting ᵒMirroring • Maximize array performance © 2012 Citrix | Confidential – Do Not Distribute
• 95. No StorageLink – Inefficient LUN Usage • Today: customer requests 600GB and a 600GB LUN is carved out of 1 TB of storage capacity • 4, 8 and 12 weeks: customer adds 5 VMs with 50 GB each at each step • Result: the 600GB LUN fills up and the customer must request new storage capacity, even though 400 GB of the array remains free © 2012 Citrix | Confidential – Do Not Distribute
• 96. With StorageLink – Maximize Array Utilization • Today: customer requests 600 GB; 1 TB of storage capacity is free • 4 weeks: customer adds 5 VMs with 50 GB each (one 50GB LUN per VM) – 750 GB free • 8 weeks: 5 more VMs – 500 GB free • 12 weeks: 5 more VMs – 250 GB free; the array is consumed only as VMs are actually added © 2012 Citrix | Confidential – Do Not Distribute
• 97. StorageLink – Efficient Snapshot Management • Without StorageLink: VM snapshot capacity is limited to the LUN size – a 600GB LUN holding five 50GB disks leaves only 350 GB of snapshot capacity, while 400 GB elsewhere on the array sits free • With StorageLink: VM snapshot capacity is limited only by the storage pool size – five 50GB LUNs leave 750 GB of snapshot capacity © 2012 Citrix | Confidential – Do Not Distribute
• 98. Integrated StorageLink Architecture • (Diagram: on the XenServer host, the XAPI daemon calls SMAPI, which drives the LVM, NFS, NetApp and CSLG backends; the CSLG bridge in turn talks to the EQL, NTAP, SMI-S and other array adapters) © 2012 Citrix | Confidential – Do Not Distribute
• 100. Network Performance for GbE with PV drivers • XenServer PV drivers can sustain peak throughput on GbE ᵒHowever, limited to 2.9Gb/s in total • But XenServer uses significantly more CPU cycles than Linux ᵒFewer cycles available for the application ᵒ10GbE networks: CPU saturation in dom0 prevents achieving line rate • Need to reduce I/O virtualization overhead in XenServer networking © 2012 Citrix | Confidential – Do Not Distribute
• 101. I/O Virtualization Overview – Hardware Solutions • VMDq (Virtual Machine Device Queue) ᵒSeparate Rx & Tx queue pairs of the NIC for each VM, software “switch”; network only • Direct I/O (VT-d) ᵒImproved I/O performance through direct assignment of an I/O device to an HVM or PV workload; the VM exclusively owns the device • SR-IOV (Single Root I/O Virtualization) ᵒChanges to I/O device silicon to support multiple PCI device IDs, so one I/O device can support multiple directly assigned guests (one device, multiple Virtual Functions); requires VT-d © 2012 Citrix | Confidential – Do Not Distribute
• 102. Where Does SR-IOV Fit In? (technique: efficiency / hardware abstraction / applicability / scalability)
ᵒEmulation: Low / Very high / All device classes / High
ᵒPara-virtualization: Medium / High – requires installing paravirtual drivers on the guest / Block, network / High
ᵒAcceleration (VMDq): High / Medium – transparent to apps, may require device-specific accelerators / Network only, hypervisor dependent / Medium (for accelerated interfaces)
ᵒPCI Pass-through: High / Low – explicit device plug/unplug, device specific drivers / All devices / Low
SR-IOV addresses this © 2012 Citrix | Confidential – Do Not Distribute
• 103. XenServer Solarflare SR-IOV Implementation • Typical SR-IOV implementation: a VF driver in each guest bypasses dom0’s vSwitch and physical driver – improved performance, but loss of services and management (e.g. live migration) • XS & Solarflare SR-IOV model: a plug-in driver sits alongside the netfront driver in the guest while netback and the vSwitch remain in dom0 – improved performance AND full use of services and management © 2012 Citrix | Confidential – Do Not Distribute
• 105. XenMotion – Live VM Migration • Requires systems that have compatible CPUs ᵒMust be the same manufacturer ᵒCan be different speed ᵒMust support maskable features, or be of similar type (e.g. 3450 and 3430) • Minimal downtime ᵒGenerally sub-200 ms, mostly due to network switches • Requires shared storage ᵒVM state moves between hosts; underlying disks remain in their existing location © 2012 Citrix | Confidential – Do Not Distribute
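The intra-pool case is a one-liner; a minimal sketch with placeholder VM and host names:

```bash
# Plain XenMotion: memory and state move to xs02, the shared disks stay put.
xe vm-migrate vm="web01" host="xs02" live=true
```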
• 107. Pre-Copy Migration: Round 1 • Systems verify correct storage and network setup on the destination server • VM resources are reserved on the destination server © 2012 Citrix | Confidential – Do Not Distribute
  • 108. Pre-Copy Migration: Round 1 • While source VM is still running XenServer copies over memory image to destination server • XenServer keeps track of any memory changes during this process © 2012 Citrix | Confidential – Do Not Distribute
  • 109. Pre-Copy Migration: Round 1 © 2012 Citrix | Confidential – Do Not Distribute
  • 110. Pre-Copy Migration: Round 1 © 2012 Citrix | Confidential – Do Not Distribute
  • 111. Pre-Copy Migration: Round 1 • After first pass most of the memory image is now copied to the destination server • Any memory changes during initial memory copy are tracked © 2012 Citrix | Confidential – Do Not Distribute
  • 112. Pre-Copy Migration: Round 2 • XenServer now does another pass at copying over changed memory © 2012 Citrix | Confidential – Do Not Distribute
  • 113. Pre-Copy Migration: Round 2 © 2012 Citrix | Confidential – Do Not Distribute
  • 114. Pre-Copy Migration: Round 2 • Xen still tracks any changes during the second memory copy • Second copy moves much less data • Also less time for memory changes to occur © 2012 Citrix | Confidential – Do Not Distribute
  • 115. Pre-Copy Migration: Round 2 © 2012 Citrix | Confidential – Do Not Distribute
  • 116. Pre-Copy Migration • Xen will keep doing successive memory copies until minimal differences between source and destination © 2012 Citrix | Confidential – Do Not Distribute
  • 117. XenMotion: Final • Source VM is paused and last bit of memory and machine state copied over • Master unlocks storage from source system and locks to destination system • Destination VM is unpaused and attached to storage and network resources • Source VM resources cleared © 2012 Citrix | Confidential – Do Not Distribute
• 119. Live Storage XenMotion – Upgrading VMs from local to shared storage: a live VM’s VDIs move from local storage to an FC, iSCSI or NFS SAN within a XenServer pool © 2012 Citrix | Confidential – Do Not Distribute
• 120. Live Storage XenMotion – Moving VMs within a pool with local-only storage: VDIs move between the local storage of two hosts © 2012 Citrix | Confidential – Do Not Distribute
• 121. Live Storage XenMotion – Moving or rebalancing VMs between pools (local → SAN) © 2012 Citrix | Confidential – Do Not Distribute
• 122. Live Storage XenMotion – Moving or rebalancing VMs between pools (local → local) © 2012 Citrix | Confidential – Do Not Distribute
• 123. VHD Benefits • Many SRs implement VDIs as VHD trees • VHDs are a copy-on-write format for storing virtual disks • VDIs are the leaves of VHD trees • Interesting VDI operation: snapshot, implemented as VHD “cloning” – the original VDI (A) becomes a read-write leaf under a shared read-only parent, alongside the snapshot VDI (B) © 2012 Citrix | Confidential – Do Not Distribute
• 124. VDI Mirroring Flow • (Diagram: a live mirror runs from the source VM’s VDI tree, starting at the root, to the destination; legend: no color = empty, gradient = live mirror) © 2012 Citrix | Confidential – Do Not Distribute
• 125. Benefits of VDI Mirroring • Optimization: start with the most similar VDI ᵒAnother VDI with the least number of different blocks ᵒOnly transfer blocks that are different • New VDI field: Content ID for each VDI ᵒEasy way to confirm that different VDIs have identical content ᵒPreserved across VDI copy, refreshed after a VDI is attached read-write • Worst case is a full copy (common in server virtualization) • Best case occurs when you use VM “gold images” (i.e. XenDesktop) © 2012 Citrix | Confidential – Do Not Distribute
  • 126. Work better. Live better.

Editor's Notes

  1. Welcome to the XenServer Technical Presentation. In this presentation we’ll be covering many of the core features of XenServer, and we’ll have the option of diving a bit deeper in areas which you may be interested in.
  2. For those of you unfamiliar with XenServer, XenServer is a bare metal hypervisor which directly competes with vSphere, Hyper-V and KVM. It is derived from the open source Xen project, and has been in active development for over six years. In this section we’ll cover the core architectural items of Xen based deployments.
3. Since XenServer is based on the open source Xen project, it’s important to understand how Xen itself works. Xen is a bare metal hypervisor which directly leverages virtualization features present in most CPUs from Intel and AMD since approximately 2007. These CPUs all feature Intel VT-x or AMD-V instructions which allow virtual guests to run without needing performance-robbing emulation. When Xen was first developed, the success of VMware ESX was largely based on a series of highly optimized emulation routines. Those routines were needed to address shortcomings in the original x86 instruction set which created obstacles to running multiple general purpose “protected mode” operating systems such as Windows 2000 in parallel. With Xen, and XenServer, those obstacles were overcome through use of both the hardware virtualization instruction set extensions and para-virtualization. Paravirtualization is a concept in which either the operating system is modified, or specific drivers are modified to become “virtualization aware”. Linux itself can optionally run as paravirtualized, while Windows requires the use of both hardware assistance and paravirtualized drivers to run at maximum potential on a hypervisor. These advances served to spur early adoption of Xen based platforms whose performance outstripped ESX in many critical applications. Eventually VMware released ESXi to leverage hardware virtualization and paravirtualization, but it wasn’t until 2011 and vSphere 5 that ESXi became the only hypervisor for vSphere.
  4. This is a slide that shows a blowup of the Xen virtualization engine and the virtualization stack “Domain 0” with a Windows and Linux virtual machine. The green arrows show memory and CPU access which goes through the Xen engine down to the hardware. In many cases Xen will get out of the way of the virtual machine and allow it to go right to the hardware.Xen is a thin layer of software that runs right on top of the hardware, Xen is only around 50,000 lines of code. The lines show the path of I/O traffic on the server. The storage and network I/O connect through a high performance memory bus in Xen to the Domain 0 environment. In the domain 0 these requests are sent through standard Linux device drivers to the hardware below.
  5. Domain 0 is a Linux VM with higher priority to the hardware than the guest operating systems. Domain 0 manages the network and storage I/O of all guest VMs, and because it uses Linux device drivers, a broad range of physical devices are supported
6. Linux VMs include paravirtualized kernels and drivers. Storage and network resources are accessed through Domain 0, while CPU and memory are accessed through Xen to the hardware. See http://wiki.xen.org/wiki/Mainline_Linux_Kernel_Configs
  7. Windows VMs use paravirtualized drivers to access storage and network resources through Domain 0. XenServer is designed to utilize the virtualization capabilities of Intel VT and AMD-V enabled processors. Hardware virtualization enables high performance virtualization of the Windows kernel without using legacy emulation technology
8. XenServer is designed to address the virtualization needs of three critical markets. Within the Enterprise Data Center, XenServer solves the traditional server virtualization objectives of server consolidation and hardware independence while providing a high performance platform with a very straightforward management model. Since XenServer is a Citrix product, it only stands to reason that it can draw upon the vast experience Citrix has in optimizing the desktop experience and provide optimizations specific to desktop workloads. Lastly, with the emergence of mainstream cloud infrastructures, XenServer can draw upon the heritage of Amazon Web Services and Rackspace to provide a highly optimized platform for cloud deployments of any scale.
  9. Since all these use cases depend on a solid data center platform, let’s start by exploring the features critical to successful enterprise virtualization
10. Successful datacenter solutions require an easy to use management solution, and XenServer is no different. For XenServer this management solution is called XenCenter. If you’re familiar with vCenter for vSphere, you’ll see a number of common themes. XenCenter is the management console for all XenServer operations, and while there is a powerful CLI and API for XenServer, the vast majority of customers perform daily management tasks from within XenCenter. These tasks include starting and stopping VMs, managing the core infrastructure such as storage and networks, through to configuring advanced features such as HA, workload placement and alerting. This single pane of glass also allows administrators to directly access the consoles of the virtual machines themselves. As you would expect, there is a fairly granular set of permissions which can be applied, and I’ll cover that topic in just a little bit.
  11. Of course any management solution which doesn’t have role based administration isn’t ready for the modern enterprise. XenServer fully supports granular access to objects and through the distributed management model ensures that access is uniformly applied across resource pools regardless of access method. In other words, the access available from within XenCenter is exactly the same access available via CLI or through API calls.
12. What differentiates Live Storage Migration from Live VM Migration is that with Live Storage Migration the storage used for the virtual disks is moved from one storage location to another, while the VM itself may not change virtualization hosts. In XenServer, Live VM Migration is branded XenMotion and logically Live Storage Migration became Storage XenMotion. With Storage XenMotion, live migration occurs using a shared nothing architecture which effectively means that other than having a reliable network connection between source and destination, no other elements of the virtualization infrastructure need be common. What this means is that with Storage XenMotion you can support a large number of storage agility tasks, all from within XenCenter. For example: upgrade a storage array; provide tiered storage arrays; upgrade a pool with VMs on local storage; rebalance VMs between XenServer pools, or CloudStack clusters.
13. One of the key problems facing virtualization admins is the introduction of newer servers into older resource pools. There are several ways vendors have chosen to solve this problem. They can either “downgrade” the cluster to a known level (say Pentium Pro or Core 2), disallow mixed CPU pools, or level the pool to the lowest common feature set. The core issue when selecting the correct solution is to understand how workloads actually leverage the CPU of the host. When a guest has direct access to the CPU (in other words there is no emulation shim in place), then that guest also has the ability to interrogate the CPU for its capabilities. Once those capabilities are known, the guest can optimize its execution to leverage the most advanced features it finds and thus maximize its performance. The downside is that if the guest is migrated to a host which lacks a given CPU feature, the guest is likely to crash in a spectacular way. Vendors which define a specific processor architecture for the “base” are effectively deciding that feature set in advance and then hooking the CPU feature set instruction and returning that base set of features. The net result could be performance well below that possible with the “least capable” processor in the pool. XenServer takes a different approach and looks at the feature set capabilities of the CPU and leverages the FlexMigration instruction set within the CPU to create a feature mask. The idea is to ensure that only the specific features present in the newer processor are disabled and that the resource pool runs at its maximum potential. This model ensures that live migrations are completely safe, regardless of the processor architectures, so long as the processors come from the same vendor.
14. The ability to overcommit memory in a hypervisor was born at a time when the ability to overcommit a CPU far outpaced the ability to populate physical memory in a server in a cost effective manner. The end objective of overcommitting memory is to increase the quantity of VMs which a given host can run. This led to multiple ways of extracting more memory from a virtualization host than was physically present. The four most common ways of solving this problem are commonly referred to as “transparent page sharing”, “memory ballooning”, “page swap” and “memory compression”. While each has the potential to solve part of the problem, using multiple solutions often yielded the best outcome. Transparent page sharing seeks to share the 4k memory pages used by an operating system to store its read-only code. Memory ballooning seeks to introduce a “memory balloon” which appears to consume some of the system memory and effectively share it between multiple virtual machines. “Page swap” is nothing more than placing memory pages which haven’t been accessed recently on a disk storage system, and “memory compression” seeks to compress the memory (either swapped or in memory) with a goal of creating additional free memory from commonalities in memory between virtual machines. Since this technology has been an evolutionary attempt to solve a specific problem, it stands to reason that several of the approaches offer minimal value in today’s environment. For example, transparent page sharing assumes that the read-only memory pages in an operating system are common across VMs, but the reality is that the combination of large memory pages and memory page randomization and tainting have rendered the benefits from transparent page sharing largely ineffective. The same holds true for page swapping whose performance overhead often far exceeds the benefit. What this means is that the only truly effective solutions today are memory ballooning and memory compression. XenServer currently implements a memory ballooning solution under the feature name of “dynamic memory control”. DMC leverages a balloon driver within the XenServer tools to present the guest with a known quantity of memory at system startup, and then will modify the amount of free memory seen by the guest in the event the host experiences memory pressure. It’s important to present the operating system with a known fixed memory value at system startup as that’s when the operating system defines key parameters such as cache values.
  15. Managing a single virtual machine at a time works perfectly fine when you’re evaluating a hypervisor, or when you’re a small shop, but eventually you’re going to want to manage applications which span a group of servers as a single item. Within XenServer, this is accomplished using a vApp. At its highest level, a vApp is a container which includes one or more VMs and their associated settings. This container is manageable using all the standard XenServer management options, and importantly can participate in HA and disaster recovery planning as well as backup export operations.
16. VM Protection & Recovery. Goal: provide a way to automatically protect VM memory and disk against failures. Snapshot types: disk only; disk and memory. Snapshot frequency: hourly, daily or weekly (multiple days), with a configurable start time. Snapshot retention configurable (1-10). Archive frequency: after each snapshot, daily or weekly (multiple days), with a configurable start time. Archive location: CIFS or NFS. Compressed export.
17. As today's hosts get more powerful, they are often tasked with hosting increasing numbers of virtual machines. For example, only a few years ago server consolidation efforts were generating consolidation ratios of 4:1 or even 8:1; today’s faster processors coupled with greater memory densities can easily support over a 20:1 consolidation ratio without significantly overcommitting CPUs. This creates significant risk of application failure in the event of a single host failure. High availability within XenServer protects your investment in virtualization by ensuring critical resources are automatically restarted in the event of a host failure. There are multiple restart options allowing you to precisely define what critical means in your environment.
  18. The features we’ve just covered form the basis of a basic virtualized data center. Once your data center operations reach a point where you’re operating at scale which has many admins, or multiple resource pools, some of the advanced data center automation components within XenServer will start to become valuable.
19. When looking at storage usage within virtualized environments, there typically is either a file based or block based model, but regardless of the model the shared storage is essentially treated as if it were nothing more than a large dumb disk. Advanced features of the storage arrays aren’t used, and storage usage might be inefficient as a result. StorageLink uses specialized adapters which are designed for a given array. These adapters take full advantage of the feature set contained within the storage array. Key advantages of StorageLink over simple block based storage repositories include: thin provisioning, deduplication and array based snapshot management. Note: Integrated StorageLink replaces the StorageLink Gateway technology used in previous editions. It uses a LUN-per-VDI model, uses the array “smarts”, and does not require a (virtual) machine for running StorageLink components, which removes a SPOF. Supported adapters: NetApp, Dell EqualLogic, EMC VMX
20. When resource pools are small, and the number of VMs under management is similarly low, it’s not unreasonable for a virtualization admin to make acceptable decisions about where to place a given guest for optimal performance. Once the number of VMs reaches a critical point, typically between 20-30, placement decisions and interdependencies become so complex that humans aren’t going to place VMs in the most optimal location. This is why VMware and others have implemented resource placement services, and if you’re familiar with vSphere DRS, then XenServer Workload Balancing will look very familiar. Like DRS, WLB takes into account CPU and RAM utilization when attempting to determine the best host on which to start or rebalance a VM, but unlike DRS, WLB also includes key IO metrics such as disk reads and writes and network reads and writes in those computations. This allows WLB to ensure IO dominant applications are rarely placed on the same host, and that overall resource pool operations are optimized. In addition to performing workload placement, WLB is also directly integrated into XenServer power management to perform workload consolidation on a scheduled basis. This feature allows for the consolidation of underutilized servers onto fewer hosts during evening hours, with the evacuated hosts powered down for the duration. When the morning schedule takes effect, the powered down hosts are automatically restarted and workloads rebalanced for optimal performance. Lastly, WLB incorporates a series of health and status reports suitable for both operations and audit purposes. Pool policy can be scheduled based on time of day needs. When starting guests, an option to “Start on optimal server” is available, and XenServer chooses the most appropriate server based on policy. Users have the ability to override policy, or specify guests or hosts that are excluded from policy (e.g. high-demand applications).
21. Planning for and supporting multi-site disaster recovery within a virtualized environment can be quite complex, but with XenServer’s integrated site recovery option, we’ve taken care of the hard parts. The key to site recovery is that we take care of the VM metadata, while your storage admins take care of the array replication piece. What this means is that every iSCSI or HBA storage solution on our HCL is supported for site recovery operations, providing that it either has built-in replication or can work with third party replication. When site recovery is enabled, the VM metadata corresponding to the VMs and/or vApps you wish to protect are written to the SR containing the disk images for the VMs. When the LUNs are replicated to the secondary site, the metadata required to reconfigure those VMs is also automatically replicated. Because we’re replicating the underlying VM disk images and associated metadata, if VMs in the secondary site are running from different LUNs, Integrated Site Recovery can fully support active/active use models. Note that due to VM replication, active/active will require a minimum of two LUNs. Recovery from failure, failback and testing of failover are accomplished using a wizard within XenCenter. Each step of the wizard validates that the configuration is correct and that the system is in fact in a state of “failure”.
22. XenServer Web Console goals: enable XenServer management from a Web based console, and offer VM level delegation so end users can manage their VMs. Web Self Service delivers remote management: IT admins have long wanted a means to manage VMs remotely via a browser based, non-Windows platform. End user self service: WSS also allows IT to delegate routine management tasks to the application/VM owner, which satisfies the more strategic goal of helping IT to enable customer self service in the datacenter. Finally, WSS also provides a foundation for future innovation in the areas of web based management, self service and an open cloud director layer for cross-platform management.
  23. Performing a snapshot of a running VM using live memory snapshot allows the full state of the VM to be captured during the snapshot, all with minimal impact to the running VM. Additionally, if the Volume Snapshot Service (VSS) is enabled within Windows VMs, any services which have registered themselves with VSS will automatically quiesce during the snapshot. Examples of services which register themselves include SQL Server.XenServer supports both parallel branches for the snapshot chains, and will automatically coalesce any chains if intermediate snapshots are deleted. Additionally, snapshots can be converted to custom templates.
  24. Desktop virtualization is a core topic in many organizations today, and while some vendors would have you believe that a general purpose hypervisor is the correct solution for desktop workloads, the reality is that desktop workloads present a very distinct usage pattern not seen with traditional server based workloads. This is one reason why when you look at Citrix XenDesktop you see it taking advantage of specific features of XenServer which are unique to desktop virtualization. In this section, we’ll cover what the Desktop Optimized XenServer looks like and what specific benefits XenServer has when XenDesktop is used as the desktop broker.
  25. Within desktop virtualization there are two distinct classes of users, those who are using general purpose applications and those who are using graphics intensive applications. Supporting the former is readily accomplished using the traditional emulated graphics adapters found in hypervisors, but when you need the full power of a GPU for CAD, graphic design or video processing those emulated adapters are far from sufficient. This is why XenServer implemented the GPU Pass-through feature. With GPU pass-through users requiring high performance graphics can be assigned a dedicated GPU contained within the XenServer host making GPU pass-through the highest performing option on the market.
26. The traditional use case is shown on the left: each blade or workstation needed a GPU installed, and Windows was installed physically. On the right we have the GPU pass-through use case: we can install a number of GPUs in the XenServer host and assign them to the virtual machines. The actual savings will be determined by the number of GPUs in the server, or the capabilities of the new “multi-GPU cards” coming from vendors such as NVIDIA.
  27. One of the biggest areas of concern when deploying desktop virtualization isn’t the overall license costs, but the impact of shared storage. On paper if you were considering a deployment requiring 1000 active desktops, and assumed an average of 5GB per desktop, if you happened to have space for a 5 TB LUN on an existing storage array, you might be tempted to carve out that LUN and leverage it for the desktop project. Unfortunately, were you to do so you’d quickly find that while you had the space for the storage you might not have the free IOPS to satisfy both the desktop load and whatever pre-existing users were leveraging the SAN. With XenServer, we recognized that this would be a barrier to XenDesktop adoption and implemented IntelliCache to leverage the local storage on the XenServer as a template cache for the desktop images running on that host.
28. The key to IntelliCache is recognizing that with desktop virtualization the number of unique templates per host is minimal. In fact, to maximize the effect of IntelliCache, target the minimum number of templates to a given host. At the extreme, if the number of active VMs per template requires more than a single host, then dedicating a resource pool per template might be optimal.
29. Hidden, as the calculator will be offline at launch due to citrix.com website optimizations.
  30. When desktop virtualization is the target workload, the correct hypervisor solution will be one which not only provides a high performance platform, and has features designed to lower the overall deployment costs and address critical use cases, but one which offers flexibility in VM and host configurations while still offering a cost effective VM density. Since this is a classic case of use case matters, take a look at the Cisco Validated Design for XenDesktop on UCS with XenServerhttp://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/Virtualization/ucs_xd_xenserver_ntap.pdf
31. As with desktop virtualization, there are unique characteristics of cloud workloads which make a general purpose hypervisor less than ideal. The vast experience Citrix has with cloud operators such as Amazon, Rackspace and SoftLayer over the years has allowed us to develop features which directly address the scalability and serviceability of cloud infrastructure.
32. When dealing with high VM density in cloud hosting, the standard 1Gb NICs of a few years ago simply don’t provide the level of network throughput needed for most hosting providers. This led to 10 Gb NICs becoming commonplace, but the hypervisor overhead of processing packets for a 10 Gb network artificially limited the throughput as well. This meant that even with 10 Gb cards, wire speed was hard to attain. SR-IOV is the answer to this type of problem. Through the use of specialized hardware, the physical NIC can be divided into virtual NICs at the hardware layer and these virtual NICs, commonly referred to as virtual functions, are then presented directly into the hypervisor. The core objective of this PCI standard is to minimize the hypervisor overhead in high performance networks. While SR-IOV can provide significant efficiencies with 10 Gb networks, there are a few downsides to the technology today, but each of these limitations is being addressed as the technology matures.
33. It is through the use of SR-IOV and other cloud optimizations that the NetScaler SDX platform is able to provide the level of throughput, scalability and tenant isolation that it can. The NetScaler SDX is a hardware Application Delivery Controller capable of sustained throughput over 50 Gbps, all powered by a stock Cloud Optimized XenServer 6 hypervisor.
  34. As you would expect from Citrix, and our historical relationship with Microsoft, XenServer has a strong integration with System Center
35. CIMOM = Common Information Model Object Manager. XenServer uses the OpenPegasus CIMOM
  36. XenServer is available in a variety of product editions to meet your needs with price points ranging from Free, through “Included with purchase of management framework”, to standalone paid editions.
37. Platinum: Integrated DR. Enterprise: Adds IntelliCache for improved TCO of XenDesktop deployments and adds a monitoring pack for Systems Center Ops which can now be used to manage XenServer. Advanced: Adds Automated VM Protection and Recovery to protect VM data in the event of an outage or failure. XenServer: Improvements to capacity, networking, upgrading, and converting existing workloads. The “Desktop Optimized XenServer” is available with the purchase of XenDesktop, and the “Cloud Optimized XenServer” is available with the purchase of CloudStack
38. One of the most obvious comparisons is between vSphere and XenServer. A few years ago vSphere was the clear technical leader, but today the gap has closed considerably, and there are clear differences in overall strategy and market potential. Key areas where XenServer had lagged, for example live migration or advanced network switching, are either being addressed or have already been addressed. Of course, there will always be features which XenServer is unlikely to implement, such as serial port aggregation, or platforms it's unlikely to support, such as legacy Windows operating systems, but for the majority of virtualization tasks both platforms are compelling solutions.
39. Platinum Edition: Data protection and resiliency for enterprise-wide virtual environments.
Enterprise Edition: Automated, integrated, and production-ready offering for medium to large enterprise deployments.
Advanced Edition: Highly available and memory-optimized virtual infrastructure for improved TCO and host utilization.
Free Edition: Free, enterprise-ready virtual infrastructure with management tools above & beyond the alternatives.
  40. More information on Citrix Subscription Advantage: http://www.citrix.com/lang/English/lp/lp_2317284.asp
41. Premier Support: http://www.citrix.com/lang/English/lp/lp_2321822.asp
Premier Support Calculator: http://deliver.citrix.com/WWWB1201PREMIERSUPPORTCALCULATORINBOUND.html
42. The single-vendor lock-in model only benefits the vendor. Choose the correct hypervisor for your workloads to ensure the best performance and to extend your IT budget. Use POCs to measure how well each solution performs in your environment so you can truly gauge how much ROI you will get from a given implementation. Support is a valuable asset when deploying any environment, and understanding each vendor's model will make sure you don't get stuck with a costly services bill later on. Understand the requirements of each project so you can assess the best tool for the job, and know which features your applications need so you can spend money on costly features wisely.
43. Key items to note:
A GPU is attached to a VM at boot time, and stays attached as long as the VM is running.
Mixing GPU and non-GPU workloads on a host will maximize VM density.
The number of GPUs which can be installed in a host is limited.
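As a rough illustration of how a pass-through GPU gets bound to a VM, here is a minimal sketch using the XenAPI GPU_group/VGPU classes introduced in XenServer 6.0. The host address, credentials, VM name and the choice of the first GPU group are placeholders:

    # Sketch: attach a pass-through GPU to a VM via the XenAPI.
    import XenAPI

    session = XenAPI.Session("https://xenserver-host")
    session.xenapi.login_with_password("root", "password")
    try:
        vm = session.xenapi.VM.get_by_name_label("gpu-workstation")[0]
        group = session.xenapi.GPU_group.get_all()[0]
        # device "0" is the first GPU slot on the VM; the GPU binds at
        # VM boot and stays attached while the VM runs.
        session.xenapi.VGPU.create(vm, group, "0", {})
    finally:
        session.xenapi.session.logout()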
44. *Requires the HP Graphics Expansion Blade module.
Key items: There is a tight relationship between the host and GPU in this model, and that means a much more limited HCL. In other words, you can't simply install a series of GPUs into a host and expect it to work; it might, but it might not. There are a lot of moving parts.
Current list: http://hcl.xensource.com/GPUPass-throughDeviceList.aspx
45. Pretty much read this slide. It's important stuff.
46. Key items: If you haven't told the host to use local storage for IntelliCache, it won't.
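Telling the host to use local storage for caching can be done through the XenAPI. A minimal sketch follows; the host must be disabled (maintenance mode) first, and the host address, credentials and SR name are examples:

    # Sketch: enable IntelliCache on a host against its local SR.
    import XenAPI

    session = XenAPI.Session("https://xenserver-host")
    session.xenapi.login_with_password("root", "password")
    try:
        host = session.xenapi.host.get_all()[0]
        local_sr = session.xenapi.SR.get_by_name_label("Local storage")[0]
        session.xenapi.host.disable(host)
        session.xenapi.host.enable_local_storage_caching(host, local_sr)
        session.xenapi.host.enable(host)
    finally:
        session.xenapi.session.logout()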
47. Key items: The same idea applies when adding a host in Desktop Studio.
48. Key items:
As you would expect, we did some testing and found that IntelliCache made a difference.
These next three slides go together, and it's important to pay attention to the vertical scale.
49. Key items:
While the best results were achieved with SSDs, this really is a spindle story, so if you have a server which can host a number of high-performance rotational disks, you still get a significant benefit from IntelliCache.
Live migrating a VM which is backed by IntelliCache can be done, but it requires additional configuration. By default, since the cache disk is local, live migration won't work.
  50. Just read it
51. While not required for many private clouds, the concepts of resource costing, billing and chargeback are core to determining the success of your cloud initiative. Eventually someone is going to be looking for usage stats, or better still, capacity-planning information, and that information will be readily available in a solution designed to capture deployment details from the start. One important detail to bear in mind is that no billing solution is going to be perfect. Entire products are designed around the concept of "billing", and XenServer isn't such a product. Our approach is a bit different: it recognizes that there is going to be some requirement for external data (such as costing information), and that this information simply doesn't belong inside the virtualization platform. What we've done is provide the billing primitives, plus easy SQL data views to access the data. From this framework, custom billing and chargeback models can be developed without encumbering the cloud provisioning system with complex billing requirements.
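To show the shape of a chargeback model built on those primitives, here is a hypothetical sketch that reads usage rows from a SQL data view and applies your own rate card. The DSN, the vw_vm_usage view name and its columns are illustrative only, not the actual schema:

    # Sketch: turn usage primitives into charges with your own rates.
    import pyodbc

    CPU_RATE_PER_SEC = 0.0001   # your rate card, not XenServer's
    IO_RATE_PER_GB = 0.05

    conn = pyodbc.connect("DSN=xs_billing")
    cursor = conn.cursor()
    cursor.execute(
        "SELECT vm_name, SUM(cpu_seconds), SUM(gb_written) "
        "FROM vw_vm_usage WHERE usage_date >= ? GROUP BY vm_name",
        "2012-09-01")
    for vm_name, cpu_seconds, gb_written in cursor.fetchall():
        charge = cpu_seconds * CPU_RATE_PER_SEC + gb_written * IO_RATE_PER_GB
        print("%s: $%.2f" % (vm_name, charge))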
52. Key items: Prior to XenServer 6, there was a feature known as Site Recovery Manager. This feature was implemented using the StorageLink Gateway and had a very limited HCL. We removed that feature and replaced it with Integrated Site Recovery starting in version 6 of XenServer, which allows us to support any iSCSI or HBA array on the HCL.
53. Persist the following SR information in the SR:
name_label
name_description
allocation
UUID
Persist the following VDI information for all VDIs in the SR:
name_label
name_description
UUID
is_a_snapshot
snapshot_of
snapshot_time
type
vdi_type
read_only
managed
metadata_of_pool
The metadata is stored in a logical volume called "MGT" in each SR. Writes to this volume are O_DIRECT and block-size aligned, so the metadata is always consistent.
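The same VDI fields listed above are visible as keys of the VDI record through the XenAPI. A minimal sketch, with host, credentials and SR name as placeholders:

    # Sketch: dump the recoverable metadata for every VDI in an SR.
    import XenAPI

    session = XenAPI.Session("https://xenserver-host")
    session.xenapi.login_with_password("root", "password")
    try:
        sr = session.xenapi.SR.get_by_name_label("iSCSI SR")[0]
        for vdi in session.xenapi.SR.get_VDIs(sr):
            rec = session.xenapi.VDI.get_record(vdi)
            print(rec["uuid"], rec["name_label"],
                  rec["is_a_snapshot"], rec["read_only"], rec["managed"])
    finally:
        session.xenapi.session.logout()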
  54. Shows the entire flow of setup and failover.
  55. Core terminology used in this section
56. Explain the problems we're attempting to address with the new switch. Rich, flexible virtual switching for each host that:
Guarantees separation & isolation.
Participates in all standard switching protocol exchanges, just like a physical switch.
Provides full visibility into, and control of, the packet forwarding path – by and for multiple VM tenants.
Provides complete management of all switch features, just like a hardware switch – by and for multiple managing tenants.
Is inherently aware of virtualization: VM ACLs are a property of the VM.
Is multi-tenant.
<click>
Pooled state from multiple virtual switches in a virtual infrastructure permits the abstraction of a virtual port to be separated from the software virtual switch on a single server:
Building block of a multi-tenant virtual private data center overlay.
Preserves network state per-VM as VMs migrate between physical servers.
Permits unified management of, visibility into, and control of VM traffic from a single point in the infrastructure.
Permits multi-tenant-aware management of the distributed virtual switch.
Permits per-flow timescale decisions to be made for control of traffic on any virtual port.
Multi-tenant aware & secure.
57. Key items: Access control is exactly the same as firewall rules in a traditional switch. What's different here is that the definition of a virtual switch becomes analogous to a stack, which allows network admins to define rulesets that apply regardless of what the virtualization admin might change.
58. Key items: QoS with bursting.
  59. Key items: Doing port mirroring on a VM without DVS requires filtering traffic from other VMs. With DVS, a couple of clicks of a mouse and you’re done.
  60. Key items: NetFlow is an industry standard for network monitoring, and DVS handles it out of the box. All you need to do is configure the collector address and you’re done.
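In practice the collector address is set through the DVS Controller console, but as a rough illustration of what that amounts to on a host, here is a sketch of the underlying Open vSwitch configuration. The bridge name (xenbr0), collector address and timeout are examples:

    # Sketch: point a host's Open vSwitch bridge at a NetFlow collector.
    # Normally the DVS Controller manages this for you.
    import subprocess

    subprocess.check_call([
        "ovs-vsctl", "--", "set", "Bridge", "xenbr0", "netflow=@nf",
        "--", "--id=@nf", "create", "NetFlow",
        'targets="192.0.2.10:2055"', "active-timeout=60",
    ])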
61. Key items: Built into the DVSC is a basic NetFlow collector and analyzer. It's good for small installations, but can be disabled for larger enterprises.
62. Key items:
DVS gives us jumbo frames.
DVS allows us to create private networks using management interfaces, and those private networks are secured using GRE. This means there are no VLAN boundaries to worry about.
http://en.wikipedia.org/wiki/Generic_Routing_Encapsulation
63. Key items:
Restart is used for the most critical services.
Restart if possible is used when a restart is desired but the application design will ensure continued operation in the event the service can't be restarted. This option also provides additional failure "head room".
Non-agile VMs cannot be guaranteed to restart.
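A minimal sketch of how these protection levels map onto the VM.ha_restart_priority field in the 6.x XenAPI, assuming the values "restart", "best-effort" and "" (unprotected); host and VM names are examples:

    # Sketch: set HA protection levels on two VMs.
    import XenAPI

    session = XenAPI.Session("https://xenserver-host")
    session.xenapi.login_with_password("root", "password")
    try:
        vm = session.xenapi.VM.get_by_name_label("critical-app")[0]
        session.xenapi.VM.set_ha_restart_priority(vm, "restart")
        vm2 = session.xenapi.VM.get_by_name_label("resilient-app")[0]
        session.xenapi.VM.set_ha_restart_priority(vm2, "best-effort")
    finally:
        session.xenapi.session.logout()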
64. New in XenServer 5.6 – Dynamic Memory Control:
Maximize investment in server hardware.
Some configuration risk – VMs not guaranteed to boot.
Crashed VMs automatically placed on best-fit servers.
Running VMs can be "squeezed" to free up RAM.
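A minimal sketch of configuring a VM's dynamic memory range so DMC can squeeze it between 2 GB and 4 GB; host, credentials and VM name are placeholders, and XenAPI int64 values are passed as strings:

    # Sketch: give a VM a 2-4 GB dynamic memory range for DMC.
    import XenAPI

    GIB = 1024 * 1024 * 1024
    session = XenAPI.Session("https://xenserver-host")
    session.xenapi.login_with_password("root", "password")
    try:
        vm = session.xenapi.VM.get_by_name_label("web-server")[0]
        session.xenapi.VM.set_memory_dynamic_range(vm, str(2 * GIB), str(4 * GIB))
    finally:
        session.xenapi.session.logout()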
65. When dealing with resilient applications and HA, there is always the potential for creating the single points of failure which the application deployment guide cautioned against. For example, if you have a SQL Server cluster made up of two nodes, and both of those nodes end up on the same host and that host fails, the resiliency of SQL Server won't save you. Here's how to avoid such situations in XenServer by using both HA and WLB:
1. Define a host in WLB to not participate in optimization and rebalancing activities.
2. Place one node of the SQL Server cluster on that host (node A).
3. Place the second node of the SQL Server cluster on any other host (node B).
4. Configure HA to protect the second node of the SQL Server cluster using "restart if possible", but not the first node.
Let's explore the various automatic failure modes:
If the host excluded from WLB activities fails, node A fails and does not restart. Node B continues to operate with no downtime.
If the host running node B fails, node B will be restarted on any surviving host except the host excluded by WLB. If the only host with capacity to start node B is the excluded host, node B won't be started; otherwise it will be restarted without breaking resiliency.
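The HA side of step 4 looks roughly like the sketch below (the WLB exclusion in step 1 is done in the WLB configuration and isn't shown); the node names are examples:

    # Sketch: node B restarts if possible, node A is deliberately unprotected.
    import XenAPI

    session = XenAPI.Session("https://xenserver-host")
    session.xenapi.login_with_password("root", "password")
    try:
        node_a = session.xenapi.VM.get_by_name_label("sql-node-a")[0]
        node_b = session.xenapi.VM.get_by_name_label("sql-node-b")[0]
        session.xenapi.VM.set_ha_restart_priority(node_a, "")
        session.xenapi.VM.set_ha_restart_priority(node_b, "best-effort")
    finally:
        session.xenapi.session.logout()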
66. Embed multiple VMs in a single management framework.
Package is managed as an entity (e.g. backup).
VM start order and delays are contained in the package.
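A minimal sketch of building such a package with the XenAPI VM_appliance (vApp) class; appliance and VM names are examples, and order and start_delay are int64 values passed as strings:

    # Sketch: group two VMs into a vApp with ordered, delayed start.
    import XenAPI

    session = XenAPI.Session("https://xenserver-host")
    session.xenapi.login_with_password("root", "password")
    try:
        app = session.xenapi.VM_appliance.create(
            {"name_label": "three-tier-app", "name_description": ""})
        db = session.xenapi.VM.get_by_name_label("db-vm")[0]
        web = session.xenapi.VM.get_by_name_label("web-vm")[0]
        for vm, order in ((db, "0"), (web, "1")):
            session.xenapi.VM.set_appliance(vm, app)
            session.xenapi.VM.set_order(vm, order)
            session.xenapi.VM.set_start_delay(vm, "30")  # seconds
    finally:
        session.xenapi.session.logout()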
  67. Says it all
68. Key items: When you overlay a file system onto storage to manage VMs, there are inherent features that file system imposes, and those features might not be compatible with what a given storage array can offer. The core objective of StorageLink is to maximize the potential of a storage array without artificially imposing virtualization concepts upon it.
69. So without StorageLink, you end up asking the storage admins for a LUN, and that LUN becomes a storage repository in XenServer provisioned as LVM. LVM storage repositories are block-based primitives which contain the virtual disks, and while a VM is running, LVM effectively requires its virtual disk to be fully provisioned. Obviously, as you add more and more disks, there will come a point when the LUN is full even though the virtual disks themselves might not be fully used. The net result is that additional VM capacity requires a second storage repository, which in turn requires a new LUN.
70. With StorageLink, StorageLink manages the array directly and provisions a LUN for each virtual disk. Since StorageLink has direct access to the array, it can provision the LUNs using key features such as thin provisioning, and thus make more efficient use of the array. This model is known as LUN per VDI.
71. In addition to LUN provisioning, since StorageLink has direct access to the array, it can also leverage the array's native APIs to perform snapshots and clones. Without StorageLink, those snapshots live within the provisioned "fat LUN" and compete for storage space with the primary virtual disks. StorageLink effectively frees the snapshot mechanism to leverage the entire space of the array.
72. StorageLink uses an adapter-based architecture where the XenServer host and control domain have a StorageLink Gateway Bridge into which the adapters plug. Legacy NetApp and Dell EqualLogic adapters are still in the code, but mainly for users upgrading to XenServer 6 who are using the legacy adapters today. New SRs created from XenCenter will use the new integrated SL adapter.
iSL supports NetApp, Dell EqualLogic and EMC VNX arrays.
73. The primary driving force behind SR-IOV support is that, with the advent of 10Gb Ethernet, the existing PV driver model simply can't sustain full throughput on these cards. So while 1Gb line rate is possible, dom0 saturation prevents 10Gb from attaining line rate.
74. Taking a step back, we see that the solution itself requires more than just SR-IOV, but rather a series of enhancements.
Starting with VMDq, we create separate RX and TX queue pairs for each VM:
The device has multiple RX queues.
Dedicate one RX queue to a particular guest.
Program the device to demultiplex incoming packets to the dedicated queue using the guest MAC address.
Post RX descriptors pointing to guest memory.
The device places the received packet directly into guest memory, avoiding a data copy.
With direct I/O (VT-d) we can now map the I/O directly into a guest VM, and this allows 10Gb line rate to be attained.
Moving past VT-d, we have SR-IOV, which carves virtual NICs out of the physical NIC to form virtual functions. Each virtual function can be mapped into a VM; more importantly, since each VM can itself be performing high levels of I/O, the aggregate line rates can be further extended. This is precisely how the NetScaler SDX attains its high throughput using standard XenServer.
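One common mechanism of this era for handing a PCI device (such as an SR-IOV virtual function) straight to a VM is the other-config:pci key. A hedged sketch follows; the VM name and the PCI address 0000:04:10.0 are examples, and the leading "0/" denotes the passthrough group:

    # Sketch: dedicate a PCI virtual function to a VM via other-config.
    # Fails with MAP_DUPLICATE_KEY if a "pci" key is already set.
    import XenAPI

    session = XenAPI.Session("https://xenserver-host")
    session.xenapi.login_with_password("root", "password")
    try:
        vm = session.xenapi.VM.get_by_name_label("high-throughput-vm")[0]
        session.xenapi.VM.add_to_other_config(vm, "pci", "0/0000:04:10.0")
    finally:
        session.xenapi.session.logout()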
  75. When looking at the key objectives of virtualization and hardware, you can see that direct hardware access has historically provided limited scalability due to the inability of most devices to natively share access. With SRIOV, this scalability limitation is largely overcome.
76. Of course, since you're mapping a dedicated hardware resource to a VM, you've now prevented it from participating in live migration. With XenServer 6 we've introduced experimental support for Solarflare cards supporting SR-IOV with live migration. By default the guest VM will use the "fast path" for network traffic, but a regular VIF backup path is available and the VM will fall back to this path during migration to a different host. If a Solarflare SR-IOV adapter is available on the target host, the guest will switch back to the "fast path" again after migration.