IO Virtualization Performance
      HUANG Zhiteng (zhiteng.huang@intel.com)
Agenda
• IO Virtualization Overview
  • Software solution
  • Hardware solution
  • IO performance trend
• How IO Virtualization Performed in Micro Benchmark
  • Network
  • Disk
• Performance in Enterprise Workloads
  • Web Server: PV, VT-d and native performance
  • Database: VT-d vs. native performance
  • Consolidated workload: SR-IOV benefit
• Direct IO (VT-d) Overhead Analysis
Software        2
& Services
        group
IO Virtualization Overview
IO Virtualization enables VMs to utilize the Input/Output resources of the hardware platform. In this session, we cover network and storage.

Software Solutions:
Two solutions we are familiar with on Xen:
• Emulated Devices (QEMU): good compatibility, very poor performance.
• Para-Virtualized Devices: need driver support in the guest; provide optimized performance compared to QEMU.

Both require the participation of Dom0 (the driver domain) to serve a VM IO request.
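As an illustration, the two software models correspond to different virtual NIC declarations in a Xen "xm" guest config (which uses Python syntax). The NIC model and bridge names below are hypothetical examples, not taken from the slides:

```python
# Sketch of Xen "xm" guest-config fragments contrasting the two software
# device models (NIC model and bridge names are hypothetical examples).

# Emulated device: QEMU in Dom0 emulates a real NIC, so any guest OS
# works unmodified, but every packet traps into the device model.
vif_emulated = ['type=ioemu, model=rtl8139, bridge=xenbr0']

# Para-virtualized device: the guest's netfront driver talks to netback
# in Dom0 over shared-memory rings; far fewer traps, better performance,
# but the guest kernel must carry PV driver support.
vif_pv = ['bridge=xenbr0']

# A real config would contain exactly one `vif = [...]` line.
```

Either way, Dom0 stays on the data path, which is what the hardware assists on the next slide remove.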

IO Virtualization Overview – Hardware Solution

Hardware offers three kinds of assists to accelerate IO; a single technology or a combination can be used to address various usages.

• VMDq (Virtual Machine Device Queue) - network only
  • Separate Rx & Tx queue pairs of the NIC for each VM, with a software "switch". Requires specific OS and VMM support.

• Direct IO (VT-d) - the VM exclusively owns the device
  • Improved IO performance through direct assignment of an I/O device to an unmodified or para-virtualized VM.

• SR-IOV (Single Root I/O Virtualization) - one device, multiple Virtual Functions
  • Changes to I/O device silicon to support multiple PCI device IDs, so one I/O device can support multiple directly assigned guests. Requires VT-d.
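A minimal sketch of how direct assignment is set up on Xen of this era, assuming the classic pciback workflow; the PCI address is hypothetical:

```shell
# Sketch: directly assigning a NIC to a Xen guest via VT-d, using the
# classic pciback flow. The PCI address 0000:04:00.0 is hypothetical.

# 1. Detach the device from its Dom0 driver:
echo 0000:04:00.0 > /sys/bus/pci/devices/0000:04:00.0/driver/unbind

# 2. Hand it to pciback so Dom0 no longer uses it:
echo 0000:04:00.0 > /sys/bus/pci/drivers/pciback/new_slot
echo 0000:04:00.0 > /sys/bus/pci/drivers/pciback/bind

# 3. In the guest's xm config, assign the hidden device:
#      pci = ['04:00.0']
# The guest then drives the physical NIC directly; VT-d remaps its DMA
# into guest memory.
```

This is a configuration sketch only; the exact sysfs paths depend on the Dom0 kernel in use.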



IO Virtualization Overview - Trend

Trend: much higher throughput, denser IO capacity.

• PCIe 2.0: doubles the bit rate from 2.5GT/s to 5.0GT/s.
• Solid State Drive (SSD): provides hundreds of MB/s of bandwidth and >10,000 IOPS for a single device*.
• Fibre Channel over Ethernet (FCoE): unified IO consolidates network (IP) and storage (SAN) onto a single connection.
• 40Gb/s and 100Gb/s Ethernet: scheduled to release Draft 3.0 in Nov. 2009, with standard approval in 2010**.

* http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403
** See http://en.wikipedia.org/wiki/100_Gigabit_Ethernet
How IO Virtualization Performed in Micro Benchmark – Network

Iperf with a 10Gb/s Ethernet NIC (SR-IOV dual-port 10GbE) was used to benchmark TCP bandwidth of different device models(*).

[Chart: iperf transmitting performance, bandwidth in Gb/s, with CPU utilization]
  HVM + PV driver: 4.68; PV guest: 8.47 (1.81x); HVM + VT-d: 9.54 (1.13x); 30 VMs + SR-IOV: 19

[Chart: iperf receiving performance, bandwidth in Gb/s, with CPU utilization]
  HVM + PV driver: 1.46; PV guest: 3.10 (2.12x); HVM + VT-d: 9.43 (3.04x)

Thanks to VT-d, a VM can easily achieve 10GbE line rate in both cases, with much lower resource consumption.

Also, with SR-IOV, we were able to get 19Gb/s transmitting performance with 30 VFs assigned to 30 VMs.


* HVM + VT-d uses a 2.6.27 kernel, while PV guest and HVM + PV driver use 2.6.18.
* We turned off multiqueue support in the NIC driver for HVM + VT-d because the 2.6.18 kernel doesn't support multiple TX queues. So for the iperf test there was only one TX/RX queue in the NIC, and all interrupts were sent to a single physical core.
* ITR (Interrupt Throttle Rate) was set to 8000 in all cases.
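The speedup labels on the charts are simply ratios of the plotted bandwidths; a quick arithmetic check:

```python
# Sanity-check of the speedup annotations on the iperf charts
# (bandwidth values in Gb/s taken from the slide).
tx = {"HVM+PV driver": 4.68, "PV guest": 8.47, "HVM+VT-d": 9.54}
rx = {"HVM+PV driver": 1.46, "PV guest": 3.10, "HVM+VT-d": 9.43}

tx_pv_vs_hvmpv = round(tx["PV guest"] / tx["HVM+PV driver"], 2)  # 1.81x
tx_vtd_vs_pv = round(tx["HVM+VT-d"] / tx["PV guest"], 2)         # 1.13x
rx_pv_vs_hvmpv = round(rx["PV guest"] / rx["HVM+PV driver"], 2)  # 2.12x
rx_vtd_vs_pv = round(rx["HVM+VT-d"] / rx["PV guest"], 2)         # 3.04x
print(tx_pv_vs_hvmpv, tx_vtd_vs_pv, rx_pv_vs_hvmpv, rx_vtd_vs_pv)
```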
How IO Virtualization Performed in Micro Benchmark – Network
  (cont.)
Packet transmitting performance is another essential aspect of high-throughput networking.

[Chart: packet transmitting performance, million packets per second (Mpps)]
  PV guest, 1 queue: 0.33; HVM + VT-d, 1 queue: 3.90 (11.7x); HVM + VT-d, 4 queues: 8.15 (2.1x)

Using the Linux kernel packet generator (pktgen) with small UDP packets (128 bytes), HVM + VT-d can send nearly 4 million packets/s with 1 TX queue and over 8 million packets/s with 4 TX queues.

PV performance was far behind due to its long packet-processing path.
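A back-of-the-envelope check of the 4-queue figure, assuming the 128 bytes refer to the on-wire frame size (this is an assumption; the preamble and inter-frame gap add 20 bytes per packet):

```python
# Theoretical 10GbE packet rate for 128-byte frames. Assumption: the
# slide's 128 bytes is the frame size on the wire, excluding the
# preamble and inter-frame gap.
LINE_RATE = 10e9   # bits/s
FRAME = 128        # bytes per frame
OVERHEAD = 8 + 12  # preamble + inter-frame gap, bytes
pps = LINE_RATE / ((FRAME + OVERHEAD) * 8)
print(round(pps / 1e6, 2))  # ~8.45 Mpps, close to the measured 8.15
```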

How IO Virtualization Performed in Micro Benchmark – Disk IO
We measured disk bandwidth with sequential reads and IOPS with random reads to check block device performance.

[Chart: IOmeter disk bandwidth (MB/s)] PV guest: 1,711; HVM + VT-d: 4,911 (2.9x)
[Chart: IOmeter disk IOPS] PV guest: 7,056; HVM + VT-d: 18,725 (2.7x)

HVM + VT-d outperforms the PV guest by roughly 3x in both tests.
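As rough context, dividing the sequential-read figure across the 72 SAS spindles listed in the configuration slide (assuming an even spread, which is an assumption) gives a plausible per-disk rate:

```python
# Rough per-disk throughput implied by the sequential-read result.
# Assumption: the load spreads evenly over the 6 arrays x 12 HDDs
# from the configuration slide.
disks = 6 * 12
seq_bw_mb = 4911            # MB/s, HVM + VT-d sequential read
per_disk_mb = seq_bw_mb / disks
print(round(per_disk_mb, 1))  # ~68.2 MB/s per spindle
```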



Performance in Enterprise Workloads – Web Server
Web Server simulates a support website where connected users browse and download files.

We measure the maximum number of simultaneous user sessions the web server can support while satisfying the QoS criteria.

[Chart: web server performance, sessions, with CPU utilization]
  HVM + PV driver: 5,000; PV guest: 9,000 (1.8x); HVM + VT-d: 24,500 (2.7x)

Only HVM + VT-d was able to push the server's utilization to ~100%. The PV solutions hit a bottleneck: they failed to pass QoS while utilization was still below 70%.


Performance in Enterprise Workloads – Database
[Chart: Decision Support DB performance (QphH)] Native: 11,443; HVM + VT-d: 10,762 (94.06% of native)
[Chart: OLTP DB performance] Native: 199.71; HVM + VT-d storage & NIC: 184.08 (92.2%); HVM + VT-d storage: 127.56 (63.9%)

Decision Support DB requires high disk bandwidth, while OLTP DB is IOPS-bound and requires a certain amount of network bandwidth to connect to clients.

The HVM + VT-d combination achieved >90% of native performance in these two DB workloads.
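The native-relative percentages on the charts follow directly from the plotted scores:

```python
# Native-relative performance implied by the database chart values.
ds = 10762 / 11443 * 100             # decision support (QphH), ~94%
oltp_full = 184.08 / 199.71 * 100    # VT-d storage & NIC, ~92.2%
oltp_storage = 127.56 / 199.71 * 100 # VT-d storage only, ~63.9%
print(round(ds, 1), round(oltp_full, 1), round(oltp_storage, 1))
```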


Performance in Enterprise Workloads – Consolidation with SR-IOV
The workload consolidates multiple tiles of servers to run on the same physical machine. One tile consists of 1 instance of Web Server, 1 J2EE AppServer and 1 Mail Server, altogether 6 VMs. It's a complex workload, consuming CPU, memory, disk and network.

[Chart: SR-IOV benefit in the consolidated workload, performance ratio, with system utilization]
  PV guest: 1.00; HVM + SR-IOV: 1.49 (1.49x)

The PV solution could only support 4 tiles on a two-socket server, and failed the Web Server QoS criteria before saturating the CPU.

As a pilot, we enabled an SR-IOV NIC for the Web Server. This brought a >49% performance increase and also allowed the system to support two more tiles (12 VMs).
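A sketch of how the 30 VFs might be instantiated; the module parameter shown was the typical mechanism for Intel 10GbE NICs of this era (the VF count matches the slides, but the interface name is illustrative):

```shell
# Sketch: carving 30 virtual functions out of an SR-IOV NIC.
# On ixgbe drivers of this era, VFs were enabled via a module parameter:
modprobe ixgbe max_vfs=30

# Later kernels expose a generic sysfs knob instead (eth2 is illustrative):
#   echo 30 > /sys/class/net/eth2/device/sriov_numvfs

# Each VF then appears as its own PCI function and can be assigned to a
# guest with VT-d just like a physical device.
```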

Direct IO (VT-d) Overhead
[Chart: VT-d cases, utilization breakdown into Guest User / Guest Kernel / Xen / Dom0; the Xen share is 5.94% for disk bandwidth and 11.878% for Web Server]
[Chart: VT-d, Xen cycles breakdown for the Web Server workload into IO instruction, APIC access, INTR, and interrupt windows; the two largest components are 6.37% and 4.73%]

APIC access and interrupt delivery consumed the most cycles. (Note that some interrupts arrive while the CPU is in HLT, so they are not counted.)

In various workloads, we've seen Xen bring in about 5~12% overhead, mainly spent serving interrupts. The Intel OTC team has developed a patch set to eliminate part of the Xen software overhead; check out Xiaowei's session for details.

CREDIT
Great thanks to DUAN, Ronghui and XIANG, Kai for providing the VT-d network and SR-IOV data.


QUESTIONS?


SSG/SSD/SPA/PRC Scalability Lab
BACKUP


Configuration
Hardware Configuration:

Intel® Nehalem-EP Server System
CPU:
  2-socket Nehalem 2.66 GHz with 8MB LLC cache, C0 stepping.
  Hardware prefetchers OFF; Turbo mode OFF; EIST OFF.
NIC devices:
  Intel 10Gb XF SR NIC (82598EB): 2 single-port NICs installed on the machine and one dual-port NIC installed on the server.
RAID bus controller:
  LSI Logic MegaRAID SAS 8888elp x3
  Disk array x6 (each with 12 x 70GB SAS HDDs).
Memory:
  64GB (16x 4GB DDR3 1066MHz), 32GB on each node.

VM configuration per test case:
  Network micro benchmark: 4 vCPU, 64GB memory
  Storage micro benchmark: 2 vCPU, 12GB memory
  Web Server: 4 vCPU, 64GB memory
  Database: 4 vCPU, 12GB memory

Software Configuration:
  Xen changeset 18771 for the network/disk micro benchmarks, 19591 for the SR-IOV test.





FC/FCoE - Topologies, Protocols, and Limitations ( EMC World 2012 )
 
LTE-Operational Challenges & Deployment conundrum
LTE-Operational Challenges & Deployment conundrumLTE-Operational Challenges & Deployment conundrum
LTE-Operational Challenges & Deployment conundrum
 

Destaque

устройство компьютера
устройство компьютераустройство компьютера
устройство компьютераWarum19
 
Lexform Corso Avvocati 2008
Lexform Corso Avvocati 2008Lexform Corso Avvocati 2008
Lexform Corso Avvocati 2008Lexform
 
Xen Euro Par07
Xen Euro Par07Xen Euro Par07
Xen Euro Par07congvc
 
2009傳鬥秋讀書會第一週導讀
2009傳鬥秋讀書會第一週導讀2009傳鬥秋讀書會第一週導讀
2009傳鬥秋讀書會第一週導讀edenoot
 
Eclipse Summit Europe2008 Dtp
Eclipse Summit Europe2008 DtpEclipse Summit Europe2008 Dtp
Eclipse Summit Europe2008 DtpBrian Fitzpatrick
 
Redes san, Data center e virtualizacao
Redes san, Data center e virtualizacaoRedes san, Data center e virtualizacao
Redes san, Data center e virtualizacaoJohn Muconto
 

Destaque (7)

Parte1b
Parte1bParte1b
Parte1b
 
устройство компьютера
устройство компьютераустройство компьютера
устройство компьютера
 
Lexform Corso Avvocati 2008
Lexform Corso Avvocati 2008Lexform Corso Avvocati 2008
Lexform Corso Avvocati 2008
 
Xen Euro Par07
Xen Euro Par07Xen Euro Par07
Xen Euro Par07
 
2009傳鬥秋讀書會第一週導讀
2009傳鬥秋讀書會第一週導讀2009傳鬥秋讀書會第一週導讀
2009傳鬥秋讀書會第一週導讀
 
Eclipse Summit Europe2008 Dtp
Eclipse Summit Europe2008 DtpEclipse Summit Europe2008 Dtp
Eclipse Summit Europe2008 Dtp
 
Redes san, Data center e virtualizacao
Redes san, Data center e virtualizacaoRedes san, Data center e virtualizacao
Redes san, Data center e virtualizacao
 

Semelhante a Xensummit2009 Io Virtualization Performance

Why 10 Gigabit Ethernet Draft v2
Why 10 Gigabit Ethernet Draft v2Why 10 Gigabit Ethernet Draft v2
Why 10 Gigabit Ethernet Draft v2Vijay Tolani
 
Netsft2017 day in_life_of_nfv
Netsft2017 day in_life_of_nfvNetsft2017 day in_life_of_nfv
Netsft2017 day in_life_of_nfvIntel
 
IEEE 1588 Timing for Mobile Backhaul_Webinar
IEEE 1588 Timing for Mobile Backhaul_WebinarIEEE 1588 Timing for Mobile Backhaul_Webinar
IEEE 1588 Timing for Mobile Backhaul_WebinarSymmetricomSYMM
 
Windows server 8 hyper v networking (aidan finn)
Windows server 8 hyper v networking (aidan finn)Windows server 8 hyper v networking (aidan finn)
Windows server 8 hyper v networking (aidan finn)hypervnu
 
InfiniBand for the enterprise
InfiniBand for the enterpriseInfiniBand for the enterprise
InfiniBand for the enterpriseAnas Kanzoua
 
OpenStack and OpenFlow Demos
OpenStack and OpenFlow DemosOpenStack and OpenFlow Demos
OpenStack and OpenFlow DemosBrent Salisbury
 
Windows Server 8 Hyper V Networking
Windows Server 8 Hyper V NetworkingWindows Server 8 Hyper V Networking
Windows Server 8 Hyper V NetworkingAidan Finn
 
XS Boston 2008 Networking Direct Assignment
XS Boston 2008 Networking Direct AssignmentXS Boston 2008 Networking Direct Assignment
XS Boston 2008 Networking Direct AssignmentThe Linux Foundation
 
Mutating IP Network Model Ethernet-InfiniBand Interconnect
Mutating IP Network Model Ethernet-InfiniBand InterconnectMutating IP Network Model Ethernet-InfiniBand Interconnect
Mutating IP Network Model Ethernet-InfiniBand InterconnectNaoto MATSUMOTO
 
Presentation cloud computing and the internet
Presentation   cloud computing and the internetPresentation   cloud computing and the internet
Presentation cloud computing and the internetxKinAnx
 
IBM System Networking Overview - Jul 2013
IBM System Networking Overview - Jul 2013IBM System Networking Overview - Jul 2013
IBM System Networking Overview - Jul 2013Angel Villar Garea
 
Presentation from physical to virtual to cloud emc
Presentation   from physical to virtual to cloud emcPresentation   from physical to virtual to cloud emc
Presentation from physical to virtual to cloud emcxKinAnx
 
Industry Brief: Streamlining Server Connectivity: It Starts at the Top
Industry Brief: Streamlining Server Connectivity: It Starts at the TopIndustry Brief: Streamlining Server Connectivity: It Starts at the Top
Industry Brief: Streamlining Server Connectivity: It Starts at the TopIT Brand Pulse
 
Marvell SR-IOV Improves Server Virtualization Performance
Marvell SR-IOV Improves Server Virtualization PerformanceMarvell SR-IOV Improves Server Virtualization Performance
Marvell SR-IOV Improves Server Virtualization PerformanceMarvell
 
2014/09/02 Cisco UCS HPC @ ANL
2014/09/02 Cisco UCS HPC @ ANL2014/09/02 Cisco UCS HPC @ ANL
2014/09/02 Cisco UCS HPC @ ANLdgoodell
 
2011 intelligent operator_panels
2011 intelligent operator_panels2011 intelligent operator_panels
2011 intelligent operator_panelsadvantech2012
 
Discover Optical Ethernet V5
Discover Optical Ethernet V5Discover Optical Ethernet V5
Discover Optical Ethernet V5ss
 
Widyatama.lecture.applied networking.iv-week-13.future internet networking
Widyatama.lecture.applied networking.iv-week-13.future internet networkingWidyatama.lecture.applied networking.iv-week-13.future internet networking
Widyatama.lecture.applied networking.iv-week-13.future internet networkingDjadja Sardjana
 

Semelhante a Xensummit2009 Io Virtualization Performance (20)

Why 10 Gigabit Ethernet Draft v2
Why 10 Gigabit Ethernet Draft v2Why 10 Gigabit Ethernet Draft v2
Why 10 Gigabit Ethernet Draft v2
 
Netsft2017 day in_life_of_nfv
Netsft2017 day in_life_of_nfvNetsft2017 day in_life_of_nfv
Netsft2017 day in_life_of_nfv
 
IEEE 1588 Timing for Mobile Backhaul_Webinar
IEEE 1588 Timing for Mobile Backhaul_WebinarIEEE 1588 Timing for Mobile Backhaul_Webinar
IEEE 1588 Timing for Mobile Backhaul_Webinar
 
Mellanox Approach to NFV & SDN
Mellanox Approach to NFV & SDNMellanox Approach to NFV & SDN
Mellanox Approach to NFV & SDN
 
Windows server 8 hyper v networking (aidan finn)
Windows server 8 hyper v networking (aidan finn)Windows server 8 hyper v networking (aidan finn)
Windows server 8 hyper v networking (aidan finn)
 
InfiniBand for the enterprise
InfiniBand for the enterpriseInfiniBand for the enterprise
InfiniBand for the enterprise
 
OpenStack and OpenFlow Demos
OpenStack and OpenFlow DemosOpenStack and OpenFlow Demos
OpenStack and OpenFlow Demos
 
Windows Server 8 Hyper V Networking
Windows Server 8 Hyper V NetworkingWindows Server 8 Hyper V Networking
Windows Server 8 Hyper V Networking
 
XS Boston 2008 Networking Direct Assignment
XS Boston 2008 Networking Direct AssignmentXS Boston 2008 Networking Direct Assignment
XS Boston 2008 Networking Direct Assignment
 
Mutating IP Network Model Ethernet-InfiniBand Interconnect
Mutating IP Network Model Ethernet-InfiniBand InterconnectMutating IP Network Model Ethernet-InfiniBand Interconnect
Mutating IP Network Model Ethernet-InfiniBand Interconnect
 
Presentation cloud computing and the internet
Presentation   cloud computing and the internetPresentation   cloud computing and the internet
Presentation cloud computing and the internet
 
IBM System Networking Overview - Jul 2013
IBM System Networking Overview - Jul 2013IBM System Networking Overview - Jul 2013
IBM System Networking Overview - Jul 2013
 
Presentation from physical to virtual to cloud emc
Presentation   from physical to virtual to cloud emcPresentation   from physical to virtual to cloud emc
Presentation from physical to virtual to cloud emc
 
Industry Brief: Streamlining Server Connectivity: It Starts at the Top
Industry Brief: Streamlining Server Connectivity: It Starts at the TopIndustry Brief: Streamlining Server Connectivity: It Starts at the Top
Industry Brief: Streamlining Server Connectivity: It Starts at the Top
 
Marvell SR-IOV Improves Server Virtualization Performance
Marvell SR-IOV Improves Server Virtualization PerformanceMarvell SR-IOV Improves Server Virtualization Performance
Marvell SR-IOV Improves Server Virtualization Performance
 
2014/09/02 Cisco UCS HPC @ ANL
2014/09/02 Cisco UCS HPC @ ANL2014/09/02 Cisco UCS HPC @ ANL
2014/09/02 Cisco UCS HPC @ ANL
 
2011 intelligent operator_panels
2011 intelligent operator_panels2011 intelligent operator_panels
2011 intelligent operator_panels
 
SR-IOV benchmark
SR-IOV benchmarkSR-IOV benchmark
SR-IOV benchmark
 
Discover Optical Ethernet V5
Discover Optical Ethernet V5Discover Optical Ethernet V5
Discover Optical Ethernet V5
 
Widyatama.lecture.applied networking.iv-week-13.future internet networking
Widyatama.lecture.applied networking.iv-week-13.future internet networkingWidyatama.lecture.applied networking.iv-week-13.future internet networking
Widyatama.lecture.applied networking.iv-week-13.future internet networking
 

Mais de The Linux Foundation

ELC2019: Static Partitioning Made Simple
ELC2019: Static Partitioning Made SimpleELC2019: Static Partitioning Made Simple
ELC2019: Static Partitioning Made SimpleThe Linux Foundation
 
XPDDS19: How TrenchBoot is Enabling Measured Launch for Open-Source Platform ...
XPDDS19: How TrenchBoot is Enabling Measured Launch for Open-Source Platform ...XPDDS19: How TrenchBoot is Enabling Measured Launch for Open-Source Platform ...
XPDDS19: How TrenchBoot is Enabling Measured Launch for Open-Source Platform ...The Linux Foundation
 
XPDDS19 Keynote: Xen in Automotive - Artem Mygaiev, Director, Technology Solu...
XPDDS19 Keynote: Xen in Automotive - Artem Mygaiev, Director, Technology Solu...XPDDS19 Keynote: Xen in Automotive - Artem Mygaiev, Director, Technology Solu...
XPDDS19 Keynote: Xen in Automotive - Artem Mygaiev, Director, Technology Solu...The Linux Foundation
 
XPDDS19 Keynote: Xen Project Weather Report 2019 - Lars Kurth, Director of Op...
XPDDS19 Keynote: Xen Project Weather Report 2019 - Lars Kurth, Director of Op...XPDDS19 Keynote: Xen Project Weather Report 2019 - Lars Kurth, Director of Op...
XPDDS19 Keynote: Xen Project Weather Report 2019 - Lars Kurth, Director of Op...The Linux Foundation
 
XPDDS19 Keynote: Unikraft Weather Report
XPDDS19 Keynote:  Unikraft Weather ReportXPDDS19 Keynote:  Unikraft Weather Report
XPDDS19 Keynote: Unikraft Weather ReportThe Linux Foundation
 
XPDDS19 Keynote: Secret-free Hypervisor: Now and Future - Wei Liu, Software E...
XPDDS19 Keynote: Secret-free Hypervisor: Now and Future - Wei Liu, Software E...XPDDS19 Keynote: Secret-free Hypervisor: Now and Future - Wei Liu, Software E...
XPDDS19 Keynote: Secret-free Hypervisor: Now and Future - Wei Liu, Software E...The Linux Foundation
 
XPDDS19 Keynote: Xen Dom0-less - Stefano Stabellini, Principal Engineer, Xilinx
XPDDS19 Keynote: Xen Dom0-less - Stefano Stabellini, Principal Engineer, XilinxXPDDS19 Keynote: Xen Dom0-less - Stefano Stabellini, Principal Engineer, Xilinx
XPDDS19 Keynote: Xen Dom0-less - Stefano Stabellini, Principal Engineer, XilinxThe Linux Foundation
 
XPDDS19 Keynote: Patch Review for Non-maintainers - George Dunlap, Citrix Sys...
XPDDS19 Keynote: Patch Review for Non-maintainers - George Dunlap, Citrix Sys...XPDDS19 Keynote: Patch Review for Non-maintainers - George Dunlap, Citrix Sys...
XPDDS19 Keynote: Patch Review for Non-maintainers - George Dunlap, Citrix Sys...The Linux Foundation
 
XPDDS19: Memories of a VM Funk - Mihai Donțu, Bitdefender
XPDDS19: Memories of a VM Funk - Mihai Donțu, BitdefenderXPDDS19: Memories of a VM Funk - Mihai Donțu, Bitdefender
XPDDS19: Memories of a VM Funk - Mihai Donțu, BitdefenderThe Linux Foundation
 
OSSJP/ALS19: The Road to Safety Certification: Overcoming Community Challeng...
OSSJP/ALS19:  The Road to Safety Certification: Overcoming Community Challeng...OSSJP/ALS19:  The Road to Safety Certification: Overcoming Community Challeng...
OSSJP/ALS19: The Road to Safety Certification: Overcoming Community Challeng...The Linux Foundation
 
OSSJP/ALS19: The Road to Safety Certification: How the Xen Project is Making...
 OSSJP/ALS19: The Road to Safety Certification: How the Xen Project is Making... OSSJP/ALS19: The Road to Safety Certification: How the Xen Project is Making...
OSSJP/ALS19: The Road to Safety Certification: How the Xen Project is Making...The Linux Foundation
 
XPDDS19: Speculative Sidechannels and Mitigations - Andrew Cooper, Citrix
XPDDS19: Speculative Sidechannels and Mitigations - Andrew Cooper, CitrixXPDDS19: Speculative Sidechannels and Mitigations - Andrew Cooper, Citrix
XPDDS19: Speculative Sidechannels and Mitigations - Andrew Cooper, CitrixThe Linux Foundation
 
XPDDS19: Keeping Coherency on Arm: Reborn - Julien Grall, Arm ltd
XPDDS19: Keeping Coherency on Arm: Reborn - Julien Grall, Arm ltdXPDDS19: Keeping Coherency on Arm: Reborn - Julien Grall, Arm ltd
XPDDS19: Keeping Coherency on Arm: Reborn - Julien Grall, Arm ltdThe Linux Foundation
 
XPDDS19: QEMU PV Backend 'qdevification'... What Does it Mean? - Paul Durrant...
XPDDS19: QEMU PV Backend 'qdevification'... What Does it Mean? - Paul Durrant...XPDDS19: QEMU PV Backend 'qdevification'... What Does it Mean? - Paul Durrant...
XPDDS19: QEMU PV Backend 'qdevification'... What Does it Mean? - Paul Durrant...The Linux Foundation
 
XPDDS19: Status of PCI Emulation in Xen - Roger Pau Monné, Citrix Systems R&D
XPDDS19: Status of PCI Emulation in Xen - Roger Pau Monné, Citrix Systems R&DXPDDS19: Status of PCI Emulation in Xen - Roger Pau Monné, Citrix Systems R&D
XPDDS19: Status of PCI Emulation in Xen - Roger Pau Monné, Citrix Systems R&DThe Linux Foundation
 
XPDDS19: [ARM] OP-TEE Mediator in Xen - Volodymyr Babchuk, EPAM Systems
XPDDS19: [ARM] OP-TEE Mediator in Xen - Volodymyr Babchuk, EPAM SystemsXPDDS19: [ARM] OP-TEE Mediator in Xen - Volodymyr Babchuk, EPAM Systems
XPDDS19: [ARM] OP-TEE Mediator in Xen - Volodymyr Babchuk, EPAM SystemsThe Linux Foundation
 
XPDDS19: Bringing Xen to the Masses: The Story of Building a Community-driven...
XPDDS19: Bringing Xen to the Masses: The Story of Building a Community-driven...XPDDS19: Bringing Xen to the Masses: The Story of Building a Community-driven...
XPDDS19: Bringing Xen to the Masses: The Story of Building a Community-driven...The Linux Foundation
 
XPDDS19: Will Robots Automate Your Job Away? Streamlining Xen Project Contrib...
XPDDS19: Will Robots Automate Your Job Away? Streamlining Xen Project Contrib...XPDDS19: Will Robots Automate Your Job Away? Streamlining Xen Project Contrib...
XPDDS19: Will Robots Automate Your Job Away? Streamlining Xen Project Contrib...The Linux Foundation
 
XPDDS19: Client Virtualization Toolstack in Go - Nick Rosbrook & Brendan Kerr...
XPDDS19: Client Virtualization Toolstack in Go - Nick Rosbrook & Brendan Kerr...XPDDS19: Client Virtualization Toolstack in Go - Nick Rosbrook & Brendan Kerr...
XPDDS19: Client Virtualization Toolstack in Go - Nick Rosbrook & Brendan Kerr...The Linux Foundation
 
XPDDS19: Core Scheduling in Xen - Jürgen Groß, SUSE
XPDDS19: Core Scheduling in Xen - Jürgen Groß, SUSEXPDDS19: Core Scheduling in Xen - Jürgen Groß, SUSE
XPDDS19: Core Scheduling in Xen - Jürgen Groß, SUSEThe Linux Foundation
 

Mais de The Linux Foundation (20)

ELC2019: Static Partitioning Made Simple
ELC2019: Static Partitioning Made SimpleELC2019: Static Partitioning Made Simple
ELC2019: Static Partitioning Made Simple
 
XPDDS19: How TrenchBoot is Enabling Measured Launch for Open-Source Platform ...
XPDDS19: How TrenchBoot is Enabling Measured Launch for Open-Source Platform ...XPDDS19: How TrenchBoot is Enabling Measured Launch for Open-Source Platform ...
XPDDS19: How TrenchBoot is Enabling Measured Launch for Open-Source Platform ...
 
XPDDS19 Keynote: Xen in Automotive - Artem Mygaiev, Director, Technology Solu...
XPDDS19 Keynote: Xen in Automotive - Artem Mygaiev, Director, Technology Solu...XPDDS19 Keynote: Xen in Automotive - Artem Mygaiev, Director, Technology Solu...
XPDDS19 Keynote: Xen in Automotive - Artem Mygaiev, Director, Technology Solu...
 
XPDDS19 Keynote: Xen Project Weather Report 2019 - Lars Kurth, Director of Op...
XPDDS19 Keynote: Xen Project Weather Report 2019 - Lars Kurth, Director of Op...XPDDS19 Keynote: Xen Project Weather Report 2019 - Lars Kurth, Director of Op...
XPDDS19 Keynote: Xen Project Weather Report 2019 - Lars Kurth, Director of Op...
 
XPDDS19 Keynote: Unikraft Weather Report
XPDDS19 Keynote:  Unikraft Weather ReportXPDDS19 Keynote:  Unikraft Weather Report
XPDDS19 Keynote: Unikraft Weather Report
 
XPDDS19 Keynote: Secret-free Hypervisor: Now and Future - Wei Liu, Software E...
XPDDS19 Keynote: Secret-free Hypervisor: Now and Future - Wei Liu, Software E...XPDDS19 Keynote: Secret-free Hypervisor: Now and Future - Wei Liu, Software E...
XPDDS19 Keynote: Secret-free Hypervisor: Now and Future - Wei Liu, Software E...
 
XPDDS19 Keynote: Xen Dom0-less - Stefano Stabellini, Principal Engineer, Xilinx
XPDDS19 Keynote: Xen Dom0-less - Stefano Stabellini, Principal Engineer, XilinxXPDDS19 Keynote: Xen Dom0-less - Stefano Stabellini, Principal Engineer, Xilinx
XPDDS19 Keynote: Xen Dom0-less - Stefano Stabellini, Principal Engineer, Xilinx
 
XPDDS19 Keynote: Patch Review for Non-maintainers - George Dunlap, Citrix Sys...
XPDDS19 Keynote: Patch Review for Non-maintainers - George Dunlap, Citrix Sys...XPDDS19 Keynote: Patch Review for Non-maintainers - George Dunlap, Citrix Sys...
XPDDS19 Keynote: Patch Review for Non-maintainers - George Dunlap, Citrix Sys...
 
XPDDS19: Memories of a VM Funk - Mihai Donțu, Bitdefender
XPDDS19: Memories of a VM Funk - Mihai Donțu, BitdefenderXPDDS19: Memories of a VM Funk - Mihai Donțu, Bitdefender
XPDDS19: Memories of a VM Funk - Mihai Donțu, Bitdefender
 
OSSJP/ALS19: The Road to Safety Certification: Overcoming Community Challeng...
OSSJP/ALS19:  The Road to Safety Certification: Overcoming Community Challeng...OSSJP/ALS19:  The Road to Safety Certification: Overcoming Community Challeng...
OSSJP/ALS19: The Road to Safety Certification: Overcoming Community Challeng...
 
OSSJP/ALS19: The Road to Safety Certification: How the Xen Project is Making...
 OSSJP/ALS19: The Road to Safety Certification: How the Xen Project is Making... OSSJP/ALS19: The Road to Safety Certification: How the Xen Project is Making...
OSSJP/ALS19: The Road to Safety Certification: How the Xen Project is Making...
 
XPDDS19: Speculative Sidechannels and Mitigations - Andrew Cooper, Citrix
XPDDS19: Speculative Sidechannels and Mitigations - Andrew Cooper, CitrixXPDDS19: Speculative Sidechannels and Mitigations - Andrew Cooper, Citrix
XPDDS19: Speculative Sidechannels and Mitigations - Andrew Cooper, Citrix
 
XPDDS19: Keeping Coherency on Arm: Reborn - Julien Grall, Arm ltd
XPDDS19: Keeping Coherency on Arm: Reborn - Julien Grall, Arm ltdXPDDS19: Keeping Coherency on Arm: Reborn - Julien Grall, Arm ltd
XPDDS19: Keeping Coherency on Arm: Reborn - Julien Grall, Arm ltd
 
XPDDS19: QEMU PV Backend 'qdevification'... What Does it Mean? - Paul Durrant...
XPDDS19: QEMU PV Backend 'qdevification'... What Does it Mean? - Paul Durrant...XPDDS19: QEMU PV Backend 'qdevification'... What Does it Mean? - Paul Durrant...
XPDDS19: QEMU PV Backend 'qdevification'... What Does it Mean? - Paul Durrant...
 
XPDDS19: Status of PCI Emulation in Xen - Roger Pau Monné, Citrix Systems R&D
XPDDS19: Status of PCI Emulation in Xen - Roger Pau Monné, Citrix Systems R&DXPDDS19: Status of PCI Emulation in Xen - Roger Pau Monné, Citrix Systems R&D
XPDDS19: Status of PCI Emulation in Xen - Roger Pau Monné, Citrix Systems R&D
 
XPDDS19: [ARM] OP-TEE Mediator in Xen - Volodymyr Babchuk, EPAM Systems
XPDDS19: [ARM] OP-TEE Mediator in Xen - Volodymyr Babchuk, EPAM SystemsXPDDS19: [ARM] OP-TEE Mediator in Xen - Volodymyr Babchuk, EPAM Systems
XPDDS19: [ARM] OP-TEE Mediator in Xen - Volodymyr Babchuk, EPAM Systems
 
XPDDS19: Bringing Xen to the Masses: The Story of Building a Community-driven...
XPDDS19: Bringing Xen to the Masses: The Story of Building a Community-driven...XPDDS19: Bringing Xen to the Masses: The Story of Building a Community-driven...
XPDDS19: Bringing Xen to the Masses: The Story of Building a Community-driven...
 
XPDDS19: Will Robots Automate Your Job Away? Streamlining Xen Project Contrib...
XPDDS19: Will Robots Automate Your Job Away? Streamlining Xen Project Contrib...XPDDS19: Will Robots Automate Your Job Away? Streamlining Xen Project Contrib...
XPDDS19: Will Robots Automate Your Job Away? Streamlining Xen Project Contrib...
 
XPDDS19: Client Virtualization Toolstack in Go - Nick Rosbrook & Brendan Kerr...
XPDDS19: Client Virtualization Toolstack in Go - Nick Rosbrook & Brendan Kerr...XPDDS19: Client Virtualization Toolstack in Go - Nick Rosbrook & Brendan Kerr...
XPDDS19: Client Virtualization Toolstack in Go - Nick Rosbrook & Brendan Kerr...
 
XPDDS19: Core Scheduling in Xen - Jürgen Groß, SUSE
XPDDS19: Core Scheduling in Xen - Jürgen Groß, SUSEXPDDS19: Core Scheduling in Xen - Jürgen Groß, SUSE
XPDDS19: Core Scheduling in Xen - Jürgen Groß, SUSE
 

Último

Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityIES VE
 
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...itnewsafrica
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxLoriGlavin3
 
Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Farhan Tariq
 
Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsRavi Sanghani
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...
Unleashing Real-time Insights with ClickHouse_ Navigating the Landscape in 20...Alkin Tezuysal
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxLoriGlavin3
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Strongerpanagenda
 
Generative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdfGenerative Artificial Intelligence: How generative AI works.pdf
Generative Artificial Intelligence: How generative AI works.pdfIngrid Airi González
 
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotesMuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotes
MuleSoft Online Meetup Group - B2B Crash Course: Release SparkNotesManik S Magar
 
Emixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native developmentEmixa Mendix Meetup 11 April 2024 about Mendix Native development
Emixa Mendix Meetup 11 April 2024 about Mendix Native developmentPim van der Noll
 
Zeshan Sattar- Assessing the skill requirements and industry expectations for...
Zeshan Sattar- Assessing the skill requirements and industry expectations for...Zeshan Sattar- Assessing the skill requirements and industry expectations for...
Zeshan Sattar- Assessing the skill requirements and industry expectations for...itnewsafrica
 
Bridging Between CAD & GIS: 6 Ways to Automate Your Data Integration
Bridging Between CAD & GIS:  6 Ways to Automate Your Data IntegrationBridging Between CAD & GIS:  6 Ways to Automate Your Data Integration
Bridging Between CAD & GIS: 6 Ways to Automate Your Data Integrationmarketing932765
 
Glenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security ObservabilityGlenn Lazarus- Why Your Observability Strategy Needs Security Observability
Glenn Lazarus- Why Your Observability Strategy Needs Security Observabilityitnewsafrica
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPathCommunity
 
Microsoft 365 Copilot: How to boost your productivity with AI – Part one: Ado...
Microsoft 365 Copilot: How to boost your productivity with AI – Part one: Ado...Microsoft 365 Copilot: How to boost your productivity with AI – Part one: Ado...
Microsoft 365 Copilot: How to boost your productivity with AI – Part one: Ado...Nikki Chapple
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxLoriGlavin3
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfLoriGlavin3
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch TuesdayIvanti
 

Último (20)

Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a reality
 
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
 
Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...
 
Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and Insights
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024

Xensummit2009 Io Virtualization Performance

  • 1. IO Virtualization Performance HUANG Zhiteng (zhiteng.huang@intel.com)
  • 2. Agenda
    • IO Virtualization Overview
      • Software solution
      • Hardware solution
      • IO performance trend
    • How IO Virtualization Performed in Micro Benchmark
      • Network
      • Disk
    • Performance in Enterprise Workloads
      • Web Server: PV, VT-d and native performance
      • Database: VT-d vs. native performance
      • Consolidated workload: SR-IOV benefit
    • Direct IO (VT-d) Overhead Analysis
  • 3. IO Virtualization Overview
    IO virtualization enables VMs to utilize the input/output resources of the hardware platform. In this session we cover network and storage.
    Software solutions (the two we are familiar with on Xen):
    • Emulated devices (QEMU): good compatibility, very poor performance.
    • Para-virtualized devices: need driver support in the guest; provide optimized performance compared to QEMU.
    Both require the participation of Dom0 (the driver domain) to serve a VM's IO request.
  • 4. IO Virtualization Overview – Hardware Solution
    Hardware offers three kinds of assists to accelerate IO; a single technology or a combination of them can be used to address various usages.
    • VMDq (Virtual Machine Device Queue), network only: separate Rx/Tx queue pairs of the NIC for each VM, with a software "switch". Requires specific OS and VMM support.
    • Direct IO (VT-d), VM exclusively owns the device: improved IO performance through direct assignment of an I/O device to an unmodified or para-virtualized VM.
    • SR-IOV (Single Root I/O Virtualization), one device, multiple Virtual Functions: changes to I/O device silicon to support multiple PCI device IDs, so one I/O device can serve multiple directly assigned guests. Requires VT-d.
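For concreteness, direct assignment of a device on Xen of this era is typically set up by hiding the device from Dom0 with pciback and listing its PCI address in the guest configuration. A minimal sketch, assuming a hypothetical device at BDF 0000:04:00.0 (the deck does not give the exact setup):

```shell
# Hide the device from Dom0 so Xen can assign it to a guest.
# The BDF 0000:04:00.0 is a placeholder; list real devices with lspci.
modprobe pciback hide='(0000:04:00.0)'

# In the HVM guest's configuration file, hand the device to the VM:
#   pci = ['0000:04:00.0']
# Inside the guest the device then appears as an ordinary PCI device,
# with its DMA remapped by the VT-d IOMMU.
```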
  • 5. IO Virtualization Overview – Trend
    Much higher throughput, denser IO capacity:
    • 40Gb/s and 100Gb/s Ethernet: Draft 3.0 scheduled for release in Nov. 2009, standard approval in 2010**.
    • Fibre Channel over Ethernet (FCoE): unified IO consolidates network (IP) and storage (SAN) onto a single connection.
    • Solid State Drives (SSD): hundreds of MB/s of bandwidth and >10,000 IOPS from a single device*.
    • PCIe 2.0: doubles the bit rate from 2.5GT/s to 5.0GT/s.
    * http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403
    ** See http://en.wikipedia.org/wiki/100_Gigabit_Ethernet
  • 6. How IO Virtualization Performed in Micro Benchmark – Network
    iperf with a dual-port 10Gb/s Ethernet NIC was used to benchmark TCP bandwidth of the different device models(*).
    • Transmit: PV guest 4.68 Gb/s; HVM + PV driver 8.47 Gb/s (1.81x); HVM + VT-d 9.54 Gb/s (1.13x over PV driver); 30 VMs + SR-IOV 19 Gb/s.
    • Receive: PV guest 1.46 Gb/s; HVM + PV driver 3.10 Gb/s (2.12x); HVM + VT-d 9.43 Gb/s (3.04x over PV driver).
    Thanks to VT-d, the VM easily achieved 10GbE line rate in both directions with relatively much lower resource consumption. With SR-IOV, we were able to get 19Gb/s transmit performance with 30 VFs assigned to 30 VMs.
    * HVM + VT-d uses a 2.6.27 kernel while PV guest and HVM + PV driver use 2.6.18. We turned off multiqueue support in the NIC driver for HVM + VT-d because the 2.6.18 kernel has no multi-TX-queue support; thus for the iperf test there was only one TX/RX queue in the NIC and all interrupts were sent to a single physical core. ITR (Interrupt Throttle Rate) was set to 8000 in all cases.
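As a rough sketch of how such a TCP bandwidth measurement is driven (the deck does not give the exact command line; the host address, stream count and duration below are assumptions):

```shell
# Inside the guest: run the iperf server.
iperf -s &

# On the traffic generator: 8 parallel TCP streams to the guest
# for 60 seconds ("guest-ip" is a placeholder address).
iperf -c guest-ip -P 8 -t 60
```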
  • 7. How IO Virtualization Performed in Micro Benchmark – Network (cont.)
    Packet transmit rate is another essential aspect of high-throughput networking. Using the Linux kernel packet generator (pktgen) with small UDP packets (128 bytes), HVM + VT-d sent 3.90 million packets/s with 1 TX queue and 8.15 million packets/s with 4 TX queues (2.1x). PV guest performance (0.33 million packets/s, 11.7x below single-queue VT-d) was far behind due to its long packet-processing path.
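pktgen is driven through its /proc interface; a minimal single-queue sketch of a 128-byte UDP flood, assuming a device named eth0 and placeholder destination addresses (option names follow the stock kernel pktgen interface, not anything given in the deck):

```shell
# Load pktgen and bind eth0 to the kernel thread for CPU 0.
modprobe pktgen
echo "add_device eth0" > /proc/net/pktgen/kpktgend_0

# Configure 128-byte UDP packets; count 0 means "send until stopped".
echo "pkt_size 128" > /proc/net/pktgen/eth0
echo "count 0" > /proc/net/pktgen/eth0
echo "dst 192.168.0.2" > /proc/net/pktgen/eth0
echo "dst_mac 00:11:22:33:44:55" > /proc/net/pktgen/eth0

# Start transmitting; per-device results (packets/s) are read back
# from the same /proc file after the run is stopped.
echo "start" > /proc/net/pktgen/pgctrl
cat /proc/net/pktgen/eth0
```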
  • 8. How IO Virtualization Performed in Micro Benchmark – Disk IO
    We measured disk bandwidth with sequential reads and IOPS with random reads (IOmeter) to check block device performance.
    • Bandwidth: PV guest 1,711 MB/s; HVM + VT-d 4,911 MB/s (2.9x).
    • IOPS: PV guest 7,056; HVM + VT-d 18,725 (2.7x).
    HVM + VT-d outperformed the PV guest by roughly 3x in both tests.
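The deck measured these with IOmeter; on a Linux guest the same two metrics can be sketched with dd and fio instead (a swapped-in tool, not the deck's method; the device path, run time and queue depth below are assumptions):

```shell
# Sequential read bandwidth, bypassing the page cache with O_DIRECT.
# /dev/sdb is a placeholder block device.
dd if=/dev/sdb of=/dev/null bs=1M count=4096 iflag=direct

# Random 4KB reads to estimate IOPS, using fio with libaio.
fio --name=randread --filename=/dev/sdb --rw=randread \
    --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based
```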
  • 9. Performance in Enterprise Workloads – Web Server
    The Web Server workload simulates a support website where connected users browse and download files. We measure the maximum number of simultaneous user sessions the server can support while satisfying the QoS criteria.
    • Sessions: PV guest 5,000; HVM + PV driver 9,000 (1.8x); HVM + VT-d 24,500 (2.7x over PV driver).
    Only HVM + VT-d was able to push the server's utilization to ~100%. The PV solutions hit a bottleneck: they failed to pass QoS while CPU utilization was still below 70%.
  • 10. Performance in Enterprise Workloads – Database
    Decision-support DB requires high disk bandwidth, while OLTP DB is IOPS-bound and requires a certain amount of network bandwidth to connect to clients.
    • Decision support: native 11,443 QphH; HVM + VT-d 10,762 QphH (94.06% of native).
    • OLTP: native 199.71; HVM + VT-d storage & NIC 184.08 (92.2%); HVM + VT-d storage only 127.56 (63.9%).
    The HVM + VT-d combination achieved >90% of native performance in these two DB workloads.
  • 11. Performance in Enterprise Workloads – Consolidation with SR-IOV
    The workload consolidates multiple tiles of servers onto the same physical machine. One tile consists of 1 Web Server instance, 1 J2EE AppServer and 1 Mail Server, 6 VMs altogether. It is a complex workload that consumes CPU, memory, disk and network.
    The PV solution could only support 4 tiles on a two-socket server and failed to pass the Web Server QoS criteria before saturating the CPU. As a pilot, we enabled an SR-IOV NIC for the Web Server. This brought a >49% performance increase (ratio 1.49 vs. 1.00) and allowed the system to support two more tiles (12 additional VMs).
  • 12. Direct IO (VT-d) Overhead
    Breaking down CPU utilization and Xen cycles for the VT-d cases, Xen accounted for about 5.94% of cycles in the disk bandwidth test and 11.88% in the Web Server test. Within Xen, APIC access and interrupt delivery (interrupt windows, Xen INTR, APIC accesses, IO instruction exits) consumed the most cycles. (Note that some interrupts arrive while the CPU is in HLT and are therefore not counted.)
    Across various workloads we have seen Xen bring in about 5~12% overhead, mainly spent on serving interrupts. The Intel OTC team has developed a patch set to eliminate part of the Xen software overhead; check out Xiaowei's session for details.
  • 13. CREDIT
    Great thanks to DUAN, Ronghui and XIANG, Kai for providing the VT-d network and SR-IOV data.
  • 14. QUESTIONS? SSG/SSD/SPA/PRC Scalability Lab
  • 15. BACKUP
  • 16. Configuration
    Hardware (Intel® Nehalem-EP server system):
    • CPU: 2-socket Nehalem, 2.66 GHz, 8MB LLC cache, C0 stepping; hardware prefetchers OFF, Turbo mode OFF, EIST OFF.
    • Memory: 64GB (16x 4GB DDR3 1066MHz), 32GB on each node.
    • NIC: Intel 10Gb XF SR (82598EB); 2 single-port NICs installed on the machine and one dual-port NIC installed on the server.
    • Storage: 3x LSI Logic MegaRAID SAS 8888elp RAID bus controllers; 6 disk arrays (each with 12x 70GB SAS HDD).
    Software: Xen changeset 18771 for the network/disk micro benchmarks, 19591 for the SR-IOV test.
    VM configuration per test case:
    • Network micro benchmark: 4 vCPU, 64GB memory
    • Storage micro benchmark: 2 vCPU, 12GB memory
    • Web Server: 4 vCPU, 64GB memory
    • Database: 4 vCPU, 12GB memory