4.1 New Features: Network
Network Receive Side Scaling (RSS) Support Enhancements
Improved RSS support for guests via enhancements to VMXNET3.
Enhanced VM-to-VM Communication
Inter-VM throughput is improved when VMs communicate directly with one another over the same virtual switch on the same ESX/ESXi host (inter-VM traffic). This is achieved through an asynchronous TX processing architecture in the networking stack, which leverages additional physical CPU cores for processing inter-VM traffic.
VM-to-VM throughput improved by 2x, to up to 19 Gbps; roughly 10% improvement when going out to the physical network.
Other Improvements – Network Performance
NetQueue Support Extension: NetQueue support is extended to include hardware-based LRO (large receive offload), further improving CPU and throughput performance in 10 GE environments.
LRO (Large Receive Offload) support
Each received packet causes the CPU to react, so lots of small packets arriving from the physical media result in high CPU load.
LRO merges received packets and passes them up the stack in one batch.
Receive tests indicate a 5-30% improvement in throughput and a 40-60% decrease in CPU cost.
Enabled for pNICs: Broadcom's bnx2x and Intel's Niantic.
Enabled for vNICs vmxnet2 and vmxnet3, but only for recent Linux guest OSes.
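As a quick way to confirm whether LRO is active for a vmxnet2/vmxnet3 vNIC, you can inspect the offload settings from inside a recent Linux guest. This is a hedged sketch: the interface name (eth0) is a placeholder, and whether the large-receive-offload line appears depends on the guest kernel and driver version.
# Inside the Linux guest: show offload settings for the vmxnet3 interface
ethtool -k eth0 | grep -i large-receive-offload
# Typical output on kernels/drivers that expose LRO:
# large-receive-offload: on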
IPv6—Progress towards full NIST “Host” Profile Compliance
VI 3 (ESX 3.5): IPv6 supported in guests.
vSphere 4.0: IPv6 support for ESX 4, vSphere Client, vCenter, vMotion, and IP storage (iSCSI, NFS — EXPERIMENTAL). Not supported for vSphere vCLI, HA, FT, Auto Deploy.
vSphere 4.1: NIST compliance with the “Host” Profile (http://www.antd.nist.gov/usgv6/usgv6-v1.pdf), including IPSEC, IKEv2, etc. Not supported for vSphere vCLI, HA, FT.
Cisco Nexus 1000V—Planned Enhancements
Easier software upgrade: In-Service Software Upgrade (ISSU) for VSM and VEM; binary compatibility
Weighted Fair Queuing (software scheduler)
Increased scalability, in line with vDS scalability
SPAN to and from Port Profile
VLAN pinning to pNIC
Installer app for VSM HA and L3 VEM/VSM communication
Start of EAL4 Common Criteria certification
4094 active VLANs
Scale Port Profiles > 512
Always check with Cisco for the latest information.
Network I/O Control
Network Traffic Management—Emergence of 10 GigE
With 1 GigE pNICs, bandwidth is assured by dedicated physical NICs per traffic type (iSCSI, FT, vMotion, NFS, VM traffic).
With 10 GigE pNICs, traffic is typically converged onto two 10 GigE NICs, so traffic types compete: who gets what share of the vmnic?
Some traffic types and flows could dominate others through oversubscription.
Traffic Shaping
Disadvantages of traffic shaping:
Limits are fixed: even if bandwidth is available, it will not be used for other services, and bandwidth cannot be guaranteed without limiting other traffic (like reservations).
VMware recommended separate pNICs for iSCSI / NFS / vMotion / COS to have enough bandwidth available for these traffic types, but customers do not want to waste 8-9 Gbit/s if a pNIC is dedicated to vMotion.
Instead of six 1 Gbit pNICs, customers might have two 10 Gbit pNICs sharing traffic.
Guaranteed bandwidth for vMotion limits bandwidth for other traffic even when no vMotion is active.
Traffic shaping is only a static way to control traffic.
(Diagram: a 10 Gbit/s NIC with fixed slices for iSCSI, COS, vMotion and VMs, leaving parts unused.)
Network I/O Control
Goals:
Isolation: one flow should not dominate others.
Flexible partitioning: allow isolation and overcommitment.
Guarantee service levels when flows compete.
Note: this feature is only available with the vDS (Enterprise Plus).
Overall Design
Parameters: Limits and Shares
Limits specify the absolute maximum bandwidth for a flow over a team.
Specified in Mbps; traffic from a given flow will never exceed its specified limit (egress from the ESX host).
Shares specify the relative importance of an egress flow on a vmnic, i.e. a guaranteed minimum.
Specified in abstract units from 1-100; presets for Low (25 shares), Normal (50 shares), High (100 shares), plus Custom.
Bandwidth is divided between flows based on their relative shares.
Controls apply to output from the ESX host: shares apply to a given vmnic, limits apply across the team.
Configuration from vSphere Client
Limits: maximum bandwidth for a traffic class/type.
Shares: guaranteed minimum service level.
vDS-only feature!
Preconfigured traffic classes, e.g. VM traffic in this example is limited to a maximum of 500 Mbps (aggregate of all VMs), with a minimum of 50/400 of the pNIC bandwidth (50/(100+100+50+50+50+50)).
Resource Management Shares Normal = 50 Low = 25 High = 100 Custom = any values between 1 and 100 Default values VM traffic = High (100) All others = Normal (50) No limit set
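A minimal sketch of how share-based splitting works out under contention, using hypothetical flows on a single saturated 10 Gbit/s vmnic (the flow mix and numbers are illustrative, not defaults):
# Illustrative only: divide a saturated 10 Gbit/s vmnic between three flows by shares
LINK_MBPS=10000
VM_SHARES=100; VMOTION_SHARES=50; ISCSI_SHARES=50
TOTAL=$((VM_SHARES + VMOTION_SHARES + ISCSI_SHARES))                 # 200
echo "VM traffic: $((LINK_MBPS * VM_SHARES / TOTAL)) Mbps"           # 5000 Mbps
echo "vMotion:    $((LINK_MBPS * VMOTION_SHARES / TOTAL)) Mbps"      # 2500 Mbps
echo "iSCSI:      $((LINK_MBPS * ISCSI_SHARES / TOTAL)) Mbps"        # 2500 Mbps
# When a flow is idle its bandwidth is redistributed to the active flows;
# a limit (in Mbps) would still cap a flow even when spare bandwidth exists.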
Implementation
Each host calculates the shares independently. One host might have only 1 Gbit/s NICs while another already has 10 Gbit/s NICs, so the resulting guaranteed bandwidth differs.
Only outgoing traffic is controlled. Inter-switch traffic is not controlled; only the pNICs are affected.
Limits are still valid even if the pNIC is opted out.
The scheduler uses a static "packets-in-flight" window: inFlightPackets are packets that are actually in flight and in the transmit process in the pNIC. The window size is 50 kB, so no more than 50 kB are in flight (to the wire) at a given moment.
Excluding a physical NIC
Physical NICs of a host can be excluded from network resource management:
Host configuration -> Advanced Settings -> Net -> Net.ResMgmtPnicOptOut
This excludes the specified NICs from the shares calculation, but not from limits!
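A hedged example of reading and setting this advanced option from the service console or vCLI; the comma-separated value format is an assumption, so verify against your build before relying on it:
# Read the current opt-out list (empty by default)
esxcfg-advcfg -g /Net/ResMgmtPnicOptOut
# Exclude vmnic0 and vmnic1 from the shares calculation (limits still apply)
esxcfg-advcfg -s "vmnic0,vmnic1" /Net/ResMgmtPnicOptOut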
Results With QoS in place, performance is less impacted
Load-Based Teaming
Current Teaming Policy
In vSphere 4.0, three policies: Port ID, IP hash, MAC hash.
Disadvantages: static mapping, no load balancing, could cause unbalanced load on pNICs, and does not differentiate between pNIC bandwidths.
NIC Teaming Enhancements—Load Based Teaming (LBT)
Note: the adjacent physical switch configuration is the same as for other teaming types (except IP-hash), i.e. the same L2 domain.
LBT is invoked if saturation is detected on Tx or Rx (>75% mean utilization over a 30-second period). The long 30-second period avoids MAC address flapping issues with adjacent physical switches.
Load Based Teaming
Initial mapping: like Port ID, a balanced mapping between ports and pNICs; the mapping is not based on load (as initially no load exists).
Adjusting the mapping: based on time frames; the load on a pNIC during a time frame is taken into account. If load is unbalanced, one VM (to be precise: the vSwitch port) gets reassigned to a different pNIC.
Parameters: time frame and load threshold. Default time frame 30 seconds, minimum value 10 seconds; default load threshold 75%, possible values 0-100. Both are configurable through a command-line tool (only for debug purposes, not for customers).
Load Based Teaming
Advantages: dynamic adjustment to load; different NIC speeds are taken into account because balancing is based on % load, so you can have a mix of 1 Gbit, 10 Gbit and even 100 Mbit NICs.
Dependencies: LBT works independently of other algorithms and does not take limits or reservations from traffic shaping or Network I/O Control into account. The algorithm is based on the local host only; DRS has to take care of cluster-wide balancing.
Implemented on the vNetwork Distributed Switch only; edit the dvPortGroup to change the setting.
4.1 New Features: Storage
NFS & HW iSCSI in vSphere 4.1
Improved NFS performance: up to 15% reduction in CPU cost for both read & write, and up to 15% improvement in throughput for both read & write.
Broadcom iSCSI HW offload support: 89% improvement in CPU read cost, 83% improvement in CPU write cost!
VMware Data Recovery: New Capabilities (VMware vSphere 4.1)
Backup and Recovery Appliance
File Level Restore client for Linux VMs
Destination Storage
Improved deduplication performance
vSphere Client Plug-In
Improved usability and user experience
VMware vCenter
ParaVirtual SCSI (PVSCSI)
PVSCSI is now supported when used with these guest OSes: Windows XP (32-bit and 64-bit), Vista (32-bit and 64-bit), Windows 7 (32-bit and 64-bit).
Driver floppy images are under /vmimages/floppies. Point the VM's floppy drive at the .FLP file and press F6 during Windows installation to read the driver from the floppy.
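For reference, a quick look at the floppy image directory on the host; the exact filenames are typical examples and may differ by build, so treat them as an assumption:
# On the ESX/ESXi host (or browse the same folder via the datastore browser)
ls /vmimages/floppies/
# e.g. pvscsi-Windows2003.flp  pvscsi-Windows2008.flp  pvscsi-WindowsXP.flp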
ParaVirtual SCSI
A VM configured with a PVSCSI adapter can be part of a Fault Tolerance cluster.
PVSCSI adapters already support hot-plugging and hot-unplugging of virtual devices, but the guest OS is not notified of changes on the SCSI bus. Consequently, any addition or removal of devices needs to be followed by a manual rescan of the bus from within the guest.
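In a Linux guest the manual rescan can be triggered through sysfs; a minimal sketch, assuming the PVSCSI adapter shows up as host0 (the host number depends on the guest configuration):
# Inside the Linux guest: rescan the PVSCSI adapter's SCSI bus after a hot-add/remove
echo "- - -" > /sys/class/scsi_host/host0/scan
# In a Windows guest the equivalent is Disk Management -> Rescan Disks.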
Storage IO Control
The I/O Sharing Problem
A low-priority VM can limit I/O bandwidth for high-priority VMs. Storage I/O allocation should be in line with VM priorities.
(Diagram: Microsoft Exchange, online store and data mining VMs sharing a datastore; what you see vs. what you want to see.)
Solution: Storage I/O Control
(Diagram: Microsoft Exchange and online store VMs with high CPU, memory and I/O shares, and a data mining VM with low CPU, memory and I/O shares, sharing 32 GHz / 16 GB of compute and Datastore A.)
Setting I/O Controls
Enabling Storage I/O Control
Enabling Storage I/O Control Click the Storage I/O Control ‘Enabled’ checkbox to turn the feature on for that volume.
Enabling Storage I/O Control
A congestion (latency) threshold is configured per datastore. If latency rises above this value, Storage I/O Control kicks in and prioritizes each VM's I/O based on its shares value.
Allocate I/O Resources
Shares translate into ESX I/O queue slots: VMs with more shares are allowed to send more I/Os at a time.
Slot assignment is dynamic, based on VM shares and current load.
The total number of slots available is also dynamic, based on the level of congestion.
(Diagram: data mining, Microsoft Exchange and online store VMs with I/Os in flight to the storage array.)
Experimental Setup
Performance without Storage I/O Control (default)
(Chart: per-VM values of 14%, 21%, 42% and 15%.)
Performance with Storage I/O Control (congestion threshold: 25 ms)
(Chart: per-VM values of 14%, 22% and 8%, with VMs assigned 500, 500, 750, 750 and 4000 shares.)
Storage I/O Control in Action: Example #2 Two Windows VMs running SQL Server on two hosts 250 GB data disk, 50 GB log disk VM1: 500 shares VM2: 2000 shares Result: VM2 with higher shares gets more orders/min & lower latency!
Step 1: Detect Congestion
Throughput (IOPS or MB/s) levels off as total datastore load (number of I/Os in flight) grows: there is no benefit beyond a certain load.
Congestion signal: ESX-to-array response time > threshold. Default threshold: 35 ms. VMware will likely recommend different defaults for SSD and SATA.
Changing the default threshold (not usually recommended):
Low latency goal: set it lower if latency is critical for some VMs.
High throughput goal: set it close to the IOPS-maximization point.
Storage I/O Control Internals
Storage I/O Control uses two I/O schedulers.
The first is the local VM I/O scheduler, SFQ (the start-time fair queuing scheduler). This scheduler ensures share-based allocation of I/O resources between VMs on a per-host basis.
The second is the distributed I/O scheduler for ESX hosts, PARDA (Proportional Allocation of Resources for Distributed storage Access).
PARDA:
carves out the array queue amongst all the VMs that are sending I/O to the datastore on the array;
adjusts the per-host, per-datastore queue size (aka LUN queue / device queue) depending on the sum of the per-VM shares on the host;
communicates this adjustment to each ESX host via VSI nodes.
ESX servers also share cluster-wide statistics with each other via a stats file on the datastore.
Storage I/O Control Architecture
(Diagram: per-host SFQ schedulers feeding host-level issue queues whose lengths are varied dynamically by PARDA, in front of the array queue on the storage array.)
Requirements
Storage I/O Control is supported on FC or iSCSI storage. NFS datastores are not supported. It is not supported on datastores with multiple extents.
Arrays with automated storage tiering: automated storage tiering is the ability of an array (or group of arrays) to automatically migrate LUNs/volumes or parts of LUNs/volumes to different types of storage media (SSD, FC, SAS, SATA) based on user-set policies and current I/O patterns. Before using Storage I/O Control on datastores backed by arrays with automated storage tiering, check the VMware Storage/SAN Compatibility Guide to verify whether your array has been certified as compatible with Storage I/O Control.
No special certification is required for arrays that do not have such an automatic migration/tiering feature, including those that provide the ability to manually migrate data between different types of storage media.
Hardware-Assist Storage Operation Formally known as vStorage API for Array Integration
vStorage APIs for Array Integration (VAAI)
Improves performance by leveraging efficient array-based operations as an alternative to host-based solutions. The three primitives are:
Full Copy: an XCOPY-like function to offload copy work to the array.
Write Same: speeds up zeroing of blocks or writing repeated content.
Atomic Test and Set: an alternative to locking the entire LUN.
These help functions such as Storage vMotion and provisioning VMs from template, improve thin-provisioned disk performance, and improve VMFS shared storage pool scalability.
Notes: requires firmware from storage vendors (6 participating); supports block-based storage only. NFS is not yet supported in 4.1.
Array Integration Primitives: Introduction
Atomic Test & Set (ATS): a mechanism to modify a disk sector, improving ESX performance when doing metadata updates.
Clone Blocks / Full Copy / XCOPY: full copy of blocks, with ESX guaranteed full space access to the blocks. The default offloaded clone size is 4 MB.
Zero Blocks / Write Same: writes zeroes. This addresses the issue of time falling behind in a VM when the guest operating system writes to previously unwritten regions of its virtual disk (http://kb.vmware.com/kb/1008284), and improves MSCS solutions in virtualized environments where the virtual disk must be zeroed out. The default zeroing size is 1 MB.
Hardware Acceleration All vStorage support will be grouped into one attribute, called "Hardware Acceleration".  Not Supported implies one or more Hardware Acceleration primitives failed. Unknown implies Hardware Acceleration primitives have not yet been attempted.
VM Provisioning from Template with Full Copy
Benefits: reduce installation time; standardize to ensure efficient management, protection & control.
Challenges: requires a full data copy. A 100 GB template (10 GB to copy) takes 5-20 minutes; FT requires additional zeroing of blocks.
Improved solution: use the array's native copy/clone & zeroing functions, for up to a 10-20x speedup in provisioning time.
Storage vMotion with Array Full Copy Function  Benefits Zero-downtime migration Eases array maintenance, tiering, load balancing, upgrades, space mgmt  Challenges Performance impact on host, array, network Long migration time (0.5 - 2.5 hrs for 100GB VM) Best practice: use infrequently  Improved solution Use array’s native copy/clone functionality
VAAI Speeds Up Storage vMotion - Example
Without VAAI: 42:27 - 39:12 = 2 min 21 sec (141 seconds).
With VAAI: 33:04 - 32:37 = 27 sec.
141 sec vs. 27 sec.
Copying Data – Optimized Cloning with VAAI
Much less time: up to 95% reduction.
Dramatic reduction in load on servers, network and storage.
VMFS acquires on-disk locks (traditionally by reserving the whole LUN) for operations such as:
Moving a VM with vMotion
Creating a new VM or deploying a VM from a template
Powering a VM on or off
Creating a template
Creating or deleting a file, including snapshots
A new VAAI feature, atomic_test_and_set, allows the ESX Server to offload the management of the required locks to the storage and avoids locking the entire VMFS file system.
VMFS Scalability with Atomic Test and Set (ATS)
Makes VMFS more scalable overall by offloading the block-locking mechanism. The Atomic Test and Set (ATS) capability provides an alternative to SCSI reservations for protecting VMFS metadata from being written to by two separate ESX Servers at the same time.
(Diagrams: normal VMware locking without ATS vs. enhanced VMware locking with ATS.)
For more details on VAAI
The vSphere 4.1 documentation also describes these features in the ESX Configuration Guide, Chapter 9 (pages 124-125), listed in the TOC as "Storage Hardware Acceleration".
Three settings under Advanced Settings:
DataMover.HardwareAcceleratedMove - Full Copy
DataMover.HardwareAcceleratedInit - Write Same
VMFS3.HardwareAcceleratedLocking - Atomic Test and Set
Additional collateral planned for release after GA: frequently asked questions, datasheet or webpage content.
Partners include: Dell/EQL, EMC, HDS, HP, IBM and NetApp.
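A hedged example of querying and toggling these options from the console or vCLI, using the option names listed above (1 = enabled, 0 = disabled):
# Query current values
esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking
# Disable Full Copy offload, e.g. for troubleshooting
esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove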
Requirements The VMFS data mover will not leverage hardware offloads, and will use software data movement instead, in the following cases:  If the source and destination VMFS volumes have different block size; in such situations data movement will fall back to the generic FSDM layer, which will only do software data movement.  If the source file type is RDM and the destination file type is non-RDM (regular file)  If the source VMDK type is eagerzeroedthick and the destination VMDK type is thin.  If either source or destination VMDK is any sort of sparse or hosted format.  If the logical address and/or transfer length in the requested operation are not aligned to the minimum alignment required by the storage device.
VMFS Data Movement Caveats
VMware supports VAAI primitives on VMFS volumes with multiple LUNs/extents if they are all on the same array and the array supports offloading.
VMware does not support VAAI primitives on VMFS volumes with multiple LUNs/extents that span different arrays, even if all arrays support offloading.
Hardware cloning between arrays (even within the same VMFS volume) won't work, so it falls back to software data movement.
vSphere 4.1 New Features: Management
Management – New Features Summary
vCenter: 32-bit to 64-bit data migration, enhanced scalability, faster response time
Update Manager
Host Profile enhancements
Orchestrator
Active Directory support (host and vMA)
VMware Converter: Hyper-V import; Windows Server 2008 R2 and Windows 7 conversion
Virtual Serial Port Concentrator
Scripting & Automation Host Profiles, Orchestrator, vMA, CLI, PowerCLI
Summary Host Profiles VMware Orchestrator VMware vMA PowerShell esxtop vscsiStats VMware Tools
Host Profiles Enhancements
Host Profiles now cover: Cisco support, PCI device ordering (support for selecting NICs), iSCSI support, admin password (setting the root password), logging on the host.
Log file: C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\Logs\pyVmomiServer.log
Configuration not covered by Host Profiles: licensing, vDS policy configuration (however you can do non-policy vDS configuration), iSCSI, multipathing.
Host Profiles Enhancements
New host services covered in vSphere 4.1 (compared with vSphere 4.0): lbtd, lsassd (part of AD; see the AD presentation), lwiod (part of AD), netlogond (part of AD).
Orchestrator Enhancements
Provides a client and server for 64-bit installations, with an optional 32-bit client.
Performance enhancements due to the 64-bit installation.
VMware Tools Command Line Utility
This feature provides an alternative to the VMware Tools control panel (the GUI dialog box). The command-line toolbox allows administrators to automate the toolbox functionality by writing their own scripts.
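A hedged sketch of what scripting against the command-line toolbox looks like; the binary name and subcommands shown (vmware-toolbox-cmd, timesync, stat) are taken from later Tools builds and may differ slightly in this release, so verify on your guest first.
# Inside the guest, with VMware Tools installed
vmware-toolbox-cmd timesync status      # query host time synchronization
vmware-toolbox-cmd timesync enable      # turn it on from a script
vmware-toolbox-cmd stat hosttime        # example statistic query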
vSphere Management Assistant (vMA) A convenient place to perform administration Virtual Appliance packaged as an OVF Distributed, maintained and supported by VMware  Not included with ESXi – must be downloaded separately The environment has the following pre-installed: 64-bit Enterprise Linux OS VMware Tools Perl Toolkit vSphere Command Line Interface (VCLI) JRE (to run applications built with the vSphere SDK) VI Fast Pass (authentication service for scripts) VI Logger (log aggregator)
vMA Improvements in 4.1
Improved authentication capability: Active Directory support.
Transition from RHEL to CentOS.
Security: the hole that exposed clear-text passwords on ESX(i) or vCenter hosts when using vifpinit (vi-fastpass) is fixed.
vMA as netdump server: you can configure an ESXi host to send its network core dump to a remote server in case of a crash or panic. Each ESXi host must be configured to write the core dump.
For Tech Partner: VMware CIM API What it is: for developers building management applications. With the VMware CIM APIs, developers can use standards-based CIM-compliant applications to manage ESX/ESXi hosts. The VMware Common Information Model (CIM) APIs allow you to: view VMs and resources using profiles defined by the Storage Management Initiative Specification (SMI-S) manage hosts using the System Management Architecture for Server Hardware (SMASH) standard. SMASH profiles allow CIM clients to monitor system health of a managed server.  What’s new in 4.1 www.vmware.com/support/developer/cim-sdk/4.1/cim_410_releasenotes.html
vCLI and PowerCLI: primary scripting interfaces
vCLI and PowerCLI are built on the same API as the vSphere Client, with the same authentication (e.g. Active Directory), roles and privileges, and event logging.
The API is secure, optimized for remote environments, firewall-friendly, and standards-based.
(Diagram: vCLI, vSphere PowerCLI, other utility scripts and other languages sit on the vSphere SDK; the vSphere Client and the SDK both use the vSphere Web Services API.)
vCLI for Administrative and Troubleshooting Tasks
Areas of functionality:
Host configuration: NTP, SNMP, remote syslog, ESX conf, kernel modules, local users
Storage configuration: NAS, SAN, iSCSI, vmkfstools, storage pathing, VMFS volume management
Network configuration: vSwitches (standard and distributed), physical NICs, VMkernel NICs, DNS, routing
Miscellaneous: monitoring, file management, VM management, host backup, restore, and update
vCLI can point to an ESXi host or to vCenter; vMA is a convenient way of accessing vCLI. Remote CLI commands run faster in 4.1 relative to 4.0.
Anatomy of a vCLI command
Run directly against an ESXi host:
vicfg-nics --server <ESXi hostname> --user <username> --password <password> <options>
(the user is defined locally on the ESXi host)
Run through vCenter:
vicfg-nics --server <vCenter hostname> --user <username> --password <password> --vihost <target ESXi hostname> <options>
(the user is defined in vCenter/AD; --vihost selects the target ESXi host)
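To avoid typing credentials (and leaking passwords into shell history) on every call, vCLI can also read connection options from environment variables or a configuration file passed with --config. A hedged sketch; hostnames and the username are placeholders:
# Option 1: environment variables picked up by vCLI commands
export VI_SERVER=vcenter.example.com
export VI_USERNAME=administrator
export VI_PASSWORD='secret'
vicfg-nics --vihost esx01.example.com -l

# Option 2: a configuration file passed with --config
cat > ~/visdk.rc <<EOF
VI_SERVER = vcenter.example.com
VI_USERNAME = administrator
VI_PASSWORD = secret
EOF
vicfg-nics --config ~/visdk.rc --vihost esx01.example.com -l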
Additional vCLI configuration commands in 4.1
Storage:
esxcli swiscsi session: manage iSCSI sessions
esxcli swiscsi nic: manage iSCSI NICs
esxcli swiscsi vmknic: list VMkernel NICs available for binding to a particular iSCSI adapter
esxcli swiscsi vmnic: list available uplink adapters for use with a specified iSCSI adapter
esxcli vaai device: display information about devices claimed by the VMware VAAI (vStorage APIs for Array Integration) Filter Plugin
esxcli corestorage device: list devices or plugins; used in conjunction with hardware acceleration
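For example, combining the two storage namespaces above to see which devices are claimed by the VAAI filter and the core storage view of the same devices (subcommand names follow the namespaces listed above; exact output fields vary by build):
# List devices claimed by the VAAI filter plugin
esxcli vaai device list
# List core storage devices, including their hardware acceleration status
esxcli corestorage device list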
Additional vCLI commands
Network:
esxcli network: list active connections or list active ARP table entries
vicfg-authconfig --server=<ESXi_IP_Address> --username=root --password '' --authscheme AD --joindomain <ad_domain_name> --adusername=<ad_user_name> --adpassword=<ad_user_password>
Storage:
NFS statistics available in resxtop
VM:
esxcli vms: forcibly stop VMs that do not respond to normal stop operations, using kill commands:
# esxcli vms vm kill --type <kill_type> --world-id <ID>
Note: designed to kill VMs in a reliable way (not dependent on a well-behaving system), eliminating one of the most common reasons for wanting to use Tech Support Mode (TSM).
esxcli - New Namespaces
esxcli has three new namespaces: network, vaai and vms.
[root@cs-tse-i132 ~]# esxcli
Usage: esxcli [disp options] <namespace> <object> <command>
For esxcli help please run esxcli --help
Available namespaces:
corestorage  VMware core storage commands.
network      VMware networking commands.
nmp          VMware Native Multipath Plugin (NMP). This is the VMware default implementation of the Pluggable Storage Architecture.
swiscsi      VMware iSCSI commands.
vaai         Vaai Namespace containing vaai code.
vms          Limited Operations on VMs.
Control VM Operations
# esxcli vms vm
Usage: esxcli [disp options] vms vm <command>
For esxcli help please run esxcli --help
Available commands:
kill  Used to forcibly kill VMs that are stuck and not responding to normal stop operations.
list  List the VMs on this system. This command currently will only list running VMs on the system.
[root@cs-tse-i132 ~]# esxcli vms vm list
vSphere Management Assistant (vMA)
    World ID: 5588
    Process ID: 27253
    VMX Cartel ID: 5587
    UUID: 42 01 a1 98 d6 65 6b e8-79 3b 2a 7c 9d 88 70 05
    Display Name: vSphere Management Assistant (vMA)
    Config File: /vmfs/volumes/4b1e10ed-8ce9ce16-f692-00215e364468/vSphere Management Assistant (vM/vSphere Management Assistant (vM.vmx
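Putting the two commands together, a hedged example of forcibly stopping the VM listed above; the kill types are typically tried in escalating order, stopping as soon as one succeeds:
# 1. Find the World ID of the stuck VM
esxcli vms vm list
# 2. Try a graceful kill first, escalate only if needed
esxcli vms vm kill --type soft --world-id 5588
esxcli vms vm kill --type hard --world-id 5588
esxcli vms vm kill --type force --world-id 5588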
esxtop – Disk Devices View Use the ‘u’ option to display ‘Disk Devices’. NFS statistics can now be observed.  Here we are looking at throughput and latency stats for the devices.
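For capturing these statistics over time rather than watching them interactively, esxtop (or resxtop from vMA) can run in batch mode; a small hedged example, with the hostname as a placeholder:
# Capture all counters every 10 seconds for 60 iterations (~10 minutes) to CSV
esxtop -b -d 10 -n 60 > esxtop-stats.csv
# The same thing remotely from vMA against a host
resxtop --server esx01.example.com -b -d 10 -n 60 > esxtop-stats.csv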
New VAAI Statistics in esxtop (1 of 2)
Each of the three primitives has its own unique set of statistics.
Toggle the VAAI fields ('O' and 'P') on to display VAAI-specific statistics.
VSI
NFS I/O statistics are also available via the VSI nodes:
# vsish
/> cat /vmkModules/nfsclient/mnt/isos/properties
mount point information {
   server name:rhtraining.vmware.com
   server IP:10.21.64.206
   server volume:/mnt/repo/isos
   UUID:4f125ca5-de4ee74d
   socketSendSize:270336
   socketReceiveSize:131072
   reads:7
   writes:0
   readBytes:92160
   writeBytes:0
   readTime:404366
   writeTime:0
   aborts:0
   active:0
   readOnly:1
   isMounted:1
   isAccessible:1
   unstableWrites:0
   unstableNoCommit:0
}
vm-support enhancements vm-support now enables user to run 3rd party scripts.  To make vm-support run such scripts, add the scripts to "/etc/vmware/vm-support/command-files.d" directory and run vm-support.  The results will be added to the vm-support archive. Each script that is run will have its own directory which contain output and log files for that script in the vm-support archive.  These directories are stored in top-level directory "vm-support-commands-output".
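A minimal sketch of hooking a custom script into vm-support using the directory named above; the script name and its contents are illustrative, and the exact conventions vm-support expects for these files may differ:
# Drop a script into the command-files.d directory
cat > /etc/vmware/vm-support/command-files.d/my-net-info.sh <<'EOF'
#!/bin/sh
esxcfg-nics -l
esxcfg-vswitch -l
EOF
chmod +x /etc/vmware/vm-support/command-files.d/my-net-info.sh
# Run vm-support; the script's output lands under vm-support-commands-output/ in the archive
vm-support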
Power CLI Feature Highlights: Easier to customize and extend PowerCLI, especially for reporting  Output objects can be customized by adding extra properties Better readability and less typing in scripts based on Get-View. Each output object has its associated view as nested property. Less typing is required to call Get-View and convert between PowerCLI object IDs and managed object IDs. Basic vDS support – moving VMs from/to vDS, adding/removing hosts from/to vDS More reporting: new getter cmdlets, new properties added to existing output objects, improvements in Get-Stat. Cmdlets for host HBAs PowerCLI Cmdlet Reference now documents all output types Cmdlets to control host routing tables Faster Datastore provider http://blogs.vmware.com/vipowershell/2010/07/powercli-41-is-out.html
If you are really really curious….  Additional commands (not supported) http://www.petri.co.il/vmware-esxi4-console-secret-commands.htm
vCenter specific
vCenter improvement Better load balancing with improved DRS/DPM algorithm effectiveness Improved performance at higher vCenter inventory limits – up to 7x higher throughput and up to 75% reduced latency Improved performance at higher cluster inventory limits – up to 3x higher throughput and up to 60% reduced latency Faster vCenter startup – around 5 minutes for maximum vCenter inventory size Better vSphere Client responsiveness, quicker user interaction, and faster user login Faster host operations and VM operations on standalone hosts – up to 60% reduction in latency Lower resource usage by vCenter agents by up to 40% Reduced VM group power-on latency by up to 25% Faster VM recovery with HA – up to 60% reduction in total recovery time for 1.6x more VMs
Enhanced vCenter Scalability
vCenter 4.1 install: new option for managing the RAM allocated to the JVM (Tomcat).
vCenter Server: Changing JVM Sizing The same change should be visible by launching "Configure Tomcat" from the program menu (Start->Programs->VMware->VMware Tomcat).
vCenter: Services in Windows
The following are not shown as Windows services: License Reporting Manager.
New Alarms
Predefined Alarms
Remote Console to VM Formally known as Virtual Serial Port Concentrator
Overview
Many customers rely on managing physical hosts by connecting to the target machine over the serial port, and physical serial port concentrators are used by such admins to multiplex connections to multiple hosts. With VMs you lose this functionality and the ability to do remote management using scripted installs.
vSphere 4.1 provides a way to remote a VM's serial port(s) over a network connection and supports a "virtual serial port concentrator" utility.
Virtual Serial Port Concentrator:
Communicate between VMs and IP-enabled serial devices.
Connect to a VM's serial port over the network, using telnet/ssh.
Keep this connection uninterrupted during vMotion and other similar events.
Virtual Serial Port Concentrator
What it is: redirects VM serial ports over a standard network link. A vSPC aggregates traffic from multiple serial ports onto one management console and behaves similarly to a physical serial port concentrator.
Benefits: using a vSPC allows network connections to a VM's serial ports to migrate seamlessly when the VM is migrated with vMotion; management efficiencies; lower costs for multi-host management; enables 3rd-party concentrator integration if required.
Example (using Avocent)
The ACS 6000 Advanced Console Server runs as a vSPC. There is no physical or virtual serial port in the ACS6000 console server itself: it runs a telnet daemon (server) listening for connections coming from ESX. ESX makes one telnet connection for each virtual serial port configured to send data to the ACS6000 console server. The serial daemon implements the telnet server with support for all telnet extensions implemented by VMware.
Configuring Virtual Ports on a VM
Configuring Virtual Ports on a VM
Enables two VMs, or a VM and a process on the host, to communicate as if they were physical machines connected by a serial cable; for example, this can be used for remote debugging of a VM. Alternatively the port can connect to a vSPC, which acts as a proxy.
Configuring Virtual Ports on a VM
Example (for Avocent): type ACSID://ttySxx in the Port URI, where xx is between 1 and 48. It defines which virtual serial port of the ACS6000 console server this serial port connects to (one VM per port; the ACS6000 has only 48 ports). Type telnet://<IP of Avocent VM>:8801 as the vSPC URI.
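Behind the vSphere Client dialog, these settings end up as .vmx entries. A hedged sketch of what a network-backed serial port pointing at a vSPC can look like; the parameter names are from memory and the values (port number, IP address) are placeholders, so verify against the generated .vmx file:
serial0.present = "TRUE"
serial0.fileType = "network"
serial0.fileName = "ACSID://ttyS01"
serial0.network.endPoint = "client"
serial0.vspc = "telnet://192.0.2.10:8801"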
Configuring Virtual Ports on a VM
Configure VM to redirect Console Login
Check your system's serial support: verify the operating system recognizes the serial ports in your hardware.
Configure /etc/inittab to support serial console logins by adding the following lines:
# Run agetty on COM1/ttyS0
s0:2345:respawn:/sbin/agetty -L -f /etc/issueserial 9600 ttyS0 vt100
Configure VM to redirect Console Login
Activate the changes that you made in /etc/inittab:
# init q
If you want to be able to log in via the serial console as the root user, you will need to edit the /etc/securetty configuration file. Add ttyS0 as an entry in /etc/securetty:
console
ttyS0
vc/1
vc/2
Configure the serial port as the system console
Use options in /etc/grub.conf to redirect console output to one of your serial ports. This enables you to see all of the boot-up and shutdown messages from your terminal. The text to add to the config file is highlighted in the screenshot; a typical example follows.
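Since the highlighted text itself lives in the screenshot, here is a typical GRUB (legacy) example of the kind of change described, assuming the first serial port at 9600 baud; the kernel version and root device are placeholders, so adjust them to your distribution:
# /etc/grub.conf (GRUB legacy): redirect the console to ttyS0
serial --unit=0 --speed=9600
terminal --timeout=10 serial console
...
kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/LogVol00 console=tty0 console=ttyS0,9600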
Accessing the Serial Port of the Virtual Machine
Open a web connection to the Avocent ACS6000. Click on the Ports folder and click Serial Ports. Based on the serial port connection configured in the virtual machine, you should see signals of CTS|DSR|CD|RI.
Accessing the Serial Port of the Virtual Machine
Enter the password (avocent) and hit the Enter key to establish the connection.
UI > Performance > Advanced
Additional chart options in vSphere 4.1 (compared with vSphere 4.0) around storage performance statistics: Datastore, Power, Storage adapter & Storage path.
Performance Graphs
Additional performance graph views added to vSphere 4.1:
Host: Datastore, Management Agent, Power, Storage Adapter, Storage Path
VM: Datastore, Power, Virtual Disk
Storage Statistics: vCenter & esxtop
Not available in this timeframe: aggregation at cluster level in vCenter (possible through the APIs).
* Network-based storage (NFS, iSCSI) I/O breakdown is still being researched.
** Not applicable to NFS; the datastore is the equivalent.
esxtop publishes throughput and latency per LUN; if the datastore has only one LUN, the LUN statistics equal the datastore statistics.
Volume Stats for NFS Device
Datastore Activity Per Host
Other Host Stats
Datastore Activity per VM
Virtual Disk Activity per VM
VMware Update Manager
Update Manager
Define, track, and enforce software update compliance for ESX hosts/clusters, 3rd-party ESX extensions, virtual appliances, VMware Tools/VM hardware, online*/offline VMs, and templates.
Patch notification and recall.
Cluster-level pre-remediation check analysis and report.
Framework to support 3rd-party IHV/ISV updates and customizations, e.g. mass install/update of EMC's PowerPath module.
Enhanced compatibility with DPM for cluster-level patch operations.
Performance and scalability enhancements to match vCenter.
Define, track, and enforce software update compliance and support for:
ESX/ESXi hosts
VMs
Virtual appliances
3rd-party ESX modules
Online/offline VMs, templates
Automate and generate reports using the Update Manager database views.
(Diagram: vCenter Update Manager managing ESX/ESXi hosts, VMs and virtual appliances, including VMware Tools / VM hardware, online/offline VMs and templates, and 3rd-party extensions.)
Deployment Components
Update Manager components:
1. Update Manager Server + DB
2. Update Manager VI Client plug-in
3. Update Manager Download Service
(Diagram: vCenter Server and VI Client with the Update Manager Server, managing the virtualized infrastructure and pulling from external patch feeds.)
New Features in 4.1
Update Manager now provides management of host upgrade packages.
Provisioning, patching, and upgrade support for third-party modules.
Offline bundles.
Recalled patches.
Enhanced cluster operation.
Better handling of low-bandwidth and high-latency networks.
PowerCLI support.
Better support for virtualized vCenter.
Notifications As we have already seen with the notification Schedule, Update Manager 4.1 contacts VMware at regular intervals to download notifications about patch recalls, new fixes and alerts.   If patches with problems/potential issues are released, these patches are recalled in the metadata and VUM marks them as recalled.  If you try to install a recalled patch, Update Manager notifies you that the patch is recalled and does not install it on the host.  If you have already installed such a patch, VUM notifies you that the recalled patch is installed on certain hosts, but does not remove the recalled patch from the host. Update Manager also deletes all the recalled patches from the Update Manager patch repository. When a patch fixing the problem is released, Update Manager 4.1 downloads the new patch and prompts you to install it.
Notifications Notifications which Update Manager downloads are displayed on the Notifications tab of the Update Manager Administration view. ,[object Object]
Update Manager shows the patch as recalled,[object Object]
Notifications
Alarms are posted for recalled and fixed patches. Recalled patches are represented by a flag.
VUM 4.1 Feature - Notification Check Schedule
By default Update Manager checks for notifications about patch recalls, patch fixes and alerts at certain time intervals. Edit Notifications to define the frequency (hourly, daily, weekly, monthly), the start time (minutes after the hour), the interval, and the email address of who to notify about recalled patches.
VUM 4.1 Feature - ESX Host/Cluster Settings
When remediating objects in a cluster with Distributed Power Management (DPM), High Availability (HA), or Fault Tolerance (FT), you should temporarily disable these features for the entire cluster. VUM does not remediate hosts on which these features are enabled. When the update completes, VUM restores these features.
These settings become the default failure response. You can specify different settings when you configure individual remediation tasks.
VUM 4.1 Feature - ESX Host/Cluster Settings
Update Manager cannot remediate hosts whose VMs have connected CD/DVD drives. CD/DVD drives that are connected to the VMs on a host might prevent the host from entering maintenance mode and interrupt remediation. Select "Temporarily disable any CD-ROMs that may prevent a host from entering maintenance mode".
Baselines and Groups
Baselines might be upgrade, extension or patch baselines. Baselines contain a collection of one or more patches, service packs and bug fixes, extensions or upgrades.
Baseline groups are assembled from existing baselines and might contain one upgrade baseline per type and one or more patch and extension baselines, or a combination of multiple patch and extension baselines.
Preconfigured baselines: Hosts - 2 baselines; VM/VA - 6 baselines.
Baselines and Groups
Update Manager 4.1 introduces a new Host Extension baseline. Host Extension baselines contain additional software for ESX/ESXi hosts; this additional software might be VMware software or third-party software.
MGT220 - Virtualisation 360: Microsoft Virtualisation Strategy, Products, and...
 
CLI319 Microsoft Desktop Optimization Pack: Planning the Deployment of Micros...
CLI319 Microsoft Desktop Optimization Pack: Planning the Deployment of Micros...CLI319 Microsoft Desktop Optimization Pack: Planning the Deployment of Micros...
CLI319 Microsoft Desktop Optimization Pack: Planning the Deployment of Micros...
 
Windows Virtual Enterprise Centralized Desktop
Windows Virtual Enterprise Centralized DesktopWindows Virtual Enterprise Centralized Desktop
Windows Virtual Enterprise Centralized Desktop
 
Optimized Desktop, Mdop And Windows 7
Optimized Desktop, Mdop And Windows 7Optimized Desktop, Mdop And Windows 7
Optimized Desktop, Mdop And Windows 7
 

VMware vSphere 4.1 deep dive - part 2

  • 14. Parameters Limits and Shares Limits specify the absolute maximum bandwidth for a flow over a Team Specified in Mbps Traffic from a given flow will never exceed its specified limit Egress from ESX host Shares specify the relative importance of an egress flow on a vmnic i.e. guaranteed minimum Specified in abstract units, from 1-100 Presets for Low (25 shares), Normal (50 shares), High (100 shares), plus Custom Bandwidth divided between flows based on their relative shares Controls apply to output from ESX host Shares apply to a given vmnic Limits apply across the team
  • 15. Configuration from vSphere Client Limits Maximum bandwidth for traffic class/type Shares Guaranteed minimum service level vDS only feature! Preconfigured Traffic Classes e.g. VM traffic in this example: - limited to max of 500 Mbps (aggregate of all VMs) - with minimum of 50/400 of pNIC bandwidth (50/(100+100+50+50+50+50))
  • 16. Resource Management Shares Normal = 50 Low = 25 High = 100 Custom = any values between 1 and 100 Default values VM traffic = High (100) All others = Normal (50) No limit set
  • 17. Implementation Each host calculates the shares separately and independently One host might have only 1Gbit/s NICs while another one already has 10Gbit/s ones So the resulting guaranteed bandwidth is different Only outgoing traffic is controlled Inter-switch traffic is not controlled, only the pNICs are affected Limits are still valid even if the pNIC is opted out Scheduler uses a static “Packets-In-Flight” window inFlightPackets: Packets that are actually in flight and in transmit process in the pNIC Window size is 50 kB No more than 50 kB are in flight (to the wire) at a given moment
  • 18. Excluding a physical NIC Physical NICs per host can be excluded from Network Resource Management Host configuration -> Advanced Settings -> Net -> Net.ResMgmtPnicOptOut Will exclude specified NICs from the shares calculation, not from limits!
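The same setting can also be changed from the command line. A minimal sketch using the vSphere CLI; the host name and credentials are placeholders, and the exact value format for Net.ResMgmtPnicOptOut (assumed here to be a comma-separated list of vmnic names) should be confirmed against the vSphere 4.1 documentation:

# Show the current opt-out list for Network I/O Control shares
vicfg-advcfg --server esx01.example.com --username root -g /Net/ResMgmtPnicOptOut

# Exclude vmnic0 and vmnic1 from the shares calculation (limits still apply to them)
vicfg-advcfg --server esx01.example.com --username root -s vmnic0,vmnic1 /Net/ResMgmtPnicOptOut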
  • 19. Results With QoS in place, performance is less impacted
  • 21. Current Teaming Policy In vSphere 4.0 there are three policies: Port ID, IP hash, MAC hash Disadvantages Static mapping No load balancing Could cause unbalanced load on pNICs Does not differentiate between pNIC bandwidths
  • 22. NIC Teaming Enhancements—Load Based Teaming (LBT) Note: adjacent physical switch configuration is same as other teaming types (except IP-hash). i.e. same L2 domain LBT invoked if saturation detected on Tx or Rx (>75% mean utilization over 30s period) 30 sec period—long period avoids MAC address flapping issues with adjacent physical switches
  • 23. Load Based Teaming Initial mapping Like PortID Balanced mapping between ports and pNICs Mapping not based on load (as initially no load existed) Adjusting the mapping Based on time frames; the load on a pNIC during a timeframe is taken into account In case load is unbalanced one VM (to be precise: the vSwitch port) will get re-assigned to a different pNIC Parameters Time frames and load threshold Default frame 30 seconds, minimum value 10 seconds Default load threshold 75%, possible values 0-100 Both Configurable through command line tool (only for debug purpose - not for customer)
  • 24. Load Based Teaming Advantages Dynamic adjustments to load Different NIC speeds are taken into account as this is based on % load Can have a mix of 1 Gbit, 10 Gbit and even 100 Mbit NICs Dependencies LBT works independent from other algorithms Does not take limits or reservation from traffic shaping or Network I/O Management into account Algorithm based on the local host only DRS has to take care of cluster wide balancing Implemented on vNetwork Distributed Switch only Edit dvPortGroup to change setting
  • 25. 4.1 New Features: Storage
  • 26. NFS & HW iSCSI in vSphere 4.1 Improved NFS performance Up to 15% reduction in CPU cost for both read & write Up to 15% improvement in throughput for both read & write Broadcom iSCSI HW Offload Support 89% improvement in CPU read cost! 83% improvement in CPU write cost!
  • 30. Improved usability and user experience VMware vCenter
  • 31. ParaVirtual SCSI (PVSCSI) We will now support PVSCSI when used with these guest OS: Windows XP (32bit and 64bit) Vista (32bit and 64bit) Windows 7 (32bit and 64bit) /vmimages/floppies Point the VM floppy drive at the .FLP file When installing, press the F6 key to read the floppy
  • 32. ParaVirtual SCSI A VM configured with a PVSCSI adapter can be part of a Fault Tolerant cluster. PVSCSI adapters already support hot-plugging or hot-unplugging of virtual devices, but the guest OS is not notified of any changes on the SCSI bus. Consequently, any addition/removal of devices needs to be followed by a manual rescan of the bus from within the guest.
  • 34. The I/O Sharing Problem Low priority VM can limit I/O bandwidth for high priority VMs Storage I/O allocation should be in line with VM priorities (Diagram: “what you want to see” vs. “what you see” for Microsoft Exchange, online store and data mining VMs sharing a datastore)
  • 35. Solution: Storage I/O Control (Diagram: Microsoft Exchange, online store and data mining VMs sharing Datastore A on a 32GHz / 16GB cluster, each with CPU, memory and I/O shares set to High or Low according to its priority)
  • 38. Enabling Storage I/O Control Click the Storage I/O Control ‘Enabled’ checkbox to turn the feature on for that volume.
  • 41. Allocate I/O Resources Shares translate into ESX I/O queue slots VMs with more shares are allowed to send more I/O’s at a time Slot assignment is dynamic, based on VM shares and current load Total # of slots available is dynamic, based on level of congestion (Diagram: data mining, Microsoft Exchange and online store VMs with their I/O’s in flight to the storage array)
  • 43. Performance without Storage I/O Control (Chart: per-VM results of 14%, 21%, 42% and 15% without Storage I/O Control (default))
  • 44. Performance with Storage I/O Control (Chart: with Storage I/O Control and a 25ms congestion threshold, VMs configured with 500, 500, 750, 750 and 4000 shares; per-VM results of 14%, 22% and 8% shown)
  • 45. Storage I/O Control in Action: Example #2 Two Windows VMs running SQL Server on two hosts 250 GB data disk, 50 GB log disk VM1: 500 shares VM2: 2000 shares Result: VM2 with higher shares gets more orders/min & lower latency!
  • 46. Step 1: Detect Congestion Congestion signal: ESX-array response time > threshold Default threshold: 35ms We will likely recommend different defaults for SSD and SATA Changing default threshold (not usually recommended) Low latency goal: set lower if latency is critical for some VMs High throughput goal: set close to IOPS maximization point (Chart: throughput (IOPS or MB/s) vs. total datastore load (# of IO’s in flight); no benefit beyond a certain load)
  • 48. The first is the local VM I/O scheduler. This is called SFQ, the start-time fair queuing scheduler. This scheduler ensures share-based allocation of I/O resources between VMs on a per host basis.
  • 49. The second is the distributed I/O scheduler for ESX hosts. This is called PARDA, the Proportional Allocation of Resources for Distributed Storage Access.
  • 50. PARDA
  • 51. carves out the array queue amongst all the VMs which are sending I/O to the datastore on the array.
  • 52. adjusts the per host per datastore queue size (aka LUN queue/device queue) depending on the sum of the per VM shares on the host.
  • 53. communicates this adjustment to each ESX via VSI nodes.
  • 55. Storage I/O Control Architecture (Diagram: the per-host SFQ schedulers feed host-level issue queues whose lengths are varied dynamically by PARDA against the shared array queue on the storage array)
  • 56. Requirements Storage I/O Control is supported on FC or iSCSI storage. NFS datastores are not supported. Not supported on datastores with multiple extents. Array with Automated Storage Tiering capability Automated storage tiering is the ability of an array (or group of arrays) to automatically migrate LUNs/volumes or parts of LUNs/volumes to different types of storage media (SSD, FC, SAS, SATA) based on user-set policies and current I/O patterns. Before using Storage I/O Control on datastores that are backed by arrays with automated storage tiering capabilities, check the VMware Storage/SAN Compatibility Guide to verify whether your automated tiered storage array has been certified to be compatible with Storage I/O Control No special certification is required for arrays that do not have any such automatic migration/tiering feature, including those that provide the ability to manually migrate data between different types of storage media
  • 57. Hardware-Assist Storage Operation Formally known as vStorage API for Array Integration
  • 58. vStorage APIs for Array Integration (VAAI) Improves performance by leveraging efficient array-based operations as an alternative to host-based solutions Three Primitives include: Full Copy – Xcopy like function to offload work to the array Write Same – Speeds up zeroing out of blocks or writing repeated content Atomic Test and Set – Alternate means to locking the entire LUN Helping functions such as: Storage vMotion Provisioning VMs from Template Improves thin provisioning disk performance VMFS shared storage pool scalability Notes: Requires firmware from Storage Vendors (6 participating) Supports block based storage only. NFS not yet supported in 4.1
  • 59. Array Integration Primitives: Introduction Atomic Test & Set (ATS) A mechanism to modify a disk sector to improve the performance of the ESX when doing metadata updates. Clone Blocks/Full Copy/XCOPY Full copy of blocks and ESX is guaranteed to have full space access to the blocks. Default offloaded clone size is 4MB. Zero Blocks/Write Same Write Zeroes. This will address the issue of time falling behind in a VM when the guest operating system writes to previously unwritten regions of its virtual disk: http://kb.vmware.com/kb/1008284 This primitive will improve MSCS in virtualization environment solutions where we need to zero out the virtual disk. Default zeroing size is 1MB.
  • 60. Hardware Acceleration All vStorage support will be grouped into one attribute, called "Hardware Acceleration". Not Supported implies one or more Hardware Acceleration primitives failed. Unknown implies Hardware Acceleration primitives have not yet been attempted.
  • 61. VM Provisioning from Template with Full Copy Benefits Reduce installation time Standardize to ensure efficient management, protection & control Challenges Requires a full data copy 100 GB template (10 GB to copy): 5-20 minutes FT requires additional zeroing of blocks Improved Solution Use array’s native copy/clone & zeroing functions Up to 10-20x speedup in provisioning time
  • 62. Storage vMotion with Array Full Copy Function Benefits Zero-downtime migration Eases array maintenance, tiering, load balancing, upgrades, space mgmt Challenges Performance impact on host, array, network Long migration time (0.5 - 2.5 hrs for 100GB VM) Best practice: use infrequently Improved solution Use array’s native copy/clone functionality
  • 63. VAAI Speeds Up Storage vMotion - Example 42:27 - 39:12 = 2 Min 21 sec w/out (141 seconds) 33:04 - 32:37 = 27 Sec with VAAI 141 sec vs. 27 sec
  • 66. Up to 95% reduction
  • 71. Moving a VM with vMotion
  • 72. Creating a new VM or deploying a VM from a template
  • 73. Powering a VM on or off
  • 75. Creating or deleting a file, including snapshots
  • 77. VMFS Scalability with Atomic Test and Set (ATS) Makes VMFS more scalable overall, by offloading the block locking mechanism Using the Atomic Test and Set (ATS) capability provides an alternate option to the use of SCSI reservations to protect the VMFS metadata from being written to by two separate ESX Servers at one time. (Diagrams: Normal VMware Locking (No ATS) vs. Enhanced VMware Locking (With ATS))
  • 78. For more details on VAAI vSphere 4.1 Documentation also describes use of these features in the ESX Configuration Guide Chapter 9 (pages 124 - 125) Listed in TOC as “Storage Hardware Acceleration” Three settings under advanced settings: DataMover.HardwareAcceleratedMove - Full Copy DataMover.HardwareAcceleratedInit - Write Same VMFS3.HardwareAcceleratedLocking - Atomic Test and Set Additional Collateral planned for release after GA Frequently Asked Questions Datasheet or webpage content Partners include: Dell/EQL, EMC, HDS, HP, IBM and NetApp
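As a quick illustration, these options can also be read or toggled from the ESX service console with esxcfg-advcfg (a value of 1 enables a primitive, 0 disables it); this is a sketch, so verify the option paths against your build before changing anything:

# Check whether the Full Copy primitive is enabled (1 = enabled)
esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove

# Temporarily disable Write Same and hardware-assisted locking, e.g. while troubleshooting
esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedInit
esxcfg-advcfg -s 0 /VMFS3/HardwareAcceleratedLocking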
  • 79. Requirements The VMFS data mover will not leverage hardware offloads, and will use software data movement instead, in the following cases: If the source and destination VMFS volumes have different block size; in such situations data movement will fall back to the generic FSDM layer, which will only do software data movement. If the source file type is RDM and the destination file type is non-RDM (regular file) If the source VMDK type is eagerzeroedthick and the destination VMDK type is thin. If either source or destination VMDK is any sort of sparse or hosted format. If the logical address and/or transfer length in the requested operation are not aligned to the minimum alignment required by the storage device.
  • 80. VMFS Data Movement Caveats VMware supports VAAI primitives on VMFS with multiple LUNs/extents, if they are all on the same array and the array supports offloading. VMware does not support VAAI primitives on VMFS with multiple LUNs/extents if they span different arrays, even when all of the arrays support offloading. HW cloning between arrays (even if it is within the same VMFS volume) won’t work, so that falls back to software data movement.
  • 81. vSphere 4.1 New Features: Management Management related features
  • 82. Management – New Features Summary vCenter 32-bit to 64-bit data migration Enhanced Scalability Faster response time Update Manager Host Profile Enhancements Orchestrator Active Directory Support (Host and vMA) VMware Converter Hyper-V Import. Win08 R2 and Win7 convert Virtual Serial Port Concentrator
  • 83. Scripting & Automation Host Profiles, Orchestrator, vMA, CLI, PowerCLI
  • 84. Summary Host Profiles VMware Orchestrator VMware vMA PowerShell esxtop vscsiStats VMware Tools
  • 85. Host Profiles Enhancements Host Profiles Cisco support PCI device ordering (support for selecting NICs) iSCSI support Admin password (setting root password) Logging on the host File is at C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\Logs\pyVmomiServer.log Config not covered by Host Profiles are: Licensing vDS policy configuration (however you can do non-policy vDS stuff) iSCSI Multipathing
  • 86. Host Profiles Enhancements Lbtd Lsassd (Part of AD. See the AD preso) Lwiod (Part of AD) Netlogond (part of AD) vSphere 4.1 vSphere 4.0
  • 87. Orchestrator Enhancements provides a client and server for 64-bit installations, with an optional 32-bit client. performance enhancements due to 64-bit installation
  • 88. VMware Tools Command Line Utility This feature provides an alternative to the VMware Tools control panel (the GUI dialog box) The command line based toolbox allows administrators to automate the use of the toolbox functionalities by writing their own scripts
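Inside a Linux guest this looks roughly like the sketch below. The utility ships with VMware Tools as vmware-toolbox-cmd; the subcommand names here are recalled from the 4.1-era Tools and should be checked against its built-in help:

# List the available subcommands
vmware-toolbox-cmd help

# Check and enable host/guest time synchronisation
vmware-toolbox-cmd timesync status
vmware-toolbox-cmd timesync enable

# Query a statistic exposed by the host, e.g. the host clock
vmware-toolbox-cmd stat hosttime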
  • 89. vSphere Management Assistant (vMA) A convenient place to perform administration Virtual Appliance packaged as an OVF Distributed, maintained and supported by VMware Not included with ESXi – must be downloaded separately The environment has the following pre-installed: 64-bit Enterprise Linux OS VMware Tools Perl Toolkit vSphere Command Line Interface (VCLI) JRE (to run applications built with the vSphere SDK) VI Fast Pass (authentication service for scripts) VI Logger (log aggregator)
  • 90. vMA Improvements in 4.1 Improved authentication capability – Active Directory support Transition from RHEL to CentOS Security The security hole that exposed clear text passwords on ESX(i) or vCenter hosts when using vifpinit (vi-fastpass) is fixed vMA as netdump server You can configure an ESXi host to send its network core dump to a remote server in case of a crash or panic. Each ESXi host must be configured to write the core dump.
  • 91. For Tech Partner: VMware CIM API What it is: for developers building management applications. With the VMware CIM APIs, developers can use standards-based CIM-compliant applications to manage ESX/ESXi hosts. The VMware Common Information Model (CIM) APIs allow you to: view VMs and resources using profiles defined by the Storage Management Initiative Specification (SMI-S) manage hosts using the System Management Architecture for Server Hardware (SMASH) standard. SMASH profiles allow CIM clients to monitor system health of a managed server. What’s new in 4.1 www.vmware.com/support/developer/cim-sdk/4.1/cim_410_releasenotes.html
  • 92. vCLI and PowerCLI: primary scripting interfaces vCLI and PowerCLI built on same API as vSphere Client Same authentication (e.g. Active Directory), roles and privileges, event logging API is secure, optimized for remote environments, firewall-friendly, standards-based (Diagram: vCLI, vSphere PowerCLI, other utility scripts and other languages sit on top of the vSphere SDK, which, like the vSphere Client, talks to the vSphere Web Service API)
  • 93. vCLI for Administrative and Troubleshooting Tasks Areas of functionality Host Configuration: NTP, SNMP, Remote syslog, ESX conf, Kernel modules, local users Storage Configuration: NAS, SAN, iSCSI, vmkfstools, storage pathing, VMFS volume management Network Configuration: vSwitches (standard and distributed), physical NICs, Vmkernel NICs, DNS, Routing Miscellaneous: Monitoring, File management, VM Management, host backup, restore, and update vCLI can point to an ESXi host or to vCenter vMA is a convenient way for accessing vCLI Remote CLI commands now run faster in 4.1 relative to 4.0
  • 94. Anatomy of a vCLI command Run directly on ESXi Host vicfg-nics --server hostname --user username --password mypassword options Hostname of ESXi host User defined locally on ESXi host Run through vCenter vicfg-nics --server hostname --user username --password mypassword --vihost hostname options Hostname of vCenter host User defined in vCenter (AD) Target ESXi host
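For example, a concrete (hypothetical) invocation that lists the physical NICs of one host by going through vCenter; the host names and credentials are placeholders, and the long-form connection options used here (--server, --username, --password, --vihost) are the ones documented in the vSphere CLI reference:

# List the physical NICs on esx01 via the vCenter server vc01
vicfg-nics --server vc01.example.com --username administrator \
           --password 'MyP@ssw0rd' --vihost esx01.example.com --list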
  • 95. Additional vCLI configuration commands in 4.1 Storage esxcli swiscsi session: Manage iSCSI sessions esxcli swiscsi nic: Manage iSCSI NICs esxcli swiscsi vmknic: List VMkernel NICs available for binding to particular iSCSI adapter esxcli swiscsi vmnic: List available uplink adapters for use with a specified iSCSI adapter esxcli vaai device: Display information about devices claimed by the VMware VAAI (vStorage APIs for Array Integration) Filter Plugin. esxcli corestorage device: List devices or plugins. Used in conjunction with hardware acceleration.
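For instance, a quick way to see whether the VAAI filter has claimed your devices, sketched from the two namespaces listed above (output fields vary by build):

# Devices claimed by the VAAI filter plugin
esxcli vaai device list

# Core storage view of all devices, used in conjunction with hardware acceleration
esxcli corestorage device list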
  • 96. Additional vCLI commands Network esxcli network: List active connections or list active ARP table entries. vicfg-authconfig --server=<ESXi_IP_Address> --username=root --password '' --authscheme AD --joindomain <ad_domain_name> --adusername=<ad_user_name> --adpassword=<ad_user_password> Storage NFS statistics available in resxtop VM esxcli vms: Forcibly stop VMs that do not respond to normal stop operations, by using kill commands. # esxcli vms vm kill --type <kill_type> --world-id <ID> Note: designed to kill VMs in a reliable way (not dependent upon a well-behaving system), eliminating one of the most common reasons for wanting to use TSM.
  • 97. esxcli - New Namespaces esxcli has 3 new namespaces – network, vaai and vms [root@cs-tse-i132 ~]# esxcli Usage: esxcli [disp options] <namespace> <object> <command> For esxcli help please run esxcli --help Available namespaces: corestorage VMware core storage commands. network VMware networking commands. nmp VMware Native Multipath Plugin (NMP). This is the VMware default implementation of the Pluggable Storage Architecture. swiscsi VMware iSCSI commands. vaai Vaai Namespace containing vaai code. vms Limited Operations on VMs.
  • 98. Control VM Operations # esxcli vms vm Usage: esxcli [disp options] vms vm <command> For esxcli help please run esxcli --help Available commands: kill Used to forcibly kill VMs that are stuck and not responding to normal stop operations. list List the VMs on this system. This command currently will only list running VMs on the system. [root@cs-tse-i132 ~]# esxcli vms vm list vSphere Management Assistant (vMA) World ID: 5588 Process ID: 27253 VMX Cartel ID: 5587 UUID: 42 01 a1 98 d6 65 6b e8-79 3b 2a 7c 9d 88 70 05 Display Name: vSphere Management Assistant (vMA) Config File: /vmfs/volumes/4b1e10ed-8ce9ce16-f692-00215e364468/vSphere Management Assistant (vM/vSphere Management Assistant (vM.vmx
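Putting the two commands together, a hedged example of stopping the VM listed above; the --type values accepted here are believed to be soft, hard and force, and soft should always be tried first:

# Identify the running VM and note its World ID
esxcli vms vm list

# Ask the VMX process to stop cleanly; escalate to hard or force only if needed
esxcli vms vm kill --type soft --world-id 5588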
  • 99. esxtop – Disk Devices View Use the ‘u’ option to display ‘Disk Devices’. NFS statistics can now be observed. Here we are looking at throughput and latency stats for the devices.
  • 100.
  • 101. Each of the three primitives has its own unique set of statistics.
  • 102.
  • 103. VSI NFS I/O statistics are also available via the VSI nodes:
# vsish
/> cat /vmkModules/nfsclient/mnt/isos/properties
mount point information {
   server name:rhtraining.vmware.com
   server IP:10.21.64.206
   server volume:/mnt/repo/isos
   UUID:4f125ca5-de4ee74d
   socketSendSize:270336
   socketReceiveSize:131072
   reads:7
   writes:0
   readBytes:92160
   writeBytes:0
   readTime:404366
   writeTime:0
   aborts:0
   active:0
   readOnly:1
   isMounted:1
   isAccessible:1
   unstableWrites:0
   unstableNoCommit:0
}
  • 104. vm-support enhancements vm-support now enables users to run 3rd party scripts. To make vm-support run such scripts, add the scripts to the "/etc/vmware/vm-support/command-files.d" directory and run vm-support. The results will be added to the vm-support archive. Each script that is run will have its own directory which contains output and log files for that script in the vm-support archive. These directories are stored in the top-level directory "vm-support-commands-output".
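A minimal sketch, under the assumption that any executable dropped into that directory is simply run by vm-support and its output captured; the script name and the commands it collects are illustrative only:

# Drop a small executable script into the vm-support hook directory
cat > /etc/vmware/vm-support/command-files.d/net-info.sh <<'EOF'
#!/bin/sh
# Collect extra networking details for the support bundle
esxcfg-nics -l
esxcfg-vswitch -l
EOF
chmod +x /etc/vmware/vm-support/command-files.d/net-info.sh

# Generate the bundle; the script output lands under vm-support-commands-output/
vm-support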
  • 105. Power CLI Feature Highlights: Easier to customize and extend PowerCLI, especially for reporting Output objects can be customized by adding extra properties Better readability and less typing in scripts based on Get-View. Each output object has its associated view as nested property. Less typing is required to call Get-View and convert between PowerCLI object IDs and managed object IDs. Basic vDS support – moving VMs from/to vDS, adding/removing hosts from/to vDS More reporting: new getter cmdlets, new properties added to existing output objects, improvements in Get-Stat. Cmdlets for host HBAs PowerCLI Cmdlet Reference now documents all output types Cmdlets to control host routing tables Faster Datastore provider http://blogs.vmware.com/vipowershell/2010/07/powercli-41-is-out.html
  • 106. If you are really really curious….  Additional commands (not supported) http://www.petri.co.il/vmware-esxi4-console-secret-commands.htm
  • 108. vCenter improvement Better load balancing with improved DRS/DPM algorithm effectiveness Improved performance at higher vCenter inventory limits – up to 7x higher throughput and up to 75% reduced latency Improved performance at higher cluster inventory limits – up to 3x higher throughput and up to 60% reduced latency Faster vCenter startup – around 5 minutes for maximum vCenter inventory size Better vSphere Client responsiveness, quicker user interaction, and faster user login Faster host operations and VM operations on standalone hosts – up to 60% reduction in latency Lower resource usage by vCenter agents by up to 40% Reduced VM group power-on latency by up to 25% Faster VM recovery with HA – up to 60% reduction in total recovery time for 1.6x more VMs
  • 109. Enhanced vCenter Scalability
  • 110. vCenter 4.1 install New option: Managing the RAM of JVM
  • 111. vCenter Server: Changing JVM Sizing The same change should be visible by launching "Configure Tomcat" from the program menu (Start->Programs->VMware->VMware Tomcat).
  • 112. vCenter: Services in Windows The following are not shown as services Licence Reporting manager
  • 115. Remote Console to VM Formally known as Virtual Serial Port Concentrator
  • 116. Overview Many customers rely on managing physical hosts by connecting to the target machine over the serial port. Physical serial port concentrators are used by such admins to multiplex connections to multiple hosts. Using VMs you lose this functionality and the ability to do remote management using scripted installs and management. vSphere 4.1 provides a suitable way to remote a VM's serial port(s) over a network connection, supporting a “virtual serial port concentrator” utility. Virtual Serial Port Concentrator Communicate between VMs and IP-enabled serial devices. Connect to a VM's serial port over the network, using telnet/ssh. Have this connection uninterrupted during vMotion and other similar events.
  • 117. Virtual Serial Port Concentrator What it is Redirect VM serial ports over a standard network link vSPC aggregates traffic from multiple serial ports onto one management console. It behaves similarly to physical serial port concentrators. Benefits Using a vSPC also allows network connections to a VM's serial ports to migrate seamlessly when the VM is migrated using vMotion Management efficiencies Lower costs for multi-host management Enables 3rd party concentrator integration if required
  • 118. Example (using Avocent) ACS 6000 Advanced Console Server running as a vSPC. There is no serial port or virtual serial port in the ACS6000 console server. The ACS6000 console server has a telnet daemon (server) listening for connections coming from ESX. ESX will make one telnet connection for each virtual serial port configured to send data to the ACS6000 console server. The serial daemon will implement the telnet server with support for all telnet extensions implemented by VMware.
  • 121. Configuring Virtual Ports on a VM Enables two VMs, or a VM and a process on the host, to communicate as if they were physical machines connected by a serial cable. For example, this can be used for remote debugging on a VM. The connection can also go through a vSPC, which will act as a proxy.
  • 122. Configuring Virtual Ports on a VM Example (for Avocent): Type ACSID://ttySxx in the Port URI, where xx is between 1 and 48. It defines which virtual serial port from the ACS6000 console server this serial port will connect to. One VM per port; the ACS6000 has 48 ports only. Type telnet://<IP of Avocent VM>:8801
  • 124. Configure VM to redirect Console Login Check your system's serial support Check operating system recognizes serial ports in your hardware Configure your /etc/inittab to support serial console logins Add the following lines to the /etc/inittab:
# Run agetty on COM1/ttyS0
s0:2345:respawn:/sbin/agetty -L -f /etc/issueserial 9600 ttyS0 vt100
  • 125. Configure VM to redirect Console Login Activate the changes that you made in /etc/inittab:
# init q
If you want to be able to log in via the serial console as the root user, you will need to edit the /etc/securetty configuration file. Add ttyS0 as an entry in /etc/securetty:
console
ttyS0
vc/1
vc/2
  • 126. Configure serial port as the system console Use options in /etc/grub.conf to redirect console output to one of your serial ports Enables you to see all of the bootup and shutdown messages from your terminal. The text to add to the config file is shown in the sketch below.
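As an illustration (not taken from the original slide), a legacy-GRUB configuration for a RHEL-style guest typically looks like this; the kernel version, root device and baud rate are placeholders:

# /etc/grub.conf (GRUB legacy): send GRUB and kernel output to ttyS0
serial --unit=0 --speed=9600
terminal --timeout=10 serial console

title Red Hat Enterprise Linux
    root (hd0,0)
    kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/LogVol00 console=tty0 console=ttyS0,9600
    initrd /initrd-2.6.18-194.el5.img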
  • 127. Accessing the Serial Port of the Virtual Machine Open a Web connection to the Avocent ACS6000 Click on the Ports folder and click Serial Ports Based on the Serial Port connection configured in the Virtual Machine, you should see Signals of CTS|DSR|CD|RI
  • 130. UI > Performance > Advanced vSphere 4.1 vSphere 4.0 Additional Chart Options in vSphere 4.1 around storage performance statistics: Datastore, Power, Storage adapter & Storage path.
  • 131. Performance Graphs Additional Performance Graph Views added to vSphere 4.1 Host – Datastore, Management Agent, Power, Storage Adapter, Storage Path VM – Datastore, Power, Virtual Disk
  • 132. Storage Statistics: vCenter & esxtop Not available in this timeframe: Aggregation at cluster level in vCenter (possible through APIs) *Network-based storage (NFS, iSCSI) I/O breakdown still being researched ** Not applicable to NFS; datastore is the equivalent ESXTOP publishes throughput and latency per LUN; if a datastore has only one LUN, the LUN statistics equal the datastore statistics
  • 133. Volume Stats for NFS Device
  • 140. Define, track, and enforce software update compliance for ESX hosts/clusters, 3rd party ESX extensions, Virtual Appliances, VMTools/VM Hardware, online*/offline VMs, templates
  • 142. Cluster level pre-remediation check analysis and report
  • 143. Framework to support 3rd party IHV/ISV updates, customizations: mass install/update of EMC’s PowerPath module
  • 144. Enhanced compatibility with DPM for cluster level patch operations
  • 146. Define, track, and enforce software update compliance and support for:
  • 148. VMs
  • 150. 3rd Party ESX Modules
  • 152. Automate and Generate Reports using Update Manager Database Views (Diagram: vCenter Update Manager covering ESX/ESXi hosts, VMs and Virtual Appliances (VMTools, VM H/W; online/offline; templates) as well as 3rd party extensions)
  • 153. Deployment Components Update Manager Components: 1. Update Manager Server + DB 2. Update Manager VI Client Plug-in 3. Update Manager Download Service (Diagram: vCenter Server, VI Client and the Update Manager Server connected to the virtualized infrastructure and to external patch feeds)
  • 154. New Features in 4.1 Update Manager now provides management of host upgrade packages. Provisioning, patching, and upgrade support for third-party modules. Offline bundles. Recalled patches Enhanced cluster operation. Better handling of low bandwidth and high latency network PowerCLI Better support for virtual vCenter
  • 155. Notifications As we have already seen with the notification Schedule, Update Manager 4.1 contacts VMware at regular intervals to download notifications about patch recalls, new fixes and alerts. If patches with problems/potential issues are released, these patches are recalled in the metadata and VUM marks them as recalled. If you try to install a recalled patch, Update Manager notifies you that the patch is recalled and does not install it on the host. If you have already installed such a patch, VUM notifies you that the recalled patch is installed on certain hosts, but does not remove the recalled patch from the host. Update Manager also deletes all the recalled patches from the Update Manager patch repository. When a patch fixing the problem is released, Update Manager 4.1 downloads the new patch and prompts you to install it.
  • 158. Notifications Alarms posted for recalled and fixed patches Recalled patches are represented by a Flag
  • 159. VUM 4.1 Feature - Notification Check Schedule By default Update Manager checks for notifications about patch recalls, patch fixes and alerts at certain time intervals. Edit Notifications to define the Frequency (hourly, daily, weekly, monthly), the Start time (minutes after the hour), the Interval and the email address of who to notify for recalled patches
  • 160. VUM 4.1 Feature - ESX Host/Cluster Settings When Remediating objects in a cluster with Distributed Power Management (DPM), High Availability (HA), and Fault Tolerance (FT) you should temporarily disable these features for the entire cluster. VUM does not remediate hosts on which these features are enabled. When the update completes, VUM restores these features These settings become the default failure response. You can specify different settings when you configure individual remediation tasks.
  • 161. VUM 4.1 Feature - ESX Host/Cluster Settings Update Manager can not remediate hosts where VMs have connected CD/DVD drives. CD/DVD drives that are connected to the VMs on a host might prevent the host from entering maintenance mode and interrupt remediation. Select Temporarily disable any CD-ROMs that may prevent a host from entering maintenance mode.
  • 162. Baselines and Groups Baselines might be upgrade, extension or patch baselines. Baselines contain a collection of one or more patches, service packs and bug fixes, extensions or upgrades. Baseline groups are assembled from existing baselines and might contain one upgrade baseline per type and one or more patch and extension baselines, or a combination of multiple patch and extension baselines. Preconfigured Baselines Hosts – 2 Baselines VM/VA – 6 Baselines
  • 163. Baselines and Groups Update Manager 4.1 introduces a new Host Extension Baseline Host Extension baselines contain additional software for ESX/ESXi hosts. This additional software might be VMware software or third-party software.
  • 164. Patch Download Settings Update Manager can download patches and extensions either from the Internet (vmware.com) or from a shared repository. A new feature of Update Manager 4.1 allows you to import both VMware and Third-party patches manually from a ZIP file, called an Offline Bundle. You download these patches from the Internet or copy them from a media drive, and then save them as offline bundle ZIP files on a local drive. Use Import Patches to upload them to the Update Manager Repository
  • 165. Patch Download Settings Click Import Patches at the bottom of the Patch Download Sources pane. Browse to locate the ZIP file containing the patches you want to import in the Update Manager patch repository.
  • 166. Patch Download Settings The patches are successfully imported into the Update Manager Patch Repository. Use the Search box to filter, e.g. ThirdParty. Right mouse click a patch and select Show Patch Detail
  • 167. VUM 4.1 Feature - Host Upgrade Releases You can upgrade the hosts in your environment using Host Upgrade Release Baselines, which is a new feature of Update Manager 4.1. This feature facilitates faster remediation of hosts by having the Upgrade Release media already uploaded to the VUM Repository. Previously, the media had to be uploaded for each remediation. To create a Host Upgrade Release Baseline, download the host upgrade files from vmware.com and then upload them to the Update Manager Repository. Each upgrade file that you upload contains information about the target version to which it will upgrade the host. Update Manager distinguishes the target release versions and combines the uploaded Host Upgrade files into Host Upgrade Releases. A host upgrade release is a combination of host upgrade files, which allows you to upgrade hosts to a particular release.
  • 168. VUM 4.1 Feature - Host Upgrade Releases You cannot delete a Host Upgrade Release if it is included in a baseline. First delete any Baselines that have the Host Upgrade Release included. Update Manager 4.1 supports upgrades from versions ESX 3.0.x and later as well as ESXi 3.5 and later to versions ESX 4.0.x and ESX 4.1. The remediation from ESX 4.0 to ESX 4.0.x is a patching operation, while the remediation from ESX 4.0.x to ESX 4.1 is considered an upgrade.
  • 169. VUM 4.1 Feature - Host Upgrade Releases The Upgrade files that you upload are ISO or ZIP files. The file type depends on the host type, host version and on the upgrade that you want to perform. The following table lists the types of the upgrade files that you must upload for upgrading the ESX/ESXi hosts in your environment.
  • 170. VUM 4.1 Feature - Host Upgrade Releases Depending on the files that you upload, host upgrade releases can be partial or complete. Partial upgrade releases are host upgrade releases that do not contain all of the upgrade files required for an upgrade of both the ESX and ESXi hosts. Complete upgrade releases are host upgrade releases that contain all of the upgrade files required for an upgrade of both the ESX and ESXi hosts. To upgrade all of the ESX/ESXi hosts in your vSphere environment to version 4.1, you must upload all of the files required for this upgrade (three ZIP files and one ISO file): esx-DVD-4.1.0-build_number.iso for ESX 3.x hosts upgrade-from-ESXi3.5-to-4.1.0.build_number.zip for ESXi 3.x hosts upgrade-from-ESX-4.0-to-4.1.0-0.0.build_number-release.zip for ESX 4.0.x hosts upgrade-from-ESXi4.0-to-4.1.0-0.0.build_number-release.zip for ESXi 4.0.x hosts
  • 171. VUM 4.1 Feature - Host Upgrade Releases You can upgrade multiple ESX/ESXi hosts of different versions simultaneously if you import a complete release bundle. You import and manage host upgrade files from the Host Upgrade Releases tab of the Update Manager Administration view.
  • 173. VUM 4.1 Feature - Host Upgrade Releases Host Upgrade Releases are stored in the <patchStore> location specified in the vci-integrity.xml file in the host_upgrade_packages folder. We can use the Update Manager Database View called VUMV_HOST_UPGRADES to locate them.
  • 174. Patch Repository Patch and extension metadata is kept in the Update Manager Patch Repository. You can use the repository to manage patches and extensions, check on new patches and extensions, view patch and extension details, view in which baseline a patch or an extension is included, view the recalled patches and import patches.
  • 175. Import Offline Patch to Repository From the Patch Repository you can include available, recently downloaded patches and extensions in a baseline you select. Instead of using a shared repository or the Internet as a patch download source, you can import patches manually by using an offline bundle.
  • 181. Converter 4.2 (not 4.1) Physical to VM conversion support for Linux sources including: Red Hat Enterprise Linux 2.1, 3.0, 4.0, and 5.0 SUSE Linux Enterprise Server 8.0, 9.0, 10.0, and 11.0 Ubuntu 5.x, 6.x, 7.x, and 8.x Hot cloning improvements to clone any incremental changes to physical machine during the P2V conversion process Support for converting new third-party image formats including Parallels Desktop VMs, newer versions of Symantec, Acronis, and StorageCraft Workflow automation enhancements: automatic source shutdown, automatic start-up of the destination VM as well as shutting down one or more services at the source and starting up selected services at the destination Destination disk selection and the ability to specify how the volumes are laid out in the new destination VM Destination VM configuration, including CPU, memory, and disk controller type Support for importing powered-off Microsoft Hyper-V R1 and Hyper-V R2 VMs Support for importing Windows 7 sources Ability to throttle the data transfer from source to destination based on network bandwidth or CPU
  • 182. Converter – Hyper-V Import Microsoft Hyper-V Import Hyper-V can be compared to VMware Server Runs on top of an operating system By default only manageable locally Up to now import went through a P2V inside of the VM Converter now imports VMs from Hyper-V as a V2V Collects information from the Hyper-V server about its VMs Does not go through Hyper-V administration tools Uses default Windows methods to access the VM Requirements Converter needs administrator credentials to import a VM Hyper-V must be able to create a network connection to the destination ESX host The VM to be imported must be powered off The VM’s OS must be a guest OS supported by vSphere
  • 184. Support Info VMware Converter plug-in. vSphere 4.1 and its updates/patches are the last releases for the VMware Converter plug-in for vSphere Client. We will continue to update and support the free Converter Standalone product VMware Guided Consolidation. vSphere 4.1 and its update/patch are the last major releases for VMware Guided Consolidation. VMware Update Manager: Guest OS patching Update Manager 4.1 and its update are the last releases to support scanning and remediation of patches for Windows and Linux guest OS. The ability to perform VM operations such as upgrade of VMware Tools and VM hardware will continue to be supported and enhanced. VMware Consolidated Backup 1.5 U2 VMware has extended the end of availability timeline for VCB and added VCB support for vSphere 4.1. VMware supports VCB 1.5 U2 for vSphere 4.1 and its update/patch through the end of their lifecycles. VMware Host Update utility No longer used. Use Update Manager or CLI to patch ESX vSphere Client no longer bundled with ESX/ESXi Reduced size by around 160 MB.
  • 185. Support Info VMI Paravirtualized Guest OS support. vSphere 4.1 is the last release to support the VMI guest OS paravirtualization interface. For information about migrating VMs that are enabled for VMI so that they can run on future vSphere releases, see Knowledge Base article 1013842. vSphere Web Access. Support is now on best effort basis. Linux Guest OS Customization. vSphere 4.1 is the last release to support customization for these Linux guest OS: RedHat Enterprise Linux (AS/ES) 2.1, RedHat Desktop 3, RedHat Enterprise Linux (AS/ES) 3.0, SUSE Linux Enterprise Server 8 Ubuntu 8.04, Ubuntu 8.10, Debian 4.0, Debian 5.0 Microsoft Clustering with Windows 2000 is not supported in vSphere 4.1. See the Microsoft Website for additional information. Likely due to MSCS with Win2K EOL. Need to double confirm.
  • 186. vCenter MUST be hosted on 64-bit Windows OS 32-bit OS NOT supported as a host OS with vCenter vSphere 4.1 Why the change? Scalability is restricted by the x86 32 bit virtual address space and moving to 64 bit will eliminate this problem Reduces dev and QA cycles and resources (faster time to market) Two Options vCenter in a VM running 64-bit Windows OS vCenter install on a 64-bit Windows OS Best Practice – Use Option 1 http://kb.vmware.com/kb/1021635 vCenter – Migration to 64-bit
  • 187. Data Migration Tool - What is backed up? vCenter LDAP data Configuration Port settings HTTP/S ports Heartbeat port Web services HTTP/S ports LDAP / LDAP SSL ports Certificates SSL folder Database Bundled SQL Server Express only Install Data License folder
  • 188. Data Migration Tool - Steps to Backup the Configuration Example of the start of the backup.bat command running
  • 189. Compatibility vSphere Client compatibility Can use the “same” client to access 4.1, 4.0 and 3.5 vCenter Linked Mode vCenter 4.1 and 4.0 can co-exist in Linked Mode After both versions of vSphere Client are installed, you can access vCenter linked objects with either client. For Linked Mode environments with vCenter 4.0 and vCenter 4.1, you must have vSphere Client 4.0 Update 1 and vSphere Client 4.1. MS SQL Server Unchanged. 4.1, 4.0 U2, 4.0 U1 and 4.0 have identical support 32 bit DB is also supported.
  • 190. Compatibility vCenter 4.0 does not support ESX 4.1 Upgrade vCenter before upgrading ESX vCenter 4.1 does not support ESX 2.5 ESX 2.5 has reached the limited/non support status vCenter 4.1 adds support for ESX 3.0.3 U1 Storage: No change in VMFS format Network Distributed Switch 4.1 needs ESX 4.1 Quiz: how to upgrade?
  • 191. Upgrading Distributed Switch Source: Manual. ESX Configuration Guide, see “Upgrade a vDS to a Newer Version”
  • 192. Compatibility View Needs to be upgraded to 4.5 View 4.0 Composer is a 32-bit application, while vCenter 4.1 is 64 bit. SRM Needs to be upgraded to SRM 4.1 SRM 4.1 supports vSphere 4.0 U1, 4.0 U2 and 3.5 U5 SRM 4.1 needs vCenter 4.1 SRM 4.1 needs a 64 bit OS. SRM 4.1 adds support for Win08 R2 CapacityIQ CapacityIQ 1.0.3 (the current shipping release) is not known to have any issues with VC 4.1 but you need to use a “–NoVersionCheck” flag when registering CIQ with it. CapacityIQ 1.0.4 will be released soon to address that.
  • 193. Compatibility: Win08 R2 This is for R2, not R1 This is to run the VMware products on Windows, not to host Win08 as a Guest OS Win08 as a guest is supported on 4.0 Minimum vSphere product versions to run on Windows 2008 R2: vSphere Client 4.1 vCenter 4.1 Guest OS Customization for 4.0 and 4.1 vCenter Update Manager as its server. It is not yet supported for patching Win08 R2. Update Manager also does not patch Win7 vCenter Converter VMware Orchestrator vCO: Client and Server 4.1 SRM 4.1
  • 194. Known Issues Full list: https://www.vmware.com/support/vsphere4/doc/vsp_esxi41_vc41_rel_notes.html#sdk IPv6 Disabled by Default when installing ESXi 4.1. Hardware iSCSI. Broadcom Hardware iSCSI does not support Jumbo Frames or IPv6. Dependent hardware iSCSI does not support iSCSI access to the same LUN when a host uses dependent and independent hardware iSCSI adapters simultaneously. VM MAC address conflicts Each vCenter system has a vCenter instance ID. This ID is a number between 0 and 63 that is randomly generated at installation time but can be reconfigured after installation. vCenter uses the vCenter instance ID to generate MAC addresses and UUIDs for VMs. If two vCenter systems have the same vCenter instance ID, they might generate identical MAC addresses for VMs. This can cause conflicts if the VMs are on the same network, leading to packet loss and other problems.
  • 195. Thank You I’m sure you are tired too 
  • 196. Useful references http://vsphere-land.com/news/tidbits-on-the-new-vsphere-41-release.html http://www.petri.co.il/virtualization.htm http://www.petri.co.il/vmware-esxi4-console-secret-commands.htm http://www.petri.co.il/vmware-data-recovery-backup-and-restore.htm http://www.delltechcenter.com/page/VMware+Tech http://www.kendrickcoleman.com/index.php?/Tech-Blog/vm-advanced-iso-free-tools-for-advanced-tasks.html http://www.ntpro.nl/blog/archives/1461-Storage-Protocol-Choices-Storage-Best-Practices-for-vSphere.html http://www.ntpro.nl/blog/archives/1539-vSphere-4.1-Virtual-Serial-Port-Concentrator.html http://www.virtuallyghetto.com/2010/07/vsphere-41-is-gift-that-keeps-on-giving.html http://www.virtuallyghetto.com/2010/07/script-automate-vaai-configurations-in.html http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1516821,00.html http://vmware-land.com/esxcfg-help.html http://virtualizationreview.com/blogs/everyday-virtualization/2010/07/esxi-hosts-ad-integrated-security-gotcha.aspx http://www.MS.com/licensing/about-licensing/client-access-license.aspx#tab=2 http://www.MSvolumelicensing.com/userights/ProductPage.aspx?pid=348