The Next Generation of Microsoft Virtualization With Windows Server 2012
VCP FAQ
Virtual Machine Maximums
Table 1 contains configuration maximums related to virtual machines.
SCSI controllers per virtual machine 4
Devices per SCSI controller 15
Devices per virtual machine (Windows) 60
Devices per virtual machine (Linux) 60
Size of SCSI disk 2TB
Number of virtual CPUs per virtual machine 4
Size of RAM per virtual machine 16384MB
Number of NICs per virtual machine 4
Number of IDE devices per virtual machine 4
Number of floppy devices per virtual machine 2
Number of parallel ports per virtual machine 2
Number of serial ports per virtual machine 2
Size of a virtual machine swap file 16384MB
Number of virtual PCI devices (NICs, SCSI controllers, audio devices on VMware Server only, and video cards; exactly one video card is present in every virtual machine) 6
Number of remote consoles to a virtual machine 10
Storage Maximums
Table 2 contains configuration maximums related to ESX Server host storage.
Block size (MB) 8
Raw Device Mapping size (TB) 2
Simultaneous power-ons of virtual machines on different hosts against a single VMFS volume (measured in number of hosts) 32
Number of hosts per virtual cluster 32
Number of volumes configured per server 256
Number of extents per volume 32
VMFS-2
Volume size 2TB x number of extents [1]
File size (block size = 1MB) 456GB
File size (block size = 8MB) 2TB
File size (block size = 64MB) 27TB
File size (block size = 256MB) 64TB
Number of files per volume 256 + (64 x number of extents)
VMFS-3
Volume size (block size = 1MB) ~16TB - 4GB [2]
Volume size (block size = 2MB) ~32TB - 8GB
Volume size (block size = 4MB) ~64TB - 16GB
Volume size (block size = 8MB) 64TB
File size (block size = 1MB) 256GB
File size (block size = 8MB) 2TB
Number of files per directory unlimited
Number of directories per volume unlimited
Number of files per volume unlimited
Fibre Channel
LUNs per server 256
SCSI controllers per server 16
Devices per SCSI controller 16
Number of paths to a LUN 32
LUNs concurrently opened by all virtual machines 256
Maximum LUN ID 255
NFS
LUNs per server 256
SCSI controllers per server 2
LUNs concurrently opened by all virtual machines 256
Hardware & software iSCSI
LUNs per server 256
SCSI controllers per server 2
[1] Minimum = 100MB
[2] ~ denotes an approximate value.
Compute Maximums
This table contains configuration maximums related to ESX Server host compute resources.
Number of virtual CPUs per server 128
Number of cores per server 32
Number of (hyper threaded) logical processors per server 32
Number of virtual CPUs per core 8
Memory Maximums
This table contains configuration maximums related to ESX Server host memory.
Size of RAM per server 64GB
RAM allocated to service console 800MB
Networking Maximums
This table contains configuration maximums related to ESX Server host networking.
Physical NICs
Number of e100 NICs 26
Number of e1000 NICs 32
Number of Broadcom NICs 20
Advanced, physical traits
Number of port groups 512
Number of NICs in a team 32
Number of Ethernet ports 32
Virtual NICs/switches
Number of virtual NICs per virtual switch 1016
Number of virtual switches 127
VirtualCenter Maximums
This table contains configuration maximums related to VirtualCenter.
Number of virtual machines (for management server scalability) 1500
Number of hosts per DRS cluster 32
Number of hosts per HA cluster 16
Number of hosts per Virtual Center server 100
VMware Infrastructure Introduction
VMware Infrastructure is a full infrastructure virtualization suite that provides comprehensive virtualization, management, resource optimization, application availability, and operational automation capabilities in an integrated offering. VMware Infrastructure virtualizes and aggregates the underlying physical hardware resources across multiple systems and provides pools of virtual resources to the datacenter in the virtual environment.
In addition, VMware Infrastructure provides a set of distributed services that enable fine-grained, policy-driven resource allocation, high availability, and consolidated backup of the entire virtual datacenter. These distributed services enable an IT organization to establish and meet its production Service Level Agreements with its customers in a cost-effective manner.
VMware Infrastructure includes the following components shown in Figure 1‐1:
VMware ESX Server: A robust, production-proven virtualization layer run on physical servers that abstracts processor, memory, storage, and networking resources into multiple virtual machines.
VirtualCenter Management Server (VirtualCenter Server): The central point for configuring, provisioning, and managing virtualized IT environments.
Virtual Infrastructure Client (VI Client): An interface that allows users to connect remotely to the VirtualCenter Server or individual ESX Servers from any Windows PC.
Virtual Infrastructure Web Access (VI Web Access): A Web interface that allows virtual machine management and access to remote consoles.
VMware Virtual Machine File System (VMFS): A high-performance cluster file system for ESX Server virtual machines.
VMware Virtual Symmetric Multi-Processing (SMP): A feature that enables a single virtual machine to use multiple physical processors simultaneously.
VMware VMotion: A feature that enables the live migration of running virtual machines from one physical server to another with zero downtime, continuous service availability, and complete transaction integrity.
VMware HA: A feature that provides easy-to-use, cost-effective high availability for applications running in virtual machines. In the event of server failure, affected virtual machines are automatically restarted on other production servers that have spare capacity.
VMware Distributed Resource Scheduler (DRS): A feature that allocates and balances computing capacity dynamically across collections of hardware resources for virtual machines.
VMware Consolidated Backup (Consolidated Backup): A feature that provides an easy-to-use, centralized facility for agent-free backup of virtual machines. It simplifies backup administration and reduces the load on ESX Servers.
VMware Infrastructure SDK: A feature that provides a standard interface for VMware and third-party solutions to access VMware Infrastructure.
Cluster
A number of similarly configured x86 servers can be grouped together with connections to the same network and storage subsystems to provide an aggregate set of resources in the virtual environment, called a cluster.
Storage Networks and Arrays
Fibre Channel SAN arrays, iSCSI SAN arrays, and NAS arrays are widely used storage technologies supported by VMware Infrastructure to meet different datacenter storage needs. Sharing the storage arrays among groups of servers by connecting them via storage area networks allows aggregation of the storage resources and provides more flexibility in provisioning them to virtual machines.
Management Server
The VirtualCenter Management Server provides a convenient single point of control for the datacenter. It runs on top of Windows Server 2003 to provide many essential datacenter services such as access control, performance monitoring, and configuration. It unifies the resources from the individual computing servers to be shared among virtual machines in the entire datacenter. It accomplishes this by managing the assignment of virtual machines to the computing servers and the assignment of resources to the virtual machines within a given computing server, based on the policies set by the system administrator.
Virtual Datacenter Architecture
VMware Infrastructure virtualizes the entire IT infrastructure including servers, storage, and networks. It
aggregates these heterogeneous resources and presents a simple and uniform set of elements in the virtual
environment. With VMware Infrastructure, IT resources can be managed like a shared utility and
dynamically provisioned to different business units and projects without worrying about the underlying
hardware differences and limitations.
Resources are provisioned to virtual machines based on the policies set by the system administrator who owns the resources. The policies can reserve a set of resources for a particular virtual machine to guarantee its performance. The policies can also prioritize and set a variable portion of the total resources for each virtual machine. A virtual machine is prevented from powering on (and consuming resources) if doing so would violate the resource allocation policies.
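The admission-control idea can be sketched in a few lines; the function and its parameters below are hypothetical illustrations, not the actual VMware implementation:

```python
def can_power_on(vm_reservation_mb, host_capacity_mb, running_reservations_mb):
    """A VM may power on only if its reservation still fits in the host's capacity."""
    committed = sum(running_reservations_mb)
    return committed + vm_reservation_mb <= host_capacity_mb

# Host with 4096 MB of capacity; two running VMs reserve 1024 MB each.
assert can_power_on(1024, 4096, [1024, 1024])       # 3072 <= 4096: allowed
assert not can_power_on(4096, 4096, [1024, 1024])   # 6144 > 4096: refused
```

The same check applies to CPU reservations; only the units change.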
Hosts, Clusters, and Resource Pools
Hosts, clusters, and resource pools provide flexible and dynamic ways to organize the aggregated computing and memory resources in the virtual environment and link them back to the underlying physical resources.
A cluster acts, and can be managed, much like a host. It represents the aggregate computing and memory resources of a group of physical x86 servers sharing the same network and storage arrays. For example, if the group contains eight servers, each with four dual-core CPUs running at 4GHz and 32GB of memory, the cluster has 256GHz of computing power and 256GB of memory available for the running virtual machines assigned to it.
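As a quick sanity check, the cluster arithmetic above can be expressed in a short Python sketch (the helper and its field names are illustrative only):

```python
# Aggregate capacity of a cluster: the sum of per-host CPU and memory.
# Numbers mirror the example: 8 hosts, 4 dual-core CPUs at 4 GHz, 32 GB RAM each.
def cluster_capacity(hosts):
    """Return (total GHz, total GB) across all hosts in the cluster."""
    ghz = sum(h["cpus"] * h["cores_per_cpu"] * h["ghz_per_core"] for h in hosts)
    gb = sum(h["ram_gb"] for h in hosts)
    return ghz, gb

hosts = [{"cpus": 4, "cores_per_cpu": 2, "ghz_per_core": 4, "ram_gb": 32}] * 8
print(cluster_capacity(hosts))  # (256, 256)
```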
Resource pools are partitions of computing and memory resources from a single host or a cluster. Any
resource pool can be partitioned into smaller resource pools to further divide and assign resources to
different groups or for different purposes. In other words, resource pools can be hierarchical and nested.
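Hierarchical partitioning can be sketched with a simple capacity model; the class and method names below are illustrative, not a VMware API:

```python
class ResourcePool:
    """Hierarchical partition of CPU (MHz) and memory (MB) capacity."""
    def __init__(self, name, cpu_mhz, mem_mb):
        self.name, self.cpu_mhz, self.mem_mb = name, cpu_mhz, mem_mb
        self.children = []

    def partition(self, name, cpu_mhz, mem_mb):
        """Carve a child pool out of this pool's remaining capacity."""
        used_cpu = sum(c.cpu_mhz for c in self.children)
        used_mem = sum(c.mem_mb for c in self.children)
        if used_cpu + cpu_mhz > self.cpu_mhz or used_mem + mem_mb > self.mem_mb:
            raise ValueError("child pools cannot exceed the parent's capacity")
        child = ResourcePool(name, cpu_mhz, mem_mb)
        self.children.append(child)
        return child

cluster = ResourcePool("cluster", cpu_mhz=256_000, mem_mb=262_144)
dev = cluster.partition("dev", 64_000, 65_536)
dev.partition("dev-web", 32_000, 32_768)  # pools nest to any depth
```

Each level only sees its parent's slice, which is what lets different groups share one cluster safely.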
VMware VMotion
VMware VMotion, DRS, and HA are distributed services that enable efficient and automated resource
management and high virtual machine availability.
Virtual machines run on, and consume resources from, ESX Server. VMotion enables the migration of running virtual machines from one physical server to another without service interruption. This allows virtual machines to move from a heavily loaded server to a lightly loaded one. The effect is a more efficient assignment of resources. With VMotion, resources can be dynamically reallocated to virtual machines across physical servers.
VMware DRS
VMware DRS aids in resource control and management capability in the virtual datacenter. A cluster can
be viewed as an aggregation of the computing and memory resources of the underlying physical hosts put
together in a single pool. Virtual machines can be assigned to that pool. DRS monitors the workload of the
running virtual machines and the resource utilization of the hosts to assign resources.
Using VMotion and an intelligent resource scheduler, VMware DRS automates the task of assigning virtual machines to servers within the cluster to use the computing and memory resources of those servers. DRS does the calculation and automates the pairing. If a new physical server is made available, DRS automatically redistributes the virtual machines using VMotion to balance the workloads. If a physical server must be taken down for any reason, DRS automatically reassigns its virtual machines to other servers.
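A toy version of the rebalancing decision might look like the sketch below. The real DRS algorithm is far more sophisticated, so treat this purely as an illustration of the idea:

```python
def pick_migration(hosts):
    """Suggest moving one VM from the busiest host to the least busy one.

    hosts: {host_name: [vm_load, ...]} where each load is a CPU share (e.g. MHz).
    Returns (vm_load, src, dst), or None if no single move narrows the imbalance.
    """
    load = {h: sum(vms) for h, vms in hosts.items()}
    src = max(load, key=load.get)
    dst = min(load, key=load.get)
    gap = load[src] - load[dst]
    # Moving a VM of load v changes the gap to |gap - 2v|, an improvement iff v < gap.
    candidates = [v for v in hosts[src] if v < gap]
    if not candidates:
        return None  # already as balanced as one migration can make it
    return max(candidates), src, dst
```

For example, with `{"esx1": [400, 300, 300], "esx2": [200]}` the sketch proposes moving the 400-unit VM from esx1 to esx2.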
VMware HA
VMware HA offers a simple, low-cost high availability alternative to application clustering. It enables quick, automatic restart of virtual machines on a different physical server within a cluster if the hosting server fails. All applications within the virtual machines gain the high availability benefit, not just the single application that traditional application clustering would protect.
HA monitors all physical hosts in a cluster and detects host failures. An agent placed on each physical host
maintains a heartbeat with the other hosts in the resource pool, and loss of a heartbeat initiates the process
of restarting all affected virtual machines on other hosts. HA ensures that sufficient resources are available
in the cluster at all times to restart virtual machines on different physical hosts in the event of host failure.
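The heartbeat-loss detection can be sketched as below; the 15-second timeout is an assumed value for illustration, not the actual HA setting:

```python
def failed_hosts(last_heartbeat, now, timeout=15.0):
    """Hosts whose last heartbeat is older than `timeout` seconds are presumed failed."""
    return [h for h, t in last_heartbeat.items() if now - t > timeout]

beats = {"esx1": 100.0, "esx2": 88.0}
assert failed_hosts(beats, now=105.0) == ["esx2"]  # esx2 silent for 17 s
```

In the real product, a failed host's virtual machines would then be restarted on the surviving hosts with spare capacity.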
Network Architecture
A virtual switch works like a layer 2 physical switch. Each server has its own virtual switches. On one side
of the virtual switch are port groups that connect to virtual machines. On the other side are uplink
connections to physical Ethernet adapters on the server where the virtual switch resides. Virtual machines
connect to the outside world through the physical Ethernet adapters that are connected to the virtual switch
uplinks.
A virtual switch can connect its uplinks to more than one physical Ethernet adapter to enable NIC teaming. With NIC teaming, two or more physical adapters can share the traffic load or provide passive failover in the event of a physical adapter hardware failure or a network outage.
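Passive failover can be illustrated with a short sketch, assuming an ordered team in which the first healthy adapter carries the traffic (adapter names are examples):

```python
def active_uplink(uplinks):
    """Return the adapter currently carrying traffic under passive failover.

    uplinks: ordered list of (adapter_name, is_up). Hypothetical model only.
    """
    for name, up in uplinks:
        if up:
            return name
    return None  # whole team down: the vSwitch loses outside connectivity

assert active_uplink([("vmnic0", True), ("vmnic1", True)]) == "vmnic0"
assert active_uplink([("vmnic0", False), ("vmnic1", True)]) == "vmnic1"
```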
The port group is a concept unique to the virtual environment. A port group is a mechanism for setting policies that govern the network connected to it. A vSwitch can have multiple port groups. Instead of connecting to a particular port on the vSwitch, a virtual machine connects its vNIC to a port group. All virtual machines that connect to the same port group belong to the same network inside the virtual environment, even if they are on different physical servers.
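A small model of the port-group idea, showing that network membership is determined by the port group rather than the host (all names below are hypothetical):

```python
# Each connection records which host a VM runs on and which port group it joins.
connections = [
    ("vm01", "esxA", "Production"),
    ("vm02", "esxB", "Production"),  # different host, same port group
    ("vm03", "esxA", "Test"),
]

def same_network(vm1, vm2, conns):
    """Two VMs share a network iff their vNICs join the same port group."""
    pg = {vm: group for vm, _host, group in conns}
    return pg[vm1] == pg[vm2]

assert same_network("vm01", "vm02", connections)      # same L2 network across hosts
assert not same_network("vm01", "vm03", connections)  # same host, different networks
```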
Port groups can be configured to enforce a number of policies that provide enhanced
networking security, network segmentation, better performance, higher availability,
and traffic management:
Layer 2 security options: Control what the vNICs in a virtual machine can do by restricting promiscuous mode, MAC address changes, and forged transmits.
VLAN support: Allows virtual networks to join physical VLANs or support QoS policies.
Traffic shaping: Defines average bandwidth, peak bandwidth, and burst size. These policies can be set to improve traffic management.
NIC teaming: Sets the NIC teaming policy for an individual port group or network, to share traffic load or provide failover in case of hardware failure.
Storage Architecture
The VMware Infrastructure storage architecture consists of layers of abstraction that hide and manage the complexity and differences among physical storage subsystems.
To the applications and guest operating systems inside each virtual machine, the storage subsystem appears as a simple virtual BusLogic or LSI Logic SCSI host bus adapter connected to one or more virtual SCSI disks.
The virtual SCSI disks are provisioned from datastore elements in the datacenter. A datastore is like a
storage appliance that serves up storage space for many virtual machines across multiple physical hosts.
The datastore provides a simple model to allocate storage space to the individual virtual machines without
exposing them to the complexity of the variety of physical storage technologies available, such as Fibre
Channel SAN, iSCSI SAN, direct attached storage, and NAS.
A virtual machine is stored as a set of files in a directory in the datastore. A virtual disk inside each virtual machine is one or more files in that directory. As a result, you can operate on a virtual disk (copy, move, back up, and so on) just like a file. New virtual disks can be hot-added to a virtual machine without powering it down. In that case, either a virtual disk file (.vmdk) is created in VMFS to provide new storage for the hot-added virtual disk, or an existing virtual disk file is associated with the virtual machine.
VMFS is a clustered file system that leverages shared storage to allow multiple physical hosts to read and write to the same storage simultaneously. VMFS provides on-disk locking to ensure that the same virtual machine is not powered on by multiple servers at the same time. If a physical host fails, the on-disk lock for each of its virtual machines is released so that those virtual machines can be restarted on other physical hosts.
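The locking behavior can be sketched as follows. This in-memory model only illustrates the idea, since real VMFS locks live on the shared disk itself:

```python
locks = {}  # vm_name -> host that holds the lock

def try_power_on(vm, host):
    """Return True if `host` acquired the VM's lock; False if another host holds it."""
    owner = locks.get(vm)
    if owner is not None and owner != host:
        return False
    locks[vm] = host
    return True

def release_host_locks(failed_host):
    """On host failure, release its locks so its VMs can restart elsewhere."""
    for vm in [v for v, h in locks.items() if h == failed_host]:
        del locks[vm]

assert try_power_on("vm01", "esx1")
assert not try_power_on("vm01", "esx2")  # lock held by esx1: refused
release_host_locks("esx1")
assert try_power_on("vm01", "esx2")      # lock freed after esx1 failed
```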
VMFS also features enterprise-class crash consistency and recovery mechanisms, such as distributed journaling, a crash-consistent virtual machine I/O path, and machine state snapshots. These mechanisms aid quick root-cause analysis and recovery from virtual machine, physical host, and storage subsystem failures.
VMFS also supports raw device mapping (RDM). RDM provides a mechanism for a virtual machine to
have direct access to a LUN on the physical storage subsystem (Fibre Channel or iSCSI only). RDM is
useful for supporting two typical types of applications:
SAN snapshot or other layered applications that run in the virtual machines. RDM better enables scalable
backup offloading systems using features inherent to the SAN.
Any use of Microsoft Clustering Services (MSCS) that spans physical hosts:
Virtual-to-virtual clusters as well as physical‐to‐virtual clusters. Cluster data and quorum disks should be
configured as RDMs rather than as files on a shared VMFS.
VMware Consolidated Backup
VMware Infrastructure's storage architecture enables a simple virtual machine backup solution: VMware Consolidated Backup. Consolidated Backup provides a centralized facility for LAN-free backup of virtual machines.
Consolidated Backup works in conjunction with a third-party backup agent residing on a separate backup
proxy server (not on the server running ESX Server) but does not require an agent inside the virtual
machines.
The third-party backup agent manages the backup schedule. It starts Consolidated Backup when it is time to do a backup. When started, Consolidated Backup runs a set of pre-backup scripts to quiesce the virtual disks and take their snapshots. It then runs a set of post-thaw scripts to restore the virtual machines to normal operation. At the same time, it mounts the disk snapshots on the backup proxy server. Finally, the third-party backup agent backs up the files on the mounted snapshots to its backup targets. By taking snapshots of the virtual disks and backing them up through a separate backup proxy server, Consolidated Backup provides a simple, less intrusive, low-overhead backup solution for the virtual environment.
VirtualCenter Management Server
The VirtualCenter Management Server components are user access control, core services, distributed
services, and various interfaces.
User Access Control allows the system administrator to create and manage different levels of access to VirtualCenter for different users.
For example, there might be a user class that manages configuring the physical servers in the datacenter
and there might be a different user class that manages only virtual resources within a particular resource
pool.
Core Services are basic management services for a virtual datacenter. They include services such as:
VM Provisioning: Guides and automates the provisioning of virtual machines.
Host and VM Configuration: Allows the configuration of hosts and virtual machines.
Resource and Virtual Machine Inventory Management: Organizes virtual machines and resources in the virtual environment and facilitates their management.
Statistics and Logging: Logs and reports on the performance and resource utilization statistics of datacenter elements, such as virtual machines, hosts, and clusters.
Alarms and Event Management: Tracks and warns users about potential resource over-utilization or event conditions.
Task Scheduler: Schedules actions such as VMotion to happen at a given time.
Distributed Services are solutions that extend VMware Infrastructure's capabilities to the next level, such as VMware DRS, VMware HA, and VMware VMotion. Distributed Services allow these solutions to be configured and managed centrally from the VirtualCenter Management Server.
VirtualCenter Server has four key interfaces:
ESX Server management: Interfaces with the VirtualCenter agent to manage each physical server in the datacenter.
VMware Infrastructure API: Interfaces with VMware management clients and third-party solutions.
Database interface: Connects to Oracle or Microsoft SQL Server to store information such as virtual machine configurations, host configurations, resources and virtual machine inventory, performance statistics, events, alarms, user permissions, and roles.
Active Directory interface: Connects to Active Directory to obtain user access control information.
Communication Between VirtualCenter and ESX Server
VirtualCenter communicates with the ESX Server's host agent through the VMware Infrastructure API (VI API). When a host is first added to VirtualCenter, VirtualCenter sends a VirtualCenter agent to run on the host. That agent communicates with the host agent.
The VirtualCenter agent acts as a mini VirtualCenter Server to perform the following functions:
Relays and enforces resource allocation decisions made in VirtualCenter, including those sent by the DRS engine
Passes virtual machine provisioning and configuration change commands to the host agent
Passes host configuration change commands to the host agent
Collects performance statistics, alarms, and error conditions from the host agent and sends them to the VirtualCenter Management Server
Accessing the Virtual Datacenter
Users can manage the VMware Infrastructure datacenter or access the virtual machine console through three different means: the VI Client, Web Access through a Web browser, or terminal services (such as Windows Terminal Services or Xterm). Hosts should be accessed directly only by physical host administrators in special circumstances. All relevant functionality that can be performed on the host can also be performed in VirtualCenter Server.
The VI Client accesses VirtualCenter through the VMware Infrastructure API. After the user is authenticated, a session starts in VirtualCenter, and the user sees the resources and virtual machines assigned to that user. For virtual machine console access, the VI Client first gets the virtual machine location from VirtualCenter through the API. It then connects to the appropriate host and provides access to the virtual machine console.
Users can also access the VirtualCenter Management Server through a Web browser by first pointing the browser to an Apache Tomcat Server set up by the VirtualCenter Management Server. The Apache Tomcat Server mediates the communication between the browser and VirtualCenter through the VMware Infrastructure API.
To access virtual machine consoles through the Web browser, users can make use of the bookmark that is created by VirtualCenter Server. The bookmark first points to VI Web Access. VI Web Access resolves the physical location of the virtual machine and redirects the Web browser to the ESX Server where the virtual machine resides.
If the virtual machine is running and the user knows its IP address, the user can also access the virtual machine console using standard tools, such as Windows Terminal Services or Xterm.
Conclusion
VMware Infrastructure provides a simple architecture in the virtual environment that allows companies to manage computing, storage, and networking resources without worrying about the underlying physical hardware. The VI architecture allows enterprises to create and configure their datacenters and reallocate resources to different priorities without the time delay and cost of reconfiguring their physical hardware infrastructure.
With a suite of complementary virtualization and management services, such as VMware VMotion,
VMware DRS, VMware HA, and VMware Consolidated Backup, VMware Infrastructure is the only
product that provides a complete solution rather than a piecemeal approach to building datacenters in the
virtual environment.
Hardware Requirements
VirtualCenter Server hardware must meet the following requirements:
Processor: 2.0GHz or higher Intel or AMD x86 processor. Processor requirements can be higher if your database runs on the same hardware.
Memory: 2GB RAM minimum. RAM requirements can be higher if your database runs on the same hardware.
Disk storage: 560MB minimum, 2GB recommended. You must have 245MB free on the destination drive for installation of the program, and 315MB free on the drive containing your %temp% directory.
MSDE disk requirements: The demonstration database requires up to 2GB of free disk space to decompress the installation archive. However, approximately 1.5GB of these files are deleted after the installation is complete.
Networking: 10/100 Ethernet adapter minimum (Gigabit recommended).
Scalability: A VirtualCenter Server configured with the hardware minimums can support 20 concurrent clients, 50 ESX Server hosts, and over 1000 virtual machines. A dual-processor VirtualCenter Server with 3GB RAM can scale to 50 concurrent client connections, 100 ESX Server hosts, and over 2000 virtual machines.
VirtualCenter Server Software Requirements
The VirtualCenter Server is supported as a service on the 32‐bit versions of these operating systems:
Windows 2000 Server SP4 with Update Rollup 1 (Update Rollup 1 can be downloaded from http://www.microsoft.com/windows2000/server/evaluation/news/bulletins/rollup.mspx)
Windows XP Pro (at any SP level)
Windows 2003 (all releases except 64-bit)
VirtualCenter 2.0 installation is not supported on 64-bit operating systems.
The VirtualCenter installer requires Internet Explorer 5.5 or higher in order to run.
VirtualCenter Database Requirements
VirtualCenter supports the following database formats:
Microsoft SQL Server 2000 (SP 4 only)
Oracle 9iR2, 10gR1 (versions 10.1.0.3 and higher only), and 10gR2
Microsoft MSDE (not supported for production environments)
Each database requires some configuration adjustments in addition to the basic installation.
Virtual Infrastructure Client Requirements
Virtual Infrastructure Client Hardware Requirements
The Virtual Infrastructure Client hardware must meet the following requirements:
Processor: 266MHz or higher Intel or AMD x86 processor (500MHz recommended).
Memory: 256MB RAM minimum, 512MB recommended.
Disk storage: 150MB free disk space required for basic installation. You must have 55MB free on the destination drive for installation of the program, and 100MB free on the drive containing your %temp% directory.
Networking: 10/100 Ethernet adapter (Gigabit recommended).
Virtual Infrastructure Client Software Requirements
The Virtual Infrastructure Client is designed for the 32‐bit versions of these operating systems:
Windows 2000 Pro SP4
Windows 2000 Server SP4
Windows XP Pro (at any SP level)
Windows 2003 (all releases except 64-bit)
The Virtual Infrastructure Client requires the .NET framework 1.1 (included in installation if required).
VirtualCenter VI Web Access Requirements
The VI Web Access client is designed for these browsers:
Windows: Internet Explorer 6.0 or higher, Netscape Navigator 7.0, Mozilla 1.x, or Firefox 1.0.7 and higher.
Linux: Netscape Navigator 7.0 or later, Mozilla 1.x, or Firefox 1.0.7 and higher.
License Server Requirements
This section describes the license server requirements.
License Server Hardware Requirements
The license server hardware must meet the following requirements:
Processor: 266MHz or higher Intel or AMD x86 processor.
Memory: 256MB RAM minimum, 512MB recommended.
Disk storage: 25MB free disk space required for basic installation.
Networking: 10/100 Ethernet adapter (Gigabit recommended).
VMware recommends that you install the license server on the same machine as your VirtualCenter Server
to ensure connectivity.
License Server Software Requirements
The license server software is supported on the 32‐bit versions of the following operating systems:
Windows 2000 Server SP4
Windows XP Pro (at any SP level)
Windows 2003 (all releases except 64-bit)
ESX Server Requirements
This section discusses the minimum and maximum hardware configurations supported by ESX Server
version 3.
Minimum Server Hardware Requirements
You need the following hardware and system resources to install and use ESX Server.
At least two processors:
1500MHz Intel Xeon and later, or AMD Opteron (32-bit mode), for ESX Server
1500MHz Intel Xeon and later, or AMD Opteron (32-bit mode), for Virtual SMP
1500MHz Intel Viiv or AMD A64 x2 dual-core processors
1GB RAM minimum.
One or more Ethernet controllers. Supported controllers include Broadcom NetXtreme 570x Gigabit controllers and Intel PRO/100 adapters.
For best performance and security, use separate Ethernet controllers for the service console and the virtual machines.
A SCSI adapter, Fibre Channel adapter, or internal RAID controller:
Basic SCSI controllers are Adaptec Ultra-160 and Ultra-320, LSI Logic Fusion-MPT, and most NCR/Symbios SCSI controllers.
RAID adapters supported are HP Smart Array, Dell PercRAID (Adaptec RAID and LSI MegaRAID), and
IBM (Adaptec) ServeRAID controllers.
Fibre Channel adapters supported are Emulex and QLogic host bus adapters (HBAs).
A SCSI disk, Fibre Channel LUN, or RAID LUN with unpartitioned space. In a minimum configuration,
this disk or RAID is shared between the service console and the virtual machines.
For iSCSI, a disk attached to an iSCSI controller, such as the QLogic qla4010.
ESX Server supports installing and booting from the following storage systems:
IDE/ATA disk drives: Installing ESX Server on an IDE/ATA drive or IDE/ATA RAID is supported. However, you should ensure that your specific drive controller is included in the supported hardware. Storage of virtual machines is currently not supported on IDE/ATA drives or RAIDs. Virtual machines must be stored on VMFS partitions configured on a SCSI drive, a SCSI RAID, or a SAN.
SCSI disk drives: SCSI disk drives are supported for installing ESX Server. They can also store virtual machines on VMFS partitions.
Storage area networks (SANs): SANs are supported for installing ESX Server. They can also store virtual machines on VMFS partitions. Pre-installation and configuration tasks, and known issues with installing and booting from SANs, are covered in separate documentation.
Enhanced Performance Recommendations
The lists in previous sections suggest a basic ESX Server configuration. In practice, you can use multiple
physical disks, which can be SCSI disks, Fibre Channel LUNs, or RAID LUNs.
Here are some recommendations for enhanced performance:
RAM: Having sufficient RAM for all your virtual machines is important to achieving good performance. ESX Server hosts require more RAM than typical servers. An ESX Server host must be equipped with sufficient RAM to run its concurrent virtual machines plus the service console.
For example, operating four virtual machines with Red Hat Enterprise Linux or Windows XP requires that your ESX Server host be equipped with over a gigabyte of RAM for baseline performance:
1024MB for the virtual machines (256MB minimum per operating system, as recommended by vendors, x 4)
272MB for the ESX Server service console
Running these example virtual machines with a more reasonable 512MB of RAM each requires the ESX Server host to be equipped with at least 2.2GB of RAM:
2048MB for the virtual machines (512MB x 4)
272MB for the ESX Server service console
These calculations do not take into account variable overhead memory for each virtual machine.
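The sizing rule above reduces to simple arithmetic, sketched here (the 272MB service-console figure comes from the example; per-VM overhead is deliberately ignored, as the text notes):

```python
def host_ram_mb(vm_count, ram_per_vm_mb, service_console_mb=272):
    """Minimum host RAM: every concurrent VM plus the service console."""
    return vm_count * ram_per_vm_mb + service_console_mb

assert host_ram_mb(4, 256) == 1296  # baseline example: just over 1 GB
assert host_ram_mb(4, 512) == 2320  # roughly the 2.2 GB figure above
```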
Dedicated fast Ethernet adapters for virtual machines: Dedicated Gigabit Ethernet cards for virtual machines, such as Intel PRO/1000 adapters, improve throughput to virtual machines with high network traffic.
Disk location: For best performance, all data used by your virtual machines should be on physical disks allocated to virtual machines. These physical disks should be large enough to hold the disk images used by all the virtual machines.
VMFS3 partitioning: For best performance, use the VI Client or VI Web Access to set up your VMFS3 partitions rather than the ESX Server installer. Using the VI Client or VI Web Access ensures that the starting sectors of partitions are 64K-aligned, which improves storage performance.
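The alignment rule is one line of arithmetic, assuming the common 512-byte sector size:

```python
SECTOR_BYTES = 512

def is_64k_aligned(start_sector):
    """A partition is 64K-aligned when its first byte falls on a 64 KB boundary."""
    return (start_sector * SECTOR_BYTES) % (64 * 1024) == 0

assert is_64k_aligned(128)     # 128 sectors * 512 B = exactly 64 KB
assert not is_64k_aligned(63)  # the classic MBR default, misaligned
```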
Processors: Faster processors improve ESX Server performance. For certain workloads, larger caches also improve ESX Server performance.
Hardware compatibility: To ensure the best possible I/O performance and workload management, VMware ESX Server provides its own drivers for supported devices. Be sure that the devices you plan to use in your server are supported. For additional detail on I/O device compatibility, download the ESX Server I/O Compatibility Guide from the VMware Web site.
Maximum Configuration for ESX Server
This section describes the hardware maximums for an ESX Server host machine. (Do not confuse this with
a list of virtual hardware supported by a virtual machine.)
Storage
16 host bus adapters (HBAs) per ESX Server system, with 15 targets per HBA
128 logical unit numbers (LUNs) per storage array
255 LUNs per ESX Server system
32 paths to a LUN
Maximum LUN ID: 255
NOTE Although ESX Server supports up to 256 Fibre Channel LUNs for operation, the
installer supports a maximum of 128 Fibre Channel SAN LUNs. If you have more than
128 LUNs, connect them after the installation is complete.
Virtual Machine File System (VMFS)
128 VMFS volumes per ESX Server system
Maximum physical extents per VMFS volume:
VMFS-3 volumes: 32 physical extents
VMFS-2 volumes: 32 physical extents (VMFS‐2 volumes are read-only for ESX Server 3.0.)
2TB per physical extent
Maximum size per VMFS volume:
VMFS-3 volumes: approximately 64TB, with a maximum of 2TB per physical extent
VMFS-2 volumes: approximately 64TB, with a maximum of 2TB per physical extent (VMFS-2 volumes
are read-only for ESX Server 3.0.)
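The VMFS volume limit follows directly from the extent figures above: up to 32 physical extents of at most 2TB each, so roughly 64TB per volume. A minimal sketch (names are illustrative, not a VMware interface):

```python
# Sketch of the VMFS volume size limit described above. Illustrative
# helper, not part of any VMware tool.
MAX_EXTENTS = 32
MAX_EXTENT_TB = 2

def max_volume_tb(extents):
    """Maximum volume size in TB for a given number of physical extents."""
    if not 1 <= extents <= MAX_EXTENTS:
        raise ValueError("a VMFS volume supports 1-32 physical extents")
    return extents * MAX_EXTENT_TB

print(max_volume_tb(32))  # 64
```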
                                     Maximum Sockets   Maximum Cores   Maximum Threads
Single core, with hyperthreading           16                16               32
Single core, without hyperthreading        16                16               16
Dual core, with hyperthreading             8                 16               32
Dual core, without hyperthreading          16                32               32
Virtual Processors
A total of 128 virtual processors in all virtual machines per ESX Server host
Memory
64GB of RAM per ESX Server system
Adapters
Up to 64 adapters of all types, including storage and network adapters, per system
Up to 20 Gigabit Ethernet or 10/100 Ethernet ports per system
Up to 1024 ports per virtual switch
Virtual Machine Specifications
Each ESX Server machine can host up to 128 virtual CPUs in virtual machines (and up to 200 registered
virtual machines), with the following capabilities and specifications.
Virtual Storage
Up to four host bus adapters per virtual machine
Up to 15 targets per host bus adapter
Up to 60 targets per virtual machine; 256 targets concurrently in all virtual machines per ESX Server host
Virtual SCSI Devices
Up to four virtual SCSI adapters per virtual machine, with up to 15 devices per adapter
9TB per virtual disk
Virtual Processor
Intel Pentium II or later (dependent on system processor)
One, two, or four processors per virtual machine
NOTE All multiprocessor virtual machines require purchased licensing for VMware Virtual SMP for ESX
Server. If you plan to create a two-processor virtual machine, your ESX Server machine must have at least
two physical processors. For a four-processor virtual machine, your ESX Server machine must have at least
four physical processors.
Virtual Chip Set Intel 440BX-based motherboard with NS338 SIO chip
Virtual BIOS Phoenix BIOS 4.0 Release 6
Virtual Machine Memory Up to 16GB per virtual machine
NOTE Windows NT as a guest supports only 3.444GB RAM.
Virtual Adapters Up to six virtual PCI slots per virtual machine
Virtual Ethernet Cards Up to four virtual Ethernet adapters per virtual machine
NOTE Each virtual machine has a total of six virtual PCI slots, one of which is used by the graphics
adapter. The total number of virtual adapters, SCSI plus Ethernet, cannot be greater than five.
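Given the six-slot budget in the note above, with one slot taken by the graphics adapter, a configuration check might look like this (hypothetical helper; assumes five slots remain for SCSI and Ethernet adapters, plus the per-type maximums of four each from this section):

```python
# Illustrative check of the virtual PCI slot budget: six slots per VM,
# one consumed by the graphics adapter. Not a VMware API.
TOTAL_PCI_SLOTS = 6
GRAPHICS_SLOTS = 1

def pci_config_ok(scsi_adapters, ethernet_adapters):
    """True if a VM's adapter counts fit the slot and per-type limits."""
    if scsi_adapters > 4 or ethernet_adapters > 4:
        return False  # per-type maximums from this section
    free = TOTAL_PCI_SLOTS - GRAPHICS_SLOTS
    return scsi_adapters + ethernet_adapters <= free

print(pci_config_ok(4, 1))  # True
print(pci_config_ok(4, 2))  # False: only five slots remain
```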
Virtual Floppy Drives Up to two 1.44MB floppy drives per virtual machine
Virtual CD Up to four drives per virtual machine
Legacy Devices Virtual machines can also make use of the following legacy devices. However, for
performance reasons, use of these devices is not recommended.
Virtual Serial (COM) Ports Up to four serial ports per virtual machine
Virtual Parallel (LPT) Ports Up to three virtual LPT ports per virtual machine
Host-Based License and Server-Based License Modes
Virtual Center and ESX Server support two modes of licensing: license server-based and host-based. In
host-based licensing mode, the license files are stored on individual ESX Server hosts. In license server-
based licensing mode, licenses are stored on a license server, which makes these licenses available to one
or more hosts. You can run a mixed environment employing both host-based and license server-based
licensing.
Virtual Center and features that require Virtual Center, such as VMotion, must be licensed in license
server-based mode. ESX Server-specific features can be licensed in either license server-based or host-
based mode.
License Server-Based Licensing
License server-based licensing simplifies license management in large, dynamic environments by allowing
a VMware license server to administer licenses. With license server-based licensing, you maintain all your
Virtual Center Management Server and ESX Server licenses from one console.
Server-based licensing is based on industry-standard FlexNet mechanisms. With server-based licensing, a
license server manages a license pool, which is a central repository holding your entire licensed
entitlement. When a host requires a particular licensed functionality, the license for that entitlement is
checked out from the license pool. License keys are released back to the pool when they are no longer
being used and are available again to any host.
The advantages of license server-based licensing include:
You administer all licensing from a single location.
New licenses are allocated and reallocated using any combination of ESX Server form factors. For
example, you can use the same 32-processor license for sixteen 2-processor hosts, eight 4-processor hosts,
four 8-processor hosts, two 16-processor hosts, or any combination totaling 32 processors.
Ongoing license management is simplified by allowing licenses to be assigned and reassigned as needed.
Assignment changes as the needs of an environment change, such as when hosts are added or removed, or
premium features like VMotion, DRS, or HA are transferred among hosts.
During periods of license server unavailability, VirtualCenter Servers and ESX Server hosts using license
server-based licenses are unaffected for a 14-day grace period, relying on cached licensing configurations,
even across reboots.
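The form-factor flexibility described above is simple pool arithmetic: checkouts in any per-host size are fine as long as the total stays within the pooled entitlement. A hedged sketch (illustrative names, not the FlexNet API):

```python
# Sketch of license-pool accounting under server-based licensing.
# Illustrative only; real license servers track named features too.
def pool_can_serve(pool_processors, host_cpu_counts):
    """True if all hosts' processor counts fit within the pooled
    entitlement, regardless of the per-host form factor."""
    return sum(host_cpu_counts) <= pool_processors

print(pool_can_serve(32, [2] * 16))     # True: sixteen 2-processor hosts
print(pool_can_serve(32, [8] * 4))      # True: four 8-processor hosts
print(pool_can_serve(32, [16, 16, 2]))  # False: exceeds the 32-CPU pool
```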
VMware recommends using the license server-based licensing mode for large, changing environments.
Host-Based Licensing
The host-based licensing mode is similar to the licensing mode of previous releases. With host-based
licensing, your total entitlement for purchased features is divided on a per-machine basis among separate
license files residing on ESX Server hosts and the VirtualCenter Server.
With host-based licensing, when someone activates a licensed feature, the license for that entitlement must
reside in the license file on that host. You maintain separate license files on each
ESX Server host. Distribution of unused licenses is not automatic, and there is no dependence on an
external connection for licensing. Host-based license files are placed directly on individual ESX Server
hosts and replace the serial numbers used by ESX Server 2.x.
The advantages of host-based licensing include:
Host-based files require no license server to be installed for ESX Server host-only environments.
In a VirtualCenter and license server environment, host-based licensing allows ESX Server host licenses to
be modified during periods of license server unavailability. For example, with host-based licensing you can
manually move virtual SMP license keys between hosts without a license server connection.
By default, VirtualCenter and ESX Server software is configured to use TCP/IP ports 27000 and 27010 to
communicate with the license server. If you did not use the default ports during license server installation,
you must update the configuration on each ESX Server host.
If you change the default ports for the license server, log on to the ESX Server host service console and
open the ports you want.
To open a specific port in the service console firewall
1 Log on to the service console as the root user.
2 Execute this command:
esxcfg-firewall --openport <portnumber>,tcp
Component         Attempted Action                                      During Grace Period   After Grace Period Expires
Virtual machine   Power on                                              Permitted             Not Permitted
                  Create/delete                                         Permitted             Permitted
                  Suspend/resume                                        Permitted             Permitted
                  Configure virtual machine with VI Client              Permitted             Permitted
ESX Server host   Continue operations                                   Permitted             Permitted
                  Power on/power off                                    Permitted             Permitted
                  Configure ESX Server host with VI Client              Permitted             Permitted
                  Modify license file for host-based licensing          Permitted             Permitted
VirtualCenter     Remove an ESX Server host from inventory              (see next entry)
Server            Add an ESX Server host to inventory                   Not Permitted         Not Permitted
                  Connect/reconnect to an ESX Server host in
                  inventory                                             Permitted             Permitted
                  Move a powered-off virtual machine between hosts
                  in inventory (cold migration)                         Permitted             Permitted
                  Move an ESX Server host among folders in inventory    Permitted             Permitted
                  Move an ESX Server host out of a VMotion-DRS-HA
                  cluster                                               Permitted             Permitted
                  Move an ESX Server host into a VMotion-DRS-HA
                  cluster                                               Not Permitted         Not Permitted
                  Configure VirtualCenter Server with VI Client         Permitted             Permitted
                  Start VMotion between hosts in inventory              Permitted             Permitted
                  Continue load balancing within a DRS cluster          Permitted             Permitted
                  Restart virtual machines within the failed host's
                  HA cluster                                            Permitted             Not Permitted
Any component     Add or remove license keys                            Not Permitted         Not Permitted
                  Upgrade                                               Not Permitted         Not Permitted
ESX Server License Types
When you purchased your VMware Infrastructure software, you purchased one of three available editions,
which are:
VMware Infrastructure Starter edition. Provides virtualization for small business and branch office
environments. Its limited production-oriented features include:
NAS or local storage
Deployable on a server with up to four physical CPUs and up to 8GB physical memory
VMware Infrastructure Standard edition. Provides an enterprise-class virtualized infrastructure suite
for any workload. All standard functionality is enabled, and all optional add-on licenses (purchased
separately) can be configured with this edition. Includes all production-oriented features, such as:
NAS, iSCSI, and SAN usage
Up to four-way Virtual SMP
VMware Infrastructure Enterprise edition. Provides an enterprise-class virtualized infrastructure suite
for the dynamic data center. It includes all the features of VMware Infrastructure Standard edition, and also
includes all optional add-on licenses.
License Type Features for ESX Server Machines
Feature                               ESX Server Standard   ESX Server Starter
Maximum number of virtual machines    Unlimited             Unlimited
SAN support                           Yes                   Not available
iSCSI support                         Yes                   Not available
NAS support                           Yes                   Yes
Virtual SMP support                   Yes                   Not available
VMware Consolidated Backup (VCB)      Add-on                Not available
Components Installed
The VMware VirtualCenter version 2 default installation includes the following components:
VMware VirtualCenter Server: A Windows service to manage ESX Server hosts.
Microsoft .NET Framework: Software used by the VirtualCenter Server, Database
Upgrade wizard, and the Virtual Infrastructure Client.
VMware VI Web Access: A Web application to allow browser-based virtual
machine management.
VMware Web Service: A software development kit (SDK) for VMware products.
VMware license server: A Windows service allowing all VMware products to be
licensed from a central pool and managed from one console.
The last three components are optional if you select a custom setup.
port@hostname, for example 27000@testserver.vmware.com
port@ip.address, for example 27000@192.168.123.254
Type a Web Service https port. The default is 443.
Type a Web Service http port. The default is 80.
Type a VirtualCenter diagnostic port. The default is 8083.
Type a VirtualCenter port (the port which VirtualCenter uses to communicate
with the VI Client). The default is 902.
Type a VirtualCenter heartbeat port. The default is 902.
Select the check box if you want to maintain compatibility with the older SDK
Web interface.
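The license server location strings shown above use the port@hostname or port@ip.address form. A small parser sketch (hypothetical helper, not part of the installer):

```python
# Parse a license server specification of the form port@host, as shown
# in the examples above. Illustrative helper only.
def parse_license_server(spec):
    """Return (port, host) from a 'port@hostname' specification."""
    port, _, host = spec.partition("@")
    if not host or not port.isdigit():
        raise ValueError("expected port@hostname, e.g. 27000@myserver")
    return int(port), host

print(parse_license_server("27000@testserver.vmware.com"))
# (27000, 'testserver.vmware.com')
```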
The default ports that VirtualCenter Server uses to listen for connections from the VI Client are ports 80
and 902. VirtualCenter Server also uses port 443 to listen for data transfer from the VI Web Access Client
and other SDK clients.
The default port that VirtualCenter uses to send data to the managed hosts is port 902.
Managed hosts also send a regular heartbeat over UDP port 902 to VirtualCenter Server. This port must not
be blocked by firewalls.
Installing VMware ESX Server Software
To create a boot partition, use the following settings:
Mount Point: /boot
File System: ext3
Size (MB): VMware recommends 100MB
Additional Size Options: Fixed size
To create a swap partition, use the following settings:
Mount Point: Not applicable. This drop-down menu is disabled when you select swap for the file system.
File System: swap
Size (MB): VMware recommends 544MB. For a guide to sizing, see the description of the swap partition.
Additional size options: Fixed size
To create a root partition, use the following settings:
Mount Point: /
File System: ext3
Size (MB): VMware recommends at least 2560MB for the root partition, but you can fill the remaining
capacity of the drive. For a guide to sizing, see the description of the root partition.
Additional size options: Fixed size
(Optional) To create a log partition (recommended), use the following settings:
Mount Point: /var/log
File System: ext3
Size (MB): 500MB is the minimum size, but VMware recommends 2000MB for the log partition
NOTE If your ESX Server host has no network storage and one local disk, you must create two more
required partitions on the local disk (for a total of five required partitions):
vmkcore: A vmkcore partition is required to store core dumps for troubleshooting. VMware does not
support ESX Server host configurations without a vmkcore partition.
vmfs3: A vmfs3 partition is required to store your virtual machines. These vmfs3 and vmkcore partitions
are required on a local disk only if the ESX Server host has no network storage.
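For reference, the recommended partition sizes above can be collected in one structure (sizes in MB; an illustrative data layout, not installer or kickstart syntax):

```python
# Recommended service console partition scheme from this section.
# Illustrative data structure only.
partitions = {
    "/boot":    {"fs": "ext3", "size_mb": 100},
    "swap":     {"fs": "swap", "size_mb": 544},
    "/":        {"fs": "ext3", "size_mb": 2560},  # minimum; may fill the drive
    "/var/log": {"fs": "ext3", "size_mb": 2000},  # 500MB minimum
}

total = sum(p["size_mb"] for p in partitions.values())
print(total)  # 5204, before the vmkcore and vmfs3 partitions
```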
Locating the Installation Logs
After you install and reboot, log on to the service console to read the installation logs:
/root/install.log is a complete log of the installation.
/root/anaconda-ks.cfg is a kickstart file recording the selected installation options.
Creating a Rescue Floppy Disk
Use dd, rawwritewin, or rawrite to write the floppy image bootnet.img to a disk. This file is located on the
ESX Server CD in the /images directory.
Functional Components
The functional components monitor and manage tasks. The functional components are
available through a navigation button bar in the VI Client. The options are:
Inventory – A view of all the monitored objects in Virtual Center. Monitored objects include datacenters,
resource pools, clusters, networks, data stores, templates, hosts, and virtual machines.
Scheduled tasks – A list of activities and a means to schedule those activities. This is available through
Virtual Center Server only.
Events – A list of all the events that occur in the Virtual Center environment. Use the Navigation option to
display all the events. Use an object-specific panel to display only the events relative to that object.
Admin – A list of environment-level configuration options. The Admin option provides configuration
access to Roles, Sessions, Licenses, Diagnostics, and System Logs. When connected to an ESX Server,
only the Roles option appears.
Maps – A visual representation of the status and structure of the VMware Infrastructure environment and
the relationships between managed objects. This includes hosts, networks, virtual machines, and data
stores. This is available only through Virtual Center Server.
Various information lists are generated and tracked by your Virtual Infrastructure
Client activity:
Tasks – These activities are scheduled or initiated manually. Tasks generate event messages that indicate
any issues associated with the task.
Events – Messages that report Virtual Infrastructure activity. Event messages are predefined in the product.
Alarms – Specific notifications that occur in response to selected events. Some alarms are defined by
product default. Additional alarms can be created and applied to selected inventory objects or all inventory
objects.
Logs – Stored reference information related to selected event messages. Logs are predefined in the product.
You can configure whether selected logs are generated.
Users and Groups – For VirtualCenter, users and groups are created and maintained through the Windows
domain or Active Directory database. Users and groups are registered with VirtualCenter, or created and
registered with an ESX Server, through the process that assigns privileges.
Roles – A set of access rights and privileges. There are selected default roles. You can also create roles and
assign combinations of privileges to each role.
SAN (storage area network) is a specialized high-speed network that connects computer systems, or host
servers, to high performance storage subsystems. The SAN components include host bus adapters (HBAs)
in the host servers, switches that help route storage traffic, cables, storage processors (SPs), and storage
disk arrays.
A SAN topology with at least one switch present on the network forms a SAN fabric.
To transfer traffic from host servers to shared storage, the SAN uses the Fibre Channel (FC)
protocol, which packages SCSI commands into Fibre Channel frames.
In the context of this document, a port is the connection from a device into the SAN.
Each node in the SAN, a host, storage device, and fabric component, has one or more ports that connect it
to the SAN. Ports can be identified in a number of ways:
WWPN: World Wide Port Name, a globally unique identifier for a port that allows certain applications
to access the port. The FC switches discover the WWPN of a device or host and assign a port address to the
device.
Port_ID (or port address): Within the SAN, each port has a unique port ID that serves as the FC address
for the port. This enables routing of data through the SAN to that port. The FC switches assign the port ID
when the device logs into the fabric. The port ID is valid only while the device is logged on.
When transferring data between the host server and storage, the SAN uses a multipathing technique.
Multipathing allows you to have more than one physical path from the ESX Server host to a LUN on a
storage array.
If a default path or any component along the path (HBA, cable, switch port, or storage processor) fails, the
server selects another of the available paths. The process of detecting a failed path and switching to another
is called path failover.
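Path failover as described above amounts to picking the first surviving path. A toy model (hypothetical path names in the vmhba notation; not ESX Server's multipathing code):

```python
# Toy model of path failover: use the default path until it fails,
# then select another available path. Illustrative only.
def select_path(paths, failed):
    """paths: ordered list of path names; failed: set of failed paths."""
    for path in paths:
        if path not in failed:
            return path
    raise RuntimeError("no path to LUN: all paths have failed")

paths = ["vmhba1:0:0", "vmhba2:0:0"]
print(select_path(paths, set()))           # vmhba1:0:0 (default path)
print(select_path(paths, {"vmhba1:0:0"}))  # vmhba2:0:0 after failover
```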
Storage disk arrays can be of the following types:
An active/active disk array, which allows access to the LUNs simultaneously through all the storage
processors that are available without significant performance degradation. All the paths are active at all
times (unless a path fails).
An active/passive disk array, in which one SP is actively servicing a given LUN. The other SP acts as
backup for the LUN and may be actively servicing other LUN I/O. I/O can be sent only to an active
processor. If the primary storage processor fails, one of the secondary storage processors becomes active,
either automatically or through administrator intervention.
To restrict server access to storage arrays not allocated to that server, the SAN uses zoning. Typically,
zones are created for each group of servers that access a shared group of storage devices and LUNs. Zones
define which HBAs can connect to which SPs. Devices outside a zone are not visible to the devices inside
the zone.
Zoning is similar to LUN masking, which is commonly used for permission management. LUN masking is
a process that makes a LUN available to some hosts and unavailable to other hosts. Usually, LUN masking
is performed at the SP or server level.
Overview of Using ESX Server with SAN
Support for QLogic and Emulex FC HBAs allows an ESX Server system to be connected to a SAN array.
You can then use SAN array LUNs to store virtual machine configuration information and application data.
Using ESX Server with a SAN improves flexibility, efficiency, and reliability. It also supports centralized
management
as well as failover and load balancing technologies.
Benefits of Using ESX Server with SAN
You can store data redundantly and configure multiple FC fabrics eliminating a single point of failure.
Your enterprise is not crippled when one datacenter becomes unavailable.
ESX Server systems provide multipathing by default and automatically support it for every virtual machine.
Using ESX Server systems extends failure resistance to the server. When you use SAN storage, all
applications can instantly be restarted after host failure.
Using ESX Server with a SAN makes high availability and automatic load balancing affordable for more
applications than if dedicated hardware were used to provide standby services.
Because shared central storage is available, building virtual machine clusters that use MSCS becomes
possible.
If virtual machines are used as standby systems for existing physical servers, shared storage is essential and
a SAN is the best solution.
You can use the VMware VMotion capabilities to migrate virtual machines seamlessly from one host to
another.
You can use VMware HA in conjunction with a SAN for a cold-standby solution that guarantees an
immediate, automatic response.
You can use VMware DRS to automatically migrate virtual machines from one host to another for load
balancing. Because storage is on a SAN array, applications continue running seamlessly.
If you use VMware DRS clusters, you can put an ESX Server host into maintenance mode to have the
system migrate all running virtual machines to other ESX Server hosts. You can then perform upgrades or
other maintenance operations.
The transportability and encapsulation of VMware virtual machines complements the shared nature of SAN
storage. When virtual machines are located on SAN-based storage, it becomes possible to shut down a
virtual machine on one server and power it up on another server, or to suspend it on one server and resume
operation on another server on the same network, in a matter of minutes. This allows you to migrate
computing resources while maintaining consistent shared access.
Use Cases
Using ESX Server systems in conjunction with a SAN is particularly effective for the
following tasks:
Maintenance with zero downtime. When performing maintenance, you can use VMware DRS or VMotion
to migrate virtual machines to other servers. If shared storage is on the SAN, you can perform maintenance
without interruptions to the user.
Load balancing. You can use VMotion explicitly or use VMware DRS to migrate virtual machines to other
hosts for load balancing. If shared storage is on a SAN, you can perform load balancing without
interruption to the user.
Storage consolidation and simplification of storage layout. If you are working with multiple hosts, and
each host is running multiple virtual machines, the hosts' storage is no longer sufficient and external
storage is needed. Choosing a SAN for external storage results in a simpler system architecture while
giving you the other benefits listed in this section. You can start by reserving a large LUN and then allocate
portions to virtual machines as needed. LUN reservation and creation from the storage device needs to
happen only once.
Disaster recovery. Having all data stored on a SAN can greatly facilitate remote storage of data backups.
In addition, you can restart virtual machines on remote ESX Server hosts for recovery if one site is
compromised.
Metadata Updates
A VMFS holds files, directories, symbolic links, RDMs, and so on, and corresponding metadata for these
objects. Metadata is accessed each time the attributes of a file are accessed or modified. These operations
include, but are not limited to:
Creating, growing, or locking a file.
Changing a file's attributes.
Powering a virtual machine on or off.
Zoning and ESX Server
Zoning provides access control in the SAN topology. Zoning defines which HBAs can connect to which
SPs. When a SAN is configured using zoning, the devices outside a zone are not visible to the devices
inside the zone.
Zoning has the following effects:
Reduces the number of targets and LUNs presented to an ESX Server system.
Controls and isolates paths within a fabric.
Can prevent non-ESX Server systems from seeing a particular storage system, and
from possibly destroying ESX Server VMFS data.
Can be used to separate different environments (for example, a test from a production environment).
When you use zoning, keep in mind the following:
ESX Server hosts that use shared storage for failover or load balancing must be in one zone.
If you have a very large deployment, you might need to create separate zones for different areas of
functionality. For example, you can separate accounting from human resources.
Creating many small zones of, for example, two hosts with four virtual machines each does not work
well.
NOTE Whether a virtual machine can run management software successfully depends on the storage array
in question.
NOTE Check with the storage array vendor for zoning best practices.
Choosing Larger or Smaller LUNs
During ESX Server installation, you are prompted to create partitions for your system.
You need to plan how to set up storage for your ESX Server systems before you perform
installation.
You can choose one of these approaches:
Many LUNs with one VMFS volume on each LUN
Many LUNs with a single VMFS volume spanning all LUNs
You can have at most one VMFS volume per LUN. You could, however, decide to use one large LUN or
multiple small LUNs.
You might want fewer, larger LUNs for the following reasons:
More flexibility to create virtual machines without going back to the SAN administrator for more space.
More flexibility for resizing virtual disks, doing snapshots, and so on.
Fewer LUNs to identify and manage.
You might want more, smaller LUNs for the following reasons:
Less contention on each VMFS due to locking and SCSI reservation issues.
Different applications might need different RAID characteristics.
More flexibility (the multipathing policy and disk shares are set per LUN).
Use of Microsoft Cluster Service, which requires that each cluster disk resource is in its own LUN.
Choosing Virtual Machine Locations
When you're working on optimizing performance for your virtual machines, storage location is an
important factor. There is always a trade-off between expensive storage that offers high performance and
high availability and storage with lower cost and lower performance. Storage can be divided into different
tiers depending on a number of factors:
High Tier: Offers high performance and high availability. May offer built-in snapshots to facilitate
backups and Point-in-Time (PiT) restorations. Supports replication, full SP redundancy, and fibre drives.
Uses high-cost spindles.
Mid Tier: Offers mid-range performance, lower availability, some SP redundancy, and SCSI drives. May
offer snapshots. Uses medium-cost spindles.
Lower Tier: Offers low performance and little internal storage redundancy. Uses low-end SCSI or SATA
(serial ATA) drives and low-cost spindles.
Not all applications need to be on the highest-performance, most available storage, at least not throughout
their entire life cycle.
Virtual Switch Policies
You can apply a set of vSwitch-wide policies by selecting the vSwitch at the top of the
Ports tab and clicking Edit.
To override any of these settings for a port group, select that port group and click Edit.
Any changes to the vSwitch-wide configuration are applied to any of the port groups
on that vSwitch except for those configuration options that have been overridden by the
port group.
The vSwitch policies consist of:
Layer 2 Security policy
Traffic Shaping policy
Load Balancing and Failover policy
Layer 2 Security Policy
Layer 2 is the data link layer. The three elements of the Layer 2 Security policy are promiscuous
mode, MAC address changes, and forged transmits.
In non-promiscuous mode, a guest adapter listens to traffic only on its own MAC
address. In promiscuous mode, it can listen to all the packets. By default, guest adapters are set to
non-promiscuous mode.
Promiscuous Mode
Reject — Placing a guest adapter in promiscuous mode has no effect on which frames are
received by the adapter.
Accept — Placing a guest adapter in promiscuous mode causes it to detect all frames passed on
the vSwitch that are allowed under the VLAN policy for the port group that the adapter is
connected to.
MAC Address Changes
Reject — If you set the MAC Address Changes to Reject and the guest operating system changes
the MAC address of the adapter to anything other than what is in the .vmx configuration file, all
inbound frames will be dropped.
If the Guest OS changes the MAC address back to match the MAC address in the .vmx
configuration file, inbound frames will be passed again.
Accept — Changing the MAC address from the Guest OS has the intended effect: frames to the
new MAC address are received.
Forged Transmits
Reject — Any outbound frame with a source MAC address that is different from the one
currently set on the adapter will be dropped.
Accept — No filtering is performed and all outbound frames are passed.
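The Reject/Accept behaviors above can be summarized as two small predicates (an illustrative model; the real enforcement happens inside the vSwitch):

```python
# Toy model of the MAC Address Changes and Forged Transmits decisions
# described above. Illustrative only.
def inbound_allowed(mac_changes_policy, effective_mac, vmx_mac):
    """With MAC Address Changes set to 'reject', inbound frames are
    dropped while the guest's MAC differs from the .vmx file's MAC."""
    if mac_changes_policy == "reject" and effective_mac != vmx_mac:
        return False
    return True

def outbound_allowed(forged_transmits_policy, source_mac, adapter_mac):
    """With Forged Transmits set to 'reject', outbound frames whose
    source MAC differs from the adapter's current MAC are dropped."""
    if forged_transmits_policy == "reject" and source_mac != adapter_mac:
        return False
    return True

print(inbound_allowed("reject", "00:0c:29:aa:bb:cc", "00:0c:29:11:22:33"))   # False
print(outbound_allowed("accept", "00:0c:29:aa:bb:cc", "00:0c:29:11:22:33"))  # True
```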
Traffic Shaping Policy
ESX Server shapes traffic by establishing parameters for three outbound traffic characteristics:
average bandwidth, burst size, and peak bandwidth. You can set values for these characteristics
through the VI Client, establishing a traffic shaping policy for each uplink adapter.
Average Bandwidth establishes the number of bits per second to allow across the vSwitch
averaged over time—the allowed average load.
Burst Size establishes the maximum number of bytes to allow in a burst. If a burst exceeds the
burst size parameter, excess packets are queued for later transmission. If the queue is full, the
packets are dropped. When you specify values for these two characteristics, you indicate what
you expect the vSwitch to handle during normal operation.
Peak Bandwidth is the maximum bandwidth the vSwitch can absorb (take up) without dropping
packets. If traffic exceeds the peak bandwidth you establish, excess packets are queued for later
transmission after traffic on the connection has returned to the average and there are enough
spare cycles to handle the queued packets. If the queue is full, the packets are dropped. Even if
you have spare bandwidth because the connection has been idle, the peak bandwidth parameter
limits transmission to no more than the peak until traffic returns to the allowed average load.
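A toy model of how the shaping parameters interact, greatly simplified from the vSwitch's actual token accounting (assumed names; illustrative only):

```python
# Simplified classification of one outbound packet against the traffic
# shaping behavior described above: traffic within the burst size is
# sent, excess is queued for later transmission, and when the queue is
# full, excess packets are dropped. Illustrative model only.
def classify(burst_bytes_so_far, packet_bytes, burst_size,
             queue_len, queue_limit):
    """Return 'send', 'queue', or 'drop' for one packet."""
    if burst_bytes_so_far + packet_bytes <= burst_size:
        return "send"
    if queue_len < queue_limit:
        return "queue"
    return "drop"  # queue full: excess packets are dropped

print(classify(0, 1000, 1500, 0, 2))     # send: within the burst size
print(classify(1000, 1000, 1500, 0, 2))  # queue: burst size exceeded
print(classify(1000, 1000, 1500, 2, 2))  # drop: queue already full
```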
Load Balancing — Specify how to choose an uplink.
Route based on the originating port ID — Choose an uplink based on the virtual port where the
traffic entered the virtual switch.
Route based on ip hash — Choose an uplink based on a hash of the source and destination IP
addresses of each packet. For non-IP packets, whatever is at those offsets is used to compute the
hash.
Route based on source MAC hash — Choose an uplink based on a hash of the source Ethernet address.
Use explicit failover order — Always use the highest order uplink from the list of Active
adapters which passes failover detection criteria.
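All three hash-based policies above reduce to a deterministic hash of some packet key (originating port ID, source/destination IP pair, or source MAC) modulo the number of uplinks, so a given flow always uses the same adapter. A sketch with an assumed CRC32 hash (the real hash function is not specified here):

```python
# Sketch of deterministic, hash-based uplink selection. The key and
# hash function are assumptions for illustration; uplink names are
# hypothetical vmnic identifiers.
import zlib

def choose_uplink(key, uplinks):
    """Map a flow key (e.g. a source MAC) to one uplink."""
    return uplinks[zlib.crc32(key.encode()) % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]
# The same source MAC always maps to the same uplink:
a = choose_uplink("00:0c:29:aa:bb:cc", uplinks)
b = choose_uplink("00:0c:29:aa:bb:cc", uplinks)
print(a == b)  # True
```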
Network Failover Detection — Specify the method to use for failover
detection.
Link Status only – Relies solely on the link status provided by the network adapter. This
detects failures such as cable pulls and physical switch power failures, but not configuration
errors, such as a physical switch port being blocked by spanning tree or misconfigured to the
wrong VLAN, or cable pulls on the other side of a physical switch.
Beacon Probing – Sends out and listens for beacon probes on all NICs in the
team and uses this information, in addition to link status, to determine link failure. This detects
many of the failures mentioned above that are not detected by link status alone.
11.3. The ESX Server Boot Process
Several boot loaders are used on Linux systems, such as the Grand Unified boot loader (GRUB)
and the Linux Loader (LILO). ESX uses LILO as the boot loader and has system components that
expect the presence of LILO as the boot loader, so don't replace LILO with another boot loader, or
your server may experience problems. The configuration parameters for the boot loader are
contained in /etc/lilo.conf in a human-readable format, but the actual boot loader is stored in a
binary format on the boot sector of the default boot disk. This section explains the boot process of
ESX Server, as well as how to load the VMkernel and configuration files.
11.3.1. High-Level Boot Process for ESX Server
The BIOS is executed on the server.
The BIOS launches LILO from the default boot drive.
LILO loads the Linux kernel for the Service Console.
The Service Console launches the VMkernel.
The MUI Server is started.
Virtual machines can then be launched by the VMkernel and managed through the MUI.
11.3.2. Detailed Boot Process
As you can see in Figure 11.3, esx is the default boot image that loads automatically after the
timeout period. This is actually configured in the /etc/lilo.conf file shown in Figure 11.4 on the
line default=esx. The Linux kernel for the Service Console is loaded in the lowest part of memory
when it is started and occupies the amount of memory specified during the installation of ESX
Server. The line in the /etc/lilo.conf file shown in Figure 11.4 that reads
append="mem=272M cpci=0;*;1:*;2:*;3:*;6:*;" shows that the Service Console occupies the
first 272MB of memory on the server. Figure 11.5 shows a screen shot from the MUI where the
Reserved Memory is set in the Options|Startup Profile for the server.
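Putting these pieces together, a minimal /etc/lilo.conf for an ESX 2.x host would look roughly like the reconstructed fragment below. The default= and append= lines are the ones quoted above; the kernel image path and root partition are assumptions for illustration.

```
prompt
timeout=50
default=esx

image=/boot/vmlinuz-2.4.9-vmnix2
    label=esx
    read-only
    # root partition shown here is an example value
    root=/dev/sda2
    append="mem=272M cpci=0;*;1:*;2:*;3:*;6:*;"
```

The default=esx line selects the esx boot image automatically after the timeout, and the mem=272M argument caps the Service Console at the first 272MB of memory, as described above.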
Using HA and DRS Together
When HA performs failover and restarts virtual machines on different hosts, its first priority is
the immediate availability of all virtual machines. After the virtual machines have been restarted,
those hosts on which they were powered on might be heavily loaded, while other hosts are
comparatively lightly loaded. HA uses the CPU and memory reservation to decide failover, while
the actual usage might be higher. You can also set up affinity and anti-affinity rules in DRS to
distribute virtual machines to help availability of critical resources. For example, you can use an
anti-affinity rule to make sure two virtual machines running a critical application never run on
the same host. Using HA and DRS together combines automatic failover with load balancing.
This combination can result in a fast rebalancing of virtual machines after HA has moved virtual
machines to different hosts. You can set up affinity and anti-affinity rules to start two or more
virtual machines preferentially on the same host (affinity) or on different hosts (anti-affinity).
Using DRS Affinity Rules
After you have created a DRS cluster, you can edit its properties to create rules that specify
affinity. You can use these rules to determine that:
DRS should try to keep certain virtual machines together on the same host (for
example, for performance reasons) (affinity).
DRS should try to make sure that certain virtual machines are not together (for
example, for high availability). You might want to guarantee certain virtual
machines are always on different physical hosts, so that a problem with one host doesn't
cause you to lose both virtual machines (anti-affinity).
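The two rule types above can be sketched as a placement check. This is a toy illustration, not the DRS implementation; the rule format and VM/host names are hypothetical.

```python
# Sketch: validate a VM-to-host placement against affinity rules
# (hypothetical data structures, not the DRS algorithm).

def placement_ok(placement, rules):
    """placement: {vm: host}; rules: list of (kind, vm_a, vm_b) tuples."""
    for kind, a, b in rules:
        same_host = placement[a] == placement[b]
        if kind == "affinity" and not same_host:
            return False       # these VMs must run together
        if kind == "anti-affinity" and same_host:
            return False       # these VMs must never share a host
    return True

rules = [("anti-affinity", "db1", "db2")]
print(placement_ok({"db1": "esx01", "db2": "esx02"}, rules))  # True
print(placement_ok({"db1": "esx01", "db2": "esx01"}, rules))  # False
```

In the anti-affinity case, losing host esx01 takes down only db1, which is exactly the availability guarantee the rule exists to provide.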
Using CPU Affinity to Assign Virtual Machines to Specific Processors
Affinity means that you can restrict the assignment of virtual machines to a subset of the
available processors in multiprocessor systems. You do so by specifying an affinity setting for
each virtual machine.
VMware Workstation and its virtual computing technology have changed the way most
companies look at test environments, and in some cases, even production environments.
However, VMware Workstation isn’t the only technology that VMware has to offer. The company
also offers GSX Server and now ESX Server as well. Let's look at how to best leverage these
technologies in your company.
VMware Workstation
VMware Workstation uses virtual machine technology that is designed mostly for the power
user. It allows you to run multiple operating systems on a single PC. The operating systems that
can run under a VMware virtual machine can include Windows 2000, Windows XP, Windows
2003 Server, Novell Netware, and Linux.
After running through a simple installation of VMware Workstation, you have the ability to
configure virtual machines within VMware’s interface. These virtual machines act and look just
like a real computer, except they sit inside a window.
In addition, you can network these computers, join and disjoin them from a domain, connect to
the Internet and other networks from within them, and simulate whatever environment you
choose.
On one of my computers, I've used VMware Workstation to simulate an entire Windows 2003
network with Windows XP clients. With this environment, I can test all of the Windows 2003
product line for compatibility with my network, as well as study for my Windows Server 2003
certification exams. In the past, I had to have at least three systems to be able to accomplish this
kind of testing. Now all I need is one computer, an Internet connection, and VMware
Workstation.
How does this work?
VMware works simultaneously with your operating system to allow you to host multiple virtual
machines. It does this by allowing you to configure your virtual machines on the VMware
virtualization layer. This layer lets you map your hardware to the virtual machine's resources
and have virtual machines mapped to your floppy drive, hard drive, CPU, etc. Inside each virtual
machine, you can create virtual hard disks and specify how much RAM you want to allocate to
each of your virtual machines. Plus, each virtual machine can have its own IP address, even if the
system hardware has only one network adapter.
In most of the environments I've seen, VMware Workstation is typically used to configure test
environments, software development testing, training classrooms, and technical support (to
simulate the environment of the user). Now that you've seen how the power user can use
VMware, let’s examine how VMware can meet the enterprise server and mainframe needs of
your company.
VMware GSX Server
I recently was given the opportunity to evaluate VMware GSX Server, and I was impressed by
how well it worked. VMware Workstation supports only one CPU and up to 1 GB of RAM. GSX
Server supports 2 CPUs and up to 2 GB of RAM. GSX Server is very similar to Workstation in
most other ways, but one of its coolest features is the Remote Console that allows you to remotely
manage and access your virtual machine from anywhere on your network. In addition, it's much
easier to work with in a high availability configuration.
While VMware Workstation is mostly used by a single user to run multiple instances of operating
systems for testing and support purposes, GSX Server is often used for server consolidation by
running virtual machines of server operating systems that simply appear to be stand-alone
servers to clients on the network.
VMware ESX Server
VMware ESX Server is mainframe-class virtual machine software. This solution is typically used
by mainframe data centers and cutting-edge companies. I've also seen this solution used by
startup companies. With ESX Server, you can do amazing things such as more extensive server
consolidation and virtual machine clustering.
How does it differ from GSX Server and VMware Workstation?
With VMware Workstation and GSX Server, the software sits on top of a host operating system
such as Windows or Linux. With ESX Server, the software runs directly on the system's
hardware, eliminating the need to install a base OS. In fact, ESX has its own OS. The software
basically runs on its own Linux kernel, and Linux is quite beneficial to know when working with
the product, although it's not an absolute necessity.
Installation of this product is quite basic. You place the CD in the tray of a system and boot from
the CD. It runs you through a typical Linux installation. At the end of the install, you're
instructed to go to a separate machine and type in a specific Web address to access the virtual
console of ESX Server. From there, you'll configure your system and create virtual machines.
With ESX Server, you can have up to 3.6 GB of RAM per virtual machine as well as high
performance network cards.
How are companies using ESX Server?
What I really like about this product is how companies are using it. For example, I've seen
startups simply purchase a SAN and ESX Server and create their whole network using ESX
Server. This includes the servers and workstations, which are accessed with thin clients.
GSX Server is lightning fast, so you can’t tell the difference between real systems and its virtual
systems (if you have powerful hardware running GSX Server). Furthermore, I've seen data
centers use ESX Server for hosting client environments and test environments. In the future, I
think more companies will take advantage of ESX Server as part of their business strategy.
Final analysis
Virtual machine technology is becoming more and more mainstream in today’s IT marketplace.
With the current trend toward consolidating servers, VMware is quickly making a place for its
products in the server room. Microsoft has even taken an interest in the virtual machine market
by buying Virtual PC. However, Microsoft's product line doesn’t quite have the maturity of the
VMware product line when it comes to providing enterprise-class server solutions.
VMware GSX no longer exists; it has been replaced by VMware Server, which is free.
VMware Server is free virtualization software that runs on a Windows Server platform. It is good
for testing and smaller environments.
VMware ESX is the hypervisor from VMware.
It has its own OS, so it cannot be installed on top of Windows; it must be installed on the server
itself. It uses its own file system, VMFS.
It has really nice features like VMotion, HA, and resource groups.
It is the virtualization technology for the enterprise.
VMware ESX Server 2.0
Server Hardware Requirements
For information on supported hardware, download the VMware ESX Server Hardware
Compatibility Guide from the VMware Web site at www.vmware.com/support/esx2.
Minimum Server Requirements
Two to sixteen processors: Intel® 900MHz Pentium® III Xeon and above
512MB RAM minimum
One or more Ethernet controllers. Supported controllers include:
Broadcom® NetXtreme 570x Gigabit controllers
Intel PRO/100 adapters
Intel PRO/1000 adapters
3Com® 9xx based adapters
Note: If ESX Server has two or more Ethernet controllers, for best performance and
security, use separate Ethernet controllers for the service console and the virtual
machines.
A SCSI adapter, Fibre Channel adapter or internal RAID controller.
The supported SCSI controllers are Adaptec® Ultra-160 and Ultra-320, LSI Logic
Fusion-MPT and most NCR/Symbios SCSI controllers. The supported RAID controllers
are HP® Smart Array, Dell® PercRAID (Adaptec RAID and LSI MegaRAID), IBM®
(Adaptec) ServeRAID and Mylex RAID controllers. The supported Fibre Channel
adapters are Emulex and QLogic host-bus adapters (HBAs).
A SCSI disk, Fibre Channel LUN or RAID LUN with unpartitioned space. In a minimum
configuration, this disk or RAID is shared between the service console and the virtual
machines.
Note: To ensure the best possible performance, always use Fibre Channel cards in
dedicated mode. We do not recommend sharing Fibre Channel cards between the
service console and the virtual machines.
Recommended for Enhanced Performance
A second disk controller with one or more drives, dedicated to the virtual machines
Sufficient RAM for each virtual machine and the service console
Dedicated Ethernet cards for network-sensitive virtual machines
The lists above outline a basic configuration. In practice, you may use multiple physical disks,
which may be SCSI disks, Fibre Channel LUNs or RAID LUNs. For best performance, all of the
data used by the virtual machines should be on the physical disks allocated to virtual machines.
Therefore, these physical disks should be large enough to hold disk images that will be used by
all the virtual machines.
Similarly, you should provide enough RAM for all of the virtual machines plus the service console.
For background on the service console, see Characteristics of the VMware Service Console. For
details on how to calculate the amount of RAM you need, see Sizing Memory on the Server.
Note: To ensure the best possible I/O performance and workload management, VMware ESX
Server provides its own drivers for supported devices. Be sure that the devices you plan to use in
your server are supported. For additional detail on I/O device compatibility, download the VMware
ESX Server I/O Adapter Compatibility Guide from the VMware Web site at
www.vmware.com/support/esx2.
ESX Server virtual machines can share a SCSI disk with the service console, but for enhanced
disk performance, you can configure the virtual machines to use a SCSI adapter and disk
separate from those used by the service console. You should make sure enough free disk space
is available to install the guest operating system and applications for each virtual machine on the
disk that they will use.
Maximum Physical Machine Specifications
Storage
16 host bus adapters per ESX Server system
128 logical unit numbers (LUNs) per storage array
128 LUNs per ESX Server system
VMware File System (VMFS)
128 VMFS volumes per ESX Server system
Maximum physical extents per VMFS volume:
VMFS-2 volumes: 32 physical extents
VMFS-1 volumes: 1 physical extent
2TB per physical extent
Maximum size per VMFS volume:
VMFS-2 volumes: approximately 64TB, with a maximum of 2TB per each physical
extent
VMFS-1 volumes: approximately 2 TB
CPU
16 physical processors per system, with 8 virtual CPUs per processor
80 virtual CPUs in all virtual machines per ESX Server system
Memory
64GB of RAM per ESX Server system
Up to 8 swap files, with a maximum file size of 64GB per swap file
Adapters
64 adapters of all types, including storage and network adapters, per system
16 Ethernet ports per system
Up to 8 Gigabit Ethernet ports or up to 16 10/100 Ethernet ports per system
Up to 32 virtual machines per virtual network device (vmnic or vmnet adapter)
Remote Management Workstation Requirements
The remote workstation is a Windows NT 4.0, Windows 2000, Windows XP or Linux system from
which you launch the VMware Remote Console and access the VMware Management Interface.
The VMware Remote Console runs as a standalone application. The VMware Management
Interface uses a Web browser.
Hardware Requirements
Standard x86-based computer
266MHz or faster processor
64MB RAM minimum
10MB free disk space required for basic installation
Software — Windows Remote Workstation
Windows XP Professional
Windows 2000 Professional, Server or Advanced Server
Windows NT 4.0 Workstation or Server, Service Pack 6a
The VMware Management Interface is designed for these browsers:
Internet Explorer 5.5 or 6.0 (6.0 highly recommended for better performance)
Netscape Navigator® 7.0
Mozilla 1.x
Software — Linux Remote Workstation
Compatible with standard Linux distributions with glibc version 2 or higher and one of the
following:
For single-processor systems: kernel 2.0.32 or higher in the 2.0.x series, kernel in the
2.2.x series or kernel in the 2.4.x series
For multiprocessor systems: kernel in the 2.2.x series or kernel in the 2.4.x series
The VMware Management Interface is designed for these browsers:
Netscape Navigator 7.0
Mozilla 1.x
Supported Guest Operating Systems
In ESX Server 2.0, VMware Virtual SMP for ESX Server is supported on all of the following guest
operating systems marked SMP-capable for dual-virtual CPU configurations.
Guest Operating System SMP-Capable
Windows Server 2003 (Enterprise, Standard and Web Editions) Yes
Windows XP Professional (Service Pack 1) No
Windows 2000 Server (Service Pack 3 or 4) Yes
Windows 2000 Advanced Server (Service Pack 3 or 4) Yes
Windows NT 4.0 — Service Pack 6a No
Red Hat Linux 7.2 Yes
Red Hat Linux 7.3 and 8.0 No
Red Hat Linux 9.0 Yes
Red Hat Enterprise Linux (AS) 2.1 and 3.0 Yes
SuSE Linux 8.2 Yes
SuSE Linux Enterprise Server (SLES) 8 Yes
Novell NetWare 6.5 and 5.1 (Patch 6) No
Virtual Machine Specifications
Each ESX Server machine can host up to 80 virtual CPUs in virtual machines (and up to 200
registered virtual machines), or up to 8 virtual machines for each CPU, with the following
capabilities and specifications.
Virtual Storage
4 host bus adapters per virtual machine
15 targets per host bus adapter
60 targets per virtual machine; 256 targets concurrently in all virtual machines
Virtual Processor
Intel Pentium II or later (dependent on the system processor)
One or two processors per virtual machine
Note: If you plan to create a dual-virtual CPU virtual machine, then your ESX Server
machine must have at least two physical processors and you must have purchased the
VMware Virtual SMP for ESX Server product.
Virtual Chip Set
Intel 440BX-based motherboard with NS338 SIO chip
Virtual BIOS
PhoenixBIOS 4.0 Release 6
Virtual Memory
Up to 3.6GB per virtual machine
Virtual SCSI Devices
Up to four virtual SCSI adapters per virtual machine with up to 15 devices per adapter
9TB per virtual disk
Virtual Ethernet Cards
Up to four virtual Ethernet adapters per virtual machine
Note: Each virtual machine has a total of 5 virtual PCI slots; therefore, the total number
of virtual adapters, SCSI plus Ethernet, cannot be greater than 5.
Virtual Floppy Drives
Up to two 1.44MB floppy drives per virtual machine
Virtual CD-ROM
Up to two drives per virtual machine
Legacy Devices
Virtual machines may also make use of the following legacy devices. However, for performance
reasons, use of these devices is not recommended.
Virtual Serial (COM) Ports
Up to two serial ports per virtual machine
Virtual Parallel (LPT) Ports
One LPT Port per virtual machine
VMware Versions Compared
In the past, VMware was just a single product. Now there is a wide variety of VMware
products to choose from, and it can be confusing to decide which one to use. This article aims
to help you sort it all out by providing a quick review of all the VMware products.
With that, I will now list the major VMware products and provide my take on how
these products differ from one another.
ESX Server
VMware’s ESX server is at the highest end of features and price of all the VMware server
applications. The ESX actually loads right on to “bare-metal” servers. Thus, there is no need to
first load an underlying operating system prior to loading VMware ESX. What is unique about
ESX is that it comes with its own kernel, the VMkernel, alongside a service console based on Red
Hat Enterprise Linux. One of the strongest features of VMware ESX Server is its performance. When
running on similar hardware, you can run twice as many virtual servers on ESX as you can on
VMware Server. ESX is now sold in a suite of products called VMware Infrastructure.
Overview:
Enterprise Class
High Availability
Better Manageability
Used for enterprise applications like Oracle, SQL Server, clustered servers, and other critical
infrastructure servers
Supports 4-10+ virtual machines per server, depending on hardware
Supports up to 32 physical CPUs (and 128 virtual) and up to 64GB of RAM
Loads directly on hardware with no need to load underlying operating system (because it uses
the VMKernel)
VMWare Server
VMware’s Server is a FREE VMware virtualization product built for use in production
servers. Unlike ESX, VMware Server still uses the underlying host operating system. With
VMware Server, you lose some of the functionality and performance of ESX Server,
but you also avoid the price tag (it's free!). For an organization starting with a single
VMware server and not anticipating drastic growth, VMware Server is for you. VMware
Server’s primary competition is Microsoft’s Virtual Server.
Overview:
Used for medium/small business workgroup servers
Excellent for software development uses
Used for Intranet, utility, and workgroup application servers
Supports 2-4+ virtual machines per server, depending on hardware
Supports 2-16 CPUs and up to 64GB of RAM (but limited by the host OS)
Runs on top of Linux or Windows Server
Workstation
VMware’s Workstation is for use on a client workstation. For example, say that I want to run both
Windows 2003 server and Linux Fedora Core 5 on my desktop workstation, which is running
Windows XP. VMware Workstation would be the program I would use to do this. This would
allow me the flexibility to run these guest operating systems to test various applications and
features. I could also create snapshots of them to capture their configuration at a certain point in
time and easily duplicate them to create other virtual machines (such as moving them to a
VMware Server). Keep in mind that I would have to have a “beefy” workstation with lots of
RAM and CPU to keep up with the applications I am also running on my host operating system
(Windows XP). Some people ask whether you could run Workstation on a “server” and just not
have to use VMware Server. The answer is that, while you can do this, you don’t want to because
the server’s applications won’t perform well under load and neither will the multiple operating
systems. You might ask why you would buy VMware workstation for $189 when VMware Server
is free. Many people would assume that Server is better and costs less. The answer is that
VMware Workstation and VMware Server serve different purposes. VMware Server should be
used to run test or production servers. On the other hand, VMware Workstation would be used
by testers and developers because of its powerful snapshot manager. This development and
testing also applies to IT professionals who want the ability to take multiple snapshots of their
virtual systems and be able to jump forward and back in these snapshots. However, you do not
want to run production servers in VMware Workstation. In other words, both VMware
Workstation and VMware Server have different purposes and should not be looked at as
competing products.
Overview:
Runs on your desktop operating system
Costs $189
Great for testing applications and developing software
Can create new virtual machines, where VMware Player cannot
Supports bridged, host-only, or NAT network configurations
Ability to share folders between host OS and virtual machines
Access to host devices like CD/DVD drives and USB devices
Snapshot manager allows multiple snapshots and the ability to move forward and backward
between them
Log files should be used only when you are having trouble with a virtual machine.
VMDK files – VMDK files are the actual hard drive for the virtual machine. Usually you
will specify that a virtual machine’s disk can grow as needed. In that case, the VMDK file
will be continually growing, up to a size of 2GB. After 2GB, subsequent VMDK files will be
created.
VMEM – A VMEM file is a backup of the virtual machine’s paging file. It will only appear if
the virtual machine is running, or if it has crashed.
VMSN & VMSD files – these files are used for VMware snapshots. A VMSN file is used to
store the exact state of the virtual machine when the snapshot was taken. Using this
snapshot, you can then restore your machine to the same state as when the snapshot was
taken. A VMSD file stores information about snapshots (metadata). You’ll notice that the
names of these files match the names of the snapshots.
NVRAM files – these files are the BIOS for the virtual machine. The VM must know how
many hard drives it has and other common BIOS settings. The NVRAM file is where that
BIOS information is stored.
VMX files – a VMX file is the primary configuration file for a virtual machine. When you
create a new virtual machine and answer questions about the operating system, disk sizes,
and networking, those answers are stored in this file. A VMX file is actually a simple text
file that can be edited with Notepad.
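The 2GB split described above for growable VMDK files reduces to simple arithmetic. The sketch below is illustrative only; the exact split policy and file naming are simplified.

```python
# Sketch: how many 2GB extent files a growable virtual disk spans
# (simplified; not how VMware names or manages the files internally).
import math

def vmdk_extent_count(disk_size_gb, extent_gb=2):
    """Number of extent files needed to hold a disk of the given size."""
    return math.ceil(disk_size_gb / extent_gb)

print(vmdk_extent_count(5))   # 3: two full 2GB extents plus one partial
print(vmdk_extent_count(40))  # 20 full 2GB extents
```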
What are all the files that are located in my virtual machines directory on the ESX server for?
*.nvram file – This file contains the CMOS/BIOS for the VM. The BIOS is based on the
Phoenix BIOS 4.0 Release 6, one of the most successful and widely used BIOSes, and is
compliant with all the major standards, including USB, PCI, ACPI, 1394, WfM and PC2001.
If the NVRAM file is deleted or missing, it will automatically be re-created when the VM is
powered on. Any changes made to the BIOS via the Setup program (F2 at boot) will be
saved in this file. This file is usually less than 10K in size and is in binary (not text) format.
vmdk files – These are the disk files that are created for each virtual hard drive in your VM.
There are three different types of files that use the vmdk extension:
• *–flat.vmdk file - This is the actual raw disk file that is created for each virtual hard drive.
Almost all of a .vmdk file's content is the virtual machine's data, with a small portion allotted to
virtual machine overhead. This file will be roughly the same size as your virtual hard drive.
• *.vmdk file – This isn't the file containing the raw data anymore. Instead, it is the disk
descriptor file, which describes the size and geometry of the virtual disk file. This file is in text
format and contains the name of the –flat.vmdk file it is associated with, as well as the
hard drive adapter type, drive sectors, heads and cylinders, etc. One of these files will exist for
each virtual hard drive that is assigned to your virtual machine. You can tell which –flat.vmdk
file it is associated with by opening the file and looking at the Extent Description field.
• *–delta.vmdk file – This is the differential file created when you take a snapshot of a VM (also
known as the REDO log). When you snapshot a VM, it stops writing to the base vmdk and starts
writing changes to the snapshot delta file. The snapshot delta will initially be small and then
grow as changes are made to the base vmdk file. The delta file records the changes to
the base vmdk, so it can never grow larger than the base vmdk. A delta file will be created for
each snapshot that you create for a VM. These files are automatically deleted when the snapshot
is deleted or reverted in snapshot manager.
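To make the descriptor relationship concrete, the fragment below is a reconstructed sample of a descriptor *.vmdk file. The file name, extent size, and geometry values are invented for illustration; the Extent Description line is where the associated –flat.vmdk is named.

```
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfs"

# Extent description (names the raw data file; 8388608 sectors = 4GB)
RW 8388608 VMFS "myvm-flat.vmdk"

# The Disk Data Base
ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "522"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
```

Opening the file in any text editor and reading the RW line tells you which –flat.vmdk the descriptor belongs to, exactly as described above.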
*.vmx file – This file is the primary configuration file for a virtual machine. When you create a
new virtual machine and configure the hardware settings for it, that information is stored in this
file. This file is in text format and contains entries for the hard disk, network adapters, memory,
CPU, ports, power options, etc. You can either edit these files directly if you know what to add,
or use the VMware GUI (Edit Settings on the VM), which will automatically update the file.
*.vswp file – This is the VM swap file (earlier ESX versions had a per-host swap file) and is
created to allow for memory overcommitment on an ESX server. The file is created when a VM is
powered on and deleted when it is powered off. By default, when you create a VM the memory
reservation is set to zero, meaning no memory is reserved for the VM and it can potentially be
100% overcommitted. As a result, a vswp file is created equal to the amount of memory
that the VM is assigned minus the memory reservation that is configured for the VM. So a VM
that is configured with 2GB of memory will create a 2GB vswp file when it is powered on; if you
set a memory reservation of 1GB, then it will only create a 1GB vswp file. If you specify a 2GB
reservation, then it creates a 0-byte file that it does not use. When you do specify a memory
reservation, physical RAM from the host will be reserved for the VM and not usable by any
other VMs on that host. A VM will not use its vswp file as long as physical RAM is available on
the host. Once all physical RAM on the host is used by its VMs and the host becomes
overcommitted, VMs start to use their vswp files instead of physical memory. Since the vswp
file is a disk file, it will affect the performance of the VM when this happens. If you specify
a reservation and the host does not have enough physical RAM when the VM is powered on,
then the VM will not start.
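The vswp sizing rule described above is just arithmetic: swap file size equals configured memory minus the memory reservation. The sketch below is illustrative, not VMware code.

```python
# Sketch of the vswp sizing rule: configured memory minus reservation
# (illustration only; sizes in MB).

def vswp_size_mb(configured_mb, reservation_mb):
    """Size of the swap file created when the VM powers on."""
    return max(configured_mb - reservation_mb, 0)

print(vswp_size_mb(2048, 0))     # 2048 -> full 2GB swap file
print(vswp_size_mb(2048, 1024))  # 1024 -> 1GB swap file
print(vswp_size_mb(2048, 2048))  # 0    -> zero-byte file, never used
```

The three cases match the 2GB examples in the paragraph above: no reservation, a 1GB reservation, and a full reservation.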
*.vmss file – This file is created when a VM is put into Suspend (pause) mode and is used to save