Deployment Cookbook: Quick Migration
with Virtual Server Host Clustering
Windows Server® 2003 Enterprise Edition &
Microsoft® Virtual Server 2005 R2 Service Pack 1
Document Version 2.02 | Last Edited: 09/11/07 | Edited By: TMS
Table of Contents
Introduction
Before You Begin
Section 1: Set Up and Configure the Windows Server Cluster
Section 2: Create the Virtual Server Host Cluster
Appendix A: Install IIS via Control Panel
Appendix B: Virtual Server Security Considerations
Appendix C: Script for Virtual Server Host Clustering
Related Links
Introduction
Many mid-sized and larger businesses want to enable high availability within their
virtual infrastructures. This is possible with Microsoft® Virtual Server host clustering,
which can provide a wide variety of services through a small number of physical servers
while maintaining the availability of the services provided.
Virtual Server host clustering is a way of combining Microsoft® Virtual Server 2005 R2
with the server cluster feature in Windows Server® 2003. This cookbook describes a
simple configuration in which you use Virtual Server 2005 R2 to configure one guest
operating system, and configure a server cluster that has two servers (nodes). With this
configuration, you can migrate workloads easily from one node to the other. This can be
used in cases when you need to schedule downtime for one of the nodes in your cluster.
You can create this configuration and, by carefully following the pattern of the
configuration, develop a host cluster with additional guests or additional nodes.
Intended Audience
This deployment cookbook is written for the IT generalist at a mid‐market corporation.
The goal of this cookbook is to provide you with all of the steps and guidance you need
to successfully install and configure Virtual Server host clustering and migrate workloads
from one node to another.
Using This Cookbook
To cover all the bases, this cookbook may contain more information than you need.
In addition to the table of contents at the beginning of this cookbook, each section also
includes its own table of contents so that you can easily find the steps that you need
most. If you already have a component installed or a step completed, move on to the
next step or section.
What Is Virtual Server?
In conjunction with Windows Server 2003, Virtual Server 2005 R2 Service Pack 1 (SP1)
provides a virtualization platform that runs most major x86 operating systems in a guest
environment, and is supported by Microsoft as a host for Windows Server operating
systems and Windows Server System™ applications. The comprehensive COM API in
Virtual Server 2005 R2, in combination with the Virtual Hard Disk (VHD) format and
support for virtual networking, provides administrators with complete control of
portable, connected virtual machines and enables easy automation of deployment and
ongoing change and configuration.
Additionally, its integration with a wide variety of existing Microsoft and third‐party
management tools allows administrators to seamlessly manage a Virtual Server 2005 R2
SP1 environment with their existing physical server management tools. A wide array of
complementary product and service offerings are available from Microsoft and its
partners to help businesses plan for, deploy, and manage Virtual Server 2005 R2 SP1 in
their environments.
What Is Virtual Server Host Clustering?
Virtual Server host clustering combines two technologies, Virtual Server 2005 R2 and
the server cluster feature in Windows Server 2003, so that you can consolidate servers
onto one physical host server without causing that host server to become a single point
of failure. To give an example, suppose you had two physical
servers providing client services as follows:
• Windows Server 2003 Standard Edition, used as a Web server
• Microsoft® Windows NT® Server 4.0 with Service Pack 6a (SP6a), with a
specialized application used in your organization
By using a configuration like the scenario in this cookbook, you could consolidate these
physical servers onto one physical server and, at the same time, maintain the availability
and flexibility of services if that consolidated server required scheduled maintenance. To
do this, you would run each server listed above as a guest (also known as a virtual
machine) on a physical server. You would also configure this server as one node in a
server cluster, meaning that a second server would be ready to support the guests. If
the need arose to shut down the first server (such as scheduled maintenance), the
workloads running on the first server can easily be migrated to the second server. You
could perform any necessary work on the first server and then, as needed, have it once
again resume support of the services.
What Is a Cluster?
There are several types of clusters. This paper deals with virtual machines running on
failover clusters. Failover clustering keeps server‐based applications highly available,
regardless of individual server failures. When one server in a cluster fails or is taken
offline, the other servers in the cluster take over the offline server's operations. Clients
using server resources experience little
or no interruption of their work because the resource functions move from one server in
the cluster to the other. In this cookbook, you will learn how to configure Virtual Server
to run with failover clustering.
This paper does not deal with network load balancing clusters or high‐performance
clusters. Network load balancing distributes IP traffic across multiple cluster hosts to
scale network performance, and high‐performance clusters allow users to run parallel,
high‐performance computing (HPC) applications for solving complex computations.
What Is a Cluster Resource?
A resource is the single unit that can be administered or managed on the cluster. Cluster
resources include physical hardware devices such as disk drives and network cards, and
logical items such as Internet Protocol (IP) addresses, applications, and application
databases. Each node in the cluster will have its own local resources, like a stand‐alone
server. However, the cluster also has common resources, such as a common data
storage array and private cluster network. These common resources are accessible by
each node in the cluster. A resource can be either online or offline. A resource is online
when it is available and providing its service to the cluster.
Resources are physical or logical entities that have the following characteristics:
• Can be brought online and taken offline.
• Can be managed in a server cluster.
• Can be owned by only one node at a time.
A resource group is a collection of resources managed by the cluster service as a single
logical unit. Application resources and cluster entities can be easily managed by
grouping logically related resources into a resource group. When a cluster service
operation is performed on a resource group, the operation affects all individual
resources contained within the group. Typically, a resource group is created to contain
all the elements needed by a specific application server and client for successful use of
the application.
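The ownership and failover behavior described above can be modeled in a short sketch. This is a conceptual illustration only, written in Python; the real cluster service in Windows Server 2003 manages resource groups for you, and the names used here (ResourceGroup, Guest1Group) match this cookbook's scenario rather than any Windows API.

```python
# Conceptual model of a cluster resource group: a set of logically related
# resources, owned by exactly one node at a time, that moves between nodes
# as a single unit. Illustrative only; not part of the deployment itself.

class ResourceGroup:
    """Resources that are brought online, taken offline, and failed over together."""

    def __init__(self, name, resources, owner):
        self.name = name
        self.resources = list(resources)   # e.g. a physical disk and a script
        self.owner = owner                 # exactly one owning node at a time
        self.online = False

    def bring_online(self):
        # An operation on the group affects every resource it contains.
        self.online = True

    def take_offline(self):
        self.online = False

    def fail_over(self, new_node):
        """Take the whole group offline on the old node, bring it online on the new."""
        self.take_offline()
        self.owner = new_node
        self.bring_online()

group = ResourceGroup("Guest1Group", ["DiskResourceX", "Guest1Script"], owner="Node 1")
group.bring_online()
group.fail_over("Node 2")
print(group.owner, group.online)   # Node 2 True
```

Note that ownership moves atomically with the group: after the failover, both the disk resource and the script resource belong to Node 2, never split between nodes.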
Understanding Common Terms in Virtual Server Host Clustering
The following terms are important for understanding Virtual Server host clustering.
host
A physical server on which a version of Virtual Server 2005 is running.
guest
An operating system running as a virtual machine in Virtual Server 2005. Multiple guests
can run on one host, and each guest can run one or more applications.
node
A computer system that is an active or inactive member of a cluster. In this cookbook, a
node is also a Virtual Server 2005 host.
failover
The process of taking a group of clustered resources (such as a disk on which data is
stored, plus an associated script) offline on one node and bringing them online on
another node. The cluster service ensures that this is done in a predefined, orderly
fashion, so that users experience minimal disruptions in service.
cluster storage
Storage that is attached to all nodes of the cluster. Each disk on the cluster storage is
owned by only one node of the cluster at a time. The ownership of disks moves from
one node to another during failover or when the administrator moves a group of
resources to another node.
logical unit number (LUN)
A unique identifier used on a SCSI bus to differentiate up to eight separate devices.
quorum disk/drive
In every cluster, a single resource is designated as the quorum resource. This resource
maintains the configuration data necessary for recovery of the cluster. This data, in the
form of recovery logs, contains details of all of the changes that have been applied to
the cluster database. This provides node‐independent storage for cluster configuration
and state data.
heartbeat
Nodes in a cluster communicate using their cluster service. The cluster service keeps
track of the current state of the nodes within a cluster and determines when a group
and its resources should fail over to an alternate node. This communication takes the
form of messages that are sent regularly between the two nodes' cluster services. These
messages are called heartbeats.
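The heartbeat mechanism can be sketched as a timeout check: a node is presumed failed if no heartbeat message has arrived within some window. This is an illustrative Python model with made-up interval values, not the cluster service's actual implementation or its real Windows Server 2003 settings.

```python
# Illustrative sketch of heartbeat-based failure detection. The 5-second
# timeout below is an assumed example value, not a real cluster setting.
import time

class NodeMonitor:
    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        self.last_heartbeat = {}           # node name -> time of last message

    def record_heartbeat(self, node, now=None):
        self.last_heartbeat[node] = time.monotonic() if now is None else now

    def is_alive(self, node, now=None):
        """A node is presumed failed if no heartbeat arrived within the timeout."""
        now = time.monotonic() if now is None else now
        last = self.last_heartbeat.get(node)
        return last is not None and (now - last) <= self.timeout

monitor = NodeMonitor(timeout_seconds=5.0)
monitor.record_heartbeat("Node 1", now=100.0)
print(monitor.is_alive("Node 1", now=103.0))   # True: heartbeat 3 s ago
print(monitor.is_alive("Node 1", now=110.0))   # False: missed the window
```

When the check fails, the cluster service initiates failover of the groups owned by the silent node, as described under "failover" above.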
cluster service
Service that facilitates clustering of nodes. Included with Windows Server 2003.
Before You Begin
This section will cover prerequisites for your environment, as well as other specifics that
will help you get the most out of this cookbook.
Scenario Overview
This host clustering quick migration scenario for Virtual Server uses two host machines,
with a Virtual Server‐based virtual machine (guest) on one of the host machines. Only
one of the two nodes hosts the virtual machine at any given time. The other node stands
by, ready to take over hosting the virtual machine if the first node fails. In this scenario,
we will migrate a workload from the first node to the second node.
Note: The entire cluster should be in the same geographic location.
The following diagram shows the cluster setup. The public network will be used to
connect these nodes, as well as the virtual machine, to other network resources, such as
domain controllers, printers, and routers. The private network is used exclusively for
cluster‐related network traffic and allows the cluster nodes to verify the state of other
cluster nodes. The network used to connect shared storage to the cluster will depend on
what type of shared storage you use.
Figure 1: Network topology used in this cookbook
The following naming conventions will be used throughout this cookbook:
• Cluster name: MyCluster
• Domain: contoso.com
• Host machine 1 (Virtual Server host): Node 1
• Host machine 2 (Virtual Server host): Node 2
• Guest (virtual) machine: Guest1
• Quorum disk: Disk Q
• Label for volume on shared storage: Disk X
• Folder on disk where configuration files for the guest are placed: X:\Guest1
• Cluster resource group: Guest1Group
• Clustered Physical Disk Resource: DiskResourceX
• Clustered Generic Script resource: Guest1Script (uses Havm.vbs)
• Public network: Public or mixed network
• Private network: Private network
This cookbook describes a simple host clustering configuration. You can create this
configuration and then, by carefully following the pattern of the configuration, develop
a host cluster with additional guests or additional nodes.
The scenario described in this cookbook has the following basic characteristics (more
details can be found in the Prerequisites section):
• This cluster is a two‐node cluster. This is fewer than the maximum number of
nodes possible (eight).
• This cluster uses cluster storage (shared storage) connected to the nodes by
SCSI, Fibre Channel, or iSCSI. Any of these will work with the instructions
provided in this cookbook. Consult your hardware vendor or manufacturer’s
instructions for details relating to your storage.
• You will use two shared storage disks. Disk Q will be the quorum disk, which
stores information to rebuild the cluster in the event of a failure. Disk X will be
the shared storage for the cluster.
• It has one guest operating system, configured as a resource group in the cluster.
The list of supported guest operating systems can be found in the Prerequisites
section.
In a production environment, it is likely you would want more than one guest
operating system. However, a scenario with one guest provides the foundation
for understanding a scenario with additional guests.
• This scenario uses copies of the provided script. When you configure the script
as a Generic Script resource in the cluster, it ensures that the guest functions
correctly when a failover or other cluster‐related process occurs. The script also
triggers restart of the guest if the guest stops running. You can find this script in
Appendix C.
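The role the script plays can be sketched as a small state machine: the cluster service brings the resource online and offline around failovers, and periodically checks whether the guest is still running. The following is an illustrative Python model only; the actual script, Havm.vbs in Appendix C, is VBScript, and the class and method names here are hypothetical stand-ins, not the script's real interface.

```python
# Illustrative model of what the clustered script resource does for the guest:
# start it when the group comes online, stop it for failover, and restart it
# if it stops running. Not the cookbook's actual Havm.vbs script.

class GuestScriptResource:
    def __init__(self, guest_name):
        self.guest_name = guest_name
        self.running = False

    def online(self):
        # Called when the group comes online on a node; the real script
        # starts the guest virtual machine here.
        self.running = True
        return True

    def offline(self):
        # Called during failover or planned migration; the real script stops
        # the guest so the other node can take over.
        self.running = False
        return True

    def is_alive(self):
        # Periodic health check; per the cookbook, the script also triggers
        # a restart of the guest if the guest has stopped running.
        if not self.running:
            self.online()
        return self.running

res = GuestScriptResource("Guest1")
res.online()
res.running = False          # simulate the guest stopping unexpectedly
print(res.is_alive())        # True: the health check restarted the guest
```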
Overview of Cluster Setup Procedure
The procedure for setting up your cluster is broken into two sections. In Section 1, you
will create a server cluster with two physical machines. In Section 2, you will create a
Virtual Server host cluster by installing Virtual Server on one of the physical machines.
Licensing Considerations
The license for Windows Server 2003 Enterprise Edition, Enterprise x64 Edition, or later
versions on the server hosting your virtual machines enables you to run up to four
instances of Windows Server 2003 Standard Edition, Enterprise Edition, or later versions
as the operating systems for those virtual machines, at no extra cost.
The license for Windows Server 2003 Datacenter Edition, Datacenter x64 Edition, or
later versions on the server hosting your virtual machines enables you to run an
unlimited number of instances of Windows Server 2003 Standard Edition, Enterprise
Edition, Datacenter Edition, or later versions as the operating systems for those virtual
machines, at no extra cost.
For each host machine that is a member of your cluster, you will need a Windows
license.
AMD Virtualization Technology Support
All AMD64 processors are built on Direct Connect Architecture, which eliminates
traditional bottlenecks in connecting CPUs, memory, and I/O, reducing latency and
optimizing memory performance.
Direct Connect Architecture enhances the performance of memory-bound applications,
virtualized or not. It provides inherent benefits to virtualization workloads, since the
reduced latency and improved performance increase the number of virtual machines
that can run on a server.
Supplementing these inherent benefits, AMD64 processors beginning with the Rev F
Opteron family incorporate extensions that significantly enhance the performance of
virtual machines. These extensions are collectively known as AMD Virtualization
(AMD-V).
AMD‐V is built on the Direct Connect Architecture foundation and significantly reduces
overheads associated with traditional software‐based virtualization solutions. Use of
AMD‐V allows many instances of off‐the‐shelf operating systems to run concurrently on
a single instance of Virtual Server 2005.
Starting with the Family 10h Opteron processors, AMD64 processors add, among many
other virtualization performance improvements, an extension that allows virtual
machines to manipulate their page tables without hypervisor intervention. This
extension, called Nested Page Tables (NPT), significantly reduces the overhead
associated with the shadow-paging algorithms found in today's software-based
virtualization solutions.
Additional benefits that are derived from AMD‐V are enhanced security and reduced
complexity. By diminishing the reliance on the virtualization software platform, AMD‐V
also reduces the attack surface and eliminates points of failure.
The advanced memory-handling capabilities offered by AMD Virtualization allow
increased isolation of virtual machines and can exclude device access, both of which
increase the overall security of virtualization.
AMD-V and AMD Direct Connect Architecture enhance virtualization by providing
efficient isolation of virtual machine memory. Virtualization software uses these
hardware capabilities to ensure that errors arising in one virtual machine have no
effect on other virtual machines on the same computer.
Service Pack 1 (SP1) for Microsoft Virtual Server 2005 R2 is built to take advantage of
AMD‐V hardware‐assisted virtualization and delivers marked improvement in system
performance of guest virtual machines.
Prerequisites
The following system requirements include software, hardware, network, and storage
requirements.
Software Requirements and Guidelines
• You must have Windows Server 2003 Enterprise Edition, Datacenter Edition, or
later installed on all computers in the cluster. Microsoft strongly recommends
that you also install the latest service pack for Windows Server 2003. If you
install a service pack, the same service pack must be installed on all computers
in the cluster. Either the x86 or x64 version of these operating systems can be
used.
• You will need licensed copies of the operating system and other software that
you will run on the guests. The following operating systems are supported for
the guest machine:
o Windows Server 2003 Standard Edition (Windows Server 2003 SP1
support for Virtual Server 2005 R2 only)
o Windows Server 2003 Enterprise Edition or later
o Windows Server 2003 Web Edition or later
o Windows® Small Business Server 2003 Standard Edition or later
o Windows® Small Business Server 2003 Premium Edition or later
o Windows® 2000 Server
o Windows® 2000 Advanced Server
o Windows NT Server 4.0 with SP6a
• Your system must be using a name‐resolution service, such as Domain Name
System (DNS), DNS dynamic update protocol, Windows Internet Name Service
(WINS), or a Hosts file. The Hosts file is supported as a local, static-file method of
mapping DNS domain names for host computers to their Internet Protocol (IP)
addresses. The Hosts file is located in the systemroot\System32\Drivers\Etc
folder. In most cases, DNS should be sufficient.
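For reference, a Hosts file entry is simply a plain-text line pairing an IP address with one or more names. A minimal example using this cookbook's example addresses and domain follows; the lowercase host names are hypothetical, since actual computer names are chosen during Windows setup:

```
192.168.10.11    node1.contoso.com    node1
192.168.10.12    node2.contoso.com    node2
```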
• All nodes in the cluster must be in the same Active Directory® directory service
domain. As a best practice, all nodes should have the same domain role (either
member server or domain controller); the recommended role, and the role used in this
cookbook, is member server. Exceptions that can be made to these domain role
guidelines are described later in this cookbook.
• When you first create a cluster or add nodes to it, you must be logged on to the
domain with an account that has administrator rights and permissions on all
nodes in that cluster. The account does not need to be a Domain Admin level
account, but can be a Domain User account with Local Admin rights on each
node.
Hardware Requirements and Guidelines
• All nodes in the cluster must have identical hardware and be of the same
architecture. You cannot mix x86‐based, Itanium‐based, and x64‐based
computers within the same cluster. Also, each hardware component must be
the same make, model, and firmware version. This makes configuration easier
and eliminates compatibility problems.
• If you are installing a server cluster on a storage area network (SAN), and you
plan to have multiple devices and clusters sharing the SAN with a cluster, your
hardware components must be compatible.
• You must have two mass‐storage device controllers in each node in the cluster:
one for the local disk, one for the cluster storage. You can choose between SCSI,
iSCSI, or Fibre Channel for cluster storage on server clusters that are running
Windows Server 2003 Enterprise Edition or Datacenter Edition. You must have
two controllers because one controller has the local system disk for the
operating system installed, and the other controller has the shared storage
installed.
• You must have storage cables to attach the cluster storage device to all
computers. Refer to the manufacturer's instructions for configuring storage
devices.
• The following table includes specifications that each host machine should meet:
Table 1: Host Machine Prerequisites
• Minimum CPU speed: 550 MHz
• Recommended CPU speed: 1.0 GHz or higher
• Processor information: AMD Opteron, Athlon, Athlon 64, Athlon X2, Sempron, Duron
• Minimum RAM: 256 MB (additional memory needed for each guest operating system)
• Required available hard-disk space: 2 GB on each node; additional disk space needed
on shared storage for each guest operating system
• Recommended monitor: Super VGA (800×600) or higher resolution monitor
recommended; VGA or hardware that supports console redirection required
• Input devices: keyboard and mouse or compatible pointing device, or hardware that
supports console redirection, required
• Additional processor support: Intel Celeron, Pentium III, Pentium 4, Xeon
Network Requirements and Guidelines
Each cluster node requires at least two network adapters and must be connected by
two or more independent networks. At least two LAN networks (or virtual LANs
[VLANs]) are required to prevent a single point of failure. A server cluster whose nodes
are connected by only one network is not a supported configuration; the adapters,
cables, hubs, and switches for each network must fail independently of those for any
other network. This usually means that the components of any two networks must be
physically independent.
Two networks must be configured to handle either All communications (mixed network)
or Internal cluster communications only (private network). The recommended
configuration for two adapters is to use one adapter for the private (node‐to‐node only)
communication and the other adapter for mixed communication (node‐to‐node plus
client‐to‐cluster communication), as we will do here.
You should keep all private networks separate from other networks. Specifically, do not
use a router, switch, or bridge to join a private cluster network to any other network. Do
not include other network infrastructure or application servers on the private network
subnet. To separate a private network from other networks, use a cross‐over cable in a
two‐node cluster configuration, or a dedicated hub or switch in a cluster configuration
of more than two nodes.
Note: If you are using iSCSI storage, you will need a dedicated NIC to communicate with
your iSCSI storage target. See your storage vendor's documentation for details.
Your cluster network infrastructure should have the following characteristics:
• A unique name for each node on your network.
• A static IP address for the cluster. Resources on the public network will use this
IP address to communicate with the cluster. We will use 192.168.10.50 on the
public interfaces as the cluster IP address in the example network provided in
this paper.
• One static IP address for each network adapter on each node. Set the addresses
for each linked pair of network adapters (linked node‐to‐node) to be on the
same subnet. You will use the following IP addresses:
o Node 1 private network IP address: 10.0.0.11
o Node 2 private network IP address: 10.0.0.12
o Node 1 public network IP address: 192.168.10.11
o Node 2 public network IP address: 192.168.10.12
Note: Server clusters do not support the use of IP addresses assigned from Dynamic
Host Configuration Protocol (DHCP) servers.
• The nodes in the cluster must be able to access an Active Directory domain
controller. The cluster service requires that the nodes be able to contact the
domain controller to function correctly. The domain controller should be in the
same location and on the same local area network (LAN) as the nodes in the
cluster. To avoid a single point of failure, the domain must have at least two
domain controllers. However, for this scenario, these domain controllers should
not be part of the cluster, due to additional complexity associated with
clustering domain controllers.
• Each node must have at least two network adapters. One adapter will be used
exclusively for internal node‐to‐node communication (the private network). The
other adapter will connect the node to the client public network. It should also
connect the cluster nodes to provide support in case the private network fails.
(A network that carries both public and private communication is called a mixed
network.) The network adapters connected together into one network must be
identical to one another.
• Teaming network adapters on all cluster networks concurrently is not supported
because of delays that can occur when heartbeat packets are transmitted and
received between cluster nodes. For best results, when you want redundancy
for the private interconnect, you should disable teaming and use the available
ports to form a second private interconnect. This achieves the same end result
and provides the nodes with dual, robust communication paths. The mixed
network can use teamed network adapters, but the private network cannot. If
you are using fault‐tolerant network cards or teaming network adapters, you
should ensure that you are using the most recent firmware and drivers. Check
with your network adapter manufacturer to verify compatibility with the cluster
technology in Windows Server 2003 Enterprise Edition and
Windows Server 2003 Datacenter Edition.
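The addressing plan above can be sanity-checked with a short script: each linked node-to-node pair of adapters must sit on the same subnet, and the static cluster IP must be on the public subnet. This is an illustrative sketch; the /24 prefix lengths are assumptions for this example network, so substitute whatever subnet masks your network actually uses.

```python
# Verify the example addressing plan from this cookbook is self-consistent.
# The /24 prefixes are assumed for illustration.
import ipaddress

private = ipaddress.ip_network("10.0.0.0/24")
public = ipaddress.ip_network("192.168.10.0/24")

node1_priv = ipaddress.ip_address("10.0.0.11")
node2_priv = ipaddress.ip_address("10.0.0.12")
node1_pub = ipaddress.ip_address("192.168.10.11")
node2_pub = ipaddress.ip_address("192.168.10.12")
cluster_ip = ipaddress.ip_address("192.168.10.50")

# Each linked pair of network adapters (node-to-node) shares a subnet.
assert node1_priv in private and node2_priv in private
assert node1_pub in public and node2_pub in public

# Clients reach the cluster through its static IP on the public network.
assert cluster_ip in public
print("addressing plan is consistent")
```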
Storage Requirements and Guidelines
• An external disk storage unit must be connected to all nodes in the cluster. This
will be used as the cluster storage. You should also use some type of hardware
redundant array of independent disks (RAID).
• All cluster storage disks, including the quorum disk, must be physically attached
to a shared bus. If you are using SCSI or Fibre Channel, each node must have a
mass‐storage device controller dedicated to the cluster storage (in addition to
the device controller for the local disk).
• If you are using iSCSI, each node must have a network adapter dedicated to the
cluster storage. That network adapter must be dedicated to iSCSI. The network
adapters used for iSCSI on each node should be identical, and we recommend
that the adapters be Gigabit Ethernet.
• A disk that contains the active boot or system partitions cannot be used as a
cluster resource disk. System partition refers to the disk volume that contains
the hardware‐specific files needed to start Windows. The boot partition refers
to the disk volume that contains the Windows operating system files (by default,
in the WINDOWS folder) and its support files. The boot partition can, but is not
required to, be the same partition as the system partition.
• The external shared storage must appear as two logical drives visible to both
nodes in the cluster. In this scenario, you will have two logical drives: disk Q will
be the quorum disk and disk X will be the shared storage disk.
• On your shared storage, you will need a dedicated resource to be used by the
cluster as the quorum resource. The recommended minimum size for the
volume is 50 MB. You should not store user data on any volume on the quorum
LUN.
• If you are using SCSI, ensure that each device on the shared bus (both SCSI
controllers and hard disks) has a unique SCSI identifier. If the SCSI controllers all
have the same default identifier (the default is typically SCSI ID 7), change one
controller to a different SCSI ID, such as SCSI ID 6. If more than one disk will be
on the shared SCSI bus, each disk must also have a unique SCSI identifier.
• Software fault tolerance is not natively supported for disks in the cluster
storage. For cluster disks, you must use the NTFS file system and configure the
disks as basic disks with all partitions formatted as NTFS. They can be either
compressed or uncompressed. Cluster disks cannot be configured as dynamic
disks. In addition, features of dynamic disks, such as spanned volumes (volume
sets), cannot be used without additional non‐Microsoft software.
• All disks on the cluster storage device must be partitioned as master boot
record (MBR) disks, not as GUID partition table (GPT) disks.
• For this scenario, the storage should contain at least two separate volumes, that
is, two separate logical unit numbers (LUNs). One volume will function as the
quorum (disk containing configuration information needed for the cluster), and
one will contain the virtual disk for the guest.
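The SCSI ID uniqueness rule above amounts to a simple check: every controller and disk on the shared bus must carry a distinct ID. The sketch below illustrates this with hypothetical device names and example IDs (ID 7 is the typical controller default mentioned above); it is not a tool for querying real hardware.

```python
# Uniqueness check for SCSI IDs on a shared bus. Device names and IDs are
# illustrative examples for the two-node scenario in this cookbook.
bus = {
    "controller-node1": 7,   # left at the typical default of 7
    "controller-node2": 6,   # changed from the default to avoid a clash
    "disk-Q": 0,             # quorum disk
    "disk-X": 1,             # shared storage disk
}

ids = list(bus.values())
assert len(ids) == len(set(ids)), "duplicate SCSI ID on the shared bus"
print("all SCSI IDs unique:", sorted(ids))
```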
Additional Considerations
• All cluster nodes must be on the same logical subnet.
• If you are using a virtual LAN (VLAN), the one‐way communication latency
between any pair of cluster nodes on the VLAN must be less than 500
milliseconds.
• In Windows Server 2003 operating systems, cluster nodes exchange multicast
heartbeats rather than unicast heartbeats. A heartbeat is a message that is sent
regularly between cluster network drivers on each node. Heartbeat messages
are used to detect communication failure between cluster nodes. Using
multicast technology enables better node communication because it allows
several unicast messages to be replaced with a single multicast message.
Clusters that consist of fewer than three nodes will send unicast rather than
multicast heartbeats.
• Determine an appropriate name for each network connection, by renaming the
network interfaces in the Network Connections window. For example, you
might want to name the private network Private and the public network Public.
This will help you uniquely identify a network and correctly assign its role.
• You will need the script, Havm.vbs. You will copy this script to each node of the
cluster. It can be found in Appendix C.
Required Network Information, Accounts, and Administrative
Credentials
In order to set up a cluster, you will need the following network information and
accounts:
• An account that you will log on to when you are configuring and managing the
cluster. This account must be a member of the local Administrators group on all
nodes.
• If you use iSCSI with network adapters in the nodes, you will also need a static IP
address for each of these adapters.
• A domain account for the cluster service. Do not use this account for other
purposes. The New Server Cluster Wizard gives this account the necessary
permissions when you set up the cluster.
• A name for the cluster, that is, the name administrators will use for connections
to the cluster. (The actual applications running on the cluster can have different
network names.) In this cookbook, the cluster is named MyCluster.
Section 1:
Set Up and Configure the Windows Server Cluster
This section will walk you through the steps of setting up a two‐node Windows server
cluster with your physical host machines.
Install Windows ................................................................................................................ 17
Windows will need to be installed on both physical host machines before
proceeding.
Set the order of the network adapter binding ................................................................. 17
In this step, connections are listed in the order in which they are accessed by
network services.
Configure the private network adapters.......................................................................... 18
You will start by setting up the private network for cluster‐specific network traffic.
Configure the public network adapters ........................................................................... 21
The second network will be the public network, used by the cluster to communicate
with other network resources.
Setting up a cluster service user account......................................................................... 22
In this section, you will create a unique account used only by the cluster service.
Set up disks....................................................................................................................... 23
You must set up disks for shared storage. All nodes in the cluster must be able to
communicate with the same shared storage device(s) in order for failover to occur.
Configure cluster disks ..................................................................................................... 24
Each node in the cluster must independently communicate with the storage
device(s).
Configure Node 1 ............................................................................................................. 32
This first step is required before you proceed with other cluster installation steps.
Validate the cluster installation ....................................................................................... 41
This step is necessary to make sure that the cluster is operating correctly.
Configure Node 2 ............................................................................................................. 42
Each subsequent node will be quicker to configure than the first node.
Heartbeat configuration................................................................................................... 47
This step will ensure that the private cluster‐specific network is functioning
properly.
Prioritize the order of the heartbeat adapter .................................................................. 50
You must prioritize the order for internal cluster communication.
Quorum disk configuration .............................................................................................. 53
The location of the quorum disk is set automatically.
Test whether group resources can fail over..................................................................... 55
Test the failover capability of the cluster to make sure it works.
Create a cluster
The following sections will show you how to create a two‐node cluster. You will need to
perform the following steps up to and including Verify Domain Membership on both
Node 1 and Node 2.
Install Windows
Install Windows Server 2003 Enterprise Edition or Windows Server 2003 Datacenter
Edition on both Node 1 and Node 2. For information about how to perform this
installation, see the documentation you received with the operating system. Before
configuring the cluster service, you must be logged on locally with a domain account
that is a member of the local administrators group. The cluster service will be installed
by default when you install Windows Server 2003.
Set the order of the network adapter binding
One of the recommended steps for setting up networks is to ensure the network
adapter binding is set in the correct order, as shown in step 4. To do this, use the
following procedure:
1. To open Network Connections, click Start, click Control Panel, and then double‐click
Network Connections.
2. On the Advanced menu, click Advanced Settings.
Figure 2: Advanced Settings for network connections
3. In Connections, click the connection that you want to modify.
4. Set the order of the network adapter binding as follows:
• External public network
• Internal private network (Heartbeat)
• [Remote Access Connections]
5. Click OK.
6. Repeat this procedure for all nodes in the cluster.
Configure the private network adapters
As stated earlier, the recommended configuration for two adapters is to use one
adapter for private communication, and the other adapter for mixed communication. To
configure the private network adapter, use the following procedure. Perform this first
for Node 1, and then for Node 2.
1. To open Network Connections, click Start, click Control Panel, and then double‐click
Network Connections.
2. Right-click the connection for the adapter you want to configure, and then click Properties. The Local Area Connection Properties dialog box opens.
3. On the General tab, verify that the Internet Protocol (TCP/IP) check box is selected,
and that all other check boxes in the list are clear. This is because the private
network is only used for cluster‐related network traffic.
Figure 3: Properties for private network adapter
4. On the General tab, select Internet Protocol (TCP/IP), and then click Properties. The Internet Protocol (TCP/IP) Properties dialog box opens.
5. On the General tab, verify that you have entered a static IP address that is not on
the same subnet or network as any other public network adapter. For Node 1, use
10.0.0.11 for the IP address and for Node 2, use 10.0.0.12 for the IP address. If you
want to select your own IP addresses, you should put the private network adapter in
one of the following private network ranges:
• 10.0.0.0 through 10.255.255.255 (Class A)
• 172.16.0.0 through 172.31.255.255 (Class B)
• 192.168.0.0 through 192.168.255.255 (Class C)
On the General tab, verify that no values are defined in Default Gateway under Use the following IP address, and that no values are defined under Use the following DNS server addresses. After you have done so, click Advanced. The Advanced TCP/IP Settings dialog box opens.
Figure 4: TCP/IP Properties for the private network adapter
6. On the DNS tab, verify that no values are defined on the page and that the check
boxes for Register this connection's addresses in DNS and Use this connection's
DNS suffix in DNS registration are clear.
Figure 5: Advanced TCP/IP Settings for the private network adapter
7. On the WINS tab, verify that no values are defined on the page, and then click
Disable NetBIOS over TCP/IP.
8. After you have verified the information, click OK. You might receive the message
This connection has an empty primary WINS address. Do you want to continue? To
continue, click Yes.
9. Repeat this procedure for all additional nodes in the cluster. For each private
network adapter, use a different static IP address on the same network ID.
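If you prefer the command line, the static addressing described above can also be applied with netsh. This is a sketch only: it assumes the private connection has been renamed Private, and it shows the Node 1 address (on Node 2, substitute 10.0.0.12).

```shell
rem Sketch only: assumes the private connection is named "Private".
rem Run on Node 1; on Node 2, substitute 10.0.0.12 for the address.
netsh interface ip set address name="Private" source=static addr=10.0.0.11 mask=255.0.0.0

rem Leave DNS unconfigured on the private network.
netsh interface ip set dns name="Private" source=static addr=none

rem Verify the resulting configuration.
netsh interface ip show config name="Private"
```

Disabling NetBIOS over TCP/IP still needs to be done on the WINS tab, as shown in step 7.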
Configure the public network adapters
If DHCP is used to obtain IP addresses, it might not be possible to access cluster nodes if
the DHCP server is inaccessible. For increased availability, static, valid IP addresses are
required for all interfaces on a server cluster. If you plan to put multiple network
adapters in each logical subnet, keep in mind that the cluster service will recognize only
one network interface per subnet. If you don’t use DHCP, you can follow the steps
above to configure your public network using static IP addresses. Make sure that these
addresses are in different IP ranges than your private network. For this cookbook, you
can use the following IP addresses:
• Node 1 public network IP address: 192.168.10.11
• Node 2 public network IP address: 192.168.10.12
To verify that the private and public networks are communicating properly, ping all IP addresses from each node. (Pinging an address sends it an echo request and confirms that a reply comes back, verifying basic connectivity.) You should be able to ping all IP addresses, both locally and on the remote nodes. To verify name resolution, ping each node from a client using the node's computer name (Node 1 and then Node 2) instead of its IP address; the reply should return only the IP address for the public network. You might also want to perform a reverse name lookup on the IP addresses by using the ping –a command. The public network is the only network in this configuration on which DNS is used.
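The verification described above boils down to a handful of ping commands; for example, from Node 1 (assuming the computer name of the second node is Node2):

```shell
rem Verify the private network (from Node 1 to Node 2).
ping 10.0.0.12

rem Verify the public network.
ping 192.168.10.12

rem Verify name resolution; only the public (192.168.10.x)
rem address should be returned.
ping Node2

rem Reverse name resolution on the public address.
ping -a 192.168.10.12
```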
Setting up a cluster service user account
The cluster service requires a domain user account that is a member of the Local
Administrators group on each node. This is the account under which the cluster service runs. Because Setup requires a user name and password, you must create this user
account before you configure the cluster service. This user account should be dedicated
to running only the cluster service and should not belong to an individual.
Note: It is not necessary for the Cluster Service Account (CSA) to be a member of the
Domain Administrators group. For security reasons, domain administrator rights should
not be granted to the Cluster Service Account.
• The Cluster Service Account requires the following rights to function properly on all nodes in the cluster. The Cluster Configuration Wizard, which we will use later, grants these rights automatically:
o Act as part of the operating system
o Adjust memory quotas for a process
o Back up files and directories
o Restore files and directories
o Increase scheduling priority
o Log on as a service
You should ensure that the local Administrators group has the following user rights:
• Debug programs
• Impersonate a client after authentication
• Manage auditing and security log
You can use the following procedure to set up a Cluster Service Account:
1. On the domain controller, click Start, All Programs, Administrative Tools, Active
Directory Users and Computers.
2. In the left pane, right-click the Users container, point to New, and then click User.
3. The New Object - User dialog box opens.
4. Type a first name, Cluster, and a last name, Admin (these should make sense but are usually not important for this account).
5. In the User logon name text box, type Clusteradmin, and then click Next.
In Password and Confirm password, type a password that follows your organization's guidelines for passwords. If User must change password at next logon is selected, clear it, and then select User cannot change password and Password never expires. Click Next, verify the summary information, and then click Finish to create the account.
If your administrative security policy does not allow the use of passwords that never
expire, you must renew the password and update the cluster service configuration
on each node before the passwords expire.
6. In the console tree of the Active Directory Users and Computers snap‐in, right‐click
Cluster Admin, and then click Properties.
7. Click the Member Of tab, click Add, type Administrators, click Check Names, and
then click OK. This gives the new user account administrative permissions on the
computer. Click OK.
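The same account can be created from the command line with dsadd. In this sketch the domain contoso.com and its distinguished-name path are placeholders; substitute your own. The net localgroup command, run on each node, covers the local Administrators membership requirement:

```shell
rem On the domain controller (the DN and domain are placeholders):
dsadd user "CN=Cluster Admin,CN=Users,DC=contoso,DC=com" ^
    -samid Clusteradmin -fn Cluster -ln Admin ^
    -pwd * -mustchpwd no -canchpwd no -pwdneverexpires yes

rem On each cluster node, add the account to the local
rem Administrators group:
net localgroup Administrators CONTOSO\Clusteradmin /add
```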
Set up disks
This section includes information and step‐by‐step procedures you can use to set up
disks.
Note: To avoid possible corruption of cluster disks, ensure that both the Windows
Server 2003 operating system and the cluster service are installed, configured, and
running on at least one node before you start the operating system on another node in
the cluster. The cluster service should be installed by default when you install Windows
Server 2003. When creating the cluster, configure disks on Node 1 first, then turn
Node 1 off, then turn on Node 2, configure the storage there, and turn Node 2 off. You
can now turn both nodes on: Node 1 first, then Node 2. Instructions for how to
configure disks will follow.
Quorum resource. The quorum resource maintains the configuration data necessary for
recovery of the cluster. The quorum resource is generally accessible to other cluster
resources so that any cluster node has access to the most recent database changes.
There can only be one quorum disk resource per cluster.
The requirements and guidelines for the quorum disk are as follows:
• The quorum disk should be at least 50 MB in size.
• You should use a separate LUN as the dedicated quorum resource, if you are
using a SAN.
• A disk failure could cause the entire cluster to fail. Because of this, we strongly
recommend that you implement a hardware RAID solution for your quorum disk
to help guard against disk failure. Do not use the quorum disk for anything other
than cluster management.
Cluster disks. When you configure a cluster disk, it is best to manually assign drive
letters to the disks on the shared bus. The drive letters should not start with the next
available letter. Instead, leave several free drive letters between the local disks and the
shared disks. For example, start with drive Q as the quorum disk, and then use drives R
and S for the shared disks. Another method is to start with drive Z as the quorum disk
and then work backward through the alphabet with drives X and Y as data disks. You
might also want to consider labeling the drives in case the drive letters are lost. Using
labels makes it easier to determine what the drive letter was. For example, a drive label
of "DriveR" makes it easy to determine that this drive was drive letter R. In this paper,
we will use drive Q as the quorum disk and drive X as the other shared drive letter.
The letter Q is commonly used as a standard for the quorum disk and is used in the next
procedure.
The first step in setting up disks for a cluster is to configure the cluster disks you plan to
use. It is necessary to complete this step before you configure the cluster because
shared storage is a requirement for creating a cluster. To do this, use the following
procedure.
Configure cluster disks
Perform this procedure twice on Node 1 and then twice on Node 2: once for disk Q (the quorum disk) and once for disk X (the data volume on shared storage). This applies when using iSCSI; if you chose different storage, you may be able to configure both disk Q and disk X at the same time on each node.
1. Make sure that only Node 1 is turned on (Node 2 should be turned off).
2. On Node 1, open Computer Management (Local) by clicking Start and then right‐
clicking My Computer and selecting Manage.
3. In the console tree, click Computer Management (Local), click Storage, and then
click Disk Management. Alternatively, you could go to Start, Run and type
diskmgmt.msc.
4. When you first start Disk Management after installing a new disk, the Initialize and Convert Disk Wizard appears, providing a list of the new disks detected by the operating system. Follow the instructions in the wizard.
Figure 6: Initialize and Convert Disk Wizard, Welcome page
5. Because the wizard automatically configures the disk as dynamic storage, you must
reconfigure the disk to basic storage. To do this, right‐click the disk, and then click
Convert To Basic Disk.
6. Right‐click an unallocated region of a basic disk, and then click New Partition.
7. In the New Partition Wizard, click Next.
Figure 7: New Partition Wizard, Welcome page
8. On the Select Partition Type page, click Primary partition, and then click Next.
Figure 8: New Partition Wizard, Select Partition Type page
9. On the Specify Partition Size page, use the default partition size. By default, the
maximum size for the partition is selected. When you are finished, click Next.
Figure 9: New Partition Wizard, Specify Partition Size page
10. On the Assign Drive Letter or Path page, change the default drive letter to one that
is further into the alphabet. In this cookbook we will use drive Q as the quorum disk,
and then use drive X for the data disk. When you are finished, click Next.
Figure 10: New Partition Wizard, Assign Drive Letter or Path page
11. On the Format Partition page, format the partition with the NTFS file system. In
Volume Label, enter the name Quorum for the quorum disk and the name Data for
the data disk. Assigning a drive label for cluster disks reduces the time it takes to
troubleshoot a disk recovery scenario. Click Next.
Figure 11: New Partition Wizard, Format Partition page
12. Review the summary information, and then click Finish.
13. When you have completed this procedure for both disk Q and disk X, turn off
Node 1, and turn on Node 2.
14. On Node 2, open Disk Management. You will see the new disks, Q and X, listed with the word Basic under each disk's name. This indicates that both disks are present and basic.
15. Right-click the area representing disk Q, and then click Change Drive Letter and Paths.
Figure 12: Change Drive Letter and Paths
16. Click Add. The Add Drive Letter or Path dialog box appears. Click Assign the following drive letter, choose Q from the drop-down list, and then click OK.
Figure 13: Add Drive Letter or Path
17. Repeat steps 15 and 16 for disk X.
18. After you have configured the cluster disks, you should verify that the cluster disks
are accessible. To do this, use the following procedure:
1. Turn off Node 2, and turn on Node 1.
2. On Node 1, open Windows Explorer.
3. Open the Quorum drive (Q), right-click an empty area, point to New, and then click Text Document.
4. Verify that the text document was created and written to the specified disk, and
then delete the document from the cluster disk.
5. Repeat steps 2 through 4 for disk X, to verify that it is accessible from Node 1.
6. Turn off Node 1, and then turn on Node 2.
7. Repeat steps 2 through 5 to verify that the disks are accessible from Node 2.
8. When finished, turn off Node 2 and then turn on Node 1 again.
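If you prefer scripting the disk setup, the New Partition Wizard steps can be approximated with a diskpart script; a sketch for the quorum disk follows. The disk number is an assumption; confirm it with list disk before running.

```shell
rem Contents of quorum.txt, a diskpart script. "Disk 1" is an
rem assumption; confirm the number with "list disk" first.
rem ---
rem   select disk 1
rem   create partition primary
rem   assign letter=Q
rem ---
rem Apply the script, then format and label the new partition:
diskpart /s quorum.txt
format Q: /FS:NTFS /V:Quorum /Q /Y
```

Repeat with letter=X and the label Data for the data disk.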
Create a new server cluster
At this point, we have two Windows Server machines that are connected to each other
via both a public and a private network. We have also configured each machine to be
able to access the same shared storage device. Now we are ready to create a server
cluster that will use these two machines.
The New Server Cluster Wizard and the Add Nodes Wizard automatically select the
drive used for the quorum device. The wizard automatically uses the smallest partition it
finds that is larger than 50 MB.
In the first phase of creating a new server cluster, you must provide all initial cluster
configuration information. To do this, you will use the New Server Cluster Wizard.
Note: Before configuring Node 1, make sure that all other nodes are turned off. Also
make sure that all cluster storage devices are turned on.
The following procedure explains how to use the New Server Cluster Wizard to
configure Node 1. This wizard should be installed by default with Windows Server 2003.
Configure Node 1
1. On Node 1, log in with a user account that has local administrative rights. Open
Cluster Administrator. To do this, click Start, Control Panel, Administrative Tools,
Cluster Administrator.
2. In the Open Connection to Cluster dialog box, in Action, select Create new cluster,
and then click OK.
Figure 14: Open Connection to Cluster dialog box
3. The New Server Cluster Wizard appears. Verify that you have the necessary
information to continue with the configuration, and then click Next to continue.
Figure 15: New Server Cluster Wizard, Welcome page
4. In Cluster Name and Domain, select the name of the domain in which the cluster
will be created. In Cluster name, enter a unique NetBIOS name. Here we will use the
name MyCluster. When you have finished with this, click Next.
Figure 16: New Server Cluster Wizard, Cluster Name and Domain page
5. If the Domain Access Denied page appears, you may be logged on locally with an account that is not a domain account with local administrative permissions, and the wizard will prompt you to specify an account. This is not the account under which the cluster service will run. If you have the appropriate credentials, the Domain Access Denied page does not appear.
6. Since it is possible to configure clusters remotely, you must verify or type the name
of the computer you are using as Node 1. On the Select Computer page, verify or
type the name of the computer you plan to use as Node 1. Click Next.
Figure 17: New Server Cluster Wizard, Select Computer page
7. On the Analyzing Configuration page, Setup analyzes the node for possible
hardware or software issues that can cause installation problems. Review any
warnings or error messages that appear. Click Details to obtain more information
about each warning or error message. When the tasks have completed, click Next.
Figure 18: New Server Cluster Wizard, Analyzing Configuration page
8. On the IP Address page, type a unique, valid cluster IP address that is on the same network ID as the public interfaces in the cluster. The wizard automatically associates the cluster IP address with one of the public networks by using the subnet mask to select the correct network. The cluster IP address should be used for administrative purposes only, and not for client connections. In this scenario, use 192.168.10.50. When you are finished entering the IP address, click Next.
Figure 19: New Server Cluster Wizard, IP Address page
9. On the Cluster Service Account page, type Clusteradmin for the user name and in
the password field, type the password that you chose when you created this
account. In Domain, select the domain name. The wizard verifies the user account
and password. When you are finished typing, click Next.
Figure 20: New Server Cluster Wizard, Cluster Service Account page
10. On the Proposed Cluster Configuration page, review the information for accuracy.
You can use the summary information to reconfigure the cluster if a system
recovery occurs. You should keep a hard copy of this summary information with the
change management log at the server. To continue, click Next.
Figure 21: New Server Cluster Wizard, Proposed Cluster Configuration page
11. On the Creating the Cluster page, review any warnings or error messages that
appear while the cluster is being created. Click to expand each warning or error
message for more information. To continue, click Next.
Figure 22: New Server Cluster Wizard, Creating the Cluster page
12. Click Finish to complete the cluster configuration.
Figure 23: New Server Cluster Wizard, Completion page
13. To view a detailed summary, click View Log or view the text file stored at the following location: %SystemRoot%\System32\LogFiles\Cluster\ClCfgSrv.log
Validate the cluster installation
You should validate the cluster configuration of Node 1 before configuring Node 2. To
do this, use the following procedure:
1. On Node 1, open Cluster Administrator. To do this, click Start, click Control Panel,
double‐click Administrative Tools, and then double‐click Cluster Administrator.
2. In the left pane, click Resources and verify that all cluster resources are successfully
up and running. Under State, all resources should be Online.
Figure 24: Verify cluster resources are online
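You can make the same check from a command prompt with cluster.exe, which is installed with the cluster service:

```shell
rem List cluster resources and their state; each should report Online.
cluster MyCluster resource

rem List the nodes and their status.
cluster MyCluster node
```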
Configure Node 2
After you install the cluster service on Node 1, it takes less time to install it on any
subsequent nodes. This is because the Setup program uses the network configuration
settings configured on Node 1 as a basis for configuring the network settings on
subsequent nodes. You can also install the cluster service on multiple nodes at the same
time and choose to install it from a remote location.
After you have configured Node 1, you can use the following procedure to configure
Node 2:
1. On Node 2, open Cluster Administrator. To do this, click Start, Control Panel,
Administrative Tools, Cluster Administrator.
2. In the Open Connection to Cluster dialog box, in Action, select Add nodes to
cluster. Then, next to Cluster or server name, click Browse and choose MyCluster,
and then click OK to continue.
Figure 25: Open Connection to Cluster dialog box
3. When the Add Nodes Wizard appears, click Next to continue.
Figure 26: Add Nodes Wizard, Welcome page
4. If you are not logged on with the required credentials, you will be asked to specify a
domain account that has administrator rights and permissions on all nodes in the
cluster.
5. On the Select Computers page, in the Computer name box, type the name of the
node that you want to add to the cluster. Here, type Node 2.
Figure 27: Add Nodes Wizard, Select Computers page
6. Click Add, and then click Next.
Figure 28: Add Nodes Wizard, Select Computers page
7. On the Analyzing Configuration page, when the Add Nodes Wizard has analyzed
the cluster configuration successfully, click Next.
Figure 29: Add Nodes Wizard, Analyzing Configuration page
8. On the Cluster Service Account page, in Password, type the password for the cluster
service account. Ensure that the correct domain for this account appears in the
Domain list, and then click Next.
Figure 30: Add Nodes Wizard, Cluster Service Account page
9. On the Proposed Cluster Configuration page, view the configuration details to verify
that the server cluster IP address, the networking information, and the managed
disk information are correct, and then click Next.
10. When the cluster is configured successfully, click Next, and then click Finish.
Additional Configuration
You have now created a two‐node server cluster. The following sections will show you
various configuration steps including heartbeat, network, and quorum configuration.
Heartbeat configuration
After the network has been configured on each node and a Windows server cluster
created, you should determine each network's function within the cluster.
Use the following procedure to configure the heartbeat:
1. On Node 1, open Cluster Administrator. To do this, click Start, click Control Panel,
double‐click Administrative Tools, and then double‐click Cluster Administrator.
2. In the console tree, double‐click Cluster Configuration, and then click Networks.
3. In the Scope pane, right‐click the private network you want to enable, and then click
Properties.
Figure 31: Select properties
4. The Private Properties dialog box opens. Select the Enable this network for cluster
use check box. Click Internal cluster communications only (private network), and
then click OK.
Figure 32: Private Properties page
5. In the Scope pane, right‐click the public network you want to enable, and then click
Properties.
6. The Public Properties dialog box opens. Select the Enable this network for cluster
use check box. Click All communications (mixed network), and then click OK.
Figure 33: Cluster configuration properties for the public network adapter
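The network roles set in this procedure can also be inspected and changed with cluster.exe. The numeric Role values shown (1 for internal-only, 3 for all communications) come from the cluster network Role property and should be treated as an assumption to verify on your build:

```shell
rem Show the current properties of each cluster network.
cluster MyCluster network "Private" /prop
cluster MyCluster network "Public" /prop

rem Role=1: internal cluster communications only (private network).
cluster MyCluster network "Private" /prop Role=1

rem Role=3: all communications (mixed network).
cluster MyCluster network "Public" /prop Role=3
```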
Prioritize the order of the heartbeat adapter
After you have assigned each network its role, you must prioritize the order in which the networks will be used for internal cluster communication. To configure network priority, use the following procedure:
1. On Node 1, open Cluster Administrator. To do this, click Start, click Control Panel,
double‐click Administrative Tools, and then double‐click Cluster Administrator.
2. In the console tree, click MyCluster.
Figure 34: MyCluster in the console tree
3. On the File menu, click Properties.
Figure 35: File menu
4. Click the Network Priority tab. In Networks used for internal cluster
communications, click Private. To increase the network priority, click Move Up; to
lower the network priority click Move Down. For this scenario, the private network
should be above the public network.
Figure 36: Network priority for internal cluster communication
5. When you are finished, click OK.
Note: If multiple networks are configured as private or mixed, you can specify which one
to use for internal node communication. It is usually best for private networks to have
higher priority than mixed networks.
Quorum disk configuration
The New Server Cluster Wizard and the Add Nodes Wizard automatically select the drive used for the quorum device; they use the smallest partition found that is larger than 50 MB. If you prefer, you can change this selection to a dedicated disk or partition of at least 50 MB that you have designated for use as the quorum. The following procedure explains what to do if you want to use a different disk for the quorum resource:
1. On Node 1, open Cluster Administrator. To do this, click Start, click Control Panel,
double‐click Administrative Tools, and then double‐click Cluster Administrator.
2. In the console tree, click MyCluster.
3. On the File menu, click Properties, and then click the Quorum tab. The quorum
property page opens. On the Quorum tab, click Quorum resource, and then select
the new disk or storage‐class resource that you want to use as the quorum resource
for the cluster. In Partition, if the disk has more than one partition, click the
partition where you want the cluster‐specific data kept. In Root path, ensure that
the path to the folder on the partition is: MSCS. When you are finished, click OK.
Figure 37: Change quorum resource
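The same change can be made from a command prompt with cluster.exe. This sketch assumes the new quorum disk is a Physical Disk resource named "Disk Q:" on drive Q:; substitute your own resource name and drive letter.

```bat
REM Point the cluster at a different quorum resource.
REM /path specifies the folder that holds the cluster-specific data.
cluster MyCluster /quorum:"Disk Q:" /path:Q:\MSCS

REM Verify the change by displaying the current quorum resource.
cluster MyCluster /quorum
```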
Test the Server Cluster
After Setup, there are several methods you can use to verify a cluster installation. In this
cookbook, we will use the Cluster Administrator method.
• Cluster Administrator. After Setup is run on Node 1, open Cluster
Administrator, and then try to connect to the cluster.
• Services snap‐in. Use the Services snap‐in to verify that the Cluster service is
listed and started.
• Event log. Use Event Viewer to check for ClusSvc entries in the system log. You
should see entries confirming that the Cluster service successfully formed or joined
a cluster.
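The service and node checks can also be done from a command prompt. These commands assume the default Cluster service name (clussvc) and the cluster name used in this cookbook:

```bat
REM Confirm that the Cluster service is installed and running.
sc query clussvc

REM Query the state of each node as seen by the cluster.
cluster MyCluster node /status
```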
Test whether group resources can fail over
You might want to ensure that a new group is functioning correctly. To do this, use the
following procedure:
1. On Node 1, open Cluster Administrator. To do this, click Start, click Control Panel,
double‐click Administrative Tools, and then double‐click Cluster Administrator.
2. In the console tree, expand the Groups folder.
3. In the console tree, click Cluster Group.
Figure 38: Cluster Administrator before failover
4. On the File menu, click Move Group. If you are prompted to choose a node, select
Node 2 as the destination. Make sure that the Owner column in the details pane
reflects the change of owner for all of the resources in the group.
Figure 39: File menu, Move Group
5. If the group resources fail over successfully, the group is brought online on the
second node after a short period of time. In this example, we moved the group
from Node 1 to Node 2 manually to confirm that the group can move between
nodes. In the event of a physical failure of a node with active groups, the groups
move to a node that is still active. Clustering is valuable for high availability in
both planned and unplanned downtime; this test is an example of planned
downtime.
Figure 40: Cluster Administrator after failover
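For reference, the same failover test can be driven from a command prompt. This sketch uses the group and node names from this cookbook:

```bat
REM Show which node currently owns each group.
cluster MyCluster group /status

REM Move the default Cluster Group to the second node.
cluster MyCluster group "Cluster Group" /moveto:Node2

REM Confirm that the owner is now Node 2.
cluster MyCluster group /status
```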
Dependencies Between Cluster Resources in Virtual Server Host
Clusters
It is very important to specify the correct resource dependencies in any kind of server
cluster, including a Virtual Server host cluster. When you specify a dependency, you
ensure that the cluster service starts resources in the correct order. Each dependent
resource is started after the resources that it depends on.
Before considering the dependencies, it can be useful to review the two types of
resources used in a Virtual Server host cluster: Generic Script resources, each
representing a script that ensures smooth functioning of a guest in the cluster, and
Physical Disk resources, each representing a disk used by a guest.
The two principles in specifying the correct dependencies are as follows:
For guests with multiple Physical Disk resources, the principle is "operating system
disk depends on data disk": The Physical Disk resource that contains the guest's
operating system must depend on any Physical Disk resources that contain the guest's
data. This ensures that all the resources associated with data are online before the
guest's operating system attempts to access data on them.
For all guests, the principle is "script depends on disk": The Generic Script resource
used for a guest must depend on the Physical Disk resource used for that guest. If the
guest has more than one Physical Disk resource, the Generic Script resource must
depend on the Physical Disk resource that contains the guest's operating system. This
ensures that all of the guest's Physical Disk resources are up and available before any
line of the script is run.
If, after trying the configuration in this cookbook, you decide to create a configuration
with multiple guests in the same resource group (meaning the guests would always
move together, never separately), begin by understanding the principles listed in this
section. Then plan your resource dependencies, building them into an orderly chain or
tree, keeping in mind that each dependent resource is started after the resources that it
depends upon.
In the Complete the configuration of Guest1 so workloads can move onto it section, we
will go through the steps of creating an example dependency.
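Dependencies can also be created with cluster.exe. The resource names in this sketch (Guest1 Script, Disk R: for the guest's operating system, Disk S: for its data) are hypothetical examples chosen to illustrate the two principles above; substitute the names used in your own cluster.

```bat
REM Principle 1: the operating system disk depends on the data disk.
cluster MyCluster res "Disk R:" /adddep:"Disk S:"

REM Principle 2: the guest control script depends on the disk that
REM contains the guest's operating system.
cluster MyCluster res "Guest1 Script" /adddep:"Disk R:"

REM List the dependencies to verify the chain.
cluster MyCluster res "Guest1 Script" /listdep
```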
Section 2:
Create the Virtual Server Host Cluster
This section explains the steps for creating the Virtual Server host cluster.
Steps covered in this section:
Install IIS by using the Configure Your Server Wizard ...................................................... 59
Internet Information Services (IIS) is a prerequisite for installing Microsoft Virtual
Server, which uses a Web‐based console that depends on the IIS World Wide Web
service.
Install Virtual Server 2005 R2 SP1 .................................................................................... 60
The installation process will automatically set up the Virtual Server Administration
Website.
Configure shutdown on the cluster node ........................................................................ 63
In this section, you will create a batch file that will later be used to stop the cluster
service on Node 1 to force Virtual Server to fail over to Node 2.
Configure the disk resource, resource group, and guest control script........................... 66
In this section, you will create cluster resources and resource groups. This step is
required in order for the cluster to manage these resources.
Create Guest1 on one of the hosts .................................................................................. 71
In this section, you will create a virtual machine called Guest1 on Node 1. Guest1
will be used as the virtual machine that fails over.
Install the guest operating system from a startup CD or image file ................................ 76
The virtual machine that you have created is analogous to a physical server that you
have just taken out of the box; you will need to install an operating system on it
before you can install applications.
Install Virtual Machine Additions ..................................................................................... 77
Virtual Machine Additions improve mouse cursor tracking and control, and also
greatly improve overall performance of the guest operating system on the virtual
machines.
Complete the configuration of Guest1 so workloads can move onto it .......................... 78
This section will cover additional steps that are necessary for Guest1 to be managed
by the cluster.
Migrate a workload .......................................................................................................... 82
You will now migrate a workload from Node 1 to Node 2.
Install IIS by using the Configure Your Server Wizard
IIS must be installed on both Node 1 and Node 2. You must install the World Wide Web
Service component of IIS so that you can use the Administration Website to manage
Virtual Server.
1. From the Start menu of the physical computer that will run the Virtual Server
service, select Programs > Administrative Tools > Manage Your Server.
2. Under Managing Your Server Roles, click Add or remove a role.
3. Read the preliminary steps in the Configure Your Server Wizard, and then click
Next.
4. Under Server Role, click Application server (IIS, ASP.NET), and then click Next.
Figure 41: Select Application Server in Configure Your Server Wizard
Note: By default, the wizard installs and enables IIS, COM+, and DTC. In addition, the
Configure Your Server Wizard enables Microsoft® ASP.NET by default.
5. Read the summary, and then click Next. To complete this step, you will need the
installation media (CD or network file share) for this computer's operating
system.
6. Complete the wizard, and then click Finish.
7. Repeat steps 1‐6 on Node 2.
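As an alternative to the wizard, the same IIS components can be installed unattended with sysocmgr and an answer file, and the installation verified with iisreset. This is a sketch; the component identifiers shown are the standard Windows Server 2003 optional-component names, but you should confirm the list for your build before relying on it.

```bat
REM Answer file (save as C:\iis.txt):
REM   [Components]
REM   iis_common = ON
REM   iis_www = ON
REM   iis_inetmgr = ON
REM   aspnet = ON

REM Install the components listed in the answer file.
sysocmgr /i:%windir%\inf\sysoc.inf /u:C:\iis.txt

REM Verify that the IIS services are running.
iisreset /status
```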
Install Virtual Server 2005 R2 SP1
You will need to install Virtual Server on both Node 1 and Node 2.
1. Start Virtual Server 2005 Setup (Setup Wizard) from the Virtual Server 2005 CD‐
ROM.
Note: If you start the Setup Wizard manually, be sure to use Setup.exe.
2. Proceed through the wizard until you reach the Setup Type page.
3. On the Setup Type page, select Complete, which installs Virtual Server by using the
default configuration, and then click Next.
Figure 42: Virtual Server Setup Wizard, Setup Type page
4. On the Configure Components page, either accept the default Web site port value
of 1024 or type a new value for the port, and then click Next. In this cookbook, we
accept the default.
Figure 43: Virtual Server Setup Wizard, Configure Components page 1