Redpaper
Scott Smith

IBM Flex System Solution for Microsoft
Hyper-V (2-node) Reference Architecture
Contents

Introduction
Business problem and business value
  Business problem
  Business value
Architectural overview
  Microsoft Hyper-V and failover clustering
Component model
  IBM Flex System Enterprise Chassis
  IBM Flex System Chassis Management Module
  IBM Flex System x240
  IBM System Storage DS3524
  IBM Flex System EN2092 Ethernet switches
Deployment considerations
  Racking and power distribution
  Networking and VLANs
  Active Directory
  Storage
  Setup of the IBM Flex System x240
  Cluster creation
  Optional four-node configuration
Summary
Appendix
  IBM Reseller Option Kit
  Related links
  Bill of materials
  Networking worksheets
Author
Now you can become a published author, too!

© Copyright IBM Corp. 2013. All rights reserved.

ibm.com/redbooks

Introduction
The Flex System Solution for Microsoft Hyper-V Reference Architecture provides businesses
with an affordable, interoperable, and reliable industry-leading virtualization and cloud
solution choice. This IBM® Flex System based offering, which is built around the latest IBM
x86 servers, storage, and networking, takes the complexity out of the solution by using
step-by-step implementation guides. Validated by the Microsoft Private Cloud Fast Track
program, the IBM virtualization reference architecture combines Microsoft software,
consolidated guidance, and validated configurations for compute, network, and storage
resources. The Microsoft program requires a minimum level of redundancy and fault
tolerance across the servers, storage, and networking of the Windows Server clusters to
help ensure resilient operation while you manage private cloud pooled resources.
This Reference Architecture provides ordering, setup, and configuration details for the IBM
2-Node highly available virtualization environment that has been validated as a Microsoft
Hyper-V Fast Track Small configuration. The design consists of two IBM Flex System™ x240
compute nodes that are attached to IBM System Storage® DS3524 iSCSI-connected
storage. Networking takes advantage of the Flex Chassis EN2092 switches. This
fault-tolerant hardware configuration is clustered by using the Microsoft Windows Server
2012 operating system.

Business problem and business value
This section briefly describes the business problem of maintaining a robust IT environment
while keeping pace with an ever-changing landscape, and the business value that can be
realized by combining Hyper-V Fast Track virtualization with failover clustering to ensure
reliable business continuity during periods of stress.

Business problem
Good IT practices recognize the need for high availability, flexibility, and maximum resource
usage. Rapidly responding to changing business needs with rapid deployment and
configuration while maintaining healthy systems and services directly corresponds to the
vitality of your business. Natural disasters, malicious attacks, and even simple configuration
problems can cripple services and applications until administrators resolve the problems and
restore any backed-up data. The challenge of maintaining uptime becomes more critical as
businesses consolidate physical servers into a virtual server infrastructure to reduce data
center costs, maximize utilization, and increase workload performance.

Business value
Combining virtualization with failover clustering helps eliminate single points of failure so
users have near-continuous access to important server-based and business-productivity
applications. Virtual machines can be migrated among clustered host servers to support
scheduled maintenance, and if physical or logical outages result in unplanned failures, virtual
machines can be automatically restarted on the remaining cluster nodes. As a result, clients
experience little to no downtime. This seamless operation is attractive for organizations that
are trying to create new business and maintain healthy service level agreements (SLAs).


Architectural overview
The Microsoft Hyper-V Fast Track Small configuration provides a validated configuration of
two or fewer compute nodes without a stand-alone management environment. This is ideal for
smaller organizations that do not require the extra complexity and flexibility a dedicated
management environment brings or for larger organizations that might have an existing
management environment or are interested in setting up a proof of concept configuration. The
design consists of two IBM Flex System x240 compute nodes, which are attached to the IBM
System Storage DS3524 Storage controller. The networking design leverages the Flex
System EN2092 Ethernet Switches. This fault-tolerant hardware configuration is clustered by
using the Microsoft Windows Server 2012 operating system. A short summary of the
Reference Architecture software and hardware components is listed below, followed by
preferred practice implementation guidelines.
The Reference Architecture Configuration is composed of the following
enterprise-class components:
One IBM Flex System Enterprise Chassis
Two IBM Flex System x240 compute nodes in a Windows Failover Cluster
running Hyper-V
One highly available (HA) DS3524 storage system with dual controllers
Four Flex System EN2092 switches providing redundant networking for data and storage
Together, these components form a high-performance and cost-effective solution that
supports Microsoft Hyper-V cloud environments for the most popular business-critical
applications and many custom third-party solutions. Equally important, these components
meet the criteria that are set by Microsoft for the Private Cloud Fast Track program. The
Private Cloud Fast Track program promotes robust cloud environments to help satisfy even
the most demanding virtualization requirements.
Figure 1 shows the overall configuration.

Figure 1 Cloud Hyper-V Fast Track configuration (one Flex System Enterprise Chassis with two Flex System x240 compute nodes, four Flex System EN2092 Ethernet switches, one Chassis Management Module, and one DS3524 storage system with iSCSI controllers)

This IBM Redpaper™ publication is for IT architects who are familiar with the necessary
components of virtualized environments and who want to begin with a small Hyper-V
environment that is positioned to scale up as demand grows. Additionally, IBM Sellers and
IBM Business Partners and their clients that are evaluating or pursuing Hyper-V virtualization
solutions can benefit from this previously validated configuration. Comprehensive prior
experience with the various Reference Architecture components is advised.

Microsoft Hyper-V and failover clustering
Microsoft Hyper-V technology continues to gain competitive traction as a key cloud
component in many client virtualization environments. Hyper-V is included as a standard
component in Windows Server 2012 Standard Edition and Datacenter Edition. Hyper-V virtual
machines (VMs) support up to 64 virtual processors and 1 TB of memory.
Individual VMs have their own operating system instance and are isolated from the host
operating system and other VMs. VM isolation helps promote higher business-critical
application availability. The Microsoft failover clustering feature, in the Windows Server 2012
Standard and Datacenter Editions, can dramatically improve production uptimes.
Microsoft failover clustering helps eliminate single points of failure (SPOFs) so that users
have near-continuous access to important server-based, business-productivity applications.
VMs can be migrated among clustered host servers to support scheduled maintenance. In
physical or logical outages that result in unplanned failures, VMs can be automatically
restarted on the remaining cluster nodes. As a result, clients experience little-to-no downtime.
This seamless operation is attractive for organizations that are trying to create new business
and maintain healthy SLAs.
Additionally, Microsoft failover clustering in Windows Server 2012 now supports native
network interface card (NIC) teaming to improve network fault tolerance. Microsoft failover
clustering in Windows Server 2012 further improves physical resource utilization by load
balancing VMs across cluster members in active/active configurations.

Component model
This highly available IBM private cloud architecture consists of the IBM Flex System
Enterprise chassis with IBM Flex EN2092 Ethernet switches, IBM Flex System x240 compute
nodes that run Microsoft Windows Server 2012, and DS3524 storage. Each component
provides a key element to the overall solution.

IBM Flex System Enterprise Chassis
The IBM Flex System Enterprise Chassis is a simple and integrated infrastructure platform
that supports a mix of compute, storage, and networking resources to meet the demands of
your application workloads. More chassis can be added easily as workloads scale.
With the IBM Flex System Manager™, multiple chassis can be monitored from a single
window. The 14-node, 10U chassis delivers high-speed performance that is complete with
integrated servers, storage, and networking. This flexible chassis is designed for a simple
deployment now and to scale to meet your needs in the future.

Figure 2 shows the IBM Flex System Enterprise Chassis with compute nodes that are
installed in the front and with network switches, power supplies, and fans that are installed in
the rear.

Figure 2 IBM Flex Enterprise Chassis

IBM Flex System Chassis Management Module
The IBM Flex System Chassis Management Module (CMM) is a hot-swap module that
configures and manages all installed chassis components. The CMM provides resource
discovery, inventory, monitoring, and alerts for all compute nodes, switches, power supplies,
and fans in a single chassis. The CMM provides the communication link with each compute
node system management processor, which is also called an Integrated Management
Module (IMM), to support power control and out-of-band remote connectivity. The default IP
address for the CMM is 192.168.70.100.

IBM Flex System x240
At the core of the IBM Cloud Reference Configuration solution, the IBM Flex System x240
compute nodes deliver the performance and reliability that are required for virtualizing
business-critical applications in Hyper-V cloud environments.
To provide the expected virtualization performance to handle any Microsoft production
environment, IBM Flex System x240 compute nodes can be equipped with up to two 8-core
E5-2600 processors, and up to 768 GB of memory. The IBM Flex System x240 includes an
onboard RAID controller. You can choose either spinning hot-swap serial-attached SCSI
(SAS) or Serial Advanced Technology Attachment (SATA) disks. Or, you can choose small
form-factor (SFF) hot-swap solid-state drives (SSDs).

Figure 3 shows the front of the x240.
Figure 3 IBM Flex System x240 (front view: hard disk drive activity and status LEDs, USB port, NMI control, Console Breakout Cable port, power button/LED, and LED panel)

Two I/O slots provide ports for both your data and storage connections through the Flex
Enterprise chassis switches. The server also supports remote management through the IBM
Integrated Management Module II (IMM2), which enables continuous management
capabilities. All of these key features, including many that are not listed, help solidify the
dependability IBM clients are accustomed to with IBM System x® servers.
By virtualizing with Microsoft Hyper-V technology on IBM Flex System x240 compute nodes,
businesses reduce physical server sprawl, power consumption, and total cost of ownership
(TCO). Virtualizing the server environment also lowers the server administration workload,
giving IT administrators the capability to manage more systems than in exclusively physical
environments. Highly available critical applications that are on clustered host servers can be
managed with greater flexibility and minimal downtime due to the Microsoft Hyper-V live and
quick migration capabilities.

IBM System Storage DS3524
The DS3524 combines storage development with leading 6-Gbps SAS, 1/10 Gb iSCSI or
Fibre Channel (FC) host interfaces, and SAS/SATA drive technology. With its simple,
efficient, and flexible approach to storage, the DS3524 is a cost-effective complement to IBM
Flex System, System x, and IBM BladeCenter® systems.
By offering substantial features at a price that fits most budgets, the DS3524 delivers superior
price/performance ratios, functionality, scalability, and ease of use for the entry-level
storage user.
The DS3524 offers these benefits:
Scalability to mid-range performance and features that start at entry-level prices
Efficiency to help reduce annual energy expenditures and environmental footprints
Simplicity that does not sacrifice control with the perfect combination of robustness and
ease of use

The DS3524 is well-suited for Microsoft virtualized cloud environments. The DS3524
complements the IBM Flex System Enterprise Chassis, Flex EN2092 Ethernet switches, and
x240 compute nodes in an end-to-end Microsoft Hyper-V private cloud solution by delivering
proven disk storage in flexible and scalable configurations. By connecting optional EXP3500
enclosures, your DS3524 can scale up to 192 SAS, SATA, or SSD disks, with up to
576 TB of raw capacity. The DS3524 has 1 GB of cache per controller, upgradeable to 2 GB.
The DS3524 now comes standard with 128 activated storage partitions. The DS3524 also
comes with Volume Copy, Encryption, Dynamic Disk Pool, Thin Provisioning, and
32 Enhanced IBM FlashCopy® snapshots. Optional features, such as SSD Cache,
512 Enhanced FlashCopy snapshots, Consistency Groups, IP Replication, and Remote and
Global Mirroring, are available for an extra cost, if needed.
The DS3524 is shown in Figure 4.

Figure 4 IBM System Storage DS3524

IBM Flex System EN2092 Ethernet switches
The IBM Flex System EN2092 1Gb Ethernet Scalable Switch enables administrators to offer
full Layer 2 and 3 switching and routing capability with combined 1-Gb and 10-Gb uplinks in
an IBM Flex System Enterprise Chassis. This consolidation simplifies the data center
infrastructure and helps reduce the number of discrete devices, management consoles, and
management systems while taking advantage of the 1-Gb Ethernet infrastructure.
In addition, the next-generation switch module hardware supports IPv6 Layer 3 frame
forwarding protocols. This scalable switch delivers port flexibility, efficient traffic
management, increased uplink bandwidth, and strong Ethernet switching price/performance.
The IBM Flex System EN2092 1Gb Ethernet Scalable Switch is shown in Figure 5.

Figure 5 IBM Flex System EN2092 1Gb Ethernet Scalable Switch

Deployment considerations
A successful Microsoft Hyper-V deployment and operation can be attributed to a set of
test-proven planning and deployment techniques. Proper planning includes sizing the needed
server resources (CPU and memory), storage (space and IOPS), and networking bandwidth
to support the infrastructure. This information can then be implemented by using industry
preferred practices to achieve optimal performance and the growth headroom that is
necessary for the solution.
The Microsoft Private Cloud Fast Track program combined with the IBM enterprise-class
hardware prepares IT administrators to successfully meet their virtualization performance and
growth objectives by deploying private clouds efficiently and reliably.
The preferred practices and implementation guidelines for the Cloud Reference Configuration
are broken down into the following topics:
Racking and power distribution
Networking and VLANs
Active Directory
Storage
Setup of the IBM Flex System x240
Cluster creation
Optional four-node configuration

Racking and power distribution
Perform the installation of power distribution units (PDUs) and their cabling before any
system is racked. When cabling the PDUs, remember the following information:
Ensure that you have sufficient and separate electrical circuits and receptacles to support
the required PDUs.
To minimize the chance of a single electrical circuit failure taking down a device, ensure
that sufficient PDUs exist to feed redundant power supplies that use separate
electrical circuits.
For devices that have redundant power supplies, plan for individual electrical cords from
separate PDUs.
Maintain appropriate shielding and surge suppression practices.
Employ the appropriate battery backup techniques.

Networking and VLANs
Combinations of physical and isolated virtual local area networks (VLANs) are configured at
the host, switch, and storage layers to satisfy isolation requirements. At the physical host
layer, eight 1 Gb Ethernet ports exist for each Hyper-V server (two Flex System EN2024
4-port 1GbE adapters). At the physical switch layer, four Flex System EN2092
switches have up to 48 1 GbE ports each for storage and host connectivity.
To support all eight 1 GbE connections from each server, the EN2092 switches require the
Upgrade 1 Feature on Demand (FoD) option. A second FoD option is available if the external
10 GbE network ports are used for either uplink or inter-switch link connections.

The servers and storage maintain connectivity through multiple iSCSI connections that use
Multipath I/O (MPIO). Windows Server 2012 NIC teaming is used to provide fault tolerance
and load balancing to all the remaining communication networks (host management, Cluster
Private, Live Migration, and VM).
At the physical switch layer, VLANs are used to provide logical isolation between the various
networks that are used for storage and data traffic. A key element is configuring the switches
correctly to maximize the available bandwidth and reduce congestion. Based on individual
environment preferences, flexibility is available regarding how many VLANs are created and
what type of role-based traffic they handle. After a final selection is made, ensure that the
switch configurations are saved or backed up.
Switch ports that are used for iSCSI traffic, Cluster Private, and Live Migration must be
configured as untagged (access mode in Cisco terms). This configuration limits that port to
only a single VLAN. The Ethernet frame receives a default VLAN ID at the switch (no settings
are needed at the operating system level).
Switch ports that are used for the Cluster Private and Live Migration team need to carry
multiple VLAN IDs. These ports must be set to enable tagging, and the VLAN definitions must
be specified on each switch to include the related ports. Each of these networks needs VLAN
assignments in Windows Server.
Inter-switch links are created between switches that share NIC team members. Link
Aggregation Control Protocol (LACP) bonds 2 - 8 switch ports between two switches. LACP
teams provide for higher bandwidth connections and error correction between LACP team
members. LACP teams are used for the inter-switch links and the uplink connections to a
corporate network.

Figure 6 shows a high-level network overview of the configuration. The four EN2092
Ethernet switches in the Flex chassis provide fault-tolerant data and storage connectivity;
fault-tolerant NIC teams and LACP teams across the switches provide redundant
communication paths for the storage, servers, and VMs. Up to eight DS3500 storage
systems and expansion enclosures can be attached.

Figure 6 Cloud Reference configuration

VLAN description
The five VLANs are described in Table 1. More information, such as an example of port
layouts and configuration, is shown in Table 11 on page 46. Worksheets to help plan network
layout are in “Networking worksheets” on page 42.

Table 1 VLAN definitions

Network   Name                             Description
VLAN 10   iSCSI Storage Network            Used for iSCSI storage traffic
VLAN 20   iSCSI Storage Network            Used for iSCSI storage traffic
VLAN 30   Cluster Private Network          Used for private cluster communication and Cluster Shared Volumes traffic
VLAN 31   Cluster Live Migration Network   Used for cluster VM Live Migration traffic
VLAN 40   Cluster Public Network           Used for host management and VM communication
Flex System switch locations
The IBM Flex System chassis contains up to four switches. The numbering of these switches
is interleaved, as shown in Figure 7. Consider this numbering when you perform work on the
switches or add cable connections to the external ports.
Figure 7 IBM Flex System switch locations in the chassis (rear view, showing the interleaved positions of I/O bays 1 - 4 that contain switches 1 - 4, along with the CMM, power supply, and fan bays)

iSCSI storage network (VLANs 10 and 20)
At the physical storage layer, the DS3524 uses iSCSI ports for connectivity. Each controller
has four 1 GbE Ethernet ports for iSCSI traffic. The Microsoft MPIO driver and the DS3524
Device Specific Module (DSM) manage the multiple I/O paths between the host servers and
the storage, and optimize those paths for maximum performance. VLANs are used to isolate
storage traffic from other data traffic on the switches. Ethernet Jumbo Frames are set on the
hosts and storage to maximize storage traffic throughput.
VLAN 10 and VLAN 20 are reserved for server access to the iSCSI storage. All iSCSI traffic
must be isolated on VLAN 10 and 20. One switch hosts VLAN 10, and a second switch hosts
VLAN 20.

In setting up iSCSI access to the DS3524 storage controller, consider the
following information:
To help balance iSCSI workloads, each DS3524 controller maintains two iSCSI
connections to the networks.
Each controller has one connection to each switch.
Each DS3524 controller must have its iSCSI ports set to support Jumbo frames
(9000 bytes).
The EN2092 switch, by default, supports Jumbo frames.
By default, the EN2092 switches are set as untagged ports. The correct default VLAN ID
needs to be assigned to the targeted ports from the EN2092 switch configuration menu.
In setting up iSCSI access for each host (server/compute node), consider the following items:
Each compute node has two connections to the iSCSI networks (one to each VLAN). One
connection must be made from each of the two NIC cards (see Figure 6 on page 9).
Because the switch ports are configured for a single VLAN in untagged mode, you do not
need to specify a VLAN ID in the operating system on the NIC.
By default, the EN2092 switches are set as untagged ports. The correct default VLAN ID
needs to be assigned to the targeted ports in the EN2092 switch configuration menu.
Each NIC port that connects to these VLANs must be set for Jumbo frames in the
advanced properties of the NIC under Windows Device Manager.

Cluster Private and Cluster Shared Volumes networks (VLAN 30)
This network is reserved for Cluster Private (heartbeat) communication between clustered
servers. Switch ports must be configured to appropriately limit the scope of each of these
VLANs. This configuration requires that the switch ports for each x240 compute node are set
to tagged. The VLAN definitions must include these ports for each switch. The network
interfaces that use this VLAN must specify VLAN 30 in Windows Server 2012. There must be
no IP routing or default gateways for Cluster Private networks.

Production Live Migration network (VLAN 31)
A separate VLAN must be created to support Live Migration for the cluster. Switch ports must
be configured to appropriately limit the scope of each of these VLANs. This configuration
requires the switch ports that are used by each x240 compute node to be set to tagged. The
VLAN definitions must include these ports for each switch. The network interfaces that use
this VLAN must specify VLAN 31 in Windows Server. There must be no routing on the Live
Migration VLAN.

Production communication network (VLAN 40)
This network supports communication for the hosts and VMs. Two teams, which are created
by using the Windows Server 2012 native NIC teaming feature, are used to provide fault
tolerance, and load balancing for communication for host servers and VMs. These switch
ports must be configured with their assigned VLAN ID in untagged mode. Default VLAN IDs
are assigned for each of the ports that participate in the VLAN.

If additional segregation between the management and VM networks is required, the VM
Team network ports can be set to tagged, and the ports can be added to the switch VLAN
definitions. Each VM can then set the necessary VLAN ID as part of its network settings
under Hyper-V manager. Layer 3 routing must also be configured for the switches to allow
support for VM network access as needed.
For more configuration network planning and configuration assistance, see “Networking
worksheets” on page 42.

DS3524 network ports
At the physical storage layer, the DS3524 uses iSCSI ports for storage connectivity. Each
controller has four 1 GbE Ethernet ports for iSCSI traffic. The DS3524 Device Specific
Module (DSM) manages the multiple I/O paths between the host servers and the storage,
and optimizes those paths for maximum performance. VLANs are used to
isolate storage traffic from other data traffic on the switches. Ethernet Jumbo Frames are set
on the hosts and storage to maximize storage traffic throughput.
Two Ethernet ports on each controller are reserved for management of the DS3524. At a
minimum, one management connection from each controller must be connected to the
network. Connecting each controller to both switches provides more redundancy.
The location of the iSCSI and management ports can be seen in Figure 8.
Figure 8 DS3524 iSCSI and management port locations (rear view of the controllers, showing the management and iSCSI connections)

IBM Flex System x240 network ports
The host servers have a total of two EN2024 4-port 1GbE network cards for a total of eight
1 GbE network ports to use for iSCSI storage connectivity, public and private cluster
communication, and VM communication. The iSCSI connections to storage use Multipath I/O
drivers to ensure fault tolerance and load balancing. Windows Server 2012 NIC teaming is
used for all but the iSCSI networks to provide fault tolerance, and spread the workload across
the network communication interfaces. The NIC teams follow the preferred practice by
ensuring that the team members are from each of the EN2024 network cards so that no
single card failure can take down the team.

The x240 compute node I/O connectors are shown in Figure 9.
Figure 9 Locations of the I/O connectors 1 and 2 (with the Fabric connector and the expansion connector)

Ethernet port assignment is listed in Table 2.

Table 2 Ethernet port assignment

I/O slot   iSCSI VLAN           ClstrPriv Team   Mgmt Team   VM Team
Slot 1     Switch 1 (VLAN 10)   Switch 1         Switch 2    Switch 2
Slot 2     Switch 3 (VLAN 20)   Switch 3         Switch 4    Switch 4

IBM Flex System EN2092 Ethernet configuration
The IBM Flex System configuration uses four Flex System EN2092 switches that provide up
to 48 1 Gb Ethernet ports each. The EN2092 provides primary storage access and data
communication services. Redundancy across the switches is achieved by creating an
inter-switch link between switches 1 and 3 and between switches 2 and 4. The inter-switch
links can be created by using the external 10 GbE links if activated or by creating an LACP
team with multiple 1 GbE ports. Uplink connections can be achieved with either
10 GbE or LACP teams, depending on the client configuration.
Each EN2092 switch requires Upgrade 1 to activate the additional ports that are required to
fully support all the EN2024 ports on each x240 compute node. An additional FoD license is
needed if the 10 GbE interfaces are used.

Management of the EN2092 switches can be performed either by the command-line interface
(CLI) or a web-based user interface (Figure 10). The default user name and password for the
IBM EN2092 switches is admin/admin. Change the default password to one that meets the
security requirements of your organization.

Figure 10 EN2092 administration interface

Spanning tree must be enabled on all switches according to the requirements of
your organization.
By default, the switches are assigned the following management IP addresses:
192.168.70.120 - Switch 1
192.168.70.121 - Switch 2
192.168.70.122 - Switch 3
192.168.70.123 - Switch 4
EN2092 switch port assignments can be seen in Table 3.
Table 3 EN2092 switch port layout

Port   Switch 1                     Switch 2                     Switch 3                     Switch 4

Internal ports
A1     iSCSI (VLAN 10)              Mgmt Team (VLAN 40)          iSCSI (VLAN 20)              Mgmt Team (VLAN 40)
B1     LM/ClstrPriv Team (30, 31)   VM Team (VLAN 40)            LM/ClstrPriv Team (30, 31)   VM Team (VLAN 40)
A2     iSCSI (VLAN 10)              Mgmt Team (VLAN 40)          iSCSI (VLAN 20)              Mgmt Team (VLAN 40)
B2     LM/ClstrPriv Team (30, 31)   VM Team (VLAN 40)            LM/ClstrPriv Team (30, 31)   VM Team (VLAN 40)

External ports
E1     Not used                     AD Server (VLAN 40)          Not used                     AD Server (VLAN 40)
E2     Not used                     Storage Mgmt (Cntrl-A)       Not used                     Storage Mgmt (Cntrl-A)
E3     Not used                     Storage Mgmt (Cntrl-B)       Not used                     Storage Mgmt (Cntrl-B)
E4     iSCSI Cntrl-A (VLAN 10)      Not used                     iSCSI Cntrl-A (VLAN 20)      Not used
E5     iSCSI Cntrl-B (VLAN 10)      Not used                     iSCSI Cntrl-B (VLAN 20)      Not used
E6     ISL LACP Team (30, 31)       ISL LACP Team (VLAN 40)      ISL LACP Team (30, 31)       ISL LACP Team (VLAN 40)
E7     ISL LACP Team (30, 31)       ISL LACP Team (VLAN 40)      ISL LACP Team (30, 31)       ISL LACP Team (VLAN 40)
E8     No uplink                    Uplink LACP Team (VLAN 40)   No uplink                    Uplink LACP Team (VLAN 40)
E9     No uplink                    Uplink LACP Team (VLAN 40)   No uplink                    Uplink LACP Team (VLAN 40)

In the table, LM/ClstrPriv = Live Migration and Cluster Private team (VLANs 30 and 31), ISL = inter-switch link, and Uplink = corporate uplink.

Ports are set as untagged, by default. For example, the storage ports remain untagged
(iSCSI and management). A default VLAN ID must be set as appropriate for the untagged
ports. This setting can be done from the switch configuration menu for each port, as shown in
Figure 11.

Figure 11 Setting VLAN tagging and the default VLAN ID

Switch ports that might have traffic from multiple VLANs must use tagged ports that must be
added to the respective VLANs in each switch, as appropriate, as shown in Figure 12.

Figure 12 Adding ports to the VLAN interface

Consider the following information about LACP teams (see Figure 13) on the EN2092 switch.
Each LACP team has a unique port admin key and each port that is a member of that team is
set to this unique value. In addition, the ports of one switch take the active role, and the ports
of the other switch are set to a passive role.

Figure 13 LACP configuration interfaces

The port configuration for each switch is described next.
Switch 1 ports must be configured in the following manner:
Ports A1, A2, EXT4, and EXT5 - VLAN 10 iSCSI traffic:
– VLAN tagging disabled (default).
– Jumbo frames are configured by default on the switch.
– The default VLAN ID is 10.

Ports B1, B2, EXT6, and EXT7 - Cluster Private/Cluster Shared Volumes (CSV) and
Live Migration:
– VLAN tagging enabled.
– Add ports to VLANs 30 and 31.
Ports EXT6 and EXT7 - Inter-switch link with Switch 3:
– Configure as an LACP team.
– Set ports to active.
– Check that the Ethernet cables connect to the external switch ports on Switch 3.
– Consider the interleaved numbering of switches.

Switch 3 ports must be configured in the following manner:
Ports A1, A2, EXT4, and EXT5 - VLAN 20 iSCSI traffic:
– VLAN tagging disabled (default).
– Jumbo frames are configured by default.
– The default VLAN ID is 20.
Ports B1, B2, EXT6, and EXT7 - Cluster Private/CSV and Live Migration:
– VLAN tagging enabled.
– Add ports to VLANs 30 and 31.
Ports EXT6 and EXT7 - Inter-switch link with Switch 1:
– Configure as an LACP team.
– Set ports to passive.
– Check that the Ethernet cables connect to external switch ports on Switch 1.
– Consider the interleaved numbering of switches.

Switch 2 ports must be configured in the following manner:
Ports A1, A2, B1, B2, EXT1, EXT2, EXT3, EXT6, EXT7, EXT8, and EXT9 - VLAN 40
management traffic:
– VLAN tagging disabled (default).
– The default VLAN ID is 40.
Ports EXT6 and EXT7 - Inter-switch link with Switch 4:
– Configure as an LACP team.
– Set ports to active.
– Check that the Ethernet cables connect to external switch ports on Switch 4.
– Consider the interleaved numbering of switches.

Ports EXT8 and EXT9 - LACP team for corporate uplink:
– Configure as an LACP team.
– Set ports to active/passive, depending on the needs of the uplink switches.
– Check that the Ethernet cables connect to uplink switches.
Switch 4 ports must be configured in the following manner:
Ports A1, A2, B1, B2, EXT1, EXT2, EXT3, EXT6, EXT7, EXT8, and EXT9 - VLAN 40
management traffic:
– VLAN tagging disabled (default).
– The default VLAN ID is 40.
Ports EXT6 and EXT7 - Inter-switch link with Switch 2:
– Configure as an LACP team.
– Set ports to passive.
– Check that Ethernet cables connect to external switch ports on Switch 2.
Ports EXT8 and EXT9 - LACP team for corporate uplink:
– Configure as an LACP team.
– Set ports to active/passive, depending on the needs of uplink switches.
– Check that the Ethernet cables connect to uplink switches.

Active Directory
The IBM Private Cloud Architecture must be part of an Active Directory (AD) domain, which is
required to form the Microsoft Windows Server 2012 clusters. An AD server is presumed to
exist. The identified external switch ports on switches 2 and 4 can be used for connectivity, or
connectivity can be achieved from your uplink ports to the network of your organization.

Storage
For an overview of the DS3524, see the IBM System Storage DS3500 Introduction and
Implementation Guide, SG24-7914, found at:
http://www.redbooks.ibm.com/abstracts/sg247914.html?Open

Cabling
In this configuration, each storage controller maintains two iSCSI connections to the switches
on the back of the Flex Enterprise chassis. One connection is to Switch 1, and one connection
is to Switch 3. Storage controller-A must be connected to external port 4 on each of these
switches, and storage controller-B must be connected to external port 5 on each of these
switches, matching the port layout in Table 3. See Figure 14 on page 20.
Two 1 GbE connections that use MPIO provide sufficient bandwidth for most configurations of
this size. However, if the storage network load requires more bandwidth, the remaining two
iSCSI ports on the DS3524 can be connected as well. If additional Ethernet connections exist
between the storage controllers and the switches, configure the switch ports to support the
correct VLANs as well.
Two management Ethernet ports are on the back of the DS3524. Distribute the management
connections across external ports 2 and 3 on EN2092 switches 2 and 4 to help ensure
connectivity if one switch is temporarily down. These switch ports must also be configured
with VLAN 40 in untagged mode to communicate correctly.

Figure 14 shows the storage connections for both iSCSI and management to the IBM Flex
System EN2092 switches.
Figure 14 DS3524 Storage Ethernet connections (controller-A and controller-B iSCSI connections run on VLAN 10 to EN2092 switch 1 and on VLAN 20 to switch 3; management connections on VLAN 40 run to switches 2 and 4)

Management
The DS3524 is managed by using the IBM Total Storage Manager tools that are available for
download at the IBM Support website found at http://www.ibm.com/support (support
account registration is required). The DS3524 MPIO DSM driver is also required for
this configuration.

To begin the management of the DS3524, complete the following steps:
1. Establish an out-of-band connection with Total Storage Manager by using the default
TCP/IP addresses (see Figure 15):
– Management Interface 1:
• Controller-A - 192.168.128.101
• Controller-B - 192.168.128.102
– Management Interface 2:
• Controller-A - 192.168.129.101
• Controller-B - 192.168.129.102

Figure 15 Establish an out-of-band connection to DS3524 management ports

2. Navigate to the DS3524 Setup page to change the management and iSCSI port TCP/IP
addresses to the addresses that are used in production (Figure 16).

Figure 16 Setting management and iSCSI ports for DS3524

3. Set the iSCSI port TCP/IP addresses for the two ports to use on each controller, and
enable Jumbo frames (9000 bytes) under Advanced Port Settings (Figure 17).

Figure 17 iSCSI port settings

DS3524 and Hyper-V cluster storage considerations
The DS3524 storage system supports a concept that is called disk pooling. Disk pools
remove much of the guesswork of creating arrays and creating logical volumes from these
arrays. A single disk pool that contains all 24 drives can be created. The DS3524 creates and
aggregates the optimum number of RAID 6 arrays to support this disk pool. From this pool,
one or more logical disks can be created and presented to the host servers. All I/O can be
spread out across all the disks to maximize disk throughput. The combination of RAID 6 and
proprietary disk pooling software adds exceptional fault tolerance and quicker disk rebuild
times in the event of a disk failure.
Microsoft Windows Failover Clustering supports Cluster Shared Volumes (CSVs). Cluster
Shared Volumes provide the primary storage for the VM configuration files and virtual hard
disks. All CSVs are concurrently visible to all cluster nodes and are simultaneously accessible
from each node. From the disk pool, two logical disks can be created: one logical disk for the
cluster quorum and one logical disk for a Cluster Shared Volume.

Figure 18 shows a suggested disk configuration for the DS3524:
– Disk Pool1: 24-disk pool
– Logical Disk1: 5 GB volume (Quorum)
– Logical Disk2: 4 TB volume (CSV1)

Figure 18 DS3524 storage configuration

Disk configuration and performance can be highly workload-dependent. Although this disk
configuration fits most user applications, profile and analyze your specific environment to
ensure adequate performance for your needs.

Configuration
To configure DS3524 storage, complete the following steps:
1. Create the disk pool that is needed for the production configuration. Assign a pool name to
it, and select the number of disks to use (Figure 19).

Figure 19 DS3524 array creation

2. Create the logical disks from the pool (Figure 20).

Figure 20 DS3524 logical drive creation

3. Create a host group to contain each of the host servers (Figure 21). A host group is a
logical group that contains the host servers that all see the same storage volumes.

Figure 21 DS3524 host group creation

Setup of the IBM Flex System x240
Our Windows Server cluster consists of two dual-socket IBM Flex System x240 compute
nodes with 64 GB of RAM, and eight 1GbE NIC ports each.
The setup involves the installation of Windows Server 2012 Datacenter Edition on each
server followed by the confirmation of network and storage connectivity. Then, Hyper-V and
Microsoft Clustering can be enabled and configured. Highly available VMs can then be
created to perform the various production tasks that your organization requires.

Pre-operating system installation steps
Before you install the operating system, complete the following steps:
1. Confirm that both EN2024 4-port Ethernet adapters are installed in each compute node.
2. Install the latest firmware on the x240 by using a Bootable Media Creator image.
Bootable Media Creator creates a bootable image of the latest IBM x240 updates
(download in advance). An external DVD drive is required. The Bootable Media Creator
(BoMC) can be downloaded from this website:
http://ibm.com/support/entry/portal/docdisplay?lndocid=TOOL-BOMC
IBM Fast Setup is an optional tool that can be downloaded and used to configure multiple
System x, BladeCenter, or Flex System systems simultaneously. A link to this tool is at
this website:
http://ibm.com/support/entry/portal/docdisplay?lndocid=TOOL-FASTSET
3. By default, the x240 compute node is set to balance power consumption and
performance. To change this setting, boot to UEFI mode, select System Settings →
Operating Mode (Figure 22), and change the selection to what best fits your
organizational parameters.

Figure 22 Operating Modes settings in UEFI

4. EN2092 switches are configured as described in “Networking and VLANs” on page 8:
– Inter-switch links are created and show as active in the EN2092
management consoles.
– Uplinks are created and show as active in the EN2092 management consoles.
– VLANs are configured for their respective ports in the EN2092 management consoles.

5. DS3524 iSCSI storage must be configured, as described in “Configuration” on page 24.
DS3524 iSCSI storage must be ready for iSCSI qualified name (IQN) assignments to map
the volumes to the servers.
6. The two local disks must be configured as a RAID 1 array.
IMM address: The default IMM address for each x240 compute node is 192.168.70.1xx,
where xx is equal to the two-digit slot number in which the compute node is installed (Slot 1
= 01).

OS installation and configuration
To install and configure the operating system on each x240 compute node, complete the
following steps:
1. Install Windows Server 2012 Datacenter Edition.
Windows Server 2012 Datacenter Edition offers unlimited Windows VM rights on the host
servers and is the preferred version for building private cloud configurations.
Windows Server 2012 Standard Edition now supports clustering as well, but it provides
licensing rights for up to two Windows VMs only (additional licenses are needed for more
VMs). Windows Server 2012 Standard Edition is intended for physical servers that have
few or no VMs running on them.
2. Set your server name, and join the domain.
3. Install the Hyper-V role and Failover Clustering feature (see the PowerShell sketch after
this list).
4. Run Windows Update to ensure that any new patches are installed.
5. Multipath I/O is used to provide balanced and fault-tolerant paths to DS3524. Multipath I/O
requires an additional DS3524 DSM-specific driver1 to be installed on the host servers
before you attach the storage.
6. The Microsoft MPIO prerequisite driver is also installed if the driver is not on the system.
This driver is part of Windows and installs automatically when the IBM driver is installed.
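The role and feature installations in steps 3, 5, and 6 can also be performed with PowerShell.
The following is a minimal sketch, assuming a default installation; the DS3524 DSM itself
must still be installed from the IBM download:

# Enable the Hyper-V role plus the Failover Clustering and Multipath I/O
# features, then restart to complete the Hyper-V installation.
Install-WindowsFeature Hyper-V, Failover-Clustering, Multipath-IO `
    -IncludeManagementTools -Restart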

Network configuration
To configure the network, complete the following steps:
1. For the iSCSI network interfaces, set the MTU size to 9000 to support Jumbo frames. The
larger packet size helps the storage performance. Complete this step under the device
properties of each NIC (Figure 23).

Figure 23 Jumbo frame settings for host server
1 Go to http://ibm.com/support and select downloads for the DS3524 (http://bit.ly/10CiWbd). Scroll down to the Storage Manager section of downloads and locate the correct download in the form Disk-SM-Windows-x86-Month-Year-Version-xx.xx.xx. The MPIO driver is in the Windows directory in the compressed file that you download.

IBM Flex System Solution for Microsoft Hyper-V (2-node) Reference Architecture

27
2. Set up NIC teaming.
One key new feature of Windows Server 2012 is in-box NIC teaming. This in-box teaming
can provide fault tolerance and link aggregation and be tailored to host or VM connectivity.
Three separate Windows Server 2012 teams are created in this configuration. One team
is used to support host server management traffic. A second team is used to support
Cluster Private/CSV communication and Live Migration (across separate vNICs and
VLANs). A third team provides VM communication.
Carefully identify and enumerate the network interfaces in each host to ensure that teams
are spread across the two physical devices and routed to the correct switches. Two
network interfaces run to each switch. One way to enumerate the ports is to disable a port
on a switch and see the change that is reflected under network devices.
The setting for Windows Server 2012 in-box NIC teaming is in the Server Manager
console, as shown in Figure 24.

Figure 24 NIC teaming in Server Manager

3. Create the team to support cluster public communication with the host servers by using
the two dedicated NIC ports, as described in “Networking and VLANs” on page 8.

Create this team by using the default switch independent teaming mode and address hash
load balancing mode (Figure 25). These modes provide up to 2 Gbps of outbound traffic
bandwidth and 1 Gbps of inbound traffic bandwidth.

Figure 25 Windows Server 2012 NIC team

4. Create a second team with the same teaming properties by using the Cluster Private/Live
Migration network interfaces. However, do not specify any VLANs now.
5. Create the team to support VM communication with the host servers by using the two
dedicated NIC ports, as described in “Networking and VLANs” on page 8. Create this
team by using the default switch independent teaming mode and Hyper-V port load
balancing mode.
Ethernet traffic for each VM is assigned to one of the team members as the default path.
The VM traffic is spread evenly across the team. In a failure, traffic is reassigned to an
alternative team member. The VLAN setting is configured under Hyper-V.
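The three teams that are described in steps 3 - 5 can also be created with the Windows
Server 2012 NIC teaming cmdlets. The following is a minimal sketch; the team and member
names are assumptions and must be matched to the enumerated ports so that each team
spans both EN2024 adapters:

# Management team: switch independent, address hash (TransportPorts).
New-NetLbfoTeam -Name MgmtTeam -TeamMembers "Mgmt1","Mgmt2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts
# Cluster Private/Live Migration team: same modes, no VLANs specified yet.
New-NetLbfoTeam -Name ClusterPrivTeam -TeamMembers "Priv1","Priv2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts
# VM team: Hyper-V port load balancing spreads VMs across team members.
New-NetLbfoTeam -Name VMTeam -TeamMembers "VM1","VM2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort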

When Windows Server 2012 NIC teaming is complete, three teams display under the NIC
teaming management utility (Figure 26).

Figure 26 Windows Server NIC teaming

6. Create a vSwitch for use by the host for Cluster Private/CSV communication and Live
Migration. PowerShell is used to create this vSwitch (instead of Hyper-V Virtual Switch
Manager) to take advantage of additional options and flexibility only available
with PowerShell.
PowerShell is part of Windows Server 2012. The CLI can be started by entering
PowerShell at the command line, running the start command, or clicking the
PowerShell icon.
7. Determine the network adapters that are available to work with by running the following
PowerShell command:
Get-NetAdapter
8. Record the name of the NIC team that was created for Cluster Private/CSV and
Live Migration.
9. Create the vSwitch on top of this team by running the following PowerShell command:
New-VMSwitch -name ClusterPrivate -netadaptername TeamName
-MinimumBandwidthMode Weight -AllowManagementOS $true
10.Add the second vNIC interface to the vSwitch (allow management OS access) by running
the following command:
Add-VMNetworkAdapter -ManagementOS -Name LiveMigration -SwitchName
ClusterPrivate
11.Reserve a minimum of 10% of the available bandwidth for the Cluster Private/CSV
network by running the following command:
Set-VMNetworkAdapter -ManagementOS -Name ClusterPrivate -MinimumBandwidthWeight 10
12.Reserve a minimum of 90% of the available bandwidth for the Live Migration network by
running the following command:
Set-VMNetworkAdapter -ManagementOS -Name LiveMigration -MinimumBandwidthWeight 90

13.Set the correct VLAN ID for each of these networks by running the following command:
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName ClusterPrivate
-Access -VlanId 30
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName LiveMigration
-Access -VlanId 31
14.After you set the VLAN IDs, confirm your network adapter name and VLAN assignments
by running the following command:
Get-VMNetworkAdapterVlan -ManagementOS
The output is shown in Figure 27.
PS C:\Users\administrator.C4> Get-VMNetworkAdapterVlan -ManagementOS

VMName VMNetworkAdapterName Mode   VlanList
------ -------------------- ----   --------
       LiveMigration        Access 31
       ClusterPrivate       Access 30
Figure 27 Results of the PowerShell VMNetworkAdapter configuration

15.Record the Windows Team network device name that is intended for use by the VMs
(Figure 28).

Figure 28 Available networking devices that can be used to create a vSwitch

16.Use Hyper-V Manager to create a vSwitch that is based on this device. Clear the check
box that allows management traffic on this device (Figure 29).

Figure 29 vSwitch settings

17.Confirm that the switch name is the same on all cluster nodes to ensure that Live
Migration works correctly.
18.Assign TCP/IP addresses and confirm network connectivity for all network connections on
each VLAN.
19.The cluster public network must be at the top of the network binding order (VLAN 40).
20.The iSCSI, Cluster Private, and Live Migration networks must not have any defined default
gateway. In addition, the Client for Microsoft networks and File and Print Sharing can be
disabled for these interfaces.
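The bindings in step 20 can also be disabled with PowerShell. The following is a minimal
sketch; the interface names are assumptions that must match your environment:

# Disable Client for Microsoft Networks and File and Printer Sharing on the
# iSCSI interfaces (repeat for the Cluster Private and Live Migration vNICs).
Disable-NetAdapterBinding -Name "iSCSI-A","iSCSI-B" -ComponentID ms_msclient
Disable-NetAdapterBinding -Name "iSCSI-A","iSCSI-B" -ComponentID ms_server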

Storage connections
The DS3524 provides shared storage that is used to create highly available and fault-tolerant
drives for use by the cluster.

The following steps complete the configuration and presentation of the disks on the DS3524.
The process of making the iSCSI connections from Windows Server 2012 back to these disks
is described. Complete the following steps:
1. Host mappings are used to ensure that the DS3524 storage volumes are accessible only
to the specific servers that are assigned to them. IQN names are assigned to each server,
and the IQN names can be seen in the Microsoft iSCSI Initiator Properties window in the
Control Panel. The IQN name for each server changes after the host servers join the
Windows domain.
Record the IQN names for each server to complete the host mapping in the DS3524
Storage Manager (Figure 30).

Figure 30 Server IQN name in Windows Server 2012 iSCSI Initiator Properties

2. From the Total Storage Manager application, add each of the clustered hosts to the host
group (Figure 31).

Figure 31 Add Host to Host Group

3. Select iSCSI as the interface type, add the unique IQN name for each host, and assign a
chosen name (Figure 32).

Figure 32 Host definition

4. If you are queried for a host type (which occurs if you are not using disk pools), select
Windows Clustered (Figure 33).

Figure 33 Host type

The DS3524 disks are now ready and visible to the host servers. iSCSI connections are
made from each server to the DS3524 to complete the storage connections.
5. Using the Microsoft iSCSI Initiator, connect each host to the storage. Use the Quick
Connect option if you are not using any advanced features.

If a CHAP secret is defined on the target (DS3524), click the Discover Target Portal tab,
enter the target IP, and click Advanced (Figure 34).

Figure 34 Target discovery with advanced options
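The iSCSI connections can also be made with the iSCSI initiator cmdlets. The following is a
minimal sketch; the target portal addresses are assumptions that must match the DS3524
iSCSI port addresses that were set earlier:

# Register one target portal on each iSCSI VLAN (addresses are assumptions).
New-IscsiTargetPortal -TargetPortalAddress 192.168.10.101
New-IscsiTargetPortal -TargetPortalAddress 192.168.20.101
# Connect to the discovered target with MPIO enabled and make it persistent.
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
# Verify the MPIO disks and their paths.
mpclaim.exe -s -d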

6. When complete, a minimum of four paths are defined between the server and the
storage (Figure 35).

Figure 35 iSCSI storage paths

7. The Volumes and Devices tab now displays the targets that are available to the host
server. The disks also appear in Windows Disk Manager, although a disk rescan might
be required.
8. From a single server, bring each disk online, and format it as a GPT disk for use by the
cluster. Assigning drive letters is optional because the disks are used for specific
clustering roles, such as CSV and the quorum disk, which do not require them.
Validate that each potential host server can see the disks and bring them online.
Tip: Only one server can have the disks online at a time until all disks are added to
Cluster Shared Volumes.
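
Several of the preceding steps can also be performed with PowerShell. The following commands are a minimal sketch; the portal address 192.168.10.100 and disk number 4 are placeholders for illustration:

# Step 1: record this server's IQN name for the DS3524 host mapping
(Get-InitiatorPort | Where-Object {$_.ConnectionType -eq "iSCSI"}).NodeAddress

# Step 5: discover the DS3524 target portal and connect to it persistently
New-IscsiTargetPortal -TargetPortalAddress "192.168.10.100"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

# Step 8: bring a disk online and format it as GPT (from one node only)
Set-Disk -Number 4 -IsOffline $false
Initialize-Disk -Number 4 -PartitionStyle GPT
New-Partition -DiskNumber 4 -UseMaximumSize | Format-Volume -FileSystem NTFS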

Cluster creation
Microsoft Windows clustering joins the host servers into a highly available configuration that
allows both servers to run VMs to support a production environment.
VM workloads must be balanced across both hosts. Be careful to ensure that the combined
resources of all VMs do not exceed the resources that are available on N-1 cluster nodes.
Staying under this threshold allows a single server to be taken out of the cluster and
minimizes the impact to your production servers.
A policy of monitoring resource utilization, such as the CPU, memory, and disk (both space
and I/O) helps keep the cluster running at optimal levels. By monitoring resource utilization,
you can plan to add more resources as needed.
Using the Failover Cluster Manager, run the cluster validation wizard to assess the two
physical host servers as potential cluster candidates and to address any errors. Consider the
following information as you run the wizard:
The cluster validation wizard checks for available cluster compatible host servers, storage,
and networking (Figure 36).

Figure 36 Cluster validation wizard

Ensure that the intended cluster storage is online to only one of the cluster nodes.
Temporarily disable the default IBM USB Remote NDIS Network Device on all cluster
nodes. This device causes the validation to issue a warning during network detection
because all the nodes share the same IP address.
Address any issues that are flagged during the validation.
Use the Failover Cluster Manager to create a cluster with the two physical host servers.
You need a cluster name and IP address.

Figure 37 shows the Failover Cluster Manager with the two hosts visible.

Figure 37 Failover Cluster Manager

Add the disks to Cluster Shared Volumes.
Use Hyper-V Manager to set the default paths for VM creation to use the Cluster
Shared Volumes.
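
The validation, cluster creation, and CSV steps can also be scripted with the FailoverClusters PowerShell module. The following commands are a minimal sketch; the node names, cluster name, and IP address are placeholders for illustration:

Import-Module FailoverClusters

# Validate the two physical hosts as cluster candidates
Test-Cluster -Node "Node1", "Node2"

# Create the cluster with a cluster name and IP address
New-Cluster -Name "HVCluster" -Node "Node1", "Node2" -StaticAddress 192.168.40.50

# Add the available shared disks to Cluster Shared Volumes
Get-ClusterAvailableDisk | Add-ClusterDisk | Add-ClusterSharedVolume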

VM setup and configuration
Perform the setup and configuration of new VMs by using the Failover Cluster Manager utility.
The Failover Cluster Manager utility automatically makes the VM highly available and able to
migrate (by using Live Migration) between the cluster members.
The operating system can be installed on a VM by using various methods. A straightforward
approach is to modify the VM DVD drive settings to specify an image file that points to the
Windows installation ISO image. Then, start the VM to begin the installation. Other
deployment methods are acceptable as well:
A virtual hard drive (VHD) file with a Sysprep image
Windows Deployment Service (WDS) server
System Center Configuration Manager (SCCM)
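
Whichever deployment method is used, a new VM can also be created directly on a Cluster Shared Volume and made highly available with PowerShell. The following commands are a minimal sketch; the VM name, paths, sizes, and switch name are placeholders for illustration:

# Create the VM with its virtual disk on a Cluster Shared Volume
New-VM -Name "VM01" -MemoryStartupBytes 2GB -SwitchName "VMSwitch" `
    -NewVHDPath "C:\ClusterStorage\Volume1\VM01\VM01.vhdx" -NewVHDSizeBytes 60GB

# Point the VM DVD drive at the Windows installation ISO image
Set-VMDvdDrive -VMName "VM01" -Path "C:\ISO\WindowsServer2012.iso"

# Make the VM highly available, then start it to begin the installation
Add-ClusterVirtualMachineRole -VMName "VM01"
Start-VM -Name "VM01"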
With the operating system installed and the VM running, complete the following steps before
you install the application software:
1. Run Windows Update.
2. Update or install the integration services in the VM. Ensure that both the host and VM
have the same version of integration services.
3. Activate Windows.
Hyper-V supports Dynamic Memory in VMs. Dynamic Memory allows flexibility in the
assignment of memory resources to VMs. However, certain applications might experience
performance-related issues if the memory settings of the VM are configured incorrectly.
Research how Dynamic Memory might affect the virtualization of specific applications before
you implement Dynamic Memory.
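
For example, Dynamic Memory can be enabled on a VM with the Hyper-V PowerShell module. This is a minimal sketch; the VM name and memory values are placeholders for illustration:

# Enable Dynamic Memory with startup, minimum, and maximum values
# (the VM must be turned off to enable or disable Dynamic Memory)
Set-VMMemory -VMName "VM01" -DynamicMemoryEnabled $true `
    -StartupBytes 2GB -MinimumBytes 1GB -MaximumBytes 8GB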

For a high-level overview of Dynamic Memory, see the Server Virtualization on Windows
Server 2012 white paper, found at:
http://download.microsoft.com/download/5/D/B/5DB1C7BF-6286-4431-A244-438D4605DB1D/WS%202012%20White%20Paper_Hyper-V.pdf

Optional four-node configuration
Increasing the number of cluster nodes from two to four, if needed, is a straightforward
process. You might consider increasing the number of cluster nodes to ensure sufficient
compute nodes to achieve an N+1 level of redundancy. Your configuration must have
sufficient compute nodes to run all VM workloads with one of the cluster nodes down. A
two-node cluster must fail over all workloads to the remaining cluster node. With a larger
cluster, this workload is distributed among several operational compute nodes.
The following changes to the configuration are required to support four nodes:
Compute nodes
Two more x240 compute nodes are required. The specifications must match the original
compute nodes.
Networking
No changes need to be made to the network switching hardware. The existing
configuration is sufficient to support the two additional cluster nodes. An updated EN2092
Flex Enterprise switch configuration table is shown in Table 4 as a reference for the
additional cluster nodes.
Table 4   EN2092 switch port layout for four cluster nodes

Port              Switch 1                    Switch 2                  Switch 3                    Switch 4

Internal ports
Internal Port A1  iSCSI (VLAN 10)             Mgmt Team (VLAN 40)       iSCSI (VLAN 20)             Mgmt Team (VLAN 40)
Internal Port B1  LM and Cluster Priv Team    VM Team (VLAN 40)         LM and Cluster Priv Team    VM Team (VLAN 40)
                  (VLANs 30 and 31)                                     (VLANs 30 and 31)
Internal Port A2  iSCSI (VLAN 10)             Mgmt Team (VLAN 40)       iSCSI (VLAN 20)             Mgmt Team (VLAN 40)
Internal Port B2  LM and Cluster Priv Team    VM Team (VLAN 40)         LM and Cluster Priv Team    VM Team (VLAN 40)
                  (VLANs 30 and 31)                                     (VLANs 30 and 31)
Internal Port A3  iSCSI (VLAN 10)             Mgmt Team (VLAN 40)       iSCSI (VLAN 20)             Mgmt Team (VLAN 40)
Internal Port B3  LM and Cluster Priv Team    VM Team (VLAN 40)         LM and Cluster Priv Team    VM Team (VLAN 40)
                  (VLANs 30 and 31)                                     (VLANs 30 and 31)
Internal Port A4  iSCSI (VLAN 10)             Mgmt Team (VLAN 40)       iSCSI (VLAN 20)             Mgmt Team (VLAN 40)
Internal Port B4  LM and Cluster Priv Team    VM Team (VLAN 40)         LM and Cluster Priv Team    VM Team (VLAN 40)
                  (VLANs 30 and 31)                                     (VLANs 30 and 31)

External ports
External Port E1  Not used                    AD Server (VLAN 40)       Not used                    AD Server (VLAN 40)
External Port E2  Not used                    Storage Mgmt (Cntrl-A)    Not used                    Storage Mgmt (Cntrl-A)
                                              (VLAN 40)                                             (VLAN 40)
External Port E3  Not used                    Storage Mgmt (Cntrl-B)    Not used                    Storage Mgmt (Cntrl-B)
                                              (VLAN 40)                                             (VLAN 40)
External Port E4  iSCSI - Cntrl-A (VLAN 10)   Not used                  iSCSI - Cntrl-A (VLAN 20)   Not used
External Port E5  iSCSI - Cntrl-B (VLAN 10)   Not used                  iSCSI - Cntrl-B (VLAN 20)   Not used
External Port E6  LACP Team (Inter-switch     LACP Team (Inter-switch   LACP Team (Inter-switch     LACP Team (Inter-switch
                  Link) (VLANs 30 and 31)     Link) (VLAN 40)           Link) (VLANs 30 and 31)     Link) (VLAN 40)
External Port E7  LACP Team (Inter-switch     LACP Team (Inter-switch   LACP Team (Inter-switch     LACP Team (Inter-switch
                  Link) (VLANs 30 and 31)     Link) (VLAN 40)           Link) (VLANs 30 and 31)     Link) (VLAN 40)
External Port E8  No uplink                   LACP Team (Corp Uplink)   No uplink                   LACP Team (Corp Uplink)
                                              (VLAN 40)                                             (VLAN 40)
External Port E9  No uplink                   LACP Team (Corp Uplink)   No uplink                   LACP Team (Corp Uplink)
                                              (VLAN 40)                                             (VLAN 40)

Storage
Profile and evaluate the storage needs to ensure that sufficient resources are available to
support operational needs. Ensure that you have a combination of space and sufficient
disk spindles to support the required I/O for a particular environment. If needed, the
DS3524 storage controller supports the EXP3524 storage expansion modules for
additional storage and I/O capacity. Establish the additional iSCSI connections between
each of the host servers and the storage.

Summary
Upon completing the implementation steps, you have an operational, highly available
Microsoft Hyper-V failover cluster that forms the basis of a high-performance, interoperable,
and reliable IBM private cloud architecture. Enterprise-class, multilevel software and
hardware fault tolerance is achieved by configuring a robust collection of industry-leading
IBM Flex System, storage, and networking components to meet the Microsoft Private Cloud
Fast Track program guidelines. The program’s unique framework promotes standardized and
highly manageable cloud environments, which help satisfy even the most challenging
business-critical virtualization demands.

Appendix
This section describes the IBM Reseller Option Kit.

IBM Reseller Option Kit
Getting your clients the operating system that they want has never been easier. The IBM
Reseller Option Kit (ROK) is a software delivery option that enables distributors and resellers
to order Microsoft Windows Server products separately from IBM server hardware. Each IBM
ROK package is tuned for IBM servers but is not yet installed. This product is purchased as a
server option, such as RAM, hard disk drives, or processors. The installation-ready reseller kit
provides the Windows Server license separately from IBM branded servers with all the
benefits and reliability of an IBM provided Windows Server image.
Tuned to run on System x servers, ROK includes certified and tested drivers and an OS
image. ROK also contains the IBM ServerGuide, a tool that helps to simplify and automate
installation and configuration. For more information, see the Announcement Letter, found at:
http://ibm.com/common/ssi/cgi-bin/ssialias?infotype=dd&subtype=ca&&htmlfid=897/ENUS212-513

Related links
IBM Bootable Media Creator:
http://ibm.com/support/entry/portal/docdisplay?lndocid=TOOL-BOMC
IBM Director Agent Download (Platform Agent):
http://ibm.com/systems/software/director/downloads/agents.html
IBM Fast Setup:
http://ibm.com/support/entry/portal/docdisplay?lndocid=TOOL-FASTSET
IBM Firmware Update and Best Practices Guide, found at:
http://ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5082923
IBM Flex System EN2092 1Gb Ethernet Scalable Switch User’s Guide, found at:
http://publib.boulder.ibm.com/infocenter/flexsys/information/topic/com.ibm.acc.networkdevices.doc/88y7927.pdf
IBM Flex System x240 Compute Node Types 7863, 8737, and 8738 Installation and
Service Guide, found at:
http://publib.boulder.ibm.com/infocenter/flexsys/information/topic/com.ibm.acc.8737.doc/dw1ko_book.pdf
IBM Reseller Option Kit for Windows Server 2012:
http://ibm.com/common/ssi/cgi-bin/ssialias?infotype=AN&subtype=CA&htmlfid=897/ENUS212-513&appname=totalstorage
IBM Server Guide:
http://ibm.com/support/entry/portal/docdisplay?lndocid=serv-guide
IBM Support:
http://www.ibm.com/support

IBM System Storage DS3524 Storage User's Guide, found at:
http://ibm.com/systems/networking/hardware/ethernet/b-type/b48y/
IBM x86 Server Cloud Solutions:
http://ibm.com/systems/x/solutions/cloud/

Bill of materials
Table 5 lists the bill of materials for the configuration.
Table 5   Bill of materials

SBB part number   Description                                                         Quantity

Rack configuration
9360-4PX          IBM 42U 1200mm Deep Dynamic Rack                                    1
39Y8941           DPI Single-phase 30A/208V C13 Enterprise PDU (US)                   2
40K9614           L6-30 power cord 2.8m                                               2
39Y8948           DPI Single-phase 60A/208V C19 Enterprise PDU (US)                   2
40K9615           IEC 309 2P+G power cord 4.3m                                        2

Chassis configuration
8721HC1           IBM Flex System Enterprise Chassis                                  1
                  Includes 2500W Power Modules                                        2
                  Includes IBM Flex System Chassis Management Module                  1
                  IBM Flex System Console Breakout Cable                              1
                  1.8m Black Cat5e Cable (Corporate Uplinks and AD)                   6
40K5627           1.5m Green Cat5e Cable (iSCSI Links)                                4
40K8932           0.6m Yellow Cat5e Cable (ISL Links)                                 4
40K5564           1.5m Blue Cat5e Cable (Storage Management)                          4
49Y4297           IBM Flex System EN2092 1Gb Ethernet Scalable Switch                 4
49Y4297           IBM Flex System EN2092 1Gb Ethernet Scalable Switch (Upgrade 1)     4
                  16A/100-250V, C19 to IEC 320-C20 2m Rack Power Cable                2
                  Service pack 1: 3 Year onsite Repair 24x7 4 Hour Response           1

Compute node configuration
8737MC1           Compute nodes: IBM Flex System x240 Compute Node                    2
49Y1379           8GB (1x8GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM   16
49Y7903           IBM Flex System EN2024 4-port 1Gb Ethernet Adapter                  4
90Y8879           IBM 300GB 10K 6Gbps SAS 2.5" SFF G2HS HDD                           4
81Y9421           Additional Intel Xeon Processor E5-2670 8C 2.6GHz 20MB Cache        2
                  1600MHz 115W
81Y9420           Intel Xeon Processor E5-2670 8C 2.6GHz 20MB Cache 1600MHz 115W      2
                  Service pack 2: 3 Year onsite Repair 24x7 4 Hour Response           2

Operating system
00Y6283           Operating system: Windows Server 2012 Datacenter (2 skt)            2

DS3524 storage configuration
1746C4A           Storage 1: IBM System Storage DS3524 Express                        1
68Y8434           2GB Cache Upgrade                                                   2
68Y8433           1Gb iSCSI 4 Port Daughter Card                                      2
49Y2048           600GB 10,000 rpm 6Gb SAS 2.5" HDD                                   24
                  Service pack 3: 3 Year onsite Repair 24x7 4 Hour Response           1

Networking worksheets
Use these worksheets to document your network configuration.

Switch 1
Table 6 shows the EN2092 switch layout for switch 1.
Table 6   EN2092 switch layout (switch 1)

Switch ports       Device                                             Port setting and VLANs
Internal Port A1   Compute Node 1 - iSCSI Port 1                      Untagged/VLAN 10
Internal Port B1   Compute Node 1 - Live Migr/Cluster Priv            Tagged/VLANs 30 and 31
Internal Port A2   Compute Node 2 - iSCSI Port 1                      Untagged/VLAN 10
Internal Port B2   Compute Node 2 - Live Migr/Cluster Priv            Tagged/VLANs 30 and 31
Internal Port A3   Optional Compute Node 3 - iSCSI Port 1             Untagged/VLAN 10
Internal Port B3   Optional Compute Node 3 - Live Migr/Cluster Priv   Tagged/VLANs 30 and 31
Internal Port A4   Optional Compute Node 4 - iSCSI Port 1             Untagged/VLAN 10
Internal Port B4   Optional Compute Node 4 - Live Migr/Cluster Priv   Tagged/VLANs 30 and 31
External Port E4   iSCSI - Controller-A                               Untagged/VLAN 10
External Port E5   iSCSI - Controller-B                               Untagged/VLAN 10
External Port E6   (Switch 3) Inter-switch link LACP Team             Tagged/VLANs 30 and 31
External Port E7   (Switch 3) Inter-switch link LACP Team             Tagged/VLANs 30 and 31
External Port E8   No uplink
External Port E9   No uplink
External Port E1   Not used
External Port E2   Not used
External Port E3   Not used

Switch 3
Table 7 shows the EN2092 switch layout for switch 3.
Table 7   EN2092 switch layout (switch 3)

Switch ports       Device                                             Port setting and VLANs
Internal Port A1   Compute Node 1 - iSCSI Port 2                      Untagged/VLAN 20
Internal Port B1   Compute Node 1 - Live Migr/Cluster Priv            Tagged/VLANs 30 and 31
Internal Port A2   Compute Node 2 - iSCSI Port 2                      Untagged/VLAN 20
Internal Port B2   Compute Node 2 - Live Migr/Cluster Priv            Tagged/VLANs 30 and 31
Internal Port A3   Optional Compute Node 3 - iSCSI Port 2             Untagged/VLAN 20
Internal Port B3   Optional Compute Node 3 - Live Migr/Cluster Priv   Tagged/VLANs 30 and 31
Internal Port A4   Optional Compute Node 4 - iSCSI Port 2             Untagged/VLAN 20
Internal Port B4   Optional Compute Node 4 - Live Migr/Cluster Priv   Tagged/VLANs 30 and 31
External Port E4   iSCSI - Controller-A                               Untagged/VLAN 20
External Port E5   iSCSI - Controller-B                               Untagged/VLAN 20
External Port E6   (Switch 1) Inter-switch link LACP Team             Tagged/VLANs 30 and 31
External Port E7   (Switch 1) Inter-switch link LACP Team             Tagged/VLANs 30 and 31
External Port E8   No uplink
External Port E9   No uplink
External Port E1   Not used
External Port E2   Not used
External Port E3   Not used

Switch 2
Table 8 shows the EN2092 switch layout for switch 2.
Table 8   EN2092 switch layout (switch 2)

Switch ports       Device                                             Port setting and VLANs
Internal Port A1   Compute Node 1 - Mgmt Team                         Untagged/VLAN 40
Internal Port B1   Compute Node 1 - VM Comm Team                      Untagged/VLAN 40
Internal Port A2   Compute Node 2 - Mgmt Team                         Untagged/VLAN 40
Internal Port B2   Compute Node 2 - VM Comm Team                      Untagged/VLAN 40
Internal Port A3   Optional Compute Node 3 - Mgmt Team                Untagged/VLAN 40
Internal Port B3   Optional Compute Node 3 - VM Comm Team             Untagged/VLAN 40
Internal Port A4   Optional Compute Node 4 - Mgmt Team                Untagged/VLAN 40
Internal Port B4   Optional Compute Node 4 - VM Comm Team             Untagged/VLAN 40
External Port E1   AD Server                                          Untagged/VLAN 40
External Port E2   Storage Management (Cntrl-A)                       Untagged/VLAN 40
External Port E3   Storage Management (Cntrl-B)                       Untagged/VLAN 40
External Port E4                                                      Untagged/VLAN 20
External Port E5                                                      Untagged/VLAN 20
External Port E6   (Switch 4) Inter-switch link LACP Team             Untagged/VLAN 40
External Port E7   (Switch 4) Inter-switch link LACP Team             Untagged/VLAN 40
External Port E8   Uplink LACP Team                                   Untagged/VLAN 40
External Port E9   Uplink LACP Team                                   Untagged/VLAN 40

Switch 4
Table 9 shows the EN2092 switch layout for switch 4.
Table 9   EN2092 switch layout (switch 4)

Switch ports       Device                                             Port setting and VLANs
Internal Port A1   Compute Node 1 - Mgmt Team                         Untagged/VLAN 40
Internal Port B1   Compute Node 1 - VM Comm Team                      Untagged/VLAN 40
Internal Port A2   Compute Node 2 - Mgmt Team                         Untagged/VLAN 40
Internal Port B2   Compute Node 2 - VM Comm Team                      Untagged/VLAN 40
Internal Port A3   Optional Compute Node 3 - Mgmt Team                Untagged/VLAN 40
Internal Port B3   Optional Compute Node 3 - VM Comm Team             Untagged/VLAN 40
Internal Port A4   Optional Compute Node 4 - Mgmt Team                Untagged/VLAN 40
Internal Port B4   Optional Compute Node 4 - VM Comm Team             Untagged/VLAN 40
External Port E1   AD Server                                          Untagged/VLAN 40
External Port E2   Storage Management (Cntrl-A)                       Untagged/VLAN 40
External Port E3   Storage Management (Cntrl-B)                       Untagged/VLAN 40
External Port E4                                                      Untagged/VLAN 20
External Port E5                                                      Untagged/VLAN 20
External Port E6   (Switch 2) Inter-switch link LACP Team             Untagged/VLAN 40
External Port E7   (Switch 2) Inter-switch link LACP Team             Untagged/VLAN 40
External Port E8   Uplink LACP Team                                   Untagged/VLAN 40
External Port E9   Uplink LACP Team                                   Untagged/VLAN 40

Multiple VLANs
If multiple VLANs are used with the VMs, switches 2 and 4 need the port configuration
changes shown in Table 10 to allow multiple VLANs across the port. The VLAN definitions
and routing also need to be determined and addressed in the two switches.
Table 10   Configuration changes for switches 2 and 4 if multiple VLANs are used

Switch ports       Device                                             Port setting and VLANs
Internal Port B1   Compute Node 1 - VM Comm Team                      Tagged/VLANs TBD
Internal Port B2   Compute Node 2 - VM Comm Team                      Tagged/VLANs TBD
Internal Port B3   Optional Compute Node 3 - VM Comm Team             Tagged/VLANs TBD
Internal Port B4   Optional Compute Node 4 - VM Comm Team             Tagged/VLANs TBD
Internal Port A2   (Switch 2) Inter-switch link LACP Team             Tagged/VLAN TBD
Internal Port B2   (Switch 2) Inter-switch link LACP Team             Tagged/VLAN TBD
External Port E1   Uplink LACP Team                                   Tagged/VLAN TBD
External Port E2   Uplink LACP Team                                   Tagged/VLAN TBD

VLAN layout
Table 11 describes the configurations for the five VLANs that are described in Table 1 on
page 10.
Table 11   VLAN configuration

Device                                             IP addresses

VLAN 10 (iSCSI)
Controller-A iSCSI Port 1                          192.168.10.xx
Controller-B iSCSI Port 1                          192.168.10.xx
Compute Node 1 - iSCSI Port 1                      192.168.10.xx
Compute Node 2 - iSCSI Port 1                      192.168.10.xx
Optional Compute Node 3 - iSCSI Port 1             192.168.10.xx
Optional Compute Node 4 - iSCSI Port 1             192.168.10.xx

VLAN 20 (iSCSI)
Controller-A iSCSI Port 2                          192.168.20.xx
Controller-B iSCSI Port 2                          192.168.20.xx
Compute Node 1 - iSCSI Port 2                      192.168.20.xx
Compute Node 2 - iSCSI Port 2                      192.168.20.xx
Optional Compute Node 3 - iSCSI Port 2             192.168.20.xx
Optional Compute Node 4 - iSCSI Port 2             192.168.20.xx

VLAN 30 (Cluster Priv/CSV)
Compute Node 1 - Cluster Private/CSV               192.168.30.xx
Compute Node 2 - Cluster Private/CSV               192.168.30.xx
Optional Compute Node 3 - Cluster Private/CSV      192.168.30.xx
Optional Compute Node 4 - Cluster Private/CSV      192.168.30.xx

VLAN 31 (Cluster Priv/Live Migr)
Compute Node 1 - Live Migration                    192.168.31.xx
Compute Node 2 - Live Migration                    192.168.31.xx
Optional Compute Node 3 - Live Migration           192.168.31.xx
Optional Compute Node 4 - Live Migration           192.168.31.xx

VLAN 40 (Cluster Pub/Mgmt and VM Comm)
Compute Node 1 - (WS12 Team - Cluster Public)      192.168.40.xx
Compute Node 2 - (WS12 Team - Cluster Public)      192.168.40.xx
Cluster IP address                                 192.168.40.xx
Storage Controller-A (Mgmt - Switch2)              192.168.40.xx
Storage Controller-A (Mgmt - Switch4)              192.168.40.xx
Storage Controller-B (Mgmt - Switch2)              192.168.40.xx
Storage Controller-B (Mgmt - Switch4)              192.168.40.xx
Compute Node 1 VM WS12 Team                        No Host Exposure
Compute Node 2 VM WS12 Team                        No Host Exposure

Author
This paper was produced by a technical specialist working at the International Technical
Support Organization, Raleigh Center.
Scott Smith is an IBM System x Systems Engineer working at the IBM Center for Microsoft
Technology. Over the past 15 years, Scott has worked to optimize the performance of IBM
x86-based servers that run the Microsoft Windows Server operating system and Microsoft
application software. Recently, his focus has been on Microsoft Hyper-V-based solutions with
IBM System x servers, storage, and networking. He has extensive experience in helping IBM
clients understand the issues that they face and in developing solutions that address them.
Thanks to the following people for their contributions to this project:
David Ye, IBM Solutions Architect
Vinay Kulkarni, IBM Performance Engineer
Cole Kiblinger, IBM Systems Networking Engineer
Marco Rengan, IBM Cloud Marketing Manager
David Watts, IBM Redbooks®
Stephen Smith, IBM Redbooks

Now you can become a published author, too!
Here’s an opportunity to spotlight your skills, grow your career, and become a published
author - all at the same time! Join an ITSO residency project and help write a book in your
area of expertise, while honing your experience using leading-edge technologies. Your efforts
will help to increase product acceptance and customer satisfaction, as you expand your
network of technical contacts and relationships. Residencies run from two to six weeks in
length, and you can participate either in person or as a remote resident working from your
home base.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Stay connected to IBM Redbooks
Find us on Facebook:
http://www.facebook.com/IBMRedbooks
Follow us on Twitter:
http://twitter.com/ibmredbooks
Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html

Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that does
not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
© Copyright International Business Machines Corporation 2013. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by
GSA ADP Schedule Contract with IBM Corp.

This document REDP-4981-01 was created or updated on June 5, 2013.

Send us your comments in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400 U.S.A.


Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
BladeCenter®
FlashCopy®
IBM®
IBM Flex System™
IBM Flex System Manager™
Redbooks®
Redpaper™
Redbooks (logo)®
System Storage®
System x®

The following terms are trademarks of other companies:
Intel, Intel Xeon, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks
of Intel Corporation or its subsidiaries in the United States and other countries.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.

50

IBM Flex System Solution for Microsoft Hyper-V (2-node) Reference Architecture

More Related Content

What's hot

Managing an soa environment with tivoli redp4318
Managing an soa environment with tivoli redp4318Managing an soa environment with tivoli redp4318
Managing an soa environment with tivoli redp4318Banking at Ho Chi Minh city
 
Redbook: Running IBM WebSphere Application Server on System p and AIX: Optimi...
Redbook: Running IBM WebSphere Application Server on System p and AIX: Optimi...Redbook: Running IBM WebSphere Application Server on System p and AIX: Optimi...
Redbook: Running IBM WebSphere Application Server on System p and AIX: Optimi...Monty Poppe
 
Backing up web sphere application server with tivoli storage management redp0149
Backing up web sphere application server with tivoli storage management redp0149Backing up web sphere application server with tivoli storage management redp0149
Backing up web sphere application server with tivoli storage management redp0149Banking at Ho Chi Minh city
 
Client install
Client installClient install
Client installmrt Londeh
 
Deployment guide series ibm tivoli configuration manager sg246454
Deployment guide series ibm tivoli configuration manager sg246454Deployment guide series ibm tivoli configuration manager sg246454
Deployment guide series ibm tivoli configuration manager sg246454Banking at Ho Chi Minh city
 
IBM PureFlex System and IBM Flex System Products and Technology
IBM PureFlex System and IBM Flex System Products and TechnologyIBM PureFlex System and IBM Flex System Products and Technology
IBM PureFlex System and IBM Flex System Products and TechnologyIBM India Smarter Computing
 
Implementing Systems Management of IBM PureFlex System
Implementing Systems Management of IBM PureFlex SystemImplementing Systems Management of IBM PureFlex System
Implementing Systems Management of IBM PureFlex SystemIBM India Smarter Computing
 
BOOK - IBM Z vse using db2 on linux for system z
BOOK - IBM Z vse using db2 on linux for system zBOOK - IBM Z vse using db2 on linux for system z
BOOK - IBM Z vse using db2 on linux for system zSatya Harish
 
Data sharing planning and administration
Data sharing planning and administrationData sharing planning and administration
Data sharing planning and administrationMarino Savoldi
 
IBM Flex System p260 and p460 Planning and Implementation Guide
IBM Flex System p260 and p460 Planning and Implementation GuideIBM Flex System p260 and p460 Planning and Implementation Guide
IBM Flex System p260 and p460 Planning and Implementation GuideIBM India Smarter Computing
 
Tivoli and web sphere application server on z os sg247062
Tivoli and web sphere application server on z os sg247062Tivoli and web sphere application server on z os sg247062
Tivoli and web sphere application server on z os sg247062Banking at Ho Chi Minh city
 
Proof of concept guide for ibm tivoli storage manager version 5.3 sg246762
Proof of concept guide for ibm tivoli storage manager version 5.3 sg246762Proof of concept guide for ibm tivoli storage manager version 5.3 sg246762
Proof of concept guide for ibm tivoli storage manager version 5.3 sg246762Banking at Ho Chi Minh city
 
In-memory Computing with SAP HANA on IBM eX5 Systems
In-memory Computing with SAP HANA on IBM eX5 SystemsIn-memory Computing with SAP HANA on IBM eX5 Systems
In-memory Computing with SAP HANA on IBM eX5 SystemsIBM India Smarter Computing
 
Setup and configuration for ibm tivoli access manager for enterprise single s...
Setup and configuration for ibm tivoli access manager for enterprise single s...Setup and configuration for ibm tivoli access manager for enterprise single s...
Setup and configuration for ibm tivoli access manager for enterprise single s...Banking at Ho Chi Minh city
 

What's hot (16)

Sg248203
Sg248203Sg248203
Sg248203
 
Managing an soa environment with tivoli redp4318
Managing an soa environment with tivoli redp4318Managing an soa environment with tivoli redp4318
Managing an soa environment with tivoli redp4318
 
Redbook: Running IBM WebSphere Application Server on System p and AIX: Optimi...
Redbook: Running IBM WebSphere Application Server on System p and AIX: Optimi...Redbook: Running IBM WebSphere Application Server on System p and AIX: Optimi...
Redbook: Running IBM WebSphere Application Server on System p and AIX: Optimi...
 
Backing up web sphere application server with tivoli storage management redp0149
Backing up web sphere application server with tivoli storage management redp0149Backing up web sphere application server with tivoli storage management redp0149
Backing up web sphere application server with tivoli storage management redp0149
 
Client install
Client installClient install
Client install
 
Deployment guide series ibm tivoli configuration manager sg246454
Deployment guide series ibm tivoli configuration manager sg246454Deployment guide series ibm tivoli configuration manager sg246454
Deployment guide series ibm tivoli configuration manager sg246454
 
IBM PureFlex System and IBM Flex System Products and Technology
IBM PureFlex System and IBM Flex System Products and TechnologyIBM PureFlex System and IBM Flex System Products and Technology
IBM PureFlex System and IBM Flex System Products and Technology
 
IBMRedbook
IBMRedbookIBMRedbook
IBMRedbook
 
Implementing Systems Management of IBM PureFlex System
Implementing Systems Management of IBM PureFlex SystemImplementing Systems Management of IBM PureFlex System
Implementing Systems Management of IBM PureFlex System
 
BOOK - IBM Z vse using db2 on linux for system z
BOOK - IBM Z vse using db2 on linux for system zBOOK - IBM Z vse using db2 on linux for system z
BOOK - IBM Z vse using db2 on linux for system z
 
Data sharing planning and administration
Data sharing planning and administrationData sharing planning and administration
Data sharing planning and administration
 
IBM Flex System p260 and p460 Planning and Implementation Guide
IBM Flex System p260 and p460 Planning and Implementation GuideIBM Flex System p260 and p460 Planning and Implementation Guide
IBM Flex System p260 and p460 Planning and Implementation Guide
 
Tivoli and web sphere application server on z os sg247062
Tivoli and web sphere application server on z os sg247062Tivoli and web sphere application server on z os sg247062
Tivoli and web sphere application server on z os sg247062
 
Proof of concept guide for ibm tivoli storage manager version 5.3 sg246762
Proof of concept guide for ibm tivoli storage manager version 5.3 sg246762Proof of concept guide for ibm tivoli storage manager version 5.3 sg246762
Proof of concept guide for ibm tivoli storage manager version 5.3 sg246762
 
In-memory Computing with SAP HANA on IBM eX5 Systems
In-memory Computing with SAP HANA on IBM eX5 SystemsIn-memory Computing with SAP HANA on IBM eX5 Systems
In-memory Computing with SAP HANA on IBM eX5 Systems
 
Setup and configuration for ibm tivoli access manager for enterprise single s...
Setup and configuration for ibm tivoli access manager for enterprise single s...Setup and configuration for ibm tivoli access manager for enterprise single s...
Setup and configuration for ibm tivoli access manager for enterprise single s...
 

Similar to IBM Flex System Solution for Microsoft Hyper-V (2-node) Reference Architecture

IBM Flex System Networking in an Enterprise Data Center
IBM Flex System Networking in an Enterprise Data CenterIBM Flex System Networking in an Enterprise Data Center
IBM Flex System Networking in an Enterprise Data CenterIBM India Smarter Computing
 
Deployment guide series ibm total storage productivity center for data sg247140
Deployment guide series ibm total storage productivity center for data sg247140Deployment guide series ibm total storage productivity center for data sg247140
Deployment guide series ibm total storage productivity center for data sg247140Banking at Ho Chi Minh city
 
IBM PureFlex System Solutions for Managed Service Providers
IBM PureFlex System Solutions for Managed Service ProvidersIBM PureFlex System Solutions for Managed Service Providers
IBM PureFlex System Solutions for Managed Service ProvidersIBM India Smarter Computing
 
Ibm system storage solutions handbook
Ibm system storage solutions handbook Ibm system storage solutions handbook
Ibm system storage solutions handbook Diego Alberto Tamayo
 
Deployment guide series ibm tivoli composite application manager for web sphe...
Deployment guide series ibm tivoli composite application manager for web sphe...Deployment guide series ibm tivoli composite application manager for web sphe...
Deployment guide series ibm tivoli composite application manager for web sphe...Banking at Ho Chi Minh city
 
Implementing IBM InfoSphere BigInsights on System x
Implementing IBM InfoSphere BigInsights on System xImplementing IBM InfoSphere BigInsights on System x
Implementing IBM InfoSphere BigInsights on System xIBM India Smarter Computing
 
Ref arch for ve sg248155
Ref arch for ve sg248155Ref arch for ve sg248155
Ref arch for ve sg248155Accenture
 
Implementing IBM SmartCloud Entry on IBM PureFlex System
Implementing IBM SmartCloud Entry on IBM PureFlex SystemImplementing IBM SmartCloud Entry on IBM PureFlex System
Implementing IBM SmartCloud Entry on IBM PureFlex SystemIBM India Smarter Computing
 
Getting Started with KVM for IBM z Systems
Getting Started with KVM for IBM z SystemsGetting Started with KVM for IBM z Systems
Getting Started with KVM for IBM z SystemsMark Ecker
 
Ibm power vc version 1.2.3 introduction and configuration
Ibm power vc version 1.2.3 introduction and configurationIbm power vc version 1.2.3 introduction and configuration
Ibm power vc version 1.2.3 introduction and configurationgagbada
 
Patterns: Implementing an SOA using an enterprise service bus (ESB)
Patterns: Implementing an SOA using an enterprise service bus (ESB)Patterns: Implementing an SOA using an enterprise service bus (ESB)
Patterns: Implementing an SOA using an enterprise service bus (ESB)Kunal Ashar
 
Patterns: Implementing an SOA Using an Enterprise Service Bus
Patterns: Implementing an SOA Using an Enterprise Service BusPatterns: Implementing an SOA Using an Enterprise Service Bus
Patterns: Implementing an SOA Using an Enterprise Service BusBlue Atoll Consulting
 
Deployment guide series ibm tivoli ccmdb overview and deployment planning sg2...
Deployment guide series ibm tivoli ccmdb overview and deployment planning sg2...Deployment guide series ibm tivoli ccmdb overview and deployment planning sg2...
Deployment guide series ibm tivoli ccmdb overview and deployment planning sg2...Banking at Ho Chi Minh city
 
Deployment guide series ibm tivoli ccmdb overview and deployment planning sg2...
Deployment guide series ibm tivoli ccmdb overview and deployment planning sg2...Deployment guide series ibm tivoli ccmdb overview and deployment planning sg2...
Deployment guide series ibm tivoli ccmdb overview and deployment planning sg2...Banking at Ho Chi Minh city
 
Deployment guide series ibm tivoli configuration manager sg246454
Deployment guide series ibm tivoli configuration manager sg246454Deployment guide series ibm tivoli configuration manager sg246454
Deployment guide series ibm tivoli configuration manager sg246454Banking at Ho Chi Minh city
 

Similar to IBM Flex System Solution for Microsoft Hyper-V (2-node) Reference Architecture (20)

IBM Flex System Networking in an Enterprise Data Center
IBM Flex System Networking in an Enterprise Data CenterIBM Flex System Networking in an Enterprise Data Center
IBM Flex System Networking in an Enterprise Data Center
 
IBM PowerVC Introduction and Configuration
IBM PowerVC Introduction and ConfigurationIBM PowerVC Introduction and Configuration
IBM PowerVC Introduction and Configuration
 
Deployment guide series ibm total storage productivity center for data sg247140
Deployment guide series ibm total storage productivity center for data sg247140Deployment guide series ibm total storage productivity center for data sg247140
Deployment guide series ibm total storage productivity center for data sg247140
 
IBM PureFlex System Solutions for Managed Service Providers
IBM PureFlex System Solutions for Managed Service ProvidersIBM PureFlex System Solutions for Managed Service Providers
IBM PureFlex System Solutions for Managed Service Providers
 
Ibm system storage solutions handbook
Ibm system storage solutions handbook Ibm system storage solutions handbook
Ibm system storage solutions handbook
 
IBM PowerVM Best Practices
IBM PowerVM Best PracticesIBM PowerVM Best Practices
IBM PowerVM Best Practices
 
Deployment guide series ibm tivoli composite application manager for web sphe...
Deployment guide series ibm tivoli composite application manager for web sphe...Deployment guide series ibm tivoli composite application manager for web sphe...
Deployment guide series ibm tivoli composite application manager for web sphe...
 
Implementing IBM InfoSphere BigInsights on System x
Implementing IBM InfoSphere BigInsights on System xImplementing IBM InfoSphere BigInsights on System x
Implementing IBM InfoSphere BigInsights on System x
 
Ref arch for ve sg248155
Ref arch for ve sg248155Ref arch for ve sg248155
Ref arch for ve sg248155
 
Ibm system storage solutions handbook sg245250
Ibm system storage solutions handbook sg245250Ibm system storage solutions handbook sg245250
Ibm system storage solutions handbook sg245250
 
Implementing IBM SmartCloud Entry on IBM PureFlex System
Implementing IBM SmartCloud Entry on IBM PureFlex SystemImplementing IBM SmartCloud Entry on IBM PureFlex System
Implementing IBM SmartCloud Entry on IBM PureFlex System
 
Sap
SapSap
Sap
 
Db2 virtualization
Db2 virtualizationDb2 virtualization
Db2 virtualization
 
Getting Started with KVM for IBM z Systems
Getting Started with KVM for IBM z SystemsGetting Started with KVM for IBM z Systems
Getting Started with KVM for IBM z Systems
 
Ibm power vc version 1.2.3 introduction and configuration
Ibm power vc version 1.2.3 introduction and configurationIbm power vc version 1.2.3 introduction and configuration
Ibm power vc version 1.2.3 introduction and configuration
 
Patterns: Implementing an SOA using an enterprise service bus (ESB)
Patterns: Implementing an SOA using an enterprise service bus (ESB)Patterns: Implementing an SOA using an enterprise service bus (ESB)
Patterns: Implementing an SOA using an enterprise service bus (ESB)
 
Patterns: Implementing an SOA Using an Enterprise Service Bus
Patterns: Implementing an SOA Using an Enterprise Service BusPatterns: Implementing an SOA Using an Enterprise Service Bus
Patterns: Implementing an SOA Using an Enterprise Service Bus
 
Deployment guide series ibm tivoli ccmdb overview and deployment planning sg2...
Deployment guide series ibm tivoli ccmdb overview and deployment planning sg2...Deployment guide series ibm tivoli ccmdb overview and deployment planning sg2...
Deployment guide series ibm tivoli ccmdb overview and deployment planning sg2...
 
Deployment guide series ibm tivoli ccmdb overview and deployment planning sg2...
Deployment guide series ibm tivoli ccmdb overview and deployment planning sg2...Deployment guide series ibm tivoli ccmdb overview and deployment planning sg2...
Deployment guide series ibm tivoli ccmdb overview and deployment planning sg2...
 
Deployment guide series ibm tivoli configuration manager sg246454
Deployment guide series ibm tivoli configuration manager sg246454Deployment guide series ibm tivoli configuration manager sg246454
Deployment guide series ibm tivoli configuration manager sg246454
 

More from IBM India Smarter Computing

Using the IBM XIV Storage System in OpenStack Cloud Environments
Using the IBM XIV Storage System in OpenStack Cloud Environments Using the IBM XIV Storage System in OpenStack Cloud Environments
Using the IBM XIV Storage System in OpenStack Cloud Environments IBM India Smarter Computing
 
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...IBM India Smarter Computing
 
A Comparison of PowerVM and Vmware Virtualization Performance
A Comparison of PowerVM and Vmware Virtualization PerformanceA Comparison of PowerVM and Vmware Virtualization Performance
A Comparison of PowerVM and Vmware Virtualization PerformanceIBM India Smarter Computing
 
IBM pureflex system and vmware vcloud enterprise suite reference architecture
IBM pureflex system and vmware vcloud enterprise suite reference architectureIBM pureflex system and vmware vcloud enterprise suite reference architecture
IBM pureflex system and vmware vcloud enterprise suite reference architectureIBM India Smarter Computing
 

More from IBM India Smarter Computing (20)

Using the IBM XIV Storage System in OpenStack Cloud Environments
Using the IBM XIV Storage System in OpenStack Cloud Environments Using the IBM XIV Storage System in OpenStack Cloud Environments
Using the IBM XIV Storage System in OpenStack Cloud Environments
 
All-flash Needs End to End Storage Efficiency
All-flash Needs End to End Storage EfficiencyAll-flash Needs End to End Storage Efficiency
All-flash Needs End to End Storage Efficiency
 
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...
TSL03104USEN Exploring VMware vSphere Storage API for Array Integration on th...
 
IBM FlashSystem 840 Product Guide
IBM FlashSystem 840 Product GuideIBM FlashSystem 840 Product Guide
IBM FlashSystem 840 Product Guide
 
IBM System x3250 M5
IBM System x3250 M5IBM System x3250 M5
IBM System x3250 M5
 
IBM NeXtScale nx360 M4
IBM NeXtScale nx360 M4IBM NeXtScale nx360 M4
IBM NeXtScale nx360 M4
 
IBM System x3650 M4 HD
IBM System x3650 M4 HDIBM System x3650 M4 HD
IBM System x3650 M4 HD
 
IBM System x3300 M4
IBM System x3300 M4IBM System x3300 M4
IBM System x3300 M4
 
IBM System x iDataPlex dx360 M4
IBM System x iDataPlex dx360 M4IBM System x iDataPlex dx360 M4
IBM System x iDataPlex dx360 M4
 
IBM System x3500 M4
IBM System x3500 M4IBM System x3500 M4
IBM System x3500 M4
 
IBM System x3550 M4
IBM System x3550 M4IBM System x3550 M4
IBM System x3550 M4
 
IBM System x3650 M4
IBM System x3650 M4IBM System x3650 M4
IBM System x3650 M4
 
IBM System x3500 M3
IBM System x3500 M3IBM System x3500 M3
IBM System x3500 M3
 
IBM System x3400 M3
IBM System x3400 M3IBM System x3400 M3
IBM System x3400 M3
 
IBM System x3250 M3
IBM System x3250 M3IBM System x3250 M3
IBM System x3250 M3
 
IBM System x3200 M3
IBM System x3200 M3IBM System x3200 M3
IBM System x3200 M3
 
A Comparison of PowerVM and Vmware Virtualization Performance
A Comparison of PowerVM and Vmware Virtualization PerformanceA Comparison of PowerVM and Vmware Virtualization Performance
A Comparison of PowerVM and Vmware Virtualization Performance
 
IBM pureflex system and vmware vcloud enterprise suite reference architecture
IBM pureflex system and vmware vcloud enterprise suite reference architectureIBM pureflex system and vmware vcloud enterprise suite reference architecture
IBM pureflex system and vmware vcloud enterprise suite reference architecture
 
X6: The sixth generation of EXA Technology
X6: The sixth generation of EXA TechnologyX6: The sixth generation of EXA Technology
X6: The sixth generation of EXA Technology
 
Stephen Leonard IBM Big Data and cloud
Stephen Leonard IBM Big Data and cloudStephen Leonard IBM Big Data and cloud
Stephen Leonard IBM Big Data and cloud
 

Recently uploaded

How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity PlanDatabarracks
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionDilum Bandara
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsMiki Katsuragi
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .Alan Dix
 
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DayH2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DaySri Ambati
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr BaganFwdays
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebUiPathCommunity
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxLoriGlavin3
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Manik S Magar
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Commit University
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
Introduction

The Flex System Solution for Microsoft Hyper-V Reference Architecture provides businesses with an affordable, interoperable, and reliable industry-leading virtualization and cloud solution choice. This IBM® Flex System based offering, which is built around the latest IBM x86 servers, storage, and networking, takes the complexity out of the solution by using step-by-step implementation guides.

Validated by the Microsoft Private Cloud Fast Track program, the IBM virtualization reference architecture combines Microsoft software, consolidated guidance, and validated configurations for compute, network, and storage resources. The Microsoft program requires a minimum level of redundancy and fault tolerance across the servers, storage, and networking of the Windows Server clusters to help ensure fault tolerance while you manage private cloud pooled resources.

This Reference Architecture provides ordering, setup, and configuration details for the IBM 2-node highly available virtualization environment that is validated as a Microsoft Hyper-V Fast Track Small configuration. The design consists of two IBM Flex System™ x240 compute nodes that are attached to IBM System Storage® DS3524 iSCSI-connected storage. Networking uses the Flex chassis EN2092 switches. This fault-tolerant hardware configuration is clustered by using the Microsoft Windows Server 2012 operating system.

Business problem and business value

This section describes the business problem of maintaining a robust IT environment while keeping pace with an ever-changing landscape, and the business value that can be realized by combining Hyper-V Fast Track virtualization with failover clustering to ensure reliable continuity of business during periods of stress.

Business problem

Good IT practices recognize the need for high availability, flexibility, and maximum resource usage. Rapidly responding to changing business needs through rapid deployment and configuration, while maintaining healthy systems and services, directly corresponds to the vitality of your business. Natural disasters, malicious attacks, and even simple configuration problems can cripple services and applications until administrators resolve the problems and restore any backed-up data. The challenge of maintaining uptime becomes more critical as businesses consolidate physical servers into a virtual server infrastructure to reduce data center costs, maximize utilization, and increase workload performance.

Business value

Combining virtualization with failover clustering helps eliminate single points of failure so that users have near-continuous access to important server-based and business-productivity applications. Virtual machines can be migrated among clustered host servers to support scheduled maintenance, and if physical or logical outages result in unplanned failures, virtual machines can be restarted automatically on the remaining cluster nodes. As a result, clients experience little to no downtime. This seamless operation is attractive for organizations that are trying to create new business and maintain healthy service level agreements (SLAs).
Architectural overview

The Microsoft Hyper-V Fast Track Small configuration provides a validated configuration of two or fewer compute nodes without a stand-alone management environment. This design is ideal for smaller organizations that do not require the extra complexity and flexibility that a dedicated management environment brings, for larger organizations that already have a management environment, and for anyone setting up a proof-of-concept configuration.

The design consists of two IBM Flex System x240 compute nodes, which are attached to the IBM System Storage DS3524 storage controller. The networking design uses the Flex System EN2092 Ethernet switches. This fault-tolerant hardware configuration is clustered by using the Microsoft Windows Server 2012 operating system. A short summary of the Reference Architecture software and hardware components is listed below, followed by preferred practice implementation guidelines.

The Reference Architecture configuration is composed of the following enterprise-class components:

- One IBM Flex System Enterprise Chassis
- Two IBM Flex System x240 compute nodes in a Windows failover cluster running Hyper-V
- One DS3524 highly available (HA) storage system with dual controllers
- Four Flex System EN2092 switches providing redundant networking for data and storage

Together, these components form a high-performance and cost-effective solution that supports Microsoft Hyper-V cloud environments for the most popular business-critical applications and many custom third-party solutions. Equally important, these components meet the criteria that are set by Microsoft for the Private Cloud Fast Track program, which promotes robust cloud environments to help satisfy even the most demanding virtualization requirements.

Figure 1 shows the overall configuration: one Flex System Enterprise Chassis with two Flex System x240 compute nodes, four Flex System EN2092 Ethernet switches, one Chassis Management Module, and one DS3524 storage system with iSCSI controllers.

Figure 1 Cloud Hyper-V Fast Track configuration
This IBM Redpaper™ publication is for IT architects who are familiar with the components of virtualized environments and who want to start with a small Hyper-V environment while being positioned to scale up as demand grows. Additionally, IBM Sellers and IBM Business Partners and their clients that are evaluating or pursuing Hyper-V virtualization solutions can benefit from this previously validated configuration. Comprehensive experience with the various Reference Architecture components is advised.

Microsoft Hyper-V and failover clustering

Microsoft Hyper-V technology continues to gain competitive traction as a key cloud component in many client virtualization environments. Hyper-V is included as a standard component in Windows Server 2012 Standard Edition and Datacenter Edition. Hyper-V virtual machines (VMs) support up to 64 virtual processors and 1 TB of memory. Individual VMs have their own operating system instance and are isolated from the host operating system and other VMs. VM isolation helps promote higher business-critical application availability.

The Microsoft failover clustering feature, in the Windows Server 2012 Standard and Datacenter Editions, can dramatically improve production uptimes. Microsoft failover clustering helps eliminate single points of failure (SPOFs) so that users have near-continuous access to important server-based, business-productivity applications. VMs can be migrated among clustered host servers to support scheduled maintenance. In physical or logical outages that result in unplanned failures, VMs can be restarted automatically on the remaining cluster nodes. As a result, clients experience little to no downtime. This seamless operation is attractive for organizations that are trying to create new business and maintain healthy SLAs.

Additionally, Microsoft failover clustering in Windows Server 2012 now supports native network interface card (NIC) teaming to improve network fault tolerance. Microsoft failover clustering in Windows Server 2012 further improves physical resource utilization by load balancing VMs across cluster members in active/active configurations.

Component model

This highly available IBM private cloud architecture consists of the IBM Flex System Enterprise Chassis with IBM Flex EN2092 Ethernet switches, IBM Flex System x240 compute nodes that run Microsoft Windows Server 2012, and DS3524 storage. Each component provides a key element of the overall solution.

IBM Flex System Enterprise Chassis

The IBM Flex System Enterprise Chassis is a simple and integrated infrastructure platform that supports a mix of compute, storage, and networking resources to meet the demands of your application workloads. More chassis can be added easily as workloads scale. With the IBM Flex System Manager™, multiple chassis can be monitored from a single window. The 14-node, 10U chassis delivers high-speed performance complete with integrated servers, storage, and networking. This flexible chassis is designed for simple deployment now and the ability to scale to meet your needs in the future.
Figure 2 shows the IBM Flex System Enterprise Chassis with compute nodes installed in the front, and with network switches, power supplies, and fans installed in the rear.

Figure 2 IBM Flex Enterprise Chassis

IBM Flex System Chassis Management Module

The IBM Flex System Chassis Management Module (CMM) is a hot-swap module that configures and manages all installed chassis components. The CMM provides resource discovery, inventory, monitoring, and alerts for all compute nodes, switches, power supplies, and fans in a single chassis. The CMM provides the communication link with each compute node system management processor, which is also called an Integrated Management Module (IMM), to support power control and out-of-band remote connectivity. The default IP address for the CMM is 192.168.70.100.

IBM Flex System x240

At the core of the IBM Cloud Reference Configuration solution, the IBM Flex System x240 compute nodes deliver the performance and reliability that are required for virtualizing business-critical applications in Hyper-V cloud environments. To provide the virtualization performance that any Microsoft production environment expects, IBM Flex System x240 compute nodes can be equipped with up to two 8-core E5-2600 processors and up to 768 GB of memory. The IBM Flex System x240 includes an onboard RAID controller. You can choose either spinning hot-swap serial-attached SCSI (SAS) or Serial Advanced Technology Attachment (SATA) disks, or small form-factor (SFF) hot-swap solid-state drives (SSDs).
Figure 3 shows the front of the x240, with callouts for the hard disk drive activity and status LEDs, the USB port, the NMI control, the Console Breakout Cable port, the power button/LED, and the LED panel.

Figure 3 IBM Flex System x240

Two I/O slots provide ports for both your data and storage connections through the Flex Enterprise chassis switches. The server also supports remote management through the IBM Integrated Management Module II (IMM2), which enables continuous management capabilities. All of these key features, including many that are not listed, help solidify the dependability that IBM clients are accustomed to with IBM System x® servers.

By virtualizing with Microsoft Hyper-V technology on IBM Flex System x240 compute nodes, businesses reduce physical server sprawl, power consumption, and total cost of ownership (TCO). Virtualizing the server environment also lowers the server administrative workload, giving IT administrators the capability to manage more systems than in exclusively physical environments. Highly available critical applications on clustered host servers can be managed with greater flexibility and minimal downtime because of the Microsoft Hyper-V live and quick migration capabilities.

IBM System Storage DS3524

The DS3524 combines storage development with leading 6-Gbps SAS, 1/10 Gb iSCSI, or Fibre Channel (FC) host interfaces and SAS/SATA drive technology. With its simple, efficient, and flexible approach to storage, the DS3524 is a cost-effective complement to IBM Flex System, System x, and IBM BladeCenter® systems. By offering substantial features at a price that fits most budgets, the DS3524 delivers superior price/performance ratios, functionality, scalability, and ease of use for the entry-level storage user.

The DS3524 offers these benefits:

- Scalability to mid-range performance and features that start at entry-level prices
- Efficiency to help reduce annual energy expenditures and environmental footprints
- Simplicity that does not sacrifice control, with the right combination of robustness and ease of use
The DS3524 is well-suited for Microsoft virtualized cloud environments. The DS3524 complements the IBM Flex System Enterprise Chassis, Flex EN2092 Ethernet switches, and x240 compute nodes in an end-to-end Microsoft Hyper-V private cloud solution by delivering proven disk storage in flexible and scalable configurations. Connecting optional EXP3500 enclosures to your DS3524 scales the system to up to 192 SAS, SATA, and SSD disks with up to 576 TB of raw capacity. The DS3524 has 1 GB of cache per controller, upgradeable to 2 GB.

The DS3524 now comes standard with 128 activated storage partitions. The DS3524 also comes with Volume Copy, Encryption, Dynamic Disk Pool, Thin Provisioning, and 32 Enhanced IBM FlashCopy® snapshots. Optional features, such as SSD Cache, 512 Enhanced FlashCopy snapshots, Consistency Groups, IP Replication, and Remote and Global Mirroring, are available for an extra cost, if needed. The DS3524 is shown in Figure 4.

Figure 4 IBM System Storage DS3524

IBM Flex System EN2092 Ethernet switches

The IBM Flex System EN2092 1Gb Ethernet Scalable Switch offers administrators full Layer 2 and Layer 3 switching and routing capability with combined 1-Gb and 10-Gb uplinks in an IBM Flex System Enterprise Chassis. This consolidation simplifies the data center infrastructure and helps reduce the number of discrete devices, management consoles, and management systems while taking advantage of the 1-Gb Ethernet infrastructure. In addition, the next-generation switch module hardware supports IPv6 Layer 3 frame forwarding protocols. This scalable switch delivers port flexibility, efficient traffic management, increased uplink bandwidth, and strong Ethernet switching price/performance. The IBM Flex System EN2092 1Gb Ethernet Scalable Switch is shown in Figure 5.

Figure 5 IBM Flex System EN2092 1Gb Ethernet Scalable Switch
Deployment considerations

A successful Microsoft Hyper-V deployment and operation can be attributed to a set of test-proven planning and deployment techniques. Proper planning includes sizing the needed server resources (CPU and memory), storage (space and IOPS), and networking bandwidth to support the infrastructure. This information can then be implemented by using industry preferred practices to achieve optimal performance and the growth headroom that is necessary for the solution. The Microsoft Private Cloud Fast Track program, combined with IBM enterprise-class hardware, prepares IT administrators to meet their virtualization performance and growth objectives by deploying private clouds efficiently and reliably.

The preferred practices and implementation guidelines for the Cloud Reference Configuration are broken down into the following topics:

- Racking and power distribution
- Networking and VLANs
- Active Directory
- Storage
- Setup of the IBM Flex System x240
- Optional four-node configuration

Racking and power distribution

Install the power distribution units (PDUs) and their cabling before any system is racked. When cabling the PDUs, remember the following information:

- Ensure that you have sufficient and separate electrical circuits and receptacles to support the required PDUs.
- To minimize the chance of a single electrical circuit failure taking down a device, ensure that sufficient PDUs exist to feed redundant power supplies from separate electrical circuits.
- For devices that have redundant power supplies, plan for individual electrical cords from separate PDUs.
- Maintain appropriate shielding and surge suppression practices.
- Employ appropriate battery backup techniques.

Networking and VLANs

Combinations of physical and isolated virtual local area networks (VLANs) are configured at the host, switch, and storage layers to satisfy isolation requirements. At the physical host layer, eight 1 Gb Ethernet devices exist for each Hyper-V server (two Flex System EN2024 4-port 1GbE adapters). At the physical switch layer, four Flex System EN2092 switches have up to 48 1 GbE ports each for storage and host connectivity. To support all eight 1 GbE connections from each server, the EN2092 switches require the Upgrade 1 Feature on Demand (FoD) option. A second FoD option is available if the external 10 GbE network ports are used for either uplink or inter-switch link connections.
The servers and storage maintain connectivity through multiple iSCSI connections that use Multipath I/O (MPIO). Windows Server 2012 NIC teaming provides fault tolerance and load balancing for all the remaining communication networks (host management, Cluster Private, Live Migration, and VM).

At the physical switch layer, VLANs provide logical isolation between the various networks that are used for storage and data traffic. A key element is configuring the switches correctly to maximize the available bandwidth and reduce congestion. Based on individual environment preferences, flexibility is available regarding how many VLANs are created and what type of role-based traffic they handle. However, after a final selection is made, ensure that the switch configurations are saved or backed up.

Switch ports that are used for iSCSI traffic must be configured as untagged (access mode in Cisco terms). This configuration limits each port to a single VLAN; the Ethernet frame receives a default VLAN ID at the switch, and no settings are needed at the operating system level.

Switch ports that are used for the Cluster Private and Live Migration team must carry multiple VLAN IDs. These ports must be set to enable tagging, and the VLAN definitions must be specified on each switch to include the related ports. Each of these networks needs a VLAN assignment in Windows Server.

Inter-switch links are created between switches that share NIC team members. Link Aggregation Control Protocol (LACP) bonds 2 - 8 switch ports between two switches. LACP teams provide higher-bandwidth connections and error correction between LACP team members. LACP teams are used for the inter-switch links and for the uplink connections to a corporate network.

Figure 6 shows a high-level network overview of the configuration: the four EN2092 Ethernet switches in the Flex chassis provide fault-tolerant data and storage connectivity to the DS3524 (which supports up to eight DS3500 and expansion enclosures), and fault-tolerant NIC teams and LACP teams across the switches provide redundant communication paths for the storage, servers, and VMs.

Figure 6 Cloud Reference configuration
VLAN description

The five VLANs are described in Table 1. More information, such as an example of port layouts and configuration, is shown in Table 11 on page 46. Worksheets to help plan the network layout are in "Networking worksheets" on page 42.

Table 1 VLAN definitions

VLAN      Name                             Description
VLAN 10   iSCSI Storage Network            Used for iSCSI storage traffic
VLAN 20   iSCSI Storage Network            Used for iSCSI storage traffic
VLAN 30   Cluster Private Network          Used for private cluster communication and Cluster Shared Volumes traffic
VLAN 31   Cluster Live Migration Network   Used for cluster VM Live Migration traffic
VLAN 40   Cluster Public Network           Used for host management and VM communication
Flex System switch locations

The IBM Flex System chassis contains up to four switches. The numbering of these switch bays is interleaved, as shown in Figure 7: Switch 1 is paired with Switch 3, and Switch 2 is paired with Switch 4. Consider this numbering when you perform work on the switches or add cable connections to the external ports.

Figure 7 IBM Flex System switch locations in the chassis

iSCSI storage network (VLANs 10 and 20)

At the physical storage layer, the DS3524 uses iSCSI ports for connectivity. Each controller has four 1 GbE Ethernet ports for iSCSI traffic. The Microsoft MPIO driver and the DS3524 Device Specific Module (DSM) manage the multiple I/O paths between the host servers and the storage and optimize the storage paths for maximum performance.

VLANs are used to isolate storage traffic from other data traffic on the switches. Ethernet jumbo frames are set on the hosts and storage to maximize storage traffic throughput. VLAN 10 and VLAN 20 are reserved for server access to the iSCSI storage. All iSCSI traffic must be isolated on VLANs 10 and 20. One switch hosts VLAN 10, and a second switch hosts VLAN 20.
When you set up iSCSI access to the DS3524 storage controller, consider the following information:

- To help balance iSCSI workloads, each DS3524 controller maintains two iSCSI connections to the networks: one connection to each switch.
- Each DS3524 controller must have its iSCSI ports set to support jumbo frames (9000 bytes). The EN2092 switch supports jumbo frames by default.
- By default, the EN2092 switch ports are untagged. The correct default VLAN ID must be assigned to the targeted ports from the EN2092 switch configuration menu.

When you set up iSCSI access for each host (server/compute node), consider the following items:

- Each compute node has two connections to the iSCSI networks (one to each VLAN). One connection must be made from each of the two NIC cards (see Figure 6 on page 9).
- Because the switch ports are configured for a single VLAN in untagged mode, you do not need to specify a VLAN ID on the NIC in the operating system. By default, the EN2092 switch ports are untagged; the correct default VLAN ID must be assigned to the targeted ports in the EN2092 switch configuration menu.
- Each NIC port that connects to these VLANs must be set for jumbo frames in the advanced properties of the NIC under Windows Device Manager.

Cluster Private and Cluster Shared Volumes networks (VLAN 30)

This network is reserved for Cluster Private (heartbeat) communication between clustered servers. Switch ports must be configured to appropriately limit the scope of this VLAN. This configuration requires that the switch ports for each x240 compute node are set to tagged, and the VLAN definitions on each switch must include these ports. The network adapters that use this VLAN must specify VLAN 30 in Windows Server 2012. There must be no IP routing or default gateways for Cluster Private networks.

Production Live Migration network (VLAN 31)

A separate VLAN must be created to support Live Migration for the cluster. Switch ports must be configured to appropriately limit the scope of this VLAN. This configuration requires the switch ports that are used by each x240 compute node to be set to tagged, and the VLAN definitions on each switch must include these ports. The network adapters that use this VLAN must specify VLAN 31 in Windows Server. There must be no routing on the Live Migration VLAN.

Production communication network (VLAN 40)

This network supports communication for the hosts and VMs. Two teams, which are created by using the Windows Server 2012 native NIC teaming feature, provide fault tolerance and load balancing for host server and VM communication. These switch ports must be configured with their assigned VLAN ID in untagged mode. Default VLAN IDs are assigned for each of the ports that participate in the VLAN.
If additional segregation between the management and VM networks is required, the VM team network ports can be set to tagged, and the ports can be added to the switch VLAN definitions. Each VM can then set the necessary VLAN ID as part of its network settings under Hyper-V Manager. Layer 3 routing must also be configured on the switches to allow VM network access as needed.

For more network planning and configuration assistance, see "Networking worksheets" on page 42.

DS3524 network ports

At the physical storage layer, the DS3524 uses iSCSI ports for storage connectivity. Each controller has four 1 GbE Ethernet ports for iSCSI traffic. The DS3524 Device Specific Module (DSM) manages the multiple I/O paths between the host servers and the storage, and optimizes the storage paths for maximum performance. VLANs are used to isolate storage traffic from other data traffic on the switches. Ethernet jumbo frames are set on the hosts and storage to maximize storage traffic throughput.

Two Ethernet ports on each controller are reserved for management of the DS3524. At a minimum, one management connection from each controller must be connected to the network. Connecting each controller to both switches provides more redundancy. The locations of the iSCSI and management ports are shown in Figure 8.

Figure 8 DS3524 iSCSI and management port locations

IBM Flex System x240 network ports

The host servers have a total of two EN2024 4-port 1GbE network cards, for a total of eight 1 GbE network ports to use for iSCSI storage connectivity, public and private cluster communication, and VM communication. The iSCSI connections to storage use Multipath I/O drivers to ensure fault tolerance and load balancing. Windows Server 2012 NIC teaming is used for all but the iSCSI networks to provide fault tolerance and to spread the workload across the network communication interfaces. The NIC teams follow the preferred practice of drawing team members from each of the EN2024 network cards so that no single card failure can take down the team.
The x240 compute node I/O connectors (I/O connector 1, I/O connector 2, the fabric connector, and the expansion connector) are shown in Figure 9.

Figure 9 Locations of the I/O connectors 1 and 2

Ethernet port assignments are listed in Table 2.

Table 2 Ethernet port assignment

I/O slot   iSCSI VLAN            ClstrPriv Team   Mgmt Team   VM Team
Slot 1     Switch 1 (VLAN 10)    Switch 1         Switch 2    Switch 2
Slot 2     Switch 3 (VLAN 20)    Switch 3         Switch 4    Switch 4

IBM Flex System EN2092 Ethernet configuration

The IBM Flex System configuration uses four Flex System EN2092 switches, each with up to 48 1 Gb Ethernet ports. The EN2092 provides primary storage access and data communication services. Redundancy across the switches is achieved by creating an inter-switch link between switches 1 and 3 and between switches 2 and 4. The inter-switch links can be created by using the external 10 GbE links, if activated, or by creating an LACP team with multiple 1 GbE ports. Uplink connections can be achieved with either 10 GbE links or LACP teams, depending on the client configuration.

Each EN2092 switch requires Upgrade 1 to activate the additional ports that are required to fully support all the EN2024 ports on each x240 compute node. An additional FoD license is needed if the 10 GbE interfaces are used.
Management of the EN2092 switches can be performed either through the command-line interface (CLI) or through a web-based user interface (Figure 10). The default user name and password for the IBM EN2092 switches is admin/admin. Change the default user name and password to non-default values that meet the security requirements of your organization.

Figure 10 EN2092 administration interface

Spanning tree must be enabled on all switches according to the requirements of your organization. By default, the switches are assigned the following management IP addresses:

- 192.168.70.120 - Switch 1
- 192.168.70.121 - Switch 2
- 192.168.70.122 - Switch 3
- 192.168.70.123 - Switch 4

EN2092 switch port assignments are shown in Table 3.

Table 3 EN2092 switch port layout

Internal ports:
- Internal Ports A1 and A2: Switch 1 - iSCSI (VLAN 10); Switch 3 - iSCSI (VLAN 20); Switches 2 and 4 - Mgmt Team (VLAN 40)
- Internal Ports B1 and B2: Switches 1 and 3 - LM and Cluster Priv Team (VLANs 30 and 31); Switches 2 and 4 - VM Team (VLAN 40)

External ports:
- External Port E1: Switches 1 and 3 - not used; Switches 2 and 4 - AD Server (VLAN 40)
- External Port E2: Switches 1 and 3 - not used; Switches 2 and 4 - Storage Mgmt (Cntrl-A)
- External Port E3: Switches 1 and 3 - not used; Switches 2 and 4 - Storage Mgmt (Cntrl-B)
- External Port E4: Switch 1 - iSCSI Cntrl-A (VLAN 10); Switch 3 - iSCSI Cntrl-A (VLAN 20); Switches 2 and 4 - not used
- External Port E5: Switch 1 - iSCSI Cntrl-B (VLAN 10); Switch 3 - iSCSI Cntrl-B (VLAN 20); Switches 2 and 4 - not used
- External Ports E6 and E7: Switches 1 and 3 - LACP team, inter-switch link (VLANs 30 and 31); Switches 2 and 4 - LACP team, inter-switch link (VLAN 40)
- External Ports E8 and E9: Switches 1 and 3 - no uplink; Switches 2 and 4 - LACP team, corporate uplink (VLAN 40)

Ports are set as untagged by default. For example, the storage ports (iSCSI and management) remain untagged. A default VLAN ID must be set as appropriate for the untagged ports. This setting can be made from the switch configuration menu for each port, as shown in Figure 11.

Figure 11 Setting VLAN tagging and the default VLAN ID
Switch ports that might carry traffic from multiple VLANs must use tagged ports, and those ports must be added to the respective VLANs in each switch, as appropriate, as shown in Figure 12.

Figure 12 Adding ports to the VLAN interface

Consider the following information about LACP teams (see Figure 13) on the EN2092 switch. Each LACP team has a unique port admin key, and each port that is a member of that team is set to this unique value. In addition, the ports of one switch take the active role, and the ports of the other switch are set to a passive role.

Figure 13 LACP configuration interfaces

The configuration of the ports for each switch follows.

Switch 1 ports must be configured in the following manner:

- Ports A1, A2, EXT4, and EXT5 - VLAN 10 iSCSI traffic:
  – VLAN tagging disabled (default).
  – Jumbo frames are configured by default on the switch.
  – The default VLAN ID is 10.
- Ports B1, B2, EXT6, and EXT7 - Cluster Private/Cluster Shared Volumes (CSV) and Live Migration:
  – VLAN tagging enabled.
  – Add ports to VLANs 30 and 31.
- Ports EXT6 and EXT7 - Inter-switch link with Switch 3:
  – Configure as an LACP team.
  – Set ports to active.
  – Check that the Ethernet cables connect to the external switch ports on Switch 3.
  – Consider the interleaved numbering of switches.

Switch 3 ports must be configured in the following manner:

- Ports A1, A2, EXT4, and EXT5 - VLAN 20 iSCSI traffic:
  – VLAN tagging disabled (default).
  – Jumbo frames are configured by default.
  – The default VLAN ID is 20.
- Ports B1, B2, EXT6, and EXT7 - Cluster Private/CSV and Live Migration:
  – VLAN tagging enabled.
  – Add ports to VLANs 30 and 31.
- Ports EXT6 and EXT7 - Inter-switch link with Switch 1:
  – Configure as an LACP team.
  – Set ports to passive.
  – Check that the Ethernet cables connect to the external switch ports on Switch 1.
  – Consider the interleaved numbering of switches.

Switch 2 ports must be configured in the following manner:

- Ports A1, A2, B1, B2, EXT1, EXT2, EXT3, EXT6, EXT7, EXT8, and EXT9 - VLAN 40 management traffic:
  – VLAN tagging disabled (default).
  – The default VLAN ID is 40.
- Ports EXT6 and EXT7 - Inter-switch link with Switch 4:
  – Configure as an LACP team.
  – Set ports to active.
  – Check that the Ethernet cables connect to the external switch ports on Switch 4.
  – Consider the interleaved numbering of switches.
- Ports EXT8 and EXT9 - LACP team for corporate uplink:
  – Configure as an LACP team.
  – Set ports to active or passive, depending on the needs of the uplink switches.
  – Check that the Ethernet cables connect to the uplink switches.

Switch 4 ports must be configured in the following manner:

- Ports A1, A2, B1, B2, EXT1, EXT2, EXT3, EXT6, EXT7, EXT8, and EXT9 - VLAN 40 management traffic:
  – VLAN tagging disabled (default).
  – The default VLAN ID is 40.
- Ports EXT6 and EXT7 - Inter-switch link with Switch 2:
  – Configure as an LACP team.
  – Set ports to passive.
  – Check that the Ethernet cables connect to the external switch ports on Switch 2.
- Ports EXT8 and EXT9 - LACP team for corporate uplink:
  – Configure as an LACP team.
  – Set ports to active or passive, depending on the needs of the uplink switches.
  – Check that the Ethernet cables connect to the uplink switches.

Active Directory

The IBM private cloud architecture must be part of an Active Directory (AD) domain, which is required to form the Microsoft Windows Server 2012 cluster. An AD server is presumed to exist. The identified external switch ports on switches 2 and 4 can be used for connectivity, or connectivity can be achieved from your uplink ports to the network of your organization.

Storage

For an overview of the DS3524, see the IBM System Storage DS3500 Introduction and Implementation Guide, SG24-7914, found at:

http://www.redbooks.ibm.com/abstracts/sg247914.html?Open

Cabling

In this configuration, each storage controller maintains two iSCSI connections to the switches on the back of the Flex Enterprise chassis: one connection to Switch 1 and one connection to Switch 3. Storage controller-A must be connected to external port 4 on each of these switches, and storage controller-B must be connected to external port 5 on each of these switches, matching the port layout in Table 3. See Figure 14 on page 20.

Two 1 GbE connections that use MPIO provide sufficient bandwidth for most configurations of this size. However, if the storage network load requires more bandwidth, the remaining two iSCSI ports on the DS3524 can be connected as well. If additional Ethernet connections exist between the storage controllers and the switches, configure those switch ports to support the correct VLANs as well.

Two management Ethernet ports are on the back of the DS3524. Distribute the management connections across external ports 2 and 3 on EN2092 switches 2 and 4 to help ensure connectivity if one switch is temporarily down. These switch ports must also be configured with VLAN 40 in untagged mode to communicate correctly.
Figure 14 shows the storage connections, for both iSCSI and management, between the DS3524 controllers and the IBM Flex System EN2092 switches: the iSCSI ports on controller-A and controller-B connect to VLAN 10 on switch 1 and VLAN 20 on switch 3, and the management ports connect to VLAN 40 on switches 2 and 4.

Figure 14 DS3524 storage Ethernet connections

Management

The DS3524 is managed by using the IBM Total Storage Manager tools, which are available for download at the IBM Support website at http://www.ibm.com/support (support account registration is required). The DS3524 MPIO DSM driver is also required for this configuration.
To begin the management of the DS3524, complete the following steps:

1. Establish an out-of-band connection with Total Storage Manager by using the default TCP/IP addresses (see Figure 15):

   - Management Interface 1:
     • Controller-A - 192.168.128.101
     • Controller-B - 192.168.128.102
   - Management Interface 2:
     • Controller-A - 192.168.129.101
     • Controller-B - 192.168.129.102

Figure 15 Establish an out-of-band connection to DS3524 management ports
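Before you move on to step 2, you can confirm that both controllers answer at their factory default addresses from a workstation on the same subnet. This is a minimal sketch that assumes the defaults in step 1 have not yet been changed:

# Ping the first management interface of each controller at its default address
Test-Connection -ComputerName 192.168.128.101,192.168.128.102 -Count 2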
2. Navigate to the DS3524 Setup page to change the management and iSCSI port TCP/IP addresses to the addresses to use in production (Figure 16).

Figure 16 Setting management and iSCSI ports for DS3524
3. Set the iSCSI port TCP/IP addresses for the two ports to use on each controller, and enable jumbo frames (9000 bytes) under Advanced Port Settings (Figure 17).

Figure 17 iSCSI port settings

DS3524 and Hyper-V cluster storage considerations

The DS3524 storage system supports a concept that is called disk pooling. Disk pools remove much of the guesswork of creating arrays and creating logical volumes from those arrays. A single disk pool that contains all 24 drives can be created. The DS3524 creates and aggregates the optimum number of RAID 6 arrays to support this disk pool. From this pool, one or more logical disks can be created and presented to the host servers. All I/O can be spread across all the disks to maximize disk throughput. The combination of RAID 6 and proprietary disk pooling software adds exceptional fault tolerance and quicker disk rebuild times after a disk failure.

Microsoft Windows failover clustering supports Cluster Shared Volumes (CSVs). Cluster Shared Volumes provide the primary storage for the VM configuration files and virtual hard disks. All CSVs are concurrently visible to all cluster nodes and are simultaneously accessible from each node. From the disk pool, two logical disks can be created: one logical disk for the cluster quorum and one logical disk for a Cluster Shared Volume.
Figure 18 shows a suggested disk configuration for the DS3524: a single 24-disk pool (Disk Pool1) that contains a 5 GB quorum volume (Logical Disk1) and a 4 TB CSV1 volume (Logical Disk2).

Figure 18 DS3524 storage configuration

Disk configuration and performance can be highly workload-dependent. Although this disk configuration fits most user applications, profile and analyze your specific environment to ensure adequate performance for your needs.

Configuration

To configure the DS3524 storage, complete the following steps:

1. Create the disk pool that is needed for the production configuration. Assign a pool name, and select the number of disks to use (Figure 19).

Figure 19 DS3524 array creation
   Logical disks can now be created from the pool (Figure 20).

Figure 20 DS3524 logical drive creation

2. Create a host group to contain each of the host servers (Figure 21). A host group is a logical group that contains the host servers that all see the same storage volumes.

Figure 21 DS3524 host group creation

Setup of the IBM Flex System x240

Our Windows Server cluster consists of two dual-socket IBM Flex System x240 compute nodes, each with 64 GB of RAM and eight 1GbE NIC ports. The setup involves the installation of Windows Server 2012 Datacenter Edition on each server, followed by the confirmation of network and storage connectivity. Then, Hyper-V and Microsoft clustering can be enabled and configured. Highly available VMs can then be created to perform the various production tasks that your organization requires.
Pre-operating system installation steps

Before you install the operating system, complete the following steps:

1. Confirm that both EN2024 4-port Ethernet adapters are installed in each compute node.

2. Install the latest firmware on the x240 by using a Bootable Media Creator image. Bootable Media Creator (BoMC) creates a bootable image of the latest IBM x240 updates (download the updates in advance). An external DVD drive is required. BoMC can be downloaded from this website:

   http://ibm.com/support/entry/portal/docdisplay?lndocid=TOOL-BOMC

   IBM Fast Setup is an optional tool that can be downloaded and used to configure multiple System x, BladeCenter, or Flex System systems simultaneously. A link to this tool is at this website:

   http://ibm.com/support/entry/portal/docdisplay?lndocid=TOOL-FASTSET

3. By default, the x240 compute node is set to balance power consumption and performance. To change this setting, boot to UEFI mode, select System Settings → Operating Mode (Figure 22), and change the selection to what best fits your organizational parameters.

Figure 22 Operating Modes settings in UEFI

4. Confirm that the EN2092 switches are configured as described in "Networking and VLANs" on page 8:
   – Inter-switch links are created and show as active in the EN2092 management consoles.
   – Uplinks are created and show as active in the EN2092 management consoles.
   – VLANs are configured for their respective ports in the EN2092 management consoles.
5. Confirm that the DS3524 iSCSI storage is configured, as described in "Configuration" on page 24. The DS3524 iSCSI storage must be ready for iSCSI qualified name (IQN) assignments to map the volumes to the servers.

6. Configure the two local disks as a RAID 1 array.

IMM address: The default IMM address for each x240 compute node is 192.168.70.1xx, where xx is equal to the two-digit slot number in which the compute node is installed (Slot 1 = 01).

OS installation and configuration

To install and configure the operating system on each x240 compute node, complete the following steps:

1. Install Windows Server 2012 Datacenter Edition. Windows Server 2012 Datacenter Edition offers unlimited Windows VM rights on the host servers and is the preferred version for building private cloud configurations. Windows Server 2012 Standard Edition now supports clustering as well, but it provides licensing rights for only two Windows VMs (additional licenses are needed for more VMs). Windows Server 2012 Standard Edition is intended for physical servers that run few or no VMs.

2. Set your server name, and join the domain.

3. Install the Hyper-V role and the Failover Clustering feature.

4. Run Windows Update to ensure that any new patches are installed.

5. Multipath I/O is used to provide balanced and fault-tolerant paths to the DS3524. Multipath I/O requires the DS3524-specific DSM driver to be installed on the host servers before you attach the storage. To locate the driver, go to http://ibm.com/support and select downloads for the DS3524 (http://bit.ly/10CiWbd). Scroll down to the Storage Manager section of the downloads and locate the correct download in the form Disk-SM-Windows-x86-Month-Year-Version-xx.xx.xx. The MPIO driver is in the Windows directory of the compressed file that you download.

6. The Microsoft MPIO prerequisite driver is also installed if it is not already on the system. This driver is part of Windows and installs automatically when the IBM driver is installed.

Network configuration

To complete the network configuration, perform the following steps:

1. For the iSCSI network interfaces, set the MTU size to 9000 to support jumbo frames. The larger packet size helps storage performance. Complete this step under the device properties of each NIC (Figure 23).

Figure 23 Jumbo frame settings for host server
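Several of the preceding steps can also be scripted. The following PowerShell sketch installs the roles and features from step 3 (plus the Multipath I/O feature) and enables jumbo frames as in step 1 of the network configuration. The adapter names iSCSI-A and iSCSI-B are placeholders, and the *JumboPacket registry keyword and 9014-byte value are driver-dependent assumptions; verify them against the advanced properties that your NIC driver exposes.

# Install the Hyper-V role, the Failover Clustering feature, and Multipath I/O, then restart
Install-WindowsFeature Hyper-V, Failover-Clustering, Multipath-IO -IncludeManagementTools -Restart

# Inspect the jumbo frame property names and allowed values for the iSCSI adapters
Get-NetAdapterAdvancedProperty -Name "iSCSI-A","iSCSI-B" -DisplayName "Jumbo*"

# Enable jumbo frames on both iSCSI interfaces (keyword and value vary by driver)
Set-NetAdapterAdvancedProperty -Name "iSCSI-A" -RegistryKeyword "*JumboPacket" -RegistryValue 9014
Set-NetAdapterAdvancedProperty -Name "iSCSI-B" -RegistryKeyword "*JumboPacket" -RegistryValue 9014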
2. Set up NIC teaming. One key new feature of Windows Server 2012 is in-box NIC teaming. This in-box teaming can provide fault tolerance and link aggregation and can be tailored to host or VM connectivity. Three separate Windows Server 2012 teams are created in this configuration: one team to support host server management traffic, a second team to support Cluster Private/CSV communication and Live Migration (across separate vNICs and VLANs), and a third team to provide VM communication.

   Carefully identify and enumerate the network interfaces in each host to ensure that teams are spread across the two physical devices and routed to the correct switches. Two network interfaces run to each switch. One way to enumerate the ports is to disable a port on a switch and observe the change that is reflected under network devices.

   The setting for Windows Server 2012 in-box NIC teaming is in the Server Manager console, as shown in Figure 24.

Figure 24 NIC teaming in Server Manager

3. Create the team that supports cluster public communication with the host servers by using the two dedicated NIC ports, as described in "Networking and VLANs" on page 8.
   Create this team by using the default switch independent teaming mode and the address hash load balancing mode (Figure 25). These modes provide 2 Gbps of outbound traffic bandwidth and 1 Gbps of inbound traffic bandwidth.

Figure 25 Windows Server 2012 NIC team

4. Create a second team with the same teaming properties by using the Cluster Private/Live Migration network interfaces. However, do not specify any VLANs now.

5. Create the team that supports VM communication with the host servers by using the two dedicated NIC ports, as described in "Networking and VLANs" on page 8. Create this team by using the default switch independent teaming mode and the Hyper-V port load balancing mode. Ethernet traffic for each VM is assigned to one of the team members as the default path, so the VM traffic is spread evenly across the team. In a failure, traffic is reassigned to an alternative team member. The VLAN setting is configured under Hyper-V.
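The three teams in steps 3 - 5 can also be created with the Windows Server 2012 NIC teaming cmdlets instead of Server Manager. This sketch uses hypothetical member adapter names; substitute the names that Get-NetAdapter reports on your hosts. The address hash mode in the Server Manager UI corresponds to the TransportPorts algorithm here.

# List the physical adapters to identify team members (the names below are placeholders)
Get-NetAdapter

# Management team: switch independent teaming with address hash load balancing
New-NetLbfoTeam -Name "MgmtTeam" -TeamMembers "Mgmt-A","Mgmt-B" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

# Cluster Private/Live Migration team: same teaming properties, no VLANs yet
New-NetLbfoTeam -Name "ClusterTeam" -TeamMembers "Clstr-A","Clstr-B" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

# VM team: Hyper-V port load balancing spreads per-VM traffic across team members
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "VM-A","VM-B" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort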
When Windows Server 2012 NIC teaming is complete, three teams display under the NIC teaming management utility (Figure 26).

Figure 26 Windows Server NIC teaming

6. Create a vSwitch for use by the host for Cluster Private/CSV communication and Live Migration. PowerShell is used to create this vSwitch (instead of Hyper-V Virtual Switch Manager) to take advantage of additional options and flexibility that are available only with PowerShell. PowerShell is part of Windows Server 2012. The CLI can be started by entering PowerShell at the command line, running the start command, or clicking the PowerShell icon.

7. Determine the network adapters that are available to work with by running the following PowerShell command:

   Get-NetAdapter

8. Record the name of the team that was created for Cluster Private/CSV and Live Migration.

9. Create the vSwitch on top of this team by running the following PowerShell command:

   New-VMSwitch -name ClusterPrivate -netadaptername TeamName -MinimumBandwidthMode Weight -AllowManagementOS $true

10. Add the second vNIC interface to the vSwitch (allow management OS access) by running the following command:

   Add-VMNetworkAdapter -ManagementOS -Name LiveMigration -SwitchName ClusterPrivate

11. Reserve a minimum of 10% of the available bandwidth for the Cluster Private/CSV network by running the following command:

   Set-VMNetworkAdapter -ManagementOS -Name ClusterPrivate -MinimumBandwidthWeight 10

12. Reserve a minimum of 90% of the available bandwidth for the Live Migration network by running the following command:

   Set-VMNetworkAdapter -ManagementOS -Name LiveMigration -MinimumBandwidthWeight 90
13. Set the correct VLAN ID for each of these networks by running the following commands:

   Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName ClusterPrivate -Access -VlanId 30
   Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName LiveMigration -Access -VlanId 31

14. After you set the VLAN IDs, confirm your network adapter names and VLAN assignments by running the following command:

   Get-VMNetworkAdapterVlan -ManagementOS

   The output is shown in Figure 27.

   PS C:\Users\administrator.C4> Get-VMNetworkAdapterVlan -ManagementOS

   VMName VMNetworkAdapterName Mode   VlanList
   ------ -------------------- ----   --------
          LiveMigration        Access 31
          ClusterPrivate       Access 30

Figure 27 Results of the PowerShell VMNetworkAdapter configuration

15. Record the Windows team network device name that is intended for use by the VMs (Figure 28).

Figure 28 Available networking devices that can be used to create a vSwitch
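If you prefer to stay in PowerShell, the VM vSwitch that the next step creates in Hyper-V Manager can also be created from the device name that you recorded in step 15. This is a minimal sketch; the team name VMTeam and the switch name VMSwitch are placeholders for the names in your environment.

# Create the external vSwitch on the VM team; disallowing management OS access here
# matches clearing the management traffic check box in the next step
New-VMSwitch -Name "VMSwitch" -NetAdapterName "VMTeam" -AllowManagementOS $false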
16. Use Hyper-V Manager to create a vSwitch that is based on this device. Clear the check box that allows management traffic on this device (Figure 29).

Figure 29 vSwitch settings

17. Confirm that the switch name is the same on all cluster nodes to ensure that Live Migration works correctly.

18. Assign TCP/IP addresses, and confirm network connectivity for all network connections on each VLAN.

19. Ensure that the cluster public network (VLAN 40) is at the top of the network binding order.

20. Ensure that the iSCSI, Cluster Private, and Live Migration networks do not have a defined default gateway. In addition, the Client for Microsoft Networks and File and Printer Sharing services can be disabled on these interfaces.

Storage connections

The DS3524 provides shared storage that is used to create highly available and fault-tolerant drives for use by the cluster.
The following steps complete the configuration and presentation of the disks on the DS3524 and describe the process of making the iSCSI connections from Windows Server 2012 back to these disks. Complete the following steps:

1. Host mappings ensure that the DS3524 storage volumes are accessible only to the specific servers that are assigned to them. IQN names are assigned to each server and can be seen in the Microsoft iSCSI Initiator Properties window in the Control Panel. The IQN name for each server changes after the host servers join the Windows domain. Record the IQN names for each server to complete the host mapping in the DS3524 Storage Manager (Figure 30).

Figure 30 Server IQN name in Windows Server 2012 iSCSI Initiator Properties

2. From the Total Storage Manager application, add each of the clustered hosts to the host group (Figure 31).

Figure 31 Add Host to Host Group
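To gather the IQN in step 1 without opening the iSCSI Initiator control panel, the initiator name can also be read with PowerShell. Run this sketch on each host after it joins the domain, because the IQN changes at that point:

# Display this host's iSCSI initiator IQN (NodeAddress) for the DS3524 host mapping
Get-InitiatorPort | Select-Object NodeAddress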
3. Select iSCSI as the interface type, add the unique IQN name for each host, and assign a chosen name (Figure 32).

Figure 32 Host definition

4. If you are not using disk pools and are queried for a host type, select Windows Clustered (Figure 33).

Figure 33 Host type

   The DS3524 disks are now ready and visible to the host servers. iSCSI connections are made from each server to the DS3524 to complete the storage connections.

5. Using the Microsoft iSCSI Initiator, connect each host to a server path. Use the Quick Connect option if you are not using any advanced features.
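The connections in step 5 can also be made with the iSCSI cmdlets that ship with Windows Server 2012. This sketch assumes hypothetical DS3524 iSCSI port addresses on VLANs 10 and 20; substitute the addresses that you assigned to the controllers.

# Ensure that the iSCSI Initiator service is running and starts automatically
Set-Service MSiSCSI -StartupType Automatic
Start-Service MSiSCSI

# Register a DS3524 iSCSI portal on each storage VLAN (addresses are placeholders)
New-IscsiTargetPortal -TargetPortalAddress "192.168.10.50"
New-IscsiTargetPortal -TargetPortalAddress "192.168.20.50"

# Connect to the discovered targets with MPIO enabled and make the sessions persistent
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true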
6. When complete, a minimum of four paths are defined between the server and the storage (Figure 35).

    Figure 35 iSCSI storage paths

7. The Volumes and Devices tab now displays the targets that are available to the host server. The disks also appear in Windows Disk Manager, although a disk rescan might be required.

8. From a single server, bring each disk online and format it as a GPT disk for use by the cluster. Assigning drive letters is optional because the disks are used for specific clustering roles, such as CSV and quorum, where drive letters are not required. Validate that each potential host server can see the disks and bring them online. A scripted version of this step is shown after the tip.

    Tip: Only one server can have the disks online at a time until all disks are added to Cluster Shared Volumes.
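Step 8 can also be scripted from the one node that currently has the disks online. The following sketch assumes the DS3524 LUNs are the only offline disks on that host:

    # Bring each offline shared disk online, initialize it as GPT, and
    # format it with NTFS; no drive letters are assigned.
    Get-Disk | Where-Object { $_.IsOffline } | ForEach-Object {
        Set-Disk -Number $_.Number -IsOffline $false
        Set-Disk -Number $_.Number -IsReadOnly $false
        Initialize-Disk -Number $_.Number -PartitionStyle GPT
        New-Partition -DiskNumber $_.Number -UseMaximumSize |
            Format-Volume -FileSystem NTFS -Confirm:$false
    }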
Cluster creation

Microsoft Windows clustering joins the host servers into a highly available configuration that allows both servers to run VMs to support a production environment. VM workloads must be balanced across both hosts. Be careful to ensure that the combined resources of all VMs do not exceed the resources that are available on N-1 cluster nodes. Staying under this threshold allows a single server to be taken out of the cluster while minimizing the impact to your production servers. A policy of monitoring resource utilization, such as CPU, memory, and disk (both space and I/O), helps keep the cluster running at optimal levels. By monitoring resource utilization, you can plan to add more resources as needed.

Using the Failover Cluster Manager, run the cluster validation wizard to assess the two physical host servers as potential cluster candidates and to address any errors. Consider the following information as you run the wizard:

- The cluster validation wizard checks for available cluster-compatible host servers, storage, and networking (Figure 36).

    Figure 36 Cluster validation wizard

- Ensure that the intended cluster storage is online to only one of the cluster nodes.
- Temporarily disable the default IBM USB Remote NDIS Network Device on all cluster nodes. This device causes the validation to issue a warning during network detection because all the nodes share the same IP address.
- Address any issues that are flagged during the validation.

Use the Failover Cluster Manager to create a cluster with the two physical host servers. You need a cluster name and IP address.
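Validation and cluster creation can also be performed with the FailoverClusters PowerShell module. The following sketch uses hypothetical node names (Node1 and Node2) and an example cluster name and static address on the public network (substitute your own values):

    Import-Module FailoverClusters

    # Validate both hosts; review the generated report before continuing.
    Test-Cluster -Node Node1, Node2

    # Create the cluster with a name and a static address on VLAN 40.
    New-Cluster -Name HVCluster -Node Node1, Node2 -StaticAddress 192.168.40.50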
Figure 37 shows the Failover Cluster Manager with the two hosts visible.

    Figure 37 Failover Cluster Manager

Add the disks to Cluster Shared Volumes. Use Hyper-V Manager to set the default paths for VM creation to use the Cluster Shared Volumes. A scripted sketch for these steps is shown at the end of this section.

VM setup and configuration

Perform the setup and configuration of new VMs by using the Failover Cluster Manager utility, which automatically makes each VM highly available and able to move between cluster members by using Live Migration.

The operating system can be installed on a VM by using various methods. A straightforward approach is to modify the VM DVD drive settings to specify an image file that points to the Windows installation ISO image, and then start the VM to begin the installation. Other deployment methods are acceptable as well:

- A virtual hard disk (VHD) file with a Sysprep image
- Windows Deployment Services (WDS) server
- System Center Configuration Manager (SCCM)

With the operating system installed and the VM running, complete the following steps before you install the application software:

1. Run Windows Update.
2. Update or install the integration services in the VM. Ensure that both the host and VM have the same version of integration services.
3. Activate Windows.

Hyper-V supports Dynamic Memory in VMs. Dynamic Memory allows flexibility in the assignment of memory resources to VMs. However, certain applications might experience performance-related issues if the memory settings of the VM are configured incorrectly. Research how Dynamic Memory might affect the virtualization of specific applications before you implement it. For a high-level overview of Dynamic Memory, see Server Virtualization on Windows Server 2012, found at:

http://download.microsoft.com/download/5/D/B/5DB1C7BF-6286-4431-A244-438D4605DB1D/WS%202012%20White%20Paper_Hyper-V.pdf
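The clustering and VM steps in this section can also be scripted. The following sketch uses hypothetical names throughout; the cluster disk name, VM name, vSwitch name, CSV path, and memory sizes are examples, not values from this configuration:

    # Add an available cluster disk to Cluster Shared Volumes.
    Add-ClusterSharedVolume -Name "Cluster Disk 1"

    # Create a VM directly on the CSV, then make it highly available.
    New-VM -Name "VM01" -MemoryStartupBytes 4GB -SwitchName "VMSwitch" `
        -Path "C:\ClusterStorage\Volume1" `
        -NewVHDPath "C:\ClusterStorage\Volume1\VM01\VM01.vhdx" -NewVHDSizeBytes 60GB
    Add-ClusterVirtualMachineRole -VMName "VM01"

    # Optionally enable Dynamic Memory after assessing the application impact.
    Set-VMMemory -VMName "VM01" -DynamicMemoryEnabled $true `
        -MinimumBytes 2GB -StartupBytes 4GB -MaximumBytes 8GB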
Optional four-node configuration

Increasing the number of cluster nodes from two to four, if needed, is a straightforward process. You might consider increasing the number of cluster nodes to ensure sufficient compute nodes to achieve an N+1 level of redundancy. Your configuration must have sufficient compute nodes to run all VM workloads with one of the cluster nodes down. A two-node cluster must fail over all workloads to the remaining cluster node; with a larger cluster, this workload is distributed among several operational compute nodes.

The following changes to the configuration are required to support four nodes:

Compute nodes

Two more x240 compute nodes are required. Their specifications must match the original compute nodes.

Networking

No changes need to be made to the network switching hardware. The existing configuration is sufficient to support the two additional cluster nodes. An updated EN2092 Flex Enterprise switch configuration table is shown in Table 4 as a reference for the additional cluster nodes.

Table 4 EN2092 switch port layout for four cluster nodes (internal ports)

Port              Switch 1                                    Switch 2             Switch 3                                    Switch 4
Internal Port A1  iSCSI (VLAN 10)                             Mgmt Team (VLAN 40)  iSCSI (VLAN 20)                             Mgmt Team (VLAN 40)
Internal Port B1  LM and Cluster Priv Team (VLANs 30 and 31)  VM Team (VLAN 40)    LM and Cluster Priv Team (VLANs 30 and 31)  VM Team (VLAN 40)
Internal Port A2  iSCSI (VLAN 10)                             Mgmt Team (VLAN 40)  iSCSI (VLAN 20)                             Mgmt Team (VLAN 40)
Internal Port B2  LM and Cluster Priv Team (VLANs 30 and 31)  VM Team (VLAN 40)    LM and Cluster Priv Team (VLANs 30 and 31)  VM Team (VLAN 40)
Internal Port A3  iSCSI (VLAN 10)                             Mgmt Team (VLAN 40)  iSCSI (VLAN 20)                             Mgmt Team (VLAN 40)
Internal Port B3  LM and Cluster Priv Team (VLANs 30 and 31)  VM Team (VLAN 40)    LM and Cluster Priv Team (VLANs 30 and 31)  VM Team (VLAN 40)
Internal Port A4  iSCSI (VLAN 10)                             Mgmt Team (VLAN 40)  iSCSI (VLAN 20)                             Mgmt Team (VLAN 40)
Internal Port B4  LM and Cluster Priv Team (VLANs 30 and 31)  VM Team (VLAN 40)    LM and Cluster Priv Team (VLANs 30 and 31)  VM Team (VLAN 40)
Table 4 (continued) EN2092 switch port layout for four cluster nodes (external ports)

Port              Switch 1                                         Switch 2                           Switch 3                                         Switch 4
External Port E1  Not used                                         AD Server (VLAN 40)                Not used                                         AD Server (VLAN 40)
External Port E2  Not used                                         Storage Mgmt (Cntrl-A) (VLAN 40)   Not used                                         Storage Mgmt (Cntrl-A) (VLAN 40)
External Port E3  Not used                                         Storage Mgmt (Cntrl-B) (VLAN 40)   Not used                                         Storage Mgmt (Cntrl-B) (VLAN 40)
External Port E4  iSCSI - Cntrl-A (VLAN 10)                        Not used                           iSCSI - Cntrl-A (VLAN 20)                        Not used
External Port E5  iSCSI - Cntrl-B (VLAN 10)                        Not used                           iSCSI - Cntrl-B (VLAN 20)                        Not used
External Port E6  LACP Team (Inter-switch Link) (VLANs 30 and 31)  LACP Team (Inter-switch Link) (VLAN 40)  LACP Team (Inter-switch Link) (VLANs 30 and 31)  LACP Team (Inter-switch Link) (VLAN 40)
External Port E7  LACP Team (Inter-switch Link) (VLANs 30 and 31)  LACP Team (Inter-switch Link) (VLAN 40)  LACP Team (Inter-switch Link) (VLANs 30 and 31)  LACP Team (Inter-switch Link) (VLAN 40)
External Port E8  No uplink                                        LACP Team (Corp Uplink) (VLAN 40)  No uplink                                        LACP Team (Corp Uplink) (VLAN 40)
External Port E9  No uplink                                        LACP Team (Corp Uplink) (VLAN 40)  No uplink                                        LACP Team (Corp Uplink) (VLAN 40)

Storage

Profile and evaluate the storage needs to ensure that sufficient resources are available to support operational needs. Ensure that you have a combination of space and sufficient disk spindles to support the required I/O for your environment. If needed, the DS3524 storage controller supports EXP3524 storage expansion modules for additional capacity and I/O. Establish the additional iSCSI connections between each of the new host servers and the storage.

Summary

Upon completing the implementation steps, an operational, highly available Microsoft Hyper-V failover cluster helps you form a high-performance, interoperable, and reliable IBM private cloud architecture. Fault tolerance is achieved at multiple hardware and software levels by configuring a robust collection of industry-leading IBM Flex System, storage, and networking components to meet the Microsoft Private Cloud Fast Track program guidelines. The program's unique framework promotes standardized and highly manageable cloud environments, which help satisfy even the most challenging business-critical virtualization demands.
Appendix

This section describes the IBM Reseller Option Kit.

IBM Reseller Option Kit

Getting your clients the operating system that they want has never been easier. The IBM Reseller Option Kit (ROK) is a software delivery option that enables distributors and resellers to order Microsoft Windows Server products separately from IBM server hardware. Each IBM ROK package is tuned for IBM servers but is not preinstalled. The product is purchased as a server option, like RAM, hard disk drives, or processors. The installation-ready reseller kit provides the Windows Server license separately from IBM branded servers with all the benefits and reliability of an IBM-provided Windows Server image. Tuned to run on System x servers, ROK includes certified and tested drivers and an OS image. ROK also contains IBM ServerGuide, a tool that helps to simplify and automate installation and configuration.

For more information, see the Announcement Letter, found at:
http://ibm.com/common/ssi/cgi-bin/ssialias?infotype=dd&subtype=ca&&htmlfid=897/ENUS212-513

Related links

- IBM Bootable Media Creator:
  http://ibm.com/support/entry/portal/docdisplay?lndocid=TOOL-BOMC
- IBM Director Agent Download (Platform Agent):
  http://ibm.com/systems/software/director/downloads/agents.html
- IBM Fast Setup:
  http://ibm.com/support/entry/portal/docdisplay?lndocid=TOOL-FASTSET
- IBM Firmware Update and Best Practices Guide, found at:
  http://ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5082923
- IBM Flex System EN2092 1Gb Ethernet Scalable Switch User's Guide, found at:
  http://publib.boulder.ibm.com/infocenter/flexsys/information/topic/com.ibm.acc.networkdevices.doc/88y7927.pdf
- IBM Flex System x240 Compute Node Types 7863, 8737, and 8738 Installation and Service Guide, found at:
  http://publib.boulder.ibm.com/infocenter/flexsys/information/topic/com.ibm.acc.8737.doc/dw1ko_book.pdf
- IBM Reseller Option Kit for Windows Server 2012:
  http://ibm.com/common/ssi/cgi-bin/ssialias?infotype=AN&subtype=CA&htmlfid=897/ENUS212-513&appname=totalstorage
- IBM ServerGuide:
  http://ibm.com/support/entry/portal/docdisplay?lndocid=serv-guide
- IBM Support:
  http://www.ibm.com/support
- IBM System Storage DS3524 Storage Users Guide, found at:
  http://ibm.com/systems/networking/hardware/ethernet/b-type/b48y/
- IBM x86 Server Cloud Solutions:
  http://ibm.com/systems/x/solutions/cloud/

Bill of materials

Table 5 lists the bill of materials for the configuration.

Table 5 Bill of materials

SBB part number  Description                                                        Quantity

Rack configuration
9360-4PX         IBM 42U 1200mm Deep Dynamic Rack                                   1
39Y8941          DPI Single-phase 30A/208V C13 Enterprise PDU (US)                  2
40K9614          L6-30 power cord, 2.8m                                             2
39Y8948          DPI Single-phase 60A/208V C19 Enterprise PDU (US)                  2
40K9615          IEC 309 2P+G power cord, 4.3m                                      2

Chassis configuration
8721HC1          IBM Flex System Enterprise Chassis                                 1
                 Includes 2500W Power Modules                                       2
                 Includes IBM Flex System Chassis Management Module                 1
                 IBM Flex System Console Breakout Cable                             1
                 1.8m Black Cat5e Cable (Corporate Uplinks and AD)                  6
40K5627          1.5m Green Cat5e Cable (iSCSI Links)                               4
40K8932          0.6m Yellow Cat5e Cable (ISL Links)                                4
40K5564          1.5m Blue Cat5e Cable (Storage Management)                         4
49Y4297          IBM Flex System EN2092 1Gb Ethernet Scalable Switch                4
49Y4297          IBM Flex System EN2092 1Gb Ethernet Scalable Switch (Upgrade 1)    4
                 16A/100-250V, C19 to IEC 320-C20 2m Rack Power Cable               2
                 Service pack 1: 3-year onsite repair, 24x7, 4-hour response        1

Compute node configuration
8737MC1          IBM Flex System x240 Compute Node                                  2
49Y1379          8GB (1x8GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM  16
49Y7903          IBM Flex System EN2024 4-port 1Gb Ethernet Adapter                 4
90Y8879          IBM 300GB 10K 6Gbps SAS 2.5" SFF G2HS HDD                          4
81Y9420          Intel Xeon Processor E5-2670 8C 2.6GHz 20MB Cache 1600MHz 115W     2
81Y9421          Additional Intel Xeon Processor E5-2670 8C 2.6GHz 20MB Cache 1600MHz 115W  2
                 Service pack 2: 3-year onsite repair, 24x7, 4-hour response        2

Operating system
00Y6283          Windows Server 2012 Datacenter (2 CPU sockets)                     2

DS3524 storage configuration
1746C4A          IBM System Storage DS3524 Express                                  1
68Y8434          2GB Cache Upgrade                                                  2
68Y8433          1Gb iSCSI 4-Port Daughter Card                                     2
49Y2048          600GB 10,000 rpm 6Gb SAS 2.5" HDD                                  24
                 Service pack 3: 3-year onsite repair, 24x7, 4-hour response        1

Networking worksheets

Use these worksheets to document your network configuration.

Switch 1

Table 6 shows the EN2092 switch layout for switch 1.

Table 6 EN2092 switch layout (switch 1)

Switch port       Device                                            Port setting and VLANs
Internal Port A1  Compute Node 1 - iSCSI Port 1                     Untagged/VLAN 10
Internal Port B1  Compute Node 1 - Live Migr/Cluster Priv           Tagged/VLANs 30 and 31
Internal Port A2  Compute Node 2 - iSCSI Port 1                     Untagged/VLAN 10
Internal Port B2  Compute Node 2 - Live Migr/Cluster Priv           Tagged/VLANs 30 and 31
Internal Port A3  Optional Compute Node 3 - iSCSI Port 1            Untagged/VLAN 10
Internal Port B3  Optional Compute Node 3 - Live Migr/Cluster Priv  Tagged/VLANs 30 and 31
Internal Port A4  Optional Compute Node 4 - iSCSI Port 1            Untagged/VLAN 10
Internal Port B4  Optional Compute Node 4 - Live Migr/Cluster Priv  Tagged/VLANs 30 and 31
External Port E1  Not used
External Port E2  Not used
External Port E3  Not used
External Port E4  iSCSI - Controller-A                              Untagged/VLAN 10
External Port E5  iSCSI - Controller-B                              Untagged/VLAN 10
External Port E6  (Switch 3) Inter-switch link LACP Team            Tagged/VLANs 30 and 31
External Port E7  (Switch 3) Inter-switch link LACP Team            Tagged/VLANs 30 and 31
External Port E8  No uplink
External Port E9  No uplink
Switch 3

Table 7 shows the EN2092 switch layout for switch 3.

Table 7 EN2092 switch layout (switch 3)

Switch port       Device                                            Port setting and VLANs
Internal Port A1  Compute Node 1 - iSCSI Port 2                     Untagged/VLAN 20
Internal Port B1  Compute Node 1 - Live Migr/Cluster Priv           Tagged/VLANs 30 and 31
Internal Port A2  Compute Node 2 - iSCSI Port 2                     Untagged/VLAN 20
Internal Port B2  Compute Node 2 - Live Migr/Cluster Priv           Tagged/VLANs 30 and 31
Internal Port A3  Optional Compute Node 3 - iSCSI Port 2            Untagged/VLAN 20
Internal Port B3  Optional Compute Node 3 - Live Migr/Cluster Priv  Tagged/VLANs 30 and 31
Internal Port A4  Optional Compute Node 4 - iSCSI Port 2            Untagged/VLAN 20
Internal Port B4  Optional Compute Node 4 - Live Migr/Cluster Priv  Tagged/VLANs 30 and 31
External Port E1  Not used
External Port E2  Not used
External Port E3  Not used
External Port E4  iSCSI - Controller-A                              Untagged/VLAN 20
External Port E5  iSCSI - Controller-B                              Untagged/VLAN 20
External Port E6  (Switch 1) Inter-switch link LACP Team            Tagged/VLANs 30 and 31
External Port E7  (Switch 1) Inter-switch link LACP Team            Tagged/VLANs 30 and 31
External Port E8  No uplink
External Port E9  No uplink
Switch 2

Table 8 shows the EN2092 switch layout for switch 2.

Table 8 EN2092 switch layout (switch 2)

Switch port       Device                                  Port setting and VLANs
Internal Port A1  Compute Node 1 - Mgmt Team              Untagged/VLAN 40
Internal Port B1  Compute Node 1 - VM Comm Team           Untagged/VLAN 40
Internal Port A2  Compute Node 2 - Mgmt Team              Untagged/VLAN 40
Internal Port B2  Compute Node 2 - VM Comm Team           Untagged/VLAN 40
Internal Port A3  Optional Compute Node 3 - Mgmt Team     Untagged/VLAN 40
Internal Port B3  Optional Compute Node 3 - VM Comm Team  Untagged/VLAN 40
Internal Port A4  Optional Compute Node 4 - Mgmt Team     Untagged/VLAN 40
Internal Port B4  Optional Compute Node 4 - VM Comm Team  Untagged/VLAN 40
External Port E1  AD Server                               Untagged/VLAN 40
External Port E2  Storage Management (Cntrl-A)            Untagged/VLAN 40
External Port E3  Storage Management (Cntrl-B)            Untagged/VLAN 40
External Port E4  Not used                                Untagged/VLAN 20
External Port E5  Not used                                Untagged/VLAN 20
External Port E6  (Switch 4) Inter-switch link LACP Team  Untagged/VLAN 40
External Port E7  (Switch 4) Inter-switch link LACP Team  Untagged/VLAN 40
External Port E8  Uplink LACP Team                        Untagged/VLAN 40
External Port E9  Uplink LACP Team                        Untagged/VLAN 40
Switch 4

Table 9 shows the EN2092 switch layout for switch 4.

Table 9 EN2092 switch layout (switch 4)

Switch port       Device                                  Port setting and VLANs
Internal Port A1  Compute Node 1 - Mgmt Team              Untagged/VLAN 40
Internal Port B1  Compute Node 1 - VM Comm Team           Untagged/VLAN 40
Internal Port A2  Compute Node 2 - Mgmt Team              Untagged/VLAN 40
Internal Port B2  Compute Node 2 - VM Comm Team           Untagged/VLAN 40
Internal Port A3  Optional Compute Node 3 - Mgmt Team     Untagged/VLAN 40
Internal Port B3  Optional Compute Node 3 - VM Comm Team  Untagged/VLAN 40
Internal Port A4  Optional Compute Node 4 - Mgmt Team     Untagged/VLAN 40
Internal Port B4  Optional Compute Node 4 - VM Comm Team  Untagged/VLAN 40
External Port E1  AD Server                               Untagged/VLAN 40
External Port E2  Storage Management (Cntrl-A)            Untagged/VLAN 40
External Port E3  Storage Management (Cntrl-B)            Untagged/VLAN 40
External Port E4  Not used                                Untagged/VLAN 20
External Port E5  Not used                                Untagged/VLAN 20
External Port E6  (Switch 2) Inter-switch link LACP Team  Untagged/VLAN 40
External Port E7  (Switch 2) Inter-switch link LACP Team  Untagged/VLAN 40
External Port E8  Uplink LACP Team                        Untagged/VLAN 40
External Port E9  Uplink LACP Team                        Untagged/VLAN 40

Multiple VLANs

If multiple VLANs are used with the VMs, switches 2 and 4 need the port configuration changes shown in Table 10 to allow multiple VLANs across the ports. The VLAN definitions and routing must also be determined and addressed in the two switches.

Table 10 Configuration changes for switches 2 and 4 if multiple VLANs are used

Switch port       Device                                  Port setting and VLANs
Internal Port B1  Compute Node 1 - VM Comm Team           Tagged/VLANs TBD
Internal Port B2  Compute Node 2 - VM Comm Team           Tagged/VLANs TBD
Internal Port B3  Optional Compute Node 3 - VM Comm Team  Tagged/VLANs TBD
Internal Port B4  Optional Compute Node 4 - VM Comm Team  Tagged/VLANs TBD
External Port E6  Inter-switch link LACP Team             Tagged/VLANs TBD
External Port E7  Inter-switch link LACP Team             Tagged/VLANs TBD
External Port E8  Uplink LACP Team                        Tagged/VLANs TBD
External Port E9  Uplink LACP Team                        Tagged/VLANs TBD
VLAN layout

Table 11 describes the configurations for the five VLANs that are described in Table 1 on page 10. Use it as a worksheet to record the IP address that is assigned to each device.

Table 11 VLAN configuration

VLAN 10 (iSCSI), IP addresses 192.168.10.xx:
  Controller-A iSCSI Port 1
  Controller-B iSCSI Port 1
  Compute Node 1 - iSCSI Port 1
  Compute Node 2 - iSCSI Port 1
  Optional Compute Node 3 - iSCSI Port 1
  Optional Compute Node 4 - iSCSI Port 1

VLAN 20 (iSCSI), IP addresses 192.168.20.xx:
  Controller-A iSCSI Port 2
  Controller-B iSCSI Port 2
  Compute Node 1 - iSCSI Port 2
  Compute Node 2 - iSCSI Port 2
  Optional Compute Node 3 - iSCSI Port 2
  Optional Compute Node 4 - iSCSI Port 2

VLAN 30 (Cluster Priv/CSV), IP addresses 192.168.30.xx:
  Compute Node 1 - Cluster Private/CSV
  Compute Node 2 - Cluster Private/CSV
  Optional Compute Node 3 - Cluster Private/CSV
  Optional Compute Node 4 - Cluster Private/CSV

VLAN 31 (Cluster Priv/Live Migr), IP addresses 192.168.31.xx:
  Compute Node 1 - Live Migration
  Compute Node 2 - Live Migration
  Optional Compute Node 3 - Live Migration
  Optional Compute Node 4 - Live Migration
VLAN 40 (Cluster Pub/Mgmt and VM Comm), IP addresses 192.168.40.xx:
  Compute Node 1 (WS12 Team - Cluster Public)
  Compute Node 2 (WS12 Team - Cluster Public)
  Cluster IP address
  Storage Controller-A (Mgmt - Switch 2)
  Storage Controller-A (Mgmt - Switch 4)
  Storage Controller-B (Mgmt - Switch 2)
  Storage Controller-B (Mgmt - Switch 4)
  Compute Node 1 VM WS12 Team (no host exposure)
  Compute Node 2 VM WS12 Team (no host exposure)

Author

This paper was produced by a technical specialist working at the International Technical Support Organization, Raleigh Center.

Scott Smith is an IBM System x Systems Engineer working at the IBM Center for Microsoft Technology. Over the past 15 years, Scott has worked to optimize the performance of IBM x86-based servers running the Microsoft Windows Server operating system and Microsoft application software. Recently, his focus has been on Microsoft Hyper-V-based solutions with IBM System x servers, storage, and networking. He has extensive experience in helping IBM clients understand the issues that they face and in developing solutions that address them.

Thanks to the following people for their contributions to this project:
- David Ye, IBM Solutions Architect
- Vinay Kulkarni, IBM Performance Engineer
- Cole Kiblinger, IBM Systems Networking Engineer
- Marco Rengan, IBM Cloud Marketing Manager
- David Watts, IBM Redbooks®
- Stephen Smith, IBM Redbooks

Now you can become a published author, too!

Here's an opportunity to spotlight your skills, grow your career, and become a published author, all at the same time! Join an ITSO residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base.

Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Stay connected to IBM Redbooks

- Find us on Facebook:
  http://www.facebook.com/IBMRedbooks
- Follow us on Twitter:
  http://twitter.com/ibmredbooks
- Look for us on LinkedIn:
  http://www.linkedin.com/groups?home=&gid=2130806
- Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter:
  https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
- Stay current on recent Redbooks publications with RSS feeds:
  http://www.redbooks.ibm.com/rss.html
Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.

© Copyright International Business Machines Corporation 2013. All rights reserved.

Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
This document, REDP-4981-01, was created or updated on June 5, 2013.

Send us your comments in one of the following ways:
- Use the online Contact us review Redbooks form found at:
  ibm.com/redbooks
- Send your comments in an email to:
  redbooks@us.ibm.com
- Mail your comments to:
  IBM Corporation, International Technical Support Organization
  Dept. HYTD, Mail Station P099
  2455 South Road
  Poughkeepsie, NY 12601-5400 U.S.A.

Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the web at:
http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: BladeCenter®, FlashCopy®, IBM®, IBM Flex System™, IBM Flex System Manager™, Redbooks®, Redpaper™, Redbooks (logo)®, System Storage®, and System x®.

The following terms are trademarks of other companies:

Intel, Intel Xeon, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.