This document provides an overview of a capstone project to design, create, and implement a cloud computing lab (CCL) at Durham Technical Community College. The goal is an operational CCL that allows students to remotely access and use virtual machines from the lab or from home. Key aspects of the project include using PXE booting to load ESXi onto diskless lab computers, provisioning the ESXi hosts with vSphere AutoDeploy, and providing storage with OpenFiler. NetLab will be used for scheduling and remote access to the virtual machines.
2. Project Description
Design, create, and implement:
• Secure system
• Remote access
• Virtual machines (Linux, Windows, and OSX)
• Supporting both persistent and non-persistent VM environments
• Utilizing existing classroom equipment (dual-purpose)
• Scheduling interface
3. Project Goals
Creating an operational Cloud Computing Lab
• Lab computers are capable of running a diskless hypervisor
• VMs are loaded to lab machines on request
• Students are able to access the CCL from home
• Lab machines alternate between VM server and host workstation according to the class schedule
4. Project Goals
Learning about project management
• Students have had experience in a simulated work environment
• Students have had experience planning, designing, and implementing a large project
• Students have learned about group organization and communication
5. Project Goals
Sharing our work
• The CloudMaster will be reproducible by other colleges
• Our system will be thoroughly documented to assist recreation
• The documentation will be made openly available to anyone wishing to implement the CCL
7. How the System will Work
Overview
From the End User’s perspective:
The end user makes a reservation using NetLab:
• course selection
• time selection
• VM type (pod), depending on enrolled courses
The user then logs in to NetLab at the reserved time and accesses the VM.
8. NetLab User Login
• Remote access
• Web browser
• Java plugin
• Console connection sharing
• Built-in chat function
10. Class Availability
In test environment:
• Currently utilizes computers in a single classroom
• Testing will use 5 or 6 p.m. as the cut-off time for lab computers to boot into the CCL environment
• Testing lab isolated from Campus LAN
In production environment:
• Cut-off time may vary by classroom and PC availability
• Additional classroom lab equipment may be added to the resource pool
• Classroom vs. CCL schedule will be documented and approved by faculty
11. Inside NetLab
• Administrator login home screen
• Add/modify user accounts
• Manage classes and rosters
• Design custom PODs
• View current network settings and status
14. Behind the Scenes
Host systems (lab desktop workstations) are prepared in advance for a virtual state:
Systems set up in advance to PXE boot
• boot order set
• Wake on LAN
• Diskless systems
• Will boot ESXi 5.1 into RAM from a TFTP server
16. PXE Booting
• Pre-Boot Execution Environment
• Built on extensions to DHCP and TFTP
• Implemented in the NIC and BIOS
• Used to load operating system images onto a target computer
• Requires a PXE-capable NIC (e.g. Intel® 82574L Gigabit Ethernet Controller)
17. PXE Booting – The Steps
1. Boot order set in BIOS
2. Universal Network Driver Interface loaded from NIC
3. PXE client broadcasts a request for an IP address
4. DHCP server responds with an IP address
5. PXE client requests the name of the boot image
6. Boot server responds with the boot image filename
7. TFTP request from the PXE client for the NBP file
8. TFTP reply from the boot server
9. Boot server sends the boot image file (NBP) to the client
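To make the TFTP steps above concrete, here is a minimal, illustrative Python sketch of the kind of TFTP read request a PXE client issues for the NBP. Real PXE firmware does this in the NIC boot ROM, and the server address and filename below are placeholders only.

import socket
import struct

# Hypothetical values for illustration only.
TFTP_SERVER = ("192.168.1.10", 69)   # placeholder TFTP/boot server
NBP_FILENAME = "pxelinux.0"          # placeholder network boot program

def tftp_read(server, filename):
    """Fetch a file with a minimal TFTP (RFC 1350) read request."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(5)

    # RRQ packet: opcode 1, filename, transfer mode "octet", each NUL-terminated.
    rrq = struct.pack("!H", 1) + filename.encode() + b"\0" + b"octet\0"
    sock.sendto(rrq, server)

    data = b""
    while True:
        # DATA packet: opcode 3, 2-byte block number, up to 512 bytes of payload.
        packet, addr = sock.recvfrom(4 + 512)
        opcode, block = struct.unpack("!HH", packet[:4])
        if opcode != 3:
            raise RuntimeError("unexpected TFTP opcode %d" % opcode)
        data += packet[4:]
        # ACK each block back to the server's transfer port.
        sock.sendto(struct.pack("!HH", 4, block), addr)
        if len(packet) - 4 < 512:   # a short block ends the transfer
            return data

if __name__ == "__main__":
    image = tftp_read(TFTP_SERVER, NBP_FILENAME)
    print("downloaded %d bytes" % len(image))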
18. Diskless Hosts
• No local disk needed
• Requires PXE bootable system
• Loads OS into RAM from network
• Install and post-install configuration not persistent
• Persistent storage required to save changes
19. Configure the PXE Service Point
Requirements of Lab Setup
• PCs boot into the lab OS in the morning (BIOS auto power-on)
• PCs restart/boot into ESXi 5.1 in the evening
• PCs shut down ESXi 5.1 at night
Possible PC States
• Off
• Running OS for lab (Windows 7)
• Running ESXi 5.1
20. PXE Service Point – Scenarios
Scenario #1 - PC boots in the morning from OFF to the lab OS (Windows 7)
• Done in BIOS
• Uses auto power-on settings
• Local HDD accessed
Security
• BIOS can be locked with password
21. PXE Service Point – Scenarios
Scenario #2 - PC boots in the evening from OFF to ESXi 5.1
• Wake on LAN magic packet - mc-wol.exe/Syslinux and pxelinux.0 (a scripted alternative is sketched below)
• Uses PC MAC address
• Scheduled for boot after class time (~6pm)
• TFTP server is brought online using server-side scripting
• PXE boot request fulfilled
• ESXi image is pushed out to PC
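Since the project sends the Wake on LAN magic packet with mc-wol.exe, the following is a minimal Python sketch of the same operation in case a scripted alternative is ever useful; the MAC address and broadcast address are placeholder assumptions for the lab subnet.

import socket

# Placeholder values for illustration; substitute the lab PC's MAC address
# and the broadcast address of the lab subnet.
TARGET_MAC = "00:11:22:33:44:55"
BROADCAST_ADDR = ("192.168.1.255", 9)   # UDP port 9 is the common WoL port

def send_magic_packet(mac, dest):
    """Send a Wake-on-LAN magic packet: 6 x 0xFF followed by the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", ""))
    payload = b"\xff" * 6 + mac_bytes * 16
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(payload, dest)
    sock.close()

if __name__ == "__main__":
    send_magic_packet(TARGET_MAC, BROADCAST_ADDR)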
23. PXE Service Point – Scenarios
Scenario #3 - PC restarts in the evening from the lab OS (Windows 7) to ESXi 5.1
• A time-based ACL will be set up on the switch
• When the ACL is active, inbound DHCP traffic from the campus server is blocked
• The ACL blocks outbound DHCP traffic from the test environment at all times
• Host systems receive IP information and direction from the local test environment Active Directory domain
24. PXE Service Point – Scenarios
Scenario #4 - Shut down PCs at night from ESXi
• A PowerCLI script shuts down the PC from the vCenter server (see the sketch below)
• Script runs through Windows Task Scheduler on the vCenter server
• Clean shutdown requires VMware Tools to be installed
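The project's shutdown script uses PowerCLI; purely as an alternative illustration of the same idea, the sketch below uses the pyVmomi Python SDK to cleanly shut down guest VMs (which requires VMware Tools) and then power off the ESXi host. The vCenter address, credentials, and host name are placeholder assumptions.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for illustration only.
VCENTER = "vcenter.lab.local"
USER = "administrator@vsphere.local"
PASSWORD = "changeme"

def shutdown_host(host_name):
    context = ssl._create_unverified_context()   # lab/self-signed certificates only
    si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=context)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            if host.name != host_name:
                continue
            # Cleanly shut down running guests first (requires VMware Tools).
            for vm in host.vm:
                if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
                    vm.ShutdownGuest()
            # Then power off the ESXi host itself.
            host.ShutdownHost_Task(force=True)
    finally:
        Disconnect(si)

if __name__ == "__main__":
    shutdown_host("esxi-lab-01.lab.local")   # placeholder host name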
25. vSphere AutoDeploy
• Installed on vCenter Server
• Plug-in provides front-end GUI interface
• Works with PXE boot infrastructure
• Stateless hosts
• Provision hosts with specified ESXi image
• Uses vSphere host profiles
• Image Builder and PowerCLI
29. VIBs and Image Builder
• VMware Installation Bundle or VIB (software package)
• Image profile = set of VIBs
• Used by AutoDeploy to provision hosts
• VIBs can be added to the VMware base image and exported to an ISO image
• Kept as a zip file in a directory on the TFTP server
In our lab environment:
• vSphere Management Assistant – GUI interface
• vCenter used to update host VMs
30. AutoDeploy Rules Engine
• Rule type determines how an image is sent to a host (a simplified illustration follows below)
• VM image sent to a specific host network address or NIC
• Can specify that the host be part of the vCenter inventory list
• AutoDeploy only applies rules when the new rule is first tested
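To illustrate the rules-engine concept only (these are not the AutoDeploy PowerCLI cmdlets), the sketch below shows how a rule might match a booting host's MAC address or IP subnet to an ESXi image profile and target cluster; every name and address in it is hypothetical.

import ipaddress

# Hypothetical rules mirroring the AutoDeploy idea: match a booting host by
# MAC address or IP range, then pick an ESXi image profile and vCenter cluster.
RULES = [
    {"match_mac": "00:11:22:33:44:55", "image": "ESXi-5.1-custom-nic",
     "cluster": "CCL-Lab"},
    {"match_subnet": "192.168.10.0/24", "image": "ESXi-5.1-base",
     "cluster": "CCL-Lab"},
]

def match_rule(host_mac, host_ip):
    """Return the first rule that matches the booting host, like an active rule set."""
    for rule in RULES:
        if rule.get("match_mac") and rule["match_mac"].lower() == host_mac.lower():
            return rule
        if rule.get("match_subnet") and \
                ipaddress.ip_address(host_ip) in ipaddress.ip_network(rule["match_subnet"]):
            return rule
    return None

if __name__ == "__main__":
    rule = match_rule("00:11:22:33:44:55", "192.168.10.21")
    print(rule)   # the MAC-specific rule wins because it appears first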
32. vSphere Clustering
• All host servers in the CCL to be centrally managed
• CPU and memory resources pooled for use by VMs
• VMware’s Distributed Resource Scheduler provides intelligent load balancing across ESXi hosts for performance and efficiency
Image: https://www.vmware.com/files/images/vsphere_imgs/VMW-DGRM-vSPHR-DRSOVERVIEW-101-280x180.gif
33. vSphere High Availability
• vSphere High Availability constantly monitors servers in the cluster for signs of hardware or system failure
• In the event of hardware or operating system failure, HA will minimize downtime by automatically restarting the VM on another machine with available resources
Image: https://www.vmware.com/files/images/high-availability-image3.jpg
34. vCenter Server Appliance
• vCSA eliminates the need for a Windows OS to manage the vSphere environment
• Specialized Linux appliance – pre-configured VM
o More secure / only required parts of OS installed
o Fewer updates required / less maintenance
• Reduced cost of ownership
o Fewer Microsoft licenses
o Fewer resources
• Installed on standalone ESXi 5.1 server
35. Current Server Resources
Main server for the project:
• A server running ESXi 5.1, OpenFiler, and the vCenter Appliance provides all services needed for hosting, assigning IP addresses, and storing VMs.
36. Main Server Details:
• RAID 10 (8 drives: 4 x 140 GB and 4 x 900 GB)
• ESXi 5.1 custom images for NIC support
• VMware vCenter Server Appliance
• Google Chrome for web access
• Next steps include finalizing AutoDeploy installation and configuration
37. Bandwidth Requirements and Setup
• 100 Mb/s or 1 Gb/s Ethernet connection to the campus LAN
• 5.0 Mb/s or higher Internet connection with higher outbound burst capability
Initial Proposed Lab Setup
• 20 workstations, each with a 1 Gb/s NIC
• vCenter Server with a 1 Gb/s NIC
• Two Cisco 2960 switches – trunked with a 1 GE connection
Note: with up to 20 hosts sharing a single 1 GE trunk, hosts cannot all use their full networking capability at once.
38. Bandwidth
Proposed Solution
• Eliminate one 2960 switch
• Directly connect both 10 GE ports of the switch to the vCenter Server
• Improves throughput for host machines
• Host machines able to utilize their full networking capabilities
• Still in testing
39. Making the Hardware Work
• Existing desktop NICs did not support PXE booting and Wake on LAN
• Ordered Intel® 82574L Gigabit Ethernet Controller (qty 3)
• Cost ~$40-50 each
• Installed NICs and current Intel driver
40. NIC Features
• PCIe
• 10/100/1000 Mbps auto-negotiation
• Maintains full bandwidth capacity
• Low cost / low power / compact
• WoL and PXE supported
• Remote management support
• Support for most network OSes
Image: http://www.avadirect.com/images/items/NIC-INL-EXPI9301CT.jpg
43. Storage Server Details:
• Openfiler was created by Xinit Systems
• Free software made available for public use in May 2004
• Provides file-based NAS and block-based SAN
• Directory services compatible (LDAP or Active Directory)
• Supports Kerberos 5 authentication
• Ease of use - GUI or CLI for both installation and configuration
44. OpenFiler Configuration
• Installed as a virtual machine instance on the ESXi 5.1 server
• Dual NICs for access from the 192. and 169. networks
• Network access list restricted to the 192. and 169. networks
• Additional storage space added in ESXi as a virtual hard disk
http://www.openfiler.com/products
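Because the stateless hosts rely on OpenFiler for persistent storage, one way to attach an OpenFiler NFS export to an ESXi host as a datastore is sketched below using pyVmomi; the addresses, export path, and datastore name are assumptions, and the lab may instead attach the export manually or through host profiles.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder values for illustration only.
ESXI_HOST = "esxi-lab-01.lab.local"
USER = "root"
PASSWORD = "changeme"
OPENFILER_IP = "192.168.10.50"      # hypothetical OpenFiler address
NFS_EXPORT = "/mnt/vg0/vmstore"     # hypothetical NFS export path
DATASTORE_NAME = "openfiler-nfs"

def mount_nfs_datastore():
    context = ssl._create_unverified_context()   # lab/self-signed certificates only
    si = SmartConnect(host=ESXI_HOST, user=USER, pwd=PASSWORD, sslContext=context)
    try:
        content = si.RetrieveContent()
        # Connected directly to the ESXi host, so the first HostSystem is the host itself.
        host = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True).view[0]
        spec = vim.host.NasVolume.Specification(
            remoteHost=OPENFILER_IP,
            remotePath=NFS_EXPORT,
            localPath=DATASTORE_NAME,
            accessMode="readWrite")
        host.configManager.datastoreSystem.CreateNasDatastore(spec)
    finally:
        Disconnect(si)

if __name__ == "__main__":
    mount_nfs_datastore()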
46. Selection of NetLab over VCL
NetLab
• Already established
• Supports NET and NOS coursework
• Scheduling interface accessible for testing
• Expected faster ramp-up time
• Increased chance of project success
• Existing cost is a known factor
VCL
• Open source
• More time to implement
• Extensive programming knowledge needed (Java, C++, PHP, SQL, Perl)
• Potential hardware expense based upon NC State setup
• Long-term potential is a consideration
47. Bringing it all Together
• PXE-booted diskless hosts obtain an IP address
• Using DHCP, hosts obtain the ESXi 5.1 boot file from the TFTP server
• Image provisioned in accordance with the AutoDeploy rule (incl. target domain, IP range, target MAC address)
• Client computer boots ESXi 5.1 directly into RAM
• New diskless hosts will be added to vCenter resources
• vCenter will match diskless hosts with their preconfigured profiles
Continued…
48. Bringing it all Together
• Preconfigured profiles will have access to permanent storage (OpenFiler)
• Students will log into NetLab and select PODs (VMs) based on their reservation
• The NetLab server will communicate with vCenter, which will then load the preconfigured PODs (VMs)
• The ESXi 5.1 server running the OpenFiler VM will send the VMs in the PODs to the diskless hosts
• Students will access their reserved PODs (VMs) through the NetLab server
49. Licenses – VMware
How to download ESXi 5.1 licenses:
• VMware website – www.vmware.com
• Create a VMware account
• Navigate to Products and go to vSphere Hypervisor under free products
• Click on Download Free Now
• A page opens to create an account
50. Licenses – NetLab
• Currently have NetLab Academy Edition
• Allows 16 pods to be active at once
• Sufficient for test environment
If upgrade needed:
• NetLab Professional Edition
• Allows 32 pods to be active
• Upgrade fee $13,700
51. Licenses – Cost Considerations
• No current DTCC agreement with VMware IT Academy - $750/year
• School has an enterprise license agreement with Microsoft
• vCenter Server 5.1.0b basic with 1-year subscription and support - $13,243.00
• VM licenses sold in packs, with keys managed through vCenter
52. Malware & Antivirus Protection
• NetLab
o Private "sandbox" - no antivirus necessary
o All traffic passes through campus firewalls
o Uses safe proxy connection to outside users
• Network Devices
o DTCC uses Trend Micro Antivirus on network devices
53. Backup Management
• NetLab
o daily offsite backup supported by NetDevGroup
• ESXi
o vSphere Data Protection
o First-party solution
o Fully integrated with vCenter Server
o An additional HDD and a standalone backup server will be used for redundant backups of all VMs
54. Physical Security
• RFID card readers on doors
• Administrative passwords on all servers
• Full-time campus security
• Concrete walls and reinforced windows
• Motion-sensing lights
Image: http://www.kaba.co.nz
Image: http://spadeoftrades.com
55. Accomplishments
• Created test environment using AutoDeploy
• Documented current test environment setup and security items to evaluate
• Able to PXE boot ESXi 5.1 onto a stateless/diskless host server
• Set up OpenFiler on the ESXi 5.1 physical server
• Shared NFS permanent storage with stateless host servers
• Booted host servers from off using a magic packet (WoL)
• Initial polling configured for network bandwidth testing and performance tracking for the physical server
• ESXi installed on a Mac Mini (8 VMs hosted)
56. Still in Progress
• Manually or automatically load VM into memory on host server
• Configuration and stress testing of the test environment to simulate network traffic
• Save VM info using OpenFiler – in testing
• Using Windows Task Scheduler and server-side scripts to automate shutdown and power-on of lab host systems
• Automate the PXE boot process
• Configure BIOS for auto-on at specified times
• Test permanent/persistent storage and its capability for all diskless/stateless computers in the lab – in testing
• Created security policy for test lab
57. Lessons Learned
• Project Management skills
• Effective utilization of resources
• Time management
• Communication
• Research of complex technical topics
• Hands on lab experience and application
• Overcoming obstacles / problem solving
58. Special Thanks
• Lee Rogers
• Jenny White
• Harry Bullbrook
And some regular old-fashioned kudos to the NETSECT289 class of 2014!
61. Works Cited
Apache Software Foundation. “VCL.” 2012. Mar 2013. <https://vcl.apache.org/docs/VCL231InstallGuide.html>
Kiwi - SolarWinds. “Syslog Server and Network Configuration Manager.” 2013. Mar 2013.
<http://www.kiwisyslog.com>
Microsoft Windows. “Schedule a Task.” 2013. Mar 2013.
<http://windows.microsoft.com/en-us/windows7/schedule-a-task>
Miller, Wes. “The Desktop Files: Network-Booting Windows.” 2008. Mar 2013.
<http://technet.microsoft.com/en-us/magazine/2008.07.desktopfiles.aspx>
NC State University. “Virtual Computing Lab – Powered by Apache VCL.” Mar 2013. <http://vcl.ncsu.edu/>
NDG. “NetLab+ Product Comparison.” Mar 2013. <http://www.netdevgroup.com/products/compare.html>
NDG. “NetLab+ System Requirements.” Mar 2013. <http://www.netdevgroup.com/products/requirements>
Openfiler. “Openfiler Products.” 2012. Mar 2013. <http://www.openfiler.com/products>
Paessler the Network Monitoring Company. “PRTG: Finally There is a Network Monitoring Software
That is Powerful and Easy to Use!” 2013. Mar 2013. <http://www.paessler.com/prtg>
TFTPd32. “The industry standard TFTP server.” 2011. Mar 2013.
<http://tftpd32.jounin.net/tftpd32_download.html>
62. Works Cited
VMware. “Build a Flexible, Efficient Datacenter.” 2013. Mar 2013.
<http://www.vmware.com/products/datacenter-virtualization/vsphere/auto-deploy.html>
VMware. “PXE Boot the ESXi Installer Using gPXE.” 2013. Mar 2013. <http://pubs.vmware.com/vsphere-51/index.jsp>
VMware. “Using vSphere ESXi Image Builder CLI” and “Rules and Rule Sets.” 2012. Mar 2013.
<http://pubs.vmware.com/vsphere-50/index.jsp>
VMware. “VMware AutoDeploy Administrator’s Guide.” 2010. Mar 2013.
<http://labs.vmware.com/wp-content/uploads/2010/08/autodeploy_fling.pdf>
VMware. “VMware vSphere & vSphere with Operations Management Pricing.” 2013. Mar 2013.
<http://www.vmware.com/products/datacenter-virtualization/vsphere/pricing.html>
VMware. “vSphere Auto Deploy Demo.” 2011. Mar 2013. <http://www.youtube.com/watch?v=G2qZl-760yU>
Virtuallyghetto. “Running ESXi 5.0 & 5.1 on 2012 Mac Mini 6,2.” 2012. Mar 2014.
<http://www.virtuallyghetto.com/2012/12/running-esxi-50-51-on-2012-mac-mini-62.html>