20. Due to half-duplex operation
In the 2000s…
• Clustering capability
• x86 standard platform
• 1 application / server
• High energy and cooling demand
22. Multi-core processors
• Single-core processor
• Without virtualization the extra cores go unused
• More cores, more virtual machines = better utilization
[Diagram: applications each with their own OS on a single-core CPU, versus a multi-core CPU running several virtual machines]
23. Emergence of SAN networks
• Fibre Channel between server and storage
• More complex management
• No data loss (HA)
[Diagram: today, an Ethernet LAN alongside a Fibre Channel SAN]
36. Summary
• With virtualization we can greatly improve on the average 15% utilization of our servers
• Instead of 1 server / 1 application: 1 server running several virtual machines on multi-core CPUs
• Using FCoE, significant network costs can be saved
[Diagram: a multi-core server with a 10-GB Ethernet link carrying both FCoE traffic and other networking traffic]
37. Useful resources
Cisco Blade servers
http://www.cisco.com/en/US/products/ps10280/index.html
Cisco C-Series servers
http://www.cisco.com/en/US/products/ps10493/index.html
Fibre Channel over Ethernet
http://en.wikipedia.org/wiki/Fibre_Channel_over_Ethernet
VMware ESXi 4.1
http://www.vmware.com/support/vsphere4/doc/vsp_esxi41_vc41_rel_notes.html
Cisco Unified Call Manager
http://www.cisco.com/en/US/products/sw/voicesw/ps556/index.html
38. COMING SOON!
Tried-and-tested remote working solutions
Summer is in full swing
June 9, 2011 (Thursday), 10:00
Presenter: Szekeres Viktor, Gloster telekom Kft.
Topics include:
• Remote, but secure: VPN solutions
• The company Wi-Fi at home too: how does that work?
• Calling from abroad without roaming fees, but how?
Registration:
https://gloster.webex.com/gloster/onstage/g.php?t=a&d=846365962
41. Agenda
• A brief history of VMware
• VMware vSphere core functions
• Competitive comparison
42. Arrow ECS Intro
• Formerly known as DNS Hungária
• Hungarian distributor of numerous commercial IT products
• Oracle – Symantec – RSA – NetApp – VMware
• Official VMware training center
46. The core message of vSphere 4.1
Core message:
Simpler and cheaper operation – With the highest consolidation ratio in the virtualization market, vSphere can serve the most virtual machines using the fewest possible physical devices.
More efficient operation – Using vSphere, the same physical hardware pays for itself much faster thanks to utilization efficiency increased by up to 8–10x.
Reliability – vSphere includes an optimal combination of business-continuity features, so availability of up to 99.999% can be ensured for any operating system and application.
Freedom of choice – vSphere uniquely supports more than 80 operating systems and runs on virtually all commercially available hardware, including the most modern processors.
47. Before VMware
• All servers require the same power
• All emit the same heat
• All require physical space
• Setup, (re-)configuration
• Maintenance, support…
49. Virtual Infrastructure
[Diagram: "The New Datacenter" – an interconnect joins pooled CPU, memory, and storage; workloads (CRM, VPN, File/Print, Exchange), each with its own operating system, draw on the shared pools]
50. Reference: EPAM Systems
• Multinational company providing software-development outsourcing services worldwide
• VMware consolidation
• 13 servers instead of 130
• Payroll cost reduction: ~1 M HUF / month
• Power consumption reduction: 1.2 M HUF / month (at 2007 prices)
• Floor space: 1 rack instead of 3 server rooms
• And we have not even mentioned things such as:
  – Physical security (access control, fire and water protection)
  – UPS, maintenance, audit
• Time to build the infrastructure for a new project: 27 minutes
• With a full backing stack (db, web, app, client)
51. ESX: The Only Production-Proven Hypervisor
Large financial services customer: 1255 days of continuous uptime and counting
52. Single-VM Performance: Well-Known Database OLTP Workload†
[Chart: transaction rate (ratio to 1-way VM) scaling with vCPU count]
Next-generation Intel® Xeon® based 8-pCPU server, RHEL 5.1, Oracle 11gR1, in-house ESX Server
• < 15% overhead for an 8-vCPU VM
• 8,900 total DB transactions per second
• Near-perfect scalability from 1 to 8 vCPUs
• 60,000 I/O operations/second
† A fair-use implementation of the TPC-C workload; results are not TPC-C compliant
54. vSphere 4.1 Delivers “Cloud Scale”
• 10,000 VMs / vCenter
• 500 hosts / vCenter
• 3,000 VMs / cluster
• 320 VMs / ESXi host
99% of VMware’s 170K customers can run their entire datacenter in a single VMware cluster.
55. Central management: VMware vCenter 4
• Super-fast creation of virtual machines from templates
• Systems management and performance monitoring
• Enterprise-grade feature set
• AD-integrated access control
• Patch management of the VI environment without downtime
56. Transparent Memory Page Sharing
• The VMkernel detects identical pages in VMs’ memory and maps them to the same underlying physical page
• No changes to the guest OS required
• The VMkernel treats the shared pages as copy-on-write
  • Read-only while shared
  • Private copies after a write
• Page sharing is always active unless administratively disabled
• Example: 178 × 512 MB = 89 GB, a 1:4 overcommitment ratio
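The mechanism above — deduplicating identical pages by content and copying on write — can be sketched as follows. This is an illustrative model, not VMware code; the class and method names are hypothetical.

```python
# Illustrative sketch of content-based page sharing with copy-on-write,
# the idea behind Transparent Page Sharing (not VMware's implementation).
import hashlib

PAGE_SIZE = 4096

class PhysicalMemory:
    def __init__(self):
        self.pages = {}      # machine page id -> page contents
        self.refcount = {}   # machine page id -> number of guest mappings
        self.by_hash = {}    # content hash -> machine page id
        self.next_id = 0

    def map_page(self, content: bytes) -> int:
        """Map a guest page; identical content shares one machine page."""
        h = hashlib.sha256(content).hexdigest()
        if h in self.by_hash:              # identical page already present
            pid = self.by_hash[h]
            self.refcount[pid] += 1        # share it read-only
            return pid
        pid = self.next_id
        self.next_id += 1
        self.pages[pid] = content
        self.refcount[pid] = 1
        self.by_hash[h] = pid
        return pid

    def write_page(self, pid: int, new_content: bytes) -> int:
        """Copy-on-write: a write to a shared page gets a private copy."""
        if self.refcount[pid] > 1:
            self.refcount[pid] -= 1
            return self.map_page(new_content)
        self.pages[pid] = new_content
        return pid

mem = PhysicalMemory()
zero = bytes(PAGE_SIZE)
a = mem.map_page(zero)                    # VM 1 maps a zero page
b = mem.map_page(zero)                    # VM 2 maps identical content -> shared
print(a == b)                             # True: one machine page backs both
c = mem.write_page(b, b"x" * PAGE_SIZE)   # VM 2 writes -> private copy
print(a == c)                             # False
```

Zero pages are the extreme case: hundreds of guests can share a single machine page, which is what makes ratios like 1:4 overcommitment reachable.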
58. VMware DRS – Capacity on Demand
• Dynamic management of hardware resource usage
• Establishing an optimal load distribution
[Diagram: VMs (app + OS) balanced across hosts by VMware vSphere™]
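The load-balancing idea can be sketched with a greedy rebalancer: repeatedly move a VM from the most loaded host to the least loaded one while that narrows the gap. This is an assumed toy model, not the DRS algorithm; host and VM names are made up.

```python
# Toy sketch of dynamic load balancing (not the actual DRS algorithm):
# migrate VMs from the busiest host to the idlest until loads are close.
def rebalance(hosts, threshold=10):
    """hosts: {host: {vm: cpu_demand}}. Returns a list of (vm, src, dst) moves."""
    moves = []

    def load(h):
        return sum(hosts[h].values())

    while True:
        src = max(hosts, key=load)
        dst = min(hosts, key=load)
        gap = load(src) - load(dst)
        if gap <= threshold or not hosts[src]:
            return moves
        # pick the VM whose migration best narrows the gap
        vm = min(hosts[src], key=lambda v: abs(gap - 2 * hosts[src][v]))
        if hosts[src][vm] >= gap:          # moving it would not help
            return moves
        hosts[dst][vm] = hosts[src].pop(vm)
        moves.append((vm, src, dst))

hosts = {"esx1": {"crm": 60, "vpn": 30}, "esx2": {"mail": 20}}
print(rebalance(hosts))                    # [('vpn', 'esx1', 'esx2')]
```

With vMotion, such moves happen live, so the "optimal load distribution" on the slide is maintained without downtime.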
59. Zero-Downtime Maintenance with VMware DRS
Dynamically move virtual machines to alternative host servers:
• Enter maintenance mode → perform maintenance → maintenance complete
• No application outage
• No user impact
• No server configuration changes
• Eliminate planned downtime!
60. Green IT – VMware DPM
• DPM consolidates workloads onto fewer servers when the cluster needs fewer resources
• Places unneeded servers in standby mode, and brings them back online as workload needs increase
• ESX now supports Intel SpeedStep / AMD PowerNow! for individual host power optimization
• Minimizes power consumption while guaranteeing service levels
• No disruption or downtime to virtual machines
[Diagram: DPM powers off a server when requirements are lower and brings servers back online when load increases]
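The consolidation step can be sketched as a simple capacity plan: if the cluster's total demand fits on fewer hosts (with headroom), keep those on and put the rest in standby. This is a hypothetical sketch of the idea, not DPM's actual placement logic; the capacity and headroom figures are assumptions.

```python
# Hypothetical sketch of the DPM idea: when aggregate demand fits on
# fewer hosts, consolidate and place the remaining hosts in standby.
import math

def plan_power(hosts, capacity=100, headroom=0.8):
    """hosts: {host: total_vm_demand}. Returns (keep_on, standby) host lists."""
    total = sum(hosts.values())
    # number of hosts needed, leaving 20% headroom on each
    needed = max(1, math.ceil(total / (capacity * headroom)))
    # keep the currently busiest hosts on; consolidate onto them
    ranked = sorted(hosts, key=hosts.get, reverse=True)
    return ranked[:needed], ranked[needed:]

keep, standby = plan_power({"esx1": 30, "esx2": 25, "esx3": 10})
print(keep, standby)   # 65 units fit on one host at 80% headroom
```

The reverse direction works the same way: when demand rises past what the powered-on hosts can serve with headroom, standby hosts are woken and DRS spreads the load back out.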
61. Additional 20% Reduction in Power Costs with DPM
Assumptions: 50 out of 100 servers can be powered down for 8 hrs/day on weekdays and 16 hrs/day on weekends.
Total power draw per server (operating power + cooling power) = 1130.625 watts
Cost of energy = $0.0813 per kWh (source: Energy Information Administration)
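Working through the slide's own assumptions shows where the "additional 20%" comes from:

```python
# Worked example using the slide's stated assumptions: 50 of 100 servers
# powered down 8 h/day on weekdays and 16 h/day on weekends, each server
# drawing 1130.625 W (operating + cooling), at $0.0813 per kWh.
POWER_KW = 1.130625
PRICE = 0.0813

down_hours_per_week = 8 * 5 + 16 * 2              # 72 h per powered-down server
saved_kwh = 50 * down_hours_per_week * POWER_KW   # energy not consumed
baseline_kwh = 100 * 24 * 7 * POWER_KW            # all 100 servers always on

print(f"weekly savings: ${saved_kwh * PRICE:.2f}")
print(f"reduction: {saved_kwh / baseline_kwh:.1%}")
```

The reduction works out to 3600 / 16800 server-hours ≈ 21.4%, which is the "additional 20%" in the slide title.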
62. Storage Abstraction with VMFS
Hot Virtual Disk Extend
• Expand virtual disks online
• Respond quickly to growing requirements without downtime
VMFS Volume Grow
• Expand a VMFS volume on the same LUN it was created on
• Facilitates adding more virtual machines to an existing volume
• Facilitates data growth for the virtual machines
• Increases flexibility to simplify capacity planning
[Diagram: an ESX datastore on a 100 GB LUN; VMFS Volume Grow enlarges the datastore with no change to existing data, while 20 GB virtual disks are hot-extended (by 8–10 GB, to 40 GB) or new virtual disks are added]
63. Thin Provisioning
• Virtual machine disks consume only the amount of physical space in use
• The virtual machine sees the full logical disk size at all times
• Full reporting and alerting on allocation and consumption
• Significantly improves storage utilization
• Eliminates the need to over-provision virtual disks
• Reduces storage costs by up to 50%
[Diagram: virtual disks on a 100 GB ESX datastore — one 40 GB thick disk, plus thin disks of 40 GB and 100 GB logical size that each consume only the 20 GB actually written]
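The allocate-on-first-write behavior described above can be sketched with a minimal model. This is an illustrative sketch only — real thin-provisioned VMDKs manage block allocation inside VMFS, not a Python dict — and the class name and 1 GB block size are assumptions.

```python
# Illustrative sketch of thin provisioning: physical blocks are taken
# from the datastore only when the guest first writes to them.
class ThinDisk:
    def __init__(self, logical_gb: int):
        self.logical_gb = logical_gb   # the size the VM always sees
        self.blocks = {}               # block index -> data, allocated on write

    def write(self, block: int, data: bytes):
        self.blocks[block] = data      # the first write allocates the block

    def allocated_gb(self, block_gb: int = 1) -> int:
        return len(self.blocks) * block_gb

disk = ThinDisk(logical_gb=100)        # guest sees a full 100 GB disk
disk.write(0, b"boot")
disk.write(7, b"data")
print(disk.logical_gb, disk.allocated_gb())   # 100 2
```

Because allocation tracks actual use, the datastore can hold more logical capacity than it physically has — which is exactly why the slide pairs thin provisioning with reporting and alerting on consumption.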
64. Storage vMotion
– Supports NFS, Fibre Channel, and iSCSI
– Supports moving VMDKs from thick to thin formats
– Can migrate RDMs to RDMs and RDMs to VMDKs (non-passthrough)
– Leverages new vSphere 4 features to speed migration
65. vNetwork Distributed Switch
• Aggregated datacenter-level virtual networking
• Simplified setup and change
• Easy troubleshooting, monitoring and debugging
• Enables transparent third-party management of virtual environments
[Diagram: per-host vSwitches aggregated into a single vNetwork Distributed Switch across VMware vSphere™ hosts; Cisco Nexus 1000V as a third-party option]
66. Host Profiles
A host profile captures a reference host's configuration:
– Memory reservation
– Storage
– Networking
– Date and time
– Firewall
– Security
– Services
– Users and user groups
[Diagram: the reference host's profile applied to hosts 1–5 in a cluster]
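The profile idea — capture the settings of a reference host, then report where other hosts drift — can be sketched as a plain dictionary diff. This is a hypothetical illustration; the setting names (`ntp`, `firewall`, `vswitch_mtu`) are invented for the example.

```python
# Hypothetical sketch of the host-profile idea: compare each host's
# configuration against a captured reference and report the drift.
reference = {"ntp": "pool.ntp.org", "firewall": "strict", "vswitch_mtu": 1500}

def check_compliance(host_name, host_cfg):
    """Return (host, 'compliant') or (host, {setting: (actual, expected)})."""
    drift = {k: (host_cfg.get(k), v)
             for k, v in reference.items()
             if host_cfg.get(k) != v}
    return (host_name, "compliant" if not drift else drift)

print(check_compliance("esx2", {"ntp": "pool.ntp.org",
                                "firewall": "open",
                                "vswitch_mtu": 1500}))
# ('esx2', {'firewall': ('open', 'strict')})
```

vCenter goes one step further than this sketch: beyond reporting drift, it can remediate non-compliant hosts by applying the profile.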
68. VMware HA
• High availability – for every VM
• HA nodes monitor one another
• Monitoring of the OS running inside the VMs
• Customizable restart order
69. VMware Fault Tolerance
• A single identical VM running in lockstep on a separate host
• Zero-downtime, zero-data-loss failover for all virtual machines in case of hardware failures
• No complex clustering or specialized hardware required
• A single common mechanism for all applications and OSs
[Diagram: primary and secondary VMs kept in lockstep on separate VMware vSphere™ hosts]
70. VMware Data Recovery
• VMware's backup/recovery solution based on the APIs for Data Protection
  – Agentless disk-based backup and recovery
  – De-duplication and incremental backups to save disk space
71. Memory Compression
Description: a new tier in VMware's memory overcommit hierarchy (a key VMware differentiator) — the hypervisor compresses pages and keeps them in memory instead of swapping them to disk.
Benefits:
• Optimized use of memory
• A safeguard for using the memory overcommit feature with confidence
• Performance: decompressing a page from memory is up to 1,000x faster than a swap-in from disk
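The idea — a compressed in-RAM cache sitting between normal residency and disk swap — can be sketched as below. This is an illustrative model, not ESX's implementation; the 50% compressibility threshold is an assumption for the example.

```python
# Sketch of the memory-compression tier: before swapping a page to disk,
# try to compress it and keep it in RAM; decompressing from RAM is far
# cheaper than a disk swap-in (the slide claims ~1,000x faster).
import zlib

PAGE = 4096
compressed_cache = {}

def reclaim(page_id, page: bytes) -> str:
    """Compress the page; keep it in RAM only if it shrinks enough."""
    c = zlib.compress(page)
    if len(c) <= PAGE // 2:            # assumed threshold: two per page slot
        compressed_cache[page_id] = c
        return "compressed"
    return "swap-to-disk"              # incompressible: fall back to swap

def fault_in(page_id) -> bytes:
    """On access, restore the page by decompressing from RAM."""
    return zlib.decompress(compressed_cache[page_id])

page = b"\x00" * PAGE                  # zero pages compress extremely well
print(reclaim(1, page))                # compressed
print(fault_in(1) == page)             # True
```

The threshold matters: a page that does not shrink enough buys no memory back, so it is cheaper to let it go to the swap file than to cache a barely-compressed copy.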
72. vShield Zones
Capabilities:
• Bridge, firewall, or isolate VM zones based on familiar VI containers
• Monitor allowed and disallowed activity by application-based protocols
• One-click flow-to-firewall blocks precise network traffic
Benefits:
• Well-defined security posture within the virtual environment
• Monitoring and assured policies, even through vMotion and VM lifecycle events
• Simple zone-based rules reduce policy errors
73. Update Manager
• Central, automated, actionable VI patch-compliance management solution
• Define, track, and enforce software-update compliance for ESX hosts/clusters, 3rd-party ESX extensions, virtual appliances, VMware Tools / VM hardware, online*/offline VMs, and templates
• Patch notification and recall
• Cluster-level pre-remediation check analysis and report
• Framework to support 3rd-party IHV/ISV updates and customizations: mass install/update of EMC's PowerPath module
• Enhanced compatibility with DPM for cluster-level patch operations
• Performance and scalability enhancements to match vCenter
74. Disaster recovery: SRM
From the production data center to a recovery site or DR hosting site:
Easily copy system and data to the recovery site
• Backup and restore of full images
• Host-based replication
• Array-based replication
Recover to any hardware
• No need for identical duplicate hardware
• Can waterfall hardware to the recovery site
Eliminate idle hardware
• Run other workloads on standby hardware
• Easily and quickly repurpose hardware
76. Hypervisor Architecture's Impact on Scalability
• VMware ESXi 4.0: 70 MB
• Citrix XenServer v5: 1.8 GB
• Windows Server 2008 R2 RC Server Core with Hyper-V: 3.6 GB
• Hyper-V Server 2008 R2 RC is even bigger at 4.4 GB
Size does matter! Apart from the impact on performance, it also has an impact on security.
77. Most Comprehensive OS Support
MS Hyper-V: Win Server 2008 (up to 4P vSMP), Win Server 2003 SP2 (up to 2P vSMP), Win Server 2000 SP4 (1P only), SLES10 (1P only), Windows Vista SP1, Windows XP Pro SP2/SP3
VMware vSphere™: Windows NT 4.0, Windows 2000, Windows Server 2003, Windows Server 2008, Windows Vista, Windows XP, RHEL5, RHEL4, RHEL3, RHEL2.1, SLES10, SLES9, SLES8, Ubuntu 7.04, Solaris 10 for x86, NetWare 6.5, NetWare 6.0, NetWare 6.1, Debian, CentOS, FreeBSD, Asianux, SCO OpenServer, SCO Unixware, …
vSphere = 4x as many guest OSs + 8 vCPUs for any OS
78. VMware vSphere Delivers: Efficiency thru Utilization & Automation
(~ = partial support, x = missing)
Hardware Scale Out
  vSphere 4.1: 160 logical cores, 1 TB RAM
  Hyper-V R2 SP1: 64 logical cores, 1 TB RAM
  XenServer 5.6 FP1: ~ 64 logical cores, 512 GB RAM
Virtual Hardware (VM) Scale Out
  vSphere 4.1: 8-way vCPU, 255 GB vRAM
  Hyper-V R2 SP1: x 4-way vCPU only on a limited number of OSs, 64 GB vRAM
  XenServer 5.6 FP1: ~ 8-way vCPU, 32 GB vRAM
CPU Efficiency
  vSphere 4.1: supports HW-assist, virtualization-specific scheduler
  Hyper-V R2 SP1: x requires HW-assist, reuses a general-OS scheduler
  XenServer 5.6 FP1: x requires HW-assist for Windows, reuses a general-OS scheduler
Memory Efficiency
  vSphere 4.1: ballooning, transparent page sharing, memory compression
  Hyper-V R2 SP1: ~ ballooning only
  XenServer 5.6 FP1: ~ very static ballooning, no sharing
Power Efficiency
  vSphere 4.1: DPM, cluster-level power management
  Hyper-V R2 SP1: x no cluster-level power management
  XenServer 5.6 FP1: ~ lack of affinity rules minimizes its usefulness
Storage Usage Efficiency
  vSphere 4.1: thin provisioning, storage management
  Hyper-V R2 SP1: x thin disks are not recommended; no storage monitoring tools
  XenServer 5.6 FP1: ~ thin disks with only select SAN vendors
Network Management Efficiency
  vSphere 4.1: distributed switch, 3rd-party virtual switch
  Hyper-V R2 SP1: x none
  XenServer 5.6 FP1: ~ vSwitch requires separate management and CLI; single point of failure
Automated Patching Efficiency
  vSphere 4.1: transparent host patching, automatic guest patching
  Hyper-V R2 SP1: ~ in-depth setup required in Config Mgr
  XenServer 5.6 FP1: x host patching, but no automatic guest patching
Hot-add/remove Virtual Resources
  vSphere 4.1: add vCPU, vMem, virtual disk, vNIC
  Hyper-V R2 SP1: x no hot-add CPU; add virtual disk, vMem
  XenServer 5.6 FP1: x no hot-add CPU or vMem; add virtual disk, vNIC
79. VMware vSphere Delivers: Agility With Control
(~ = partial support, x = missing)
Control for Server Maintenance
  vSphere 4.1: VMware vMotion with Maintenance Mode (up to 8 VMs at a time per host)
  Hyper-V R2 SP1: ~ only one VM at a time per host
  XenServer 5.6 FP1: ~ only one VM at a time per host
Control for Storage Maintenance
  vSphere 4.1: VMware Enhanced Storage vMotion
  Hyper-V R2 SP1: x Quick Storage Migrate has downtime
  XenServer 5.6 FP1: x nothing comparable
Control of Server Resources Allocation
  vSphere 4.1: VMware DRS, logical resource pools
  Hyper-V R2 SP1: x no logical pools
  XenServer 5.6 FP1: ~ WLB is complex; separate management required; no logical pools
Control of I/O Resource Allocation for guaranteed quality of service
  vSphere 4.1: VMware Network I/O and Storage I/O Control
  Hyper-V R2 SP1: x PRO lacks a quality-of-service guarantee
  XenServer 5.6 FP1: ~ WLB is complex; separate management required
Fault Tolerance for VMs
  vSphere 4.1: VMware Fault Tolerance
  Hyper-V R2 SP1: x no VM-level protection
  XenServer 5.6 FP1: x requires 3rd-party
Control during Host or VM Failure
  vSphere 4.1: VMware HA, up to 32 nodes
  Hyper-V R2 SP1: ~ only for host failure, up to 16 nodes
  XenServer 5.6 FP1: ~ only for host failure, up to 16 nodes
Control during NIC Failure
  vSphere 4.1: integrated NIC teaming with dynamic load balancing
  Hyper-V R2 SP1: x relies on the network vendor to provide
  XenServer 5.6 FP1: ~ NIC teaming but no load balancing
Better Security than Physical
  vSphere 4.1: VMware VMsafe API, 3rd-party support
  Hyper-V R2 SP1: x nothing comparable
  XenServer 5.6 FP1: x nothing comparable
Thin Hypervisor to Reduce Attack Surface
  vSphere 4.1: VMware ESXi, 70–100 MB disk footprint
  Hyper-V R2 SP1: x Hyper-V w/ Server Core, >3 GB disk footprint
  XenServer 5.6 FP1: x XenServer, 1.8 GB disk footprint
80. VMware vSphere Delivers: Freedom of Choice
Dimensions compared: guest OS support, hardware support, application support, integration with existing management tools, choice of "cloud" service provider, interoperability between internal & external cloud, using existing apps in the cloud, Enhanced vMotion Compatibility.
vSphere 4.1: over 70 OSs supported (more Windows versions than MS); large HCL: >850 HBAs, >400 NICs, >1600 servers; leader category (according to analysts); run existing apps without rewriting code; hundreds of integrations to the vCenter API via SDK; vCloud program for service providers; vCloud ensures interoperability; vMotion across generations of CPU of the same family.
Hyper-V R2 SP1: x 14 OSs supported, Windows-biased; ~ uses Windows drivers, potential driver issues; ~ can integrate, but System Center competes with existing tools; x building an MS-only offering, lock-in; x existing apps don't move easily to the MS cloud; x apps in the MS cloud don't come back out.
XenServer 5.6 FP1: x 24 OSs supported; x limited HCL: ~100 storage, ~100 NICs, ~200 servers; ~ next-tier category (according to analysts); ~ Citrix Essentials API not widely adopted; Citrix OpenCloud lacks clarity & enterprise support (for the service-provider, interoperability, and existing-apps dimensions alike); ~ downgrades processor functionality to Pentium 4; ~ only available in paid versions.
83. VMware training promotion
– For our VMware vSphere: Automation Fast Track foundation courses starting June 6–10 and June 20–25, we are now offering Webex attendees a 25% discount off the list price! (280,000 HUF instead of 360,000 HUF)
Registration: www.arrowecs.hu
When registering, please refer to the WEBEX password.