
Exchange 2010 New England Vmug


  1. Exchange 2010 on VMware – New England VMUG
     Andrew Lewis, Sr. Specialist – Messaging & Collaboration, VMware
     © 2009 VMware Inc. All rights reserved. Confidential
  2. Agenda
     • Introductions
     • Exchange on vSphere Overview and Updates
     • Exchange on vSphere Performance
     • ESX Host Best Practices for Exchange
     • Exchange 2010 Capacity Planning
     • Availability & Recovery Options
     • Customer Success Stories
  3. Exchange is Maturing
     • Exchange 2003: 32-bit Windows; 900 MB database cache; 4 KB block size; high read/write ratio
     • Exchange 2007: 64-bit Windows; 32+ GB database cache; 8 KB block size; 1:1 read/write ratio; 70% reduction in disk I/O
     • Exchange 2010: 64-bit Windows; 32 KB block size; write I/O optimization; a further 50% I/O reduction
  4. >95% of Apps Match Native Performance on Virtual Machines
     [Chart: % of applications vs. application performance requirements, by ESX generation]
     • ESX 2: 30-60% overhead; 1 vCPU, < 4 GB, 380 Mb/s, < 10,000
     • ESX 3: 20-30% overhead; 2 vCPU, 16 GB, 800 Mb/s, 20,000
     • ESX 3.5: <10-20% overhead; 4 vCPU, 64 GB, 9 Gb/s, 100,000
     • ESX 4: <2-10% overhead; 8 vCPU, 255 GB, 30 Gb/s, > 300,000
     Source: VMware Capacity Planner assessments
  5. Server Hardware is More Powerful
     • More physical cores per socket
     • More physical memory
     • Smaller datacenter footprint
     • More network bandwidth (10Gb Ethernet)
     • Optimized for virtualization (AMD RVI, Intel EPT)
  6. Support Options
     • Scenario 1: Support through an MS Premier contract
       (http://support.microsoft.com/kb/897615/en-us)
     • Scenario 2: Support through the Microsoft Server Virtualization Validation Program
       • ESX 3.5 U2 and above (including vSphere)
       • Windows Server 2008 and above
       • Exchange 2007 and above (including Exchange 2010)
     • Scenario 3: Support through the server OEM
       (http://www.vmware.com/support/policies/ms_support_statement.html)
     • Scenario 4: Support through VMware GSS, TSA Net
  7. Key Benefits of a vSphere Platform
     Server consolidation:
     • Utilize all your server processor cores.
     • Maintain role isolation without additional hardware expense.
     Operational advantages:
     • Design for today's workload rather than guessing about tomorrow.
     • Design for specific business requirements.
     • Rapidly provision Exchange servers with virtual machine templates.
     • Reduce hardware and operational costs of maintaining an Exchange lab.
     • Enhance testing and troubleshooting using cloned production virtual machines.
     Higher availability with less complexity:
     • Reduce planned downtime due to hardware or BIOS updates with VMware VMotion.
     • Reduce unplanned downtime due to hardware failure or resource constraints.
     • Implement simple and reliable Exchange disaster recovery.
  8. Agenda
  9. Exchange 2010 Performance Analysis
     Jetstress:
     • Storage performance assessment for Exchange, provided by Microsoft.
     • Uses Exchange libraries to simulate a multi-threaded, Exchange-like workload across the storage configuration.
     LoadGen:
     • Exchange deployment performance assessment provided by Microsoft.
     • Runs end-to-end tests from the client to measure typical Exchange activities (SendMail, Logon, CreateTask, RequestMeeting, etc.).
  10. vSphere Scale-Up Performance of Exchange
     • Virtual is within 5% of physical.
     • Only 25% CPU with 4,000 users.
  11. vSphere Scale-Out Performance of Exchange
     • Performance remains good with more VMs and users.
     • CPU utilization is under 60% with 8,000 users and 4 VMs.
  12. Storage Protocol Performance Comparison
     • 8 VMs with 2,000 heavy online users each = 16,000 users total.
     • Performance of FC, iSCSI, and NFS on a NetApp FAS6030.
     • All three protocols performed well; FC was the best.
     [Charts: average SendMail latency (ms) and % CPU utilization for Fibre Channel, iSCSI, and NFS under the Heavy Online and Double Heavy Online profiles]
  13. Testing Exchange in a Private vSphere Cloud
     • What happens when Exchange VMs are allowed to roam?
     • vSphere has the ability to load balance with DRS.
     • The DRS algorithm includes a cost/benefit analysis.
     [Chart: IOPS change with LoadGen load over 11 hours of testing; 16,000 users, 8 VMs, four time zones. Load rises as users in each time zone start the day and falls at the end of the day.]
  14. Testing Exchange in a Private vSphere Cloud
     [Charts: ESX host CPU utilization (%) for Server 1 and Server 2, without DRS and with DRS; VMotion events are marked on the with-DRS chart]
  15. Testing Exchange in a Private vSphere Cloud
     • Up to an 18% gain in performance, with an average of 8%.
     • Although Exchange is not a CPU-constrained workload, it still benefits from load balancing.
     • The cost/benefit algorithm of DRS was correct in moving Exchange VMs even though CPU was not constrained.
     • Not supported for MS Cluster nodes.
     95th percentile SendMail latencies by user group:
     User group             1    2    3    4    5    6    7     8   Avg
     No DRS               734  584  844  693  795  974  775  1004   800
     DRS                  684  554  854  673  704  844  745   854   739
     % advantage for DRS   7%   5%  -1%   3%  13%  15%   4%   18%    8%
  16. Summarizing Performance
     Performance has been validated by VMware and partners:
     • Minimal CPU overhead observed on ESX 3.5 (5-10%).
     • Minimal CPU overhead observed on vSphere (2-7%).
     • No impact on disk I/O latency.
     • RPC latency comparable.
     • No virtualization performance degradation observed.
     All three storage protocols performed well; FC was the best.
     DRS can increase performance and reduce resource consumption on standalone mailbox servers.
  17. Agenda
  18. Virtual Memory Best Practices
     • Do not over-commit memory until VirtualCenter reports that steady-state usage is below the amount of physical memory on the server.
     • Set the memory reservation to the configured size of the VM, resulting in a per-VM vmkernel swap file of zero bytes. Setting reservations could limit VMotion.
     • It is important to "right-size" the configured memory of a VM. Memory will be wasted if the Exchange VMs are not utilizing the configured memory.
     • Enable DRS to ensure workloads are balanced in the ESX cluster. DRS and reservations can guarantee critical workloads have the resources they require to operate optimally.
     • To minimize guest OS swapping, the configured size of the VM should be greater than the average memory usage of Exchange running in the guest. Follow Microsoft guidelines for memory and swap/page file configuration of Exchange VMs.
  19. Storage Virtualization Concepts
     • Storage array: consists of physical disks that are presented as logical disks (storage array volumes, or LUNs) to the ESX Server.
     • Storage array LUNs: formatted as VMFS volumes.
     • Virtual disks: presented to the guest OS; can be partitioned and used in guest file systems.
     • Raw Device Mappings (RDMs): can be presented to either VMs or physical servers.
  20. Storage Best Practices
     • Deploy Exchange VMs on shared storage; this allows VMotion, HA, and DRS, and aligns well with mission-critical Exchange deployments, which are often installed on shared storage management solutions.
     • Ensure heavily-used VMs are not accessing the same LUN concurrently.
     • Storage multipathing: set up a minimum of four paths from an ESX Server to a storage array (requires at least two HBA ports).
     • Create VMFS file systems from VirtualCenter to get the best partition alignment.
  21. VMFS and RDM Trade-offs
     VMFS:
     • A volume can host many virtual machines (or can be dedicated to one virtual machine).
     • Increases storage utilization; provides better flexibility, easier administration, and management.
     • Large third-party ecosystem with V2P products to aid in certain support situations.
     • Does not support quorum disks required for third-party clustering software.
     • Fully supports VMware Site Recovery Manager.
     RDM:
     • Maps a single LUN to one virtual machine; isolated I/O.
     • More LUNs = easier to hit the limit of 256 LUNs that can be presented to an ESX Server.
     • Leverages array-level backup and replication tools that integrate with Exchange databases.
     • RDM volumes can help facilitate swinging Exchange between physical servers and VMs.
     • Required for Microsoft Clustering; clustered databases and logs should be on RDM disks.
     • Fully supports VMware Site Recovery Manager.
  22. Resource Management & DRS Best Practices
     • The source and target ESX hosts must be connected to the same gigabit network and the same shared storage.
     • A dedicated gigabit network for VMware VMotion is recommended.
     • The destination host must have enough resources.
     • The VM must not use physical devices such as a CD-ROM or floppy.
     • The source and destination hosts must have compatible CPU models, or migration with VMware VMotion will fail.
     • To minimize network traffic, it is best to keep VMs that communicate with each other (e.g., Mailbox servers and GCs) on the same host machine.
     • VMs with smaller memory sizes are better candidates for migration than larger ones.
     • NOTE: VMware does not currently support VMware VMotion or VMware DRS for Microsoft Cluster nodes; however, a cold migration is possible once the guest OS is shut down properly.
  23. Agenda
  24. Collect Current Messaging Stats
     Use the Microsoft Exchange Server Profile Analyzer to collect information from your current environment. Example:
     • 1 physical location
     • 16,000 users
     • Mailbox profiles:
       • 150 messages sent/received per day
       • Average message size of 50 KB
       • 500 MB mailbox quota
  25. Passive Database Overheads
  26. Exchange Server Minimums and Recommended Maximums
     Exchange 2010 server role                                Minimum              Recommended maximum
     Edge Transport                                           1 processor core     12 processor cores
     Hub Transport                                            1 processor core     12 processor cores
     Client Access                                            2 processor cores    12 processor cores
     Unified Messaging                                        2 processor cores    12 processor cores
     Mailbox                                                  2 processor cores    12 processor cores
     Client Access/Hub Transport combo role (both roles       2 processor cores    12 processor cores
     running on the same physical server)
     Multi-role (Client Access, Hub Transport, and Mailbox    2 processor cores    24 processor cores
     roles running on the same physical server)
  27. Megacycles
     • A megacycle is a unit of measurement used to represent processor capacity.
     • To roughly illustrate, a 1 GHz processor can produce approximately 1,000 megacycles of CPU throughput.
     • For a larger example, a two-socket, quad-core server (8 cores) with 3.33 GHz CPUs can produce approximately 26,400 megacycles.
     • Each Exchange user placed on the server will subtract from this capacity, at varying rates depending on the activity and size of the mailbox.
     • Don't forget that we must take into account CPU requirements for both the active and passive mailboxes that will be hosted on the server.
     From Microsoft TechNet (link): Megacycles are estimated based on a measurement of Intel Xeon X5470 3.33 GHz processors (2 x 4 core arrangement). A 3.33 GHz processor core = 3,300 megacycles of performance throughput. Other processor configurations can be estimated by comparing this measured platform to server platforms tested by the Standard Performance Evaluation Corporation (SPEC). For details, see the SPEC CPU2006 results at the Standard Performance Evaluation Corporation Web site.
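The arithmetic on this slide can be sketched as a small helper. This is an illustrative sketch, not part of the deck; it uses the TechNet baseline quoted above (one 3.33 GHz Xeon X5470 core = 3,300 megacycles):

```python
# Megacycle capacity estimate based on the TechNet baseline quoted above:
# one 3.33 GHz Xeon X5470 core produces roughly 3,300 megacycles.

def server_megacycles(cores: int, megacycles_per_core: float = 3300.0) -> float:
    """Approximate total processor capacity of a server, in megacycles."""
    return cores * megacycles_per_core

# The slide's example: a two-socket, quad-core server (8 cores) at 3.33 GHz.
print(server_megacycles(8))  # 26400.0
```

Per-mailbox megacycle costs (next slide) are then subtracted from this pool when placing users.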
  28. User Profile and Message Activity
     Mailbox Server Processor Capacity Planning (TechNet):
     http://technet.microsoft.com/en-us/library/ee712771.aspx
     Messages sent/received    Database cache per    Standalone IOPS    Mailbox-resiliency IOPS    Megacycles, active or     Megacycles,
     per mailbox per day       mailbox (MB)          per mailbox (est.) per mailbox (est.)         standalone mailbox        passive mailbox
      50                        3                    0.06               0.05                        1                        0.15
     100                        6                    0.12               0.1                         2                        0.3
     150                        9                    0.18               0.15                        3                        0.45
     200                       12                    0.24               0.2                         4                        0.6
     250                       15                    0.3                0.25                        5                        0.75
     300                       18                    0.36               0.3                         6                        0.9
     350                       21                    0.42               0.35                        7                        1.05
     400                       24                    0.48               0.4                         8                        1.2
     450                       27                    0.54               0.45                        9                        1.35
     500                       30                    0.6                0.5                        10                        1.5
  29. Designing for Peak Utilization
     • It is recommended that standalone servers with only the Mailbox role be designed not to exceed 70% utilization during the peak period. If deploying multiple roles on the server, the Mailbox role should be designed not to exceed 35%.
     • For solutions leveraging mailbox resiliency, it is recommended that the configuration not exceed 80% utilization after a single or double member-server failure when the server has only the Mailbox role installed. If deploying multiple roles on the server, the Mailbox role should be designed not to exceed 40%.
     • CPU utilization is determined by taking the CPU megacycle requirements and dividing by the total number of megacycles available on the server (which is based on the CPU and number of cores).
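The utilization check described above can be sketched as follows. The 70% standalone ceiling comes from the slide; the example workload numbers are illustrative:

```python
# Peak-utilization check: required megacycles divided by available
# megacycles, compared against the design ceiling from the slide
# (70% for a standalone, Mailbox-role-only server).

def cpu_utilization(required_megacycles: float, available_megacycles: float) -> float:
    """Fraction of server capacity consumed by the mailbox workload."""
    return required_megacycles / available_megacycles

# Example: 12,000 required megacycles on an 8-core host at 3,300
# megacycles per core (26,400 total).
util = cpu_utilization(12000, 8 * 3300)
print(f"{util:.0%}")  # 45%
print(util <= 0.70)   # True: within the standalone design ceiling
```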
  30. Determining Database Cache Size
     • The first step in planning for Mailbox Server memory is to determine the amount of required database cache by multiplying the mailbox count by the memory requirement for the user profile.
     • For example, 4,000 users sending/receiving 150 messages per day will require 36 GB of database cache (4,000 x 9 MB = 36 GB).
     http://technet.microsoft.com/en-us/library/ee712771.aspx
     Messages sent or received    Database cache per
     per mailbox per day          mailbox (MB)
      50                           3
     100                           6
     150                           9
     200                          12
     250                          15
     300                          18
     350                          21
     400                          24
     450                          27
     500                          30
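The cache calculation above, as a sketch. The per-profile values come from the TechNet table on this slide, and the helper follows the slide's decimal rounding (36,000 MB is treated as 36 GB):

```python
# Database cache requirement: mailbox count x per-mailbox cache from the
# TechNet profile table (values reproduced from the slide).

CACHE_MB_PER_MAILBOX = {50: 3, 100: 6, 150: 9, 200: 12, 250: 15,
                        300: 18, 350: 21, 400: 24, 450: 27, 500: 30}

def database_cache_gb(mailboxes: int, messages_per_day: int) -> float:
    """Required database cache in GB for a user count and message profile."""
    return mailboxes * CACHE_MB_PER_MAILBOX[messages_per_day] / 1000

# The slide's example: 4,000 users at 150 messages sent/received per day.
print(database_cache_gb(4000, 150))  # 36.0
```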
  31. Determining Total Memory
     • The next step is to determine the amount of required physical memory by determining which server configuration provides 36 GB of database cache.
     • For example, a single-role Mailbox server with 48 GB of physical RAM will provide 39.2 GB of database cache; therefore, 48 GB of physical RAM is the ideal memory configuration for this mailbox count/user profile.
     Server physical    Database cache size    Database cache size: multiple-role
     memory (RAM)       (Mailbox role only)    (e.g., Mailbox + Hub Transport)
     2 GB                512 MB                Not supported
     4 GB                1 GB                  Not supported
     8 GB                3.6 GB                2 GB
     16 GB               10.4 GB               8 GB
     24 GB               17.6 GB               14 GB
     32 GB               24.4 GB               20 GB
     48 GB               39.2 GB               32 GB
     64 GB               53.6 GB               44 GB
     96 GB               82.4 GB               68 GB
     128 GB              111.2 GB              92 GB
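The lookup described above can be sketched as a helper that picks the smallest RAM configuration whose Mailbox-role cache meets the requirement. The table values are from the slide; the helper itself is illustrative:

```python
# Smallest supported RAM size whose Mailbox-role database cache covers the
# requirement, using the Mailbox-role-only column from the table above.

RAM_TO_MAILBOX_CACHE_GB = {8: 3.6, 16: 10.4, 24: 17.6, 32: 24.4,
                           48: 39.2, 64: 53.6, 96: 82.4, 128: 111.2}

def required_ram_gb(cache_needed_gb: float) -> int:
    """Smallest RAM size (GB) providing the needed database cache."""
    for ram in sorted(RAM_TO_MAILBOX_CACHE_GB):
        if RAM_TO_MAILBOX_CACHE_GB[ram] >= cache_needed_gb:
            return ram
    raise ValueError("requirement exceeds the largest configuration in the table")

# The slide's example: 36 GB of required cache points to 48 GB of RAM.
print(required_ram_gb(36))  # 48
```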
  32. Calculating Storage Requirements
     • Mailbox Server Storage Design (TechNet)
     • Exchange 2010 Mailbox Server Role Requirements Calculator
  33. Sample Storage Configuration (4,000 Users, 150 sent/received)
     CPU: 6 vCPU
     Memory: 48 GB
  34. Hub Transport and Client Access Server Planning
     Server role ratios (processor cores):
     Server role                   Recommended processor core ratio
     Mailbox:Hub                   7:1 (no antivirus scanning on Hub)
                                   5:1 (with antivirus scanning on Hub)
     Mailbox:Client Access         4:3
     Mailbox:combined Hub/CAS      1:1
     Memory requirements:
     Exchange 2010 server role                   Minimum    Recommended
     Hub Transport                               4 GB       1 GB per core (4 GB minimum)
     Client Access                               4 GB       2 GB per core (8 GB minimum)
     Client Access/Hub Transport combined role   4 GB       2 GB per core (8 GB minimum)
     When doing processor core ratios, remember to factor in the expected peak utilization of your mailbox servers.
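Applying the core ratios above can be sketched as a hypothetical helper: given the mailbox cores in use, estimate Hub Transport and Client Access cores, rounding up. As the slide notes, a real design should also factor in peak mailbox utilization:

```python
# Core estimates from the ratio table above: Mailbox:Hub is 7:1 (5:1 with
# antivirus scanning on Hub) and Mailbox:Client Access is 4:3.
import math

def role_cores(mailbox_cores: int, hub_antivirus: bool = False) -> tuple:
    """Estimated (Hub Transport, Client Access) cores for given mailbox cores."""
    hub_ratio = 5 if hub_antivirus else 7
    hub = math.ceil(mailbox_cores / hub_ratio)
    cas = math.ceil(mailbox_cores * 3 / 4)
    return hub, cas

# Example: 24 mailbox cores (e.g., four 6-vCPU mailbox servers).
print(role_cores(24))  # (4, 18)
```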
  35. Scaling Exchange for the Enterprise: The Building Block Approach (Standalone Mailbox Servers)
     • A best practice for standalone Mailbox servers.
     • Pre-sized VMs with predictable performance patterns.
     • Improved performance when scaling up (memory page sharing).
     • Simplified deployment.
     Building block CPU and RAM sizing for 150 sent/received
     (http://technet.microsoft.com/en-us/library/ee712771.aspx):
     Building block (users)       500                  1000                 2000            4000
     Profile                      150 sent/received    150 sent/received    150 sent/received    150 sent/received
     Megacycle requirement        1,500                3,000                6,000           12,000
     vCPU (3.33 GHz processor)    2 (min; 0.6 actual)  2 (min; 1.3 actual)  4 (2.6 actual)  6 (5.1 actual)
     Cache requirement            4.5 GB               9 GB                 18 GB           36 GB
     Total memory size            16 GB                16 GB                24 GB           48 GB
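The building blocks above can be captured as data, with a helper that picks the smallest block covering a user count (150 sent/received profile). The helper is an illustrative sketch, not part of the deck:

```python
# Pre-sized building blocks from the table above, for the 150
# sent/received profile: users -> (megacycles, vCPUs, cache GB, RAM GB).

BUILDING_BLOCKS = {
    500:  (1500,  2, 4.5,  16),
    1000: (3000,  2, 9.0,  16),
    2000: (6000,  4, 18.0, 24),
    4000: (12000, 6, 36.0, 48),
}

def block_for(users: int):
    """Smallest pre-sized building block that covers the requested users."""
    for size in sorted(BUILDING_BLOCKS):
        if size >= users:
            return size, BUILDING_BLOCKS[size]
    raise ValueError("combine multiple 4,000-user blocks for larger counts")

# Example: 1,800 users fit in the 2,000-user block.
print(block_for(1800))  # (2000, (6000, 4, 18.0, 24))
```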
  36. Scaling Exchange for the Enterprise: The DAG Approach (Clustered Mailbox Servers)
     • The new DAG feature in Exchange 2010 necessitates a different approach to sizing the Mailbox Server role, forcing the administrator to account for both active and passive mailboxes.
     • Mailbox servers that are members of a DAG can host one or more passive databases in addition to any active databases for which they may be responsible.
     • The amount of passive mail hosted changes as you add/delete DAG nodes.
  37. vSphere Configuration Maximums
     http://www.vmware.com/pdf/vsphere4/r40/vsp_40_config_max.pdf
     • vSphere virtual machines are limited to 8 vCPUs and 255 GB of RAM.
     • Each ESX host can only accommodate up to 256 LUNs.
     • Each vSphere LUN is limited to 2 TB (without SAN extents).
     Be sure to take the vSphere configuration maximums into account, especially when configuring storage. For example, when sizing a DAG, limiting database sizes to 1 TB will ensure that we don't come too close to the 2 TB LUN limit.
  38. Sample Exchange 2010 Resource Requirements by Server Role
     Mailbox Server (4 servers):
     • CPU: 6 cores (60% max utilization)
     • Memory: 48 GB
     • OS and application file storage: 64 GB
     • DB storage: 110 x 300 GB 10K RPM FC/SCSI/SAS 3.5" (RAID 1/0)
     • Log storage: 6 x 300 GB 10K RPM FC/SCSI/SAS 3.5" (RAID 1/0)
     • Restore LUN: 12 x 300 GB 10K RPM FC/SCSI/SAS 3.5" (RAID 5)
     • Network: 1 Gbps
     Client Access Server (3 servers):
     • CPU: 4 cores
     • Memory: 8 GB
     • Storage: 24 GB (OS & application files)
     • Network: 1 Gbps
     Hub Transport Server (2 servers):
     • CPU: 2 cores
     • Memory: 4 GB
     • Storage: 20 GB (OS, application, & log files); 32 GB (DB, protocol/tracking logs, & temp files)
     • Network: 1 Gbps
  39. Sample Hardware Layout
  40. Performance Monitoring: esxtop Counters for Exchange
     Subsystem    esxtop counters            VirtualCenter counters
     CPU          %RDY, %USED                Ready, Usage
     Memory       %ACTV, SWW/s, SWR/s        Active, Swapin, Swapout
     Storage      ACTV, DAVG/cmd, KAVG/cmd   Commands; deviceWriteLatency & deviceReadLatency; kernelWriteLatency & kernelReadLatency
     Network      MbRX/s, MbTX/s             packetsRx, packetsTx
     vSphere offers integrated support for PerfMon!
  41. Capacity Planning Summary
     • Follow Microsoft guidelines for processor, memory, and storage of the Mailbox server role.
     • Be sure to take into account passive databases if you are using DAGs.
     • Design for peak utilization: 70% for standalone and 80% for clustered mailbox servers.
     • Understand and adjust for vSphere configuration maximums (256 LUNs, 2 TB LUN size limit, etc.).
     • Use the building block approach with standalone mailbox servers; use the DAG method and the storage calculator for clustered mailbox servers.
     • Follow Microsoft guidelines for Hub Transport and Client Access Server ratios; remember the CAS is much heavier in 2010.
  42. Agenda
  43. Business-level Approach
     • What are you trying to protect?
     • What are your RTO/RPO requirements?
     • What is your Service Level Agreement (SLA)?
     • How will you test/verify your solution?
  44. Local Site Options
  45. Simple Standalone Server with VMware HA
     • Can use Standard Windows and Exchange editions.
     • Does not require Microsoft clustering.
     • Simple to configure and easy to manage.
     • Quickly restores service during host failure.
     • Can be combined with an application-aware availability solution.
     • VMotion, DRS, and HA are fully supported!
     • Protects from hardware failure only.
     • Does not provide application protection.
  46. What is a DAG?
     • DAG stands for Database Availability Group and consists of two or more mailbox servers that are grouped together for mutual protection.
     • Unlike a traditional active/passive server configuration, failover occurs by database rather than by server.
     • DAGs utilize the Microsoft Clustering Service, although there is no requirement for shared quorum disks.
  47. How is DAG Different from CCR?
     Exchange 2007 CCR:
     • Failover of the entire server
     • 2x storage requirement
     • No shared storage
     • IP replication
     Exchange 2010 DAG:
     • Failover of individual databases
     • No passive servers
     • 2x or more storage requirement
     • No shared storage
     • IP replication
  48. VMware HA + DAGs (no MS support)
     • Protects from hardware and application failure.
     • Immediate failover (~3 to 5 seconds).
     • No passive servers!
     • HA decreases the time the database is in an "unprotected" state.
     • Windows Enterprise; Exchange Standard or Enterprise editions.
     • Complex configuration and capacity planning.
     • 2x or more storage requirement.
     • Not officially supported by Microsoft.
  49. Caveats & Restrictions
     • Clustering is not supported by VMware on iSCSI or NFS disks.
     • Mixed environments are not supported: Qlogic and Emulex HBAs on the same host, or ESX Server 3.x and ESX Server 4.x across ESX hosts.
     • DRS and HA must be disabled in the virtual machine properties of Microsoft-clustered VMs.
     • Microsoft does not support migration of running virtual machines (VMotion) that run cluster software; however, internal and customer PoC testing has shown VMotion to have no effect on the operation of a CCR or DAG member.
  50. Remote Site Options
  51. VMware SRM + DAGs (no MS support)
     • The DAG provides local site protection.
     • Storage replication keeps the DR facility in sync.
     • During a site failure, the admin has full control of recovery.
     • Once the workflow is initiated, SRM automates the recovery process.
     • The entire process can be tested without actually failing over services!
  52. DAG + Delayed Copy Replay
     • The DAG provides local site protection.
     • Log replication keeps the DR facility in sync.
     • Requires manual database activation.
     • The administrator can remove logs to adjust the recovery point.
     • No redundancy until new passive databases are established.
  53. Backup and Recovery
  54. Agent-based Backup
     • Standard method for physical or virtual.
     • The agent runs in the VM guest and handles database quiescing.
     • Data is sent over the IP network.
     • Can affect CPU utilization in the guest OS.
  55. Array-based Backup
     • Backup vendor software coordinates with VSS to create a supported backup image of the Exchange databases.
     • Snapshotted databases can later be streamed to tape as flat files with no I/O impact on the production Exchange Server.
  56. Summary
     • Understand what the business expects for availability and recovery.
     • For hardware failure protection, VMware HA offers a low-cost, much simpler alternative to Microsoft Clustering.
     • Database Availability Groups can be combined with HA for faster recovery of mailbox servers (not MS-supported).
     • Site Recovery Manager allows for the failover of entire datacenters!
     • Microsoft Clustering IS supported on VMware virtual machines if HA and DRS are disabled in the VM properties.
     • Either agent-based or array-based backups can be used to protect virtual Exchange servers.
  57. Agenda
  58. Notable Customers
     • United States Navy/Marine Corps: 750,000 mailboxes
     • University of Plymouth: 40,000 mailboxes; replaced MSCS
     • VMware IT: 9,000 very heavy mailboxes
     • University of Texas at Brownsville: 25,000 mailboxes; using vSphere for site resiliency
     • Undisclosed customer: 20,000+ mailboxes; Exchange 2010 early adopter
