IT organizations of all sizes are moving their workloads to the public cloud to gain business agility and virtually unlimited workload scalability, and to free their time for the projects that matter. One of the leaders in public cloud is the Google Cloud Platform (GCP).
4. Why Are Companies Looking To The Cloud?
Immediate datacenter on demand, in the cloud
Pay based on usage (usually per minute)
Allows you to focus on your application & the business
Advanced features are built in (HA, load balancing, etc.)
Offering massive scalability
Keep in mind, just like your in-house datacenter, you
6. Why Are Companies Looking To The Cloud?
But… is the Cloud right for you?
What should you run in the cloud, and what shouldn't you?
And how do you move from your datacenter to the cloud without downtime and without risking your whole business?
7. Here's What You'll Learn During This Event
1. What makes up the Google Cloud Platform (GCP)
2. Find out how Google Cloud can help
3. How to plan, test, and cut over your workload before taking it into production
4. How to migrate tens, hundreds, or thousands of workloads to the Google Cloud Platform while minimizing downtime
10. Proprietary | Google Cloud Platform
For the past 16 years, Google has been building the fastest, most powerful cloud infrastructure on the planet.
12.
Built on the same infrastructure that powers Google
Fastest, most reliable network
Super-flexible compute
Always-available storage
Superior economics
Robust, easy-to-use Big Data solutions
13. Continuum of Compute
Compute Engine: virtualized hardware, infrastructure at Google speed
Container Engine: manages your container cluster and actively schedules your containers
App Engine: abstracted computing power, build your scalable app faster
14. High-performance Virtual Machines
Consistently performant, scalable, highly secure & reliable.
(Really) Pay for what you use
We bill in minute-level increments so you don't pay for unused computing time, and automatically apply sustained-use discounts.
Fast, Easy Provisioning
Quickly deploy large clusters of virtual machines with intuitive tools.
Compliance & Security
All data written to disk in Compute Engine is encrypted on the fly, then transmitted and stored in encrypted form.
Compute Engine Batch
Run short-duration, heavy compute jobs. The more flexibility in timing and location you give us, the better the pricing!
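The minute-level billing and sustained-use discounting described above can be sketched as a small calculation. Note this is a hypothetical model for illustration only: the per-minute rate and the discount tiers below are made-up numbers, not actual GCP pricing.

```python
# Hypothetical sketch of minute-level billing with a sustained-use
# discount. The rate and tier percentages are illustrative, NOT
# actual Google Cloud pricing.

def monthly_charge(minutes_used, rate_per_minute=0.001):
    hours = minutes_used / 60
    # Assume roughly 730 hours in a billing month.
    usage_fraction = hours / 730
    # Illustrative tiers: the larger the fraction of the month an
    # instance runs, the bigger the automatic discount.
    if usage_fraction <= 0.25:
        discount = 0.0
    elif usage_fraction <= 0.5:
        discount = 0.10
    elif usage_fraction <= 0.75:
        discount = 0.20
    else:
        discount = 0.30
    return minutes_used * rate_per_minute * (1 - discount)
```

The key point the slide makes survives the simplification: you are billed only for minutes actually used, and long-running instances get cheaper automatically with no reservation required.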
15. Block Storage
Standard PD (Persistent Disk): throughput-oriented, low IOPS, higher latency. Use for streaming IO, boot volumes, bulk storage.
SSD PD (Solid State Drive Persistent Disk): high IOPS. Use for SQL and NoSQL databases, file servers.
Local SSD (Solid State Drive): low latency. Use for high-performance scratch, Hadoop.
Security
All storage is encrypted over the wire and at rest
Integrity
Data is stored redundantly; we checksum all data and take incremental snapshots (PD only)
Consistency
Performance is consistently high and pricing does not change from month to month
Simplicity
Simple pricing, pay for space only. Simple configuration: no need to create multiple volumes or manage RAID arrays.
16. Google's Network Edge
More than 70 points of presence across 33 countries, creating the broadest-reaching network of any cloud provider
17. Storage
Cloud SQL: store and manage data using a fully managed, relational MySQL database
Cloud Storage: powerful, simple, and cost-effective object storage service
Cloud Datastore: managed, NoSQL, schemaless database for storing non-relational data
19. Focus on Innovation
The mobile developer
Batch and burst compute
The launch of Cloud Dataflow
Developer productivity tools
Web serving workloads
Connecting you to the cloud
Kubernetes and containers
Super-scalable storage
21.
From Any Hardware… Any Hypervisor… Any Cloud…
To SUREedge-Migrator and SUREedge-DR
for Migration and DR
Plan → Capture from any platform → Replicate to GCP → Convert to Google Instance → Recover in Google Cloud
22. Our Products
1. SUREedge Migrator
• Workload migration to cloud
• On-boarding to converged infrastructure
2. SUREedge DR
• Local BCDR
• Site-to-site DR
• Cloud DR
3. SUREedge Enterprise Manager
• Global management of all your SE instances
23. SUREedge Features
Any to Any…
Physical, virtual, cloud-based sources
Virtual, hyper-converged, or cloud targets
Wide Application Support
Application and network config capture & recovery
Stable data images, unlimited point-in-time copies
Smart Data Management
Ultra bandwidth-friendly global deduplication
Military-grade encryption in flight and at rest
Efficient compression (Google Snappy)
Manageability
Efficient workflow, agentless capture
Recovery planning and testing
Global management
24. Migration and DR to GCP
Customer site:
• SUREedge instance at source site
• Agentless, application-aware capture from virtual or physical
Cloud:
• SUREedge instance in Cloud
• Recover for testing as needed
Deduplicated, compressed, encrypted bi-directional replication between sites; global monitoring
Workflow: Plan → Capture → Replicate → Transform → Recover (with test recoveries)
25. Network and Data Security
Others: host-based replication solutions with agents on each host/VM. 100s to 1,000s of connections from on-premise production apps to the cloud create open-port exposure.
Sureline: a single port between on-premise and cloud, carrying encrypted, deduped, compressed traffic. Production servers are isolated from the open port.
26. WAN Bandwidth
Others: full-data transfer from on-premise production apps to the cloud. 100s of TB; 100 TB at Gb/s takes ~300 hrs. (~1 month). If the WAN breaks, replication restarts from the beginning.
Sureline: incremental data with global dedupe, compression, and bandwidth throttling. 100s of TB shrink to 10s of TB; a month becomes hours. If the WAN breaks, replication restarts from the point of failure.
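The transfer-time figures on this slide can be sanity-checked with a quick back-of-the-envelope calculation. The function below is an illustrative sketch: a raw computation gives about 222 hours for 100 TB at 1 Gb/s, and the slide's ~300-hour figure is consistent with that once protocol overhead and interruptions are factored in.

```python
# Back-of-the-envelope WAN transfer-time estimate. Figures are
# illustrative; real transfers add protocol overhead and retries.

def transfer_hours(data_tb, link_gbps, reduction=1.0):
    """Hours to move data_tb terabytes over a link_gbps link.

    reduction models dedupe/compression: 0.1 means only 10% of
    the bytes actually cross the WAN.
    """
    bits = data_tb * 1e12 * 8 * reduction     # TB -> bits
    seconds = bits / (link_gbps * 1e9)        # divide by link rate
    return seconds / 3600
```

With a 10:1 dedupe-plus-compression reduction (`reduction=0.1`, an assumed ratio), the same 100 TB drops to roughly a day of transfer, which is the order-of-magnitude improvement the slide claims.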
27. Complex Workloads
Workloads are comprised of multiple interdependent applications (Apache UI, Java server, SQL).
Others: move individually, reconnect later!
Sureline: move together, recover together; maintain interdependencies rather than recreating them.
28. Migration Best Practice
P2V / V2V Tools + Image Service
One system, one disk at a time
Max 2 TB; no multi-disk, multi-volume support
Useful on LAN only
No incremental update; large cutover window
Suited only to simple environments with single-tier applications
Unreliable: does 80% of the job; the remaining 20% takes 80% of the time and cost
SUREedge
Plan and automate:
• Multiple systems
• Scheduling
• Pre- and post-operations
• Reconfigure CPU, memory, network at recovery; OS upgrade; licensing
Test before cutover
Incremental updates; small cutover window, little or no downtime
No limits on size or source type
Network-error resilient; easy onboarding over WAN
29. Key Benefits
Network Security
Data Security
Data Integrity
Optimal BW
Manageability
Success Rate
Lower Total Cost
Onboarding
Migration
DR
Reduced Risk
30. Customer Case Study: Ramp to Cloud
Ad tech company using Big Data
Customer had Ubuntu 8.x on physical servers and containers
Hardware was 6-8 years old
Imminent need to migrate to cloud
Challenge: OS migration and upgrade needed during migration
Requirement: upgrade OS and migrate to cloud, all without impacting production
Customer's Original Plan (one server at a time; 6-month+ plan):
Hand-create Ubuntu 12.x AMI in cloud
Install application
Move application data
Test
Customer's Revised Plan (200 servers moved in 14 days):
Deploy SUREedge instance
Capture server images
Boot image in SUREedge as VM
Upgrade OS
Replicate to cloud and test
Final synchronization and cutover
Workflow: Plan → Capture from any platform → Replicate → Transform to alt. platform → Recover
33. Migration and DR to GCP
[Architecture diagram: groups of VMs (VM1-VM3) at the customer site, replicated and recovered in the cloud]
Customer site:
• SUREedge instance at source site
• Agentless, application-aware capture from virtual or physical
Cloud:
• SUREedge instance in Cloud
• Recover for testing as needed
Recovery N/W and Test N/W in the cloud; global monitoring
Customer site connected to the cloud via VPN / Direct Connect
Deduplicated, compressed, encrypted bi-directional replication; dedupe at both ends
37. For more information on Google Migration…
For information on Google Cloud Platform, visit: Cloud.Google.com
For more information on Sureline Systems, contact:
• Email: info@SurelineSystems.com
• Phone: 408-331-7940
Contact Sureline for:
• A free custom demonstration
AND
• A limited-time $200 incremental credit for the Google Cloud trial
Editor's Notes
Landing page - http://www.actualtech.io/migrate-google-webinar
** slides full screen
** make us all organizers
** click SHOW MY SCREEN
** Start broadcast
** RECORD
——
Hello and Welcome to….. How to Migrate Workloads to the Google Cloud Platform
My name is David Davis and I'll be the moderator, and initial speaker, on today's event, sponsored by Sureline Systems
If you have questions during the webinar, please use the gotomeeting question box to enter your question.
During the webinar there are 2 points where we ask you to answer a couple of quick survey questions – we appreciate your help in answering those.
Three lucky attendees today will each walk away with a $100 VISA gift card, so stay tuned for that drawing!
We’ve got a lot to cover so Let’s get started!!
So that you have a little background on who is presenting today, let me first tell you about myself.
My name is David Davis and I'm a VMware vExpert, VCP, CCIE, and a video training author on the topic of virtualization for pluralsight.com. I started my career in IT as a server and network admin, working in the datacenter. Later I was an IT manager at a medium enterprise where we had a very successful server consolidation project, consolidating roughly 80% of our servers. It was then that I learned about the great power and efficiency that virtualization could provide. Since then, I've been writing, speaking, and creating video training around virtualization. I've spoken at VMware user groups and VMworld in the US, Canada, and Europe. I'm the co-owner of actualtechmedia.com, where we create technical marketing content and demand generation for companies in virtualization, storage, and cloud computing. My blog is virtualizaitonsoftware.com and you can find me on Twitter as @DavidMDavis.
Unfortunately, our speaker from Google Cloud has an unexpected last minute event and wasn’t able to join us on today’s webinar HOWEVER, we have his presentation, which I’ll be covering AND, he authorized us to give out $200 google cloud credits to today’s attendees so stay tuned for information on how to get one of those.
I’m glad to be joined by Mr Jack Woy-CHOW-SKI – Jack tell us a little about yourself…
Now that you know who we are, let’s start this webinar off by talking about some of the ongoing challenges we are facing in IT.
GCP, is it the answer?
What you need to know to migrate to the cloud
How to plan, test, and cutover
How to migrate tens or thousands of workloads and minimize downtime
Lets start with compute
First up, Google is a big platform that hundreds of millions (billions?) of people use every day, and we provide an incredible platform that allows our customers to run their businesses on the same platforms, in the same datacenters, where we run these services at incredible scale.
Google Cloud Platform setup
But we realize that different levels of abstraction aren't equally well suited to all applications, so we offer a continuum of compute, from virtualized hardware in compute engine, through containers and up to the top of the stack with App Engine which allows you to build and run PaaS apps.
Which is right for you or your customer? Yes.
The reality is that there are many different types of applications in various states of existing and new architectures, so we offer this breadth so customers can adopt in the way that makes best sense for them.
Transparency in maintenance
So, how do you decide which storage option is right for your app?
Boot volumes and Bulk Storage are great cases for Standard PD because they have low IOPS, low throughput, and basically just want to occupy the cheapest reliable space available.
Streaming IO is a use case that works for Standard PD as well. While the other two options have a much lower cost per IOPS, the cost per throughput is generally better on Standard PD. Reading large sequential blocks is what disks are good at.
SQL databases (like MySQL and Postgres), and NoSQL databases (like Cassandra, Mongo, and Redis) tend to be transactional and IOPS heavy. Smaller instances can run on Standard PD, but important production databases should usually be on some form of SSD - either SSD PD or Local SSD depending on how extreme the IOPS needs.
File servers (like NFS, Gluster, and Ceph), can be more streaming or more transactional depending on what the clients are doing. SSD PD is generally a good choice here.
High performance scratch disk where keeping the traffic local is the right architecture should go on Local SSD
For Hadoop deployments, where IO needs exceed what the Google Cloud Storage Connector for Hadoop can provide, you should use Local SSD underneath a Hadoop optimized filesystem
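The storage-selection guidance in the notes above boils down to a simple decision rule. The helper below is a hypothetical sketch of that rule for illustration; the workload names and the function itself are illustrative, not any official API.

```python
# Hypothetical helper mapping the workload types discussed above to
# the recommended block-storage option. The mapping mirrors the
# slide notes; names are illustrative, not an official API.

def recommend_disk(workload):
    standard_pd = {"boot volume", "bulk storage", "streaming io"}
    ssd_pd = {"sql database", "nosql database", "file server"}
    local_ssd = {"scratch", "hadoop"}
    w = workload.lower()
    if w in standard_pd:
        return "Standard PD"   # cheap, reliable, throughput-friendly
    if w in ssd_pd:
        return "SSD PD"        # transactional, IOPS-heavy
    if w in local_ssd:
        return "Local SSD"     # lowest latency, node-local
    return "Standard PD"       # cheapest reliable default
```

As the notes say, the default for anything not IOPS- or latency-sensitive is Standard PD, simply because it is the cheapest reliable space available.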
Our secure network is the largest ISP in the world by traffic and powers Google’s own services: Search, YouTube, Maps, Docs.
Serve from the point nearest your users minimizing latency and increasing reliability
You can connect your infrastructure to Google via more than 70 edge locations in 33 countries creating the broadest reaching network of any cloud provider.
We provide a variety of ways to connect …
Innovation items:
Big Data, and the launch of Cloud Dataflow
Kubernetes, containers, hybrid
Developer tools and debugging/trace/monitoring
Super-scalable storage options, from SSD to object and everything in between
Transparent maintenance; industry-leading local SSD
The most powerful SDN on the planet!!
Company focused on system and data transformation and application mobility
Technology allows movement of systems and data from any source – physical, virtual, public or private cloud – to any platform, esp into GCP, using a controlled and manageable process and workflow.
Fix build!
Products:
Migrator for migrating and on-boarding systems from anywhere, to anywhere
DR – local, site-to-site, or cloud – especially between incompatible platforms
Enterprise Manager – managing multiple SE instances, giving global view of Mig/DR
Any to any: capture from any type of source – physical, virtual, cloud – and move to any virtualized destination, handling any transformations and compatibility issues
Application support: to assure capture of all needed components and configuration, and consistent data and system images.
Smart data management: unique global deduplication technology as well as compression to efficiently use storage and bandwidth, encryption to protect data in flight and at rest; move as little as possible, and move it safely
Manageability: Efficient DR and migration planning and workflows to simplify and automate operations as well as easily handle special cases and exceptions; including built-in support for testing and verification. All managed globally across enterprise from a single interface.
Solution architecture:
* instance on site to perform agentless capture, local recoveries, local testing and operations (upgrades, etc.)
*instance at GCP to receive data streams and perform cloud recoveries for testing and final cutover.
The two instances communicate safely and efficiently, using encryption to protect the data and global deduplication and compression to greatly reduce bandwidth usage.
Done within framework of our workflow:
Plan: discover and add servers and configuration information, and organize it into groups or “Plans” to be replicated or transferred as one group
Initiate efficient capture – no agents required, instance reaches out to the systems and gets the data, capturing intelligently (no zeroes, no sparse file sections, etc.)
Immediately on completion data is transferred from source to destination, encrypted and deduplicated – safe, efficient
Transformed into the target environment's required format, along with other requirements (v12n drivers, etc.)
Recovered in the destination environment. In the DR case, DR tests can be performed to verify the process and assure it will work when needed. In the migration case, systems can be instantiated for testing, e.g. in an isolated network, to assure final cutover will go smoothly.
In both cases data is/can be synced to catch changes to the source data, like an incremental backup, to assure small RTOs (DR) or reduce cutover window (Migration)
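The Plan, Capture, Replicate, Transform, Recover workflow described above can be sketched as a small orchestration class. This is a minimal illustration of the sequence only: all class and method names are hypothetical, since SUREedge's actual interfaces are not public.

```python
# Minimal sketch of the Plan -> Capture -> Replicate -> Transform ->
# Recover workflow described in the notes. All names are hypothetical.

class MigrationPlan:
    def __init__(self, name, servers):
        self.name = name          # a group replicated/cut over as one unit
        self.servers = servers
        self.log = []

    def capture(self):
        # Agentless capture: the on-site instance pulls images directly.
        for s in self.servers:
            self.log.append(f"captured {s}")

    def replicate(self):
        # Deduplicated, compressed, encrypted transfer to the cloud side.
        for s in self.servers:
            self.log.append(f"replicated {s}")

    def transform(self):
        # Convert each image to the target platform's format (drivers, etc.).
        for s in self.servers:
            self.log.append(f"transformed {s}")

    def recover(self, test=True):
        # Bring systems up in the destination, in test mode or for cutover.
        mode = "test" if test else "cutover"
        for s in self.servers:
            self.log.append(f"recovered {s} ({mode})")

    def run(self):
        self.capture(); self.replicate(); self.transform(); self.recover()
        return self.log
```

Grouping servers into a plan is what lets an entire multi-server application be captured, moved, and brought up as one unit rather than machine by machine.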
Other solutions require live, in-cloud counterparts for each source system, as well as an individual connection between them. Results in many holes in the firewall and a broad security attack landscape. Also requires lots of bandwidth, and cost of running target systems in the cloud
SUREedge technology only requires a single connection between the sites, from the source SE instance to the destination instance. One hole to punch in the firewall, one connection to protect, and target systems can be brought up on demand.
Per-system replication moves a lot of duplicate data across the WAN; even if individual systems are deduplicated, common data across systems, like the OS, is repeatedly pushed across.
Also, if connectivity is lost (and since the transfer is across the WAN, and thus slower than the local network, interruptions are more likely), the system replication must start again from scratch. Consumption is also tracked on a per-system basis, making it difficult to control overall WAN bandwidth consumption.
SE’s global deduplication eliminates duplicate data across all source systems, and even across multiple source sites. This significantly reduces the amount of data that needs to be pushed over the WAN.
Since captures are local, they complete more quickly reducing the time capture resources are consumed, and reducing the likelihood of interruption. Resilient in the face of connectivity issues. Compression reduces even further, and bandwidth throttling allows control over when WAN is used for moving images – more precise than trying to control many individual connections.
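The global-deduplication idea described above can be shown in a few lines: chunk every source image, hash each chunk, and send a chunk's bytes over the WAN only the first time that hash is seen across all systems. This is an illustrative sketch only; the fixed chunk size and SHA-256 hashing are assumptions, since SUREedge's actual chunking scheme is not public.

```python
# Illustrative sketch of global deduplication across source images.
# Chunk size and hash choice are assumptions, not the product's design.

import hashlib

def dedup_transfer(images, chunk_size=4096):
    """images: dict of name -> raw bytes. Returns bytes that would
    actually cross the WAN after global dedupe (no compression)."""
    seen = set()
    bytes_sent = 0
    for name, data in images.items():
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).digest()
            if digest not in seen:       # first copy crosses the WAN
                seen.add(digest)
                bytes_sent += len(chunk)
            # duplicate chunks send only a short hash reference
    return bytes_sent
```

Because the `seen` set is shared across every image (and, in the real product, across sites), two near-identical servers cost barely more to replicate than one, which is where the "100s of TB become 10s of TB" claim comes from.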
Many products move systems as individual servers, requiring that they be “reassembled” in the target environment. Doesn’t work for multi-server and multi-tier applications. If you have a web app with backend systems then they are moved one at a time, and need to be re-introduced to each other at the target site after recovery.
Using plans and recovery sets, SE defines the relationship between systems and allows them to be captured together, recovered together and brought up in the appropriate sequence. (Can also transform VM characteristics to increase or reduce CPU, memory, modify networking, etc.)
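Bringing interdependent systems up "in the appropriate sequence," as described above, is essentially a dependency-ordering problem. The sketch below illustrates one way to express it with a topological sort; the dependency graph and names are illustrative, modeled on the three-tier app from the slide (UI depends on the Java server, which depends on SQL).

```python
# Hedged sketch: recover each system only after the systems it
# depends on are up, via a topological sort. The graph below is
# an illustration of the slide's three-tier example.

from graphlib import TopologicalSorter  # Python 3.9+ stdlib

def recovery_order(dependencies):
    """dependencies: {system: set of systems it depends on}.
    Returns systems in a safe bring-up order."""
    return list(TopologicalSorter(dependencies).static_order())

order = recovery_order({
    "apache-ui": {"java-server"},   # UI needs the app tier
    "java-server": {"sql-db"},      # app tier needs the database
    "sql-db": set(),                # database comes up first
})
```

Capturing the relationship once, in the plan, means the same ordering is applied automatically at every test recovery and at final cutover, instead of being re-derived by hand each time.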
Tools bad:
One-at-a-time, don’t expect complex interrelationships between systems; limitations on sizes, supported filesystems; not for WAN so bandwidth-expensive; no process or workflow, track by hand; failure is common, and tracking causes and remedying them must be done by hand.
SE good:
Build plans to model multi-system/multi-tier apps, etc.; handle transformations and special cases within workflow; built for WAN efficiency and resiliency; system and operation status tracked for you; robust error reporting
These features - data protection, bandwidth reduction, manageable workflow, lead to: easier migration, higher success rates, reduced risk, lower cost and faster time to operation in GCP.
Customer had large number of older systems that they were afraid to even turn off, lest they never turn on again. Wanted to move into the cloud for the normal reasons, but they had several issues: they were running back-revision versions of their OSes not supported by the cloud platform; other systems were running as containers and needed to be converted into full-fledged VMs; and they were running a custom in-house application. They needed to not only move, but to upgrade the OS, convert their containers, port and test the app, all without impacting production.
Original proposal was simply to hand create the systems in the cloud, port the application, and then somehow move the data and server functionality one at a time. The operation was expected to take six months, optimistically, and would incur significant downtime to portions of the app throughout. “Even if it was free, it was going to be painful.”
When we evaluated the project we built the OS upgrade, application port, and testing into the process. We deployed SE on site and started capturing servers, then brought them up locally in their datacenter's V12N environment to perform upgrades and port the application. The upgrade and app installation were then automated and built into the workflow. In addition, we automated the process of converting containers into full VMs, "injecting" the OS components automatically. The upgraded images were then replicated to the cloud, brought up, and tested there. To finish things off, a final incremental synchronization was performed, a final transfer was made, and the systems were brought up in the cloud and cut over. All in all, 200 servers were captured, upgraded, converted, and cut over in two weeks' time, with only minor interruption.
And now I’d like to hand things over to Sanjay Kale who will show you the highlights of SUREedge Migrator and take you through a test-drive of the workflow.
Solution architecture: instance on site to perform capture and local recoveries, instance in the cloud to receive data streams and perform cloud recoveries for testing and final cutover. Efficient communication between them – globally deduplicated, compressed.