It is hard to avoid cloud solutions and their capabilities in today's digital world. Competition between cloud providers drives up service quality and variety, and GCP and AWS are among the market leaders. Their services and capabilities differ enormously, which makes the question "Which cloud provider should I choose?" very hard to answer.
Businesses often decide not to pick a single winner, but to combine the strengths of two (or more) providers, each supplying the services they need. Bare-metal deployments should not be forgotten either. All of these requirements create a host of new problems and challenges for engineers and architects.
Today we will talk about the problems that arise when deploying our stack's applications to different clouds.
DevOps Fest 2020. Станислав Коленкин. How to connect non-connectible: tips, tricks and tears
1. Continuous Delivery. Continuous DevOps. KYIV, 2020
CONTINUOUS DELIVERY. CONTINUOUS DEVOPS.
20-21 March 2020
KYIV, UKRAINE
How to connect non-connectible:
tears, more tears, tips and tricks
Introduction
If your business uses cloud computing–as most businesses do these
days–it’s very likely that you have at least one public cloud solution.
The “public cloud” refers to cloud computing services such as
storage, software, and virtual machines that are provided by third
parties over the internet. Some of the biggest public cloud providers are
Amazon Web Services, Microsoft Azure, and Google Cloud Platform.
Increasingly, however, companies are growing interested in a “cloud
agnostic” strategy. So what does “cloud agnostic” mean, and how can
your own business be cloud agnostic?
Cloud Agnostic
One of the greatest benefits of cloud computing is its flexibility. If
you’re running out of storage, for example, your public cloud solution can
automatically scale it up for you so that your operations will continue
seamlessly.
Being “cloud agnostic” takes this idea of the flexible cloud one step
further. As the name suggests, cloud agnostic organizations are those
capable of easily running their workloads and applications within any
public cloud.
Cloud Agnostic
The fact that an organization is “cloud agnostic” doesn’t mean that
it’s completely indifferent as to which cloud provider it uses for which
workloads. Indeed, the organization will likely have established
preferences for their cloud setup, based on factors such as price, region,
and the offerings from each provider.
Rather, being cloud agnostic means that you’re capable of switching
tracks to a different public cloud provider should the need arise, with
minimal hiccups and disruption to your business.
Cloud Agnostic: Pros
● No vendor lock-in: As mentioned above, being cloud agnostic makes
the risk of vendor lock-in much less likely. Companies that are cloud
agnostic can “diversify their portfolio” and become more resilient to
failure and changes in the business IT landscape.
● More customization: Using a strategy that’s cloud agnostic and
multi-cloud lets you tweak and adjust your cloud roadmap exactly as
you see fit. You don’t have to miss out on a feature that’s exclusive to
a single provider just because you’re locked into a different solution.
● Redundancy: Having systems in place across various clouds means
you are covered should any one of them encounter problems.
Cloud Agnostic: Cons
● Greater complexity: Being cloud agnostic sounds great on paper, but
the realities of implementation can be much more difficult. Creating
a cloud strategy with portability built in from the ground up
generally incurs additional complexity and cost.
● “Lowest common denominator”: If you focus too much on being
cloud agnostic, you may only be able to use services that are offered
by all of the major public cloud providers.
Strategies for Being Cloud Agnostic
Nevertheless, there are a number of “low-hanging fruit” technologies
that you can adopt on the path toward being cloud agnostic. These will
be advantageous for your business no matter where you stand on the
cloud agnostic spectrum.
For example, container technologies such as Docker are an invaluable
part of being cloud agnostic. Essentially, a “container” is a software unit
that packages source code together with its libraries and dependencies.
This allows the application to be quickly and easily ported from one
computing environment to another.
Don’t forget about containerd.
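As a sketch of that portability, an image is built once and then runs on any host with a container runtime; the image name and port below are placeholder values:

```shell
# Build an image once; the Dockerfile bundles the code with its
# libraries and dependencies ("myapp" and port 8080 are placeholders).
docker build -t myapp:1.0 .

# The same image then runs unchanged on a laptop, a GCP VM, an AWS VM,
# or a bare-metal host.
docker run --rm -p 8080:8080 myapp:1.0
```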
Strategies for Being Cloud Agnostic
Kubernetes is an open-source container-orchestration system for
automating application deployment, scaling, and management. It was
originally designed by Google, and is now maintained by the Cloud
Native Computing Foundation.
But many other orchestration systems exist, such as:
• Docker Swarm
• Mesos
• OpenShift (Kubernetes under the hood)
• etc.
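Because every conformant Kubernetes cluster exposes the same API, one manifest can be applied to clusters in different clouds; the kubectl context names below are hypothetical:

```shell
# The same manifest applies unchanged to any conformant cluster;
# only the kubectl context changes (context names are placeholders).
kubectl --context=gke-prod-cluster apply -f deployment.yaml
kubectl --context=eks-prod-cluster apply -f deployment.yaml
kubectl --context=bare-metal-cluster apply -f deployment.yaml
```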
Strategies for Being Cloud Agnostic
Another tactic for being cloud agnostic is to use managed database
services. These are public cloud offerings in which the provider installs,
maintains, manages, and provides access to a database. The major public
clouds such as AWS, Microsoft Azure, and Google all offer at least a
theoretical path for migrating between providers.
Being able to deploy to any cloud, including fully on-premises, is the
easiest and most cost-effective way to remain cloud agnostic: with
virtually one click, you can save your settings and deploy to whatever
environment your enterprise wishes. In short, simplicity equals
operational cost efficiency.
Multi-Cloud problems
Problems:
• Network connectivity
• Routing
• Multi-project connectivity (hub-and-spoke architecture)
• DNS name resolution
• Access to cloud providers' services
• Overlapping IP ranges (which also break DNS)
Multi-Cloud: Network Connectivity
MCR (Megaport Cloud Router) enables multi-cloud connectivity
between a Google Virtual Private Cloud (VPC) and an Amazon VPC without
deploying physical infrastructure. For other cloud service providers'
VPCs, you can replace the second half of the tutorial with instructions
from Megaport.
https://www.megaport.com/
Multi-Cloud: Network Connectivity
● An HA VPN gateway in GCP with two
interfaces.
● Two AWS virtual private gateways, which
connect to your HA VPN gateway.
● An external VPN gateway resource in GCP
that represents your AWS virtual private
gateway. This resource provides information
to GCP about your AWS gateway.
● Two tunnels from one AWS virtual private
gateway to one interface of the HA VPN
gateway.
● Two tunnels from the other AWS virtual
private gateway to the other interface of the
HA VPN gateway.
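On the GCP side, the topology above can be sketched with gcloud; all names, the region, the peer IPs, and the shared secret are placeholder values, and only one of the four tunnels is shown:

```shell
# HA VPN gateway with two interfaces (names/region are placeholders).
gcloud compute vpn-gateways create ha-vpn-gw \
    --network=my-vpc --region=us-central1

# External gateway resource describing the AWS virtual private gateway;
# the two interface IPs come from the AWS tunnel configuration.
gcloud compute external-vpn-gateways create aws-peer-gw \
    --interfaces=0=203.0.113.10,1=203.0.113.20

# One of the four tunnels: HA VPN interface 0 to AWS peer interface 0.
gcloud compute vpn-tunnels create tunnel-0 \
    --region=us-central1 --vpn-gateway=ha-vpn-gw --interface=0 \
    --peer-external-gateway=aws-peer-gw --peer-external-gateway-interface=0 \
    --router=my-cloud-router --ike-version=2 --shared-secret=SECRET
```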
Multi-Cloud: Network Connectivity
● In GCP, propagating a supernet route via the VPN tunnel is possible.
● In GCP, all routes are advertised by default, with no summarization.
● In GCP it is possible to add a custom route on the Cloud Router
(https://cloud.google.com/router/docs/how-to/advertising-overview),
and that route will be advertised via the tunnel, including a
supernet route, if advertisement is enabled.
● GCP route advertisement can be switched to custom mode so that only
custom ranges are advertised via the tunnel.
● In AWS we can NOT propagate custom routes or a supernet.
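Switching a Cloud Router to custom advertisement mode, including a supernet range, might look like this; the router name, region, and range are assumptions:

```shell
# Advertise only custom ranges (e.g. a supernet) instead of all subnets.
gcloud compute routers update my-cloud-router \
    --region=us-central1 \
    --advertisement-mode=custom \
    --set-advertisement-ranges=10.0.0.0/8
```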
AWS DNS
Route 53 Resolver provides two capabilities:
● Resolver endpoints for inbound queries
● Conditional forwarding rules for outbound queries
The IP address of the VPC's DNS server is the base of the VPC network
range plus two.
AWS Load Balancer
When an internal load balancer is created, it receives a public DNS
name with the following form:
internal-name-123456789.region.elb.amazonaws.com
The DNS servers resolve the DNS name of your load balancer to the
private IP addresses of the load balancer nodes for your internal load
balancer. Each load balancer node is connected to the private IP
addresses of the back-end instances using elastic network interfaces. If
cross-zone load balancing is enabled, each node is connected to each
back-end instance, regardless of Availability Zone. Otherwise, each node
is connected only to the instances that are in its Availability Zone.
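From inside the VPC this can be observed directly: resolving the load balancer's public name returns the private node addresses. The hostname below is the placeholder form from above:

```shell
# Run from an instance inside the VPC; the name resolves to the
# load balancer nodes' private IPs (hostname is a placeholder).
dig +short internal-name-123456789.us-east-1.elb.amazonaws.com
```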
GCP Load Balancer
Global access (BETA) is an optional parameter for internal LoadBalancer Services that
allows clients from any region in your VPC network to access the internal TCP/UDP
load balancer. Without global access, traffic originating from clients in your VPC network
must be in the same region as the load balancer. Global access allows clients in any
region to access the load balancer. Backend instances must still be located in the same
region as the load balancer.
Global access is enabled per-Service using the following annotation:
networking.gke.io/internal-load-balancer-allow-global-access: "true".
Global access is not supported with legacy networks. Normal inter-region traffic costs
apply when using global access across regions. Global access is currently Beta and is
supported only on Rapid Channel clusters as of GKE 1.16.
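For an existing internal LoadBalancer Service, the annotation can be applied with kubectl; the Service name is hypothetical:

```shell
# Enable global access on an internal LB Service named "my-internal-svc".
kubectl annotate service my-internal-svc \
    networking.gke.io/internal-load-balancer-allow-global-access="true"
```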
GCP Load Balancer
Global access disabled:
● Clients must be in the same region as the load balancer. They must
also be in the same VPC network as the load balancer, or in a VPC
network that is connected to the load balancer's VPC network by using
VPC Network Peering.
● On-premises clients can access the load balancer through Cloud VPN
tunnels or interconnect attachments (VLANs). These tunnels or
attachments must be in the same region as the load balancer.
Global access enabled:
● Clients can be in any region. They still must be in the same VPC
network as the load balancer, or in a VPC network that is connected
to the load balancer's VPC network by using VPC Network Peering.
● On-premises clients can access the load balancer through Cloud VPN
tunnels or interconnect attachments (VLANs). These tunnels or
attachments can be in any region.
Access to Cloud Services
It is not rare for an application running on Google Kubernetes
Engine (GKE) to need access to Amazon Web Services (AWS) APIs. Maybe it
needs to run an analytics query on Amazon Redshift, access data stored
in an Amazon S3 bucket, convert text to speech with Amazon Polly, or use
any other AWS service. This multi-cloud scenario is common nowadays, as
companies work with multiple cloud providers.
Access to Cloud Services
Cross-cloud access introduces a new challenge: how to manage the
cloud credentials required to reach services in one cloud from workloads
running in another. The naive approach, distributing and storing cloud
provider secrets, is not secure: handing long-term credentials to every
service that needs to access AWS is hard to manage and a potential
security risk.
GCP Workload Identity
Workload Identity is the recommended way to access Google Cloud
services from within GKE because of its improved security properties
and manageability.
Workloads running on GKE must authenticate to use Google Cloud
APIs such as the Compute APIs, Storage and Database APIs, or Machine
Learning APIs. Once you configure the relationship between a
Kubernetes service account and a Google service account, any workload
running as the Kubernetes service account automatically authenticates
as the Google service account when accessing Google Cloud APIs.
GCP Workload Identity
KSA - Kubernetes service account
GSA - GCP service account
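Wiring a KSA to a GSA takes one IAM binding plus one annotation; the project, namespace, and account names below are placeholders:

```shell
# Allow the KSA to impersonate the GSA (all names are placeholders).
gcloud iam service-accounts add-iam-policy-binding \
    my-gsa@my-project.iam.gserviceaccount.com \
    --role=roles/iam.workloadIdentityUser \
    --member="serviceAccount:my-project.svc.id.goog[my-namespace/my-ksa]"

# Point the KSA at the GSA so pods running as the KSA authenticate
# as the GSA when calling Google Cloud APIs.
kubectl annotate serviceaccount my-ksa --namespace=my-namespace \
    iam.gke.io/gcp-service-account=my-gsa@my-project.iam.gserviceaccount.com
```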
AWS fine-grained IAM roles for service accounts
The IAM roles for service accounts feature is available on new
Amazon EKS Kubernetes version 1.14 clusters, and clusters that were
updated to versions 1.14 or 1.13 on or after September 3rd, 2019.
After you have enabled the IAM OIDC identity provider for your
cluster, you can create IAM roles to associate with a service account in
your cluster.
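With eksctl, enabling the OIDC provider and creating a role-backed service account might look like this; the cluster name, namespace, service account name, and policy are placeholder choices:

```shell
# Register the cluster's OIDC issuer with IAM (cluster name is a placeholder).
eksctl utils associate-iam-oidc-provider --cluster=my-cluster --approve

# Create a service account bound to an IAM role carrying the given policy.
eksctl create iamserviceaccount --cluster=my-cluster \
    --namespace=default --name=my-sa \
    --attach-policy-arn=arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
    --approve
```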
Access to Cloud Services
Anthos Config Connector is a
Kubernetes add-on that lets you
manage GCP resources, such as
Cloud Spanner or Cloud Storage,
through your cluster's API.
Even though Config Connector
is designed for GKE, it can easily
be installed in any Kubernetes
environment.
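As a sketch, once Config Connector is installed a GCP resource is declared like any other Kubernetes object; the bucket name is hypothetical, and the apiVersion assumes the v1beta1 Config Connector CRDs:

```shell
# Declare a Cloud Storage bucket through the Kubernetes API
# (the bucket name is a placeholder and must be globally unique).
kubectl apply -f - <<EOF
apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  name: my-example-bucket
EOF
```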