Infatuation Leads to Love—
How Container Orchestration and
Federation Enables More
Multi-Cloud Competition
Transcript of a discussion on new ways to gain container orchestration, use Serverless
models, and employ inclusive management to keep the container love alive and well.
Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript.
Sponsor: DigitalOcean.
Dana Gardner: Welcome to the next edition of BriefingsDirect. I’m Dana Gardner,
Principal Analyst at Interarbor Solutions, your host and moderator.
The use of containers by developers -- and now increasingly IT operators -- has grown
from infatuation to deep and abiding love. But as with any long-term affair, the
honeymoon soon leads to needing to live well together -- and maybe even getting some
relationship help along the way.
And so it goes with container orchestration and automation solutions, which are rapidly
emerging as the means to maintain the bliss between rapid container adoption and
broad container use among multiple cloud hosts.
This BriefingsDirect cloud services maturity discussion focuses on new ways to gain
container orchestration, to better use Serverless models, and employ inclusive
management to keep the container love alive.
Here to help unpack insights into the new era of using
containers to gain ease with multi-cloud deployments
are our panelists, Matt Baldwin, Founder and CEO at
StackPointCloud, based in Seattle. Welcome, Matt.
Matt Baldwin: How are you?
Gardner: I’m great. We’re also here with Nic Jackson,
Developer Advocate at HashiCorp, based in San
Francisco. Hello, Nic.
Nic Jackson: Hey, how are you doing?
Gardner: Doing well. We are here too with Reynold Harbin, Director of Product
Marketing at DigitalOcean, based in New York. Hello, Reynold.
Reynold Harbin: Hi, Dana. Thanks for having us.
Gardner: Delighted to have you with us. Nic, let’s start with you. HashiCorp has gone a
long way to enable multi-cloud provisioning. What are some of the trends now driving the
need for multi-cloud? And how does container management and orchestration fit into the
goal of obtaining functional multi-cloud use, or even interoperability?	
Jackson: What we see mainly from our enterprise customers is
that people are looking for a number of different ways to avoid getting locked into one
particular cloud provider. They are looking
for high-availability and redundancy across cloud providers. They
are looking for a migration path from private cloud to a public cloud.
Or they want a burstable capacity, which means that they can take
that private cloud and burst it out into public cloud, if need be.
Containers -- and orchestration platforms like Kubernetes, Nomad
and Swarm -- are providing standard interfaces to developers. So
once you have the platform set up, the running of an application can
be mostly cloud-agnostic.
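To illustrate what that standard interface looks like in practice, here is a minimal sketch (not drawn from the discussion itself) that applies a Deployment with the official Kubernetes Python client; the names and image are placeholders, and the same manifest would apply unchanged to a cluster hosted by any provider the current kubeconfig context points at.

```python
# Sketch: the same Deployment manifest, expressed as a dict and applied with the
# Kubernetes Python client, runs unchanged on a cluster hosted by any provider.
# Names and image are placeholders.
from kubernetes import client, config

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {"containers": [{"name": "web", "image": "nginx:1.25"}]},
        },
    },
}

if __name__ == "__main__":
    config.load_kube_config()  # whichever cluster the current context points at
    client.AppsV1Api().create_namespaced_deployment("default", deployment)
```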
Gardner: There’s a growing need for container management and
orchestration for not only cloud-agnostic development, but potentially as a greasing of
the skids, if you will, to a multi-cloud world.
Harbin: Yes. If you make the investment now to architect
and package your applications with containers and intelligent
orchestration, you will have much better agility to move your
application across cloud providers.
This will also enable you to quickly leverage any new products on any cloud provider.
For example, DigitalOcean recently upgraded our High CPU Droplet plans, providing
some of the best value for accessing the latest chipsets from Intel. Users with
containerized applications and orchestration could easily improve application
performance by moving workloads over to that new product.
Gardner: And, Matt, at StackPointCloud you have
created a universal control plane for Kubernetes. How does that help in terms of ease of
deployment choice and multi-cloud use?
Ease-of-use increases flexibility
Baldwin: We’ve basically built a management control plane for Kubernetes that gives
you a single pane of glass across all your cloud providers. We deal with the top four, so
Amazon, Microsoft Azure, Google and DigitalOcean. Because we provide that single
pane of glass, you can build the clusters you need with those providers and you can
stand up federation.
In Kubernetes, multi-cloud is done via that federation. The federation control plane
connects all of those clusters together. We are also managing workloads to balance
workloads across, say, some on Amazon Web Services (AWS) and some on
DigitalOcean, if you like.
That’s what we have been doing with our star product. We are still on that journey, still
building more things. Because it’s moving quite fast, federation is shifting and changing.
We are keeping pace and trying to make it all easier to use.
Our whole point is usability. We think that all this tooling needs to become really, really
easy to use. You need to be able to manage multi-cloud as if it’s a single cloud.
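As a rough sketch of the "single pane of glass" idea -- not StackPointCloud's actual implementation -- the snippet below uses the Kubernetes Python client to query several clusters by kubeconfig context, one per provider; the context names are hypothetical.

```python
# Minimal sketch: list nodes across clusters running on different cloud providers.
# Assumes a kubeconfig with one context per cluster; the context names are hypothetical.
from kubernetes import client, config

CONTEXTS = ["aws-cluster", "gke-cluster", "do-cluster"]  # hypothetical names

def nodes_per_cluster(contexts):
    """Return {context: [node names]} for each configured cluster."""
    result = {}
    for ctx in contexts:
        # Build an API client bound to this context only.
        api_client = config.new_client_from_config(context=ctx)
        v1 = client.CoreV1Api(api_client)
        result[ctx] = [n.metadata.name for n in v1.list_node().items]
    return result

if __name__ == "__main__":
    for ctx, nodes in nodes_per_cluster(CONTEXTS).items():
        print(f"{ctx}: {len(nodes)} nodes -> {nodes}")
```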
Gardner: Reynold, with DigitalOcean being one of the major cloud providers that Matt
mentioned, why is it important for you to enable this level of multi-cloud use? Is it a
matter of letting the best public cloud services values win? Why do you want to see the
floodgates open for public cloud choice and interoperability?
Harbin: Thousands of businesses and over a million developers use DigitalOcean --
primarily because of the ease in provisioning and of being able to spin up and manage
their infrastructure. This next step of having orchestration tools and containers puts even
more flexibility into the hands of developers and businesses.
For customers who want to use data centers on DigitalOcean, or data centers on other
providers, we want to enable flexibility. We want developers to more easily burst into
public clouds as they need, and gain all the visibility they want in a common way across
the various infrastructure providers that they want to use.
Serverless pros and cons
Gardner: Developers are increasingly interested in a Serverless model, where they let
the clouds manage the allocation of machine resources. This also helps in cost
optimization. How do the container orchestration and management tools help? How
does Serverless, and the demand for it, also fit in?
Jackson: Serverless adds an extra layer of complexity, because the different cloud
providers have different approaches to doing Serverless. A Serverless function running
on Google or Azure or AWS -- they all have different interfaces. They have different ways
of deploying, and the underlying code has to be abstracted enough so that it can run
across all the different providers. You have to really think about that from a software
architectural problem, from that perspective.
In my opinion, you would allow yourself to get locked in if you use things like native
queuing or Pub/Sub services, which work really well with a particular cloud provider’s
Serverless platform.
One of the recent projects I’m super-excited about is OpenFaaS, by Alex Ellis. What
OpenFaaS tries to do is provide that cloud-agnostic method of running functions-as-a-
service (FaaS). This is not necessarily Serverless; you still have to manage the
underlying servers, but it does allow you to take advantage of your existing Kubernetes,
Nomad, or Docker Swarm Clusters. It then gives you the developer workflow, which I
think is the ultimate end-goal, rather than thinking about decoupling the complexity of the
infrastructure.
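As a rough illustration of keeping function code portable, the sketch below follows the OpenFaaS Python template convention of a handle(req) entry point; the business logic and the Lambda-style adapter are hypothetical, added only to show how the same core function could sit behind different runtimes.

```python
# handler.py -- sketch of keeping business logic independent of any one FaaS runtime.
import json

def resize_params(payload: dict) -> dict:
    """Provider-neutral core logic (a trivial stand-in for real work)."""
    width = int(payload.get("width", 100))
    height = int(payload.get("height", 100))
    return {"width": width * 2, "height": height * 2}

# OpenFaaS-style entry point (the Python template passes the raw request body).
def handle(req: str) -> str:
    return json.dumps(resize_params(json.loads(req or "{}")))

# Hypothetical AWS Lambda adapter reusing the same core logic.
def lambda_handler(event, context):
    return resize_params(event or {})
```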
Gardner: Reynold, any thoughts on Serverless?
Harbin: I agree. We are on this road of making it
easier for the application developer so they don’t
have to worry about the underlying infrastructure.
For certain applications, Serverless can help in that
goal, but at the same time, you’re adding
complexity. You have to think about the application,
the architecture, and which services are going to
be most useful in terms of applying Serverless.
We want to enable our developers to use whatever technologies will help them the most.
And for certain applications, Serverless will be relevant. OpenFaaS is really interesting,
because it makes it easier to write to one standard, and not have to worry about the
underlying virtual servers or cloud providers.
Jackson: The other neat thing about OpenFaaS is the maintainability. When you look at
application lifecycle management (ALM), which not enough people pay attention to,
Serverless is so new that ALM is still unknown.
But with OpenFaaS -- and one of the things that I love about that platform -- you are
baking functions into Docker containers so you can run those as standard microservices
outside of the OpenFaaS platforms, if you want. So you can see that kind of
maintainability. It gives you an upgrade path, despite being completely decoupled from
any particular cloud provider’s platform. So you gain flexibility.
If you want to go multi-cloud, you can run OpenFaaS on a federated Nomad or federated
Kubernetes cluster and you have your own private multi-cloud FaaS approach, which I
think is super cool.
Gardner: It sounds as if we would like to see the same trajectory we saw with containers
take place with Serverless, there is just a bit of a lag there in terms of the interoperability
and the extensibility.
Baldwin: There is also the Serverless Framework, which you can use to help abstract out
the Serverless endpoints -- so you can abstract over Lambda, Kubeless, Fission, or any
other. Kubeless and Fission are just two other projects that are more geared toward
Kubernetes than the others.
Gardner: Nic, tell us about your organization, HashiCorp. What are you up to?
Simplify, simplify
Jackson: We are all about delivering developer tooling to enable modern applications.
We have products like Nomad, which is a scheduler; Terraform, for infrastructure-as-
code; Consul, which you can use for key value configurations and service discovery;
Packer for creating gold master images; and Vault, which is becoming very popular for
managing “secrets” and things like that.
We are putting together a suite of products that can make integration super-easy, but
they actually work well standalone, too. You could just run Terraform if you want to, or
maybe you are just going to use Nomad and Consul, or maybe Consul and Vault. But
the aim is that we want to simplify a lot of the problems that people have when they start
building highly available, highly distributed and scalable infrastructures.
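For a concrete flavor of the service-discovery piece, here is a minimal sketch that queries Consul's HTTP catalog API with plain requests; the agent address and the "web" service name are assumptions made for illustration only.

```python
# Sketch: look up instances of a service from a local Consul agent.
# Assumes a Consul agent on localhost:8500 and a registered service named "web";
# both are assumptions for illustration.
import requests

CONSUL = "http://127.0.0.1:8500"

def service_endpoints(name: str):
    """Return (address, port) pairs for a service from Consul's catalog API."""
    resp = requests.get(f"{CONSUL}/v1/catalog/service/{name}", timeout=5)
    resp.raise_for_status()
    entries = resp.json()
    # ServiceAddress can be empty, in which case the node address applies.
    return [(e["ServiceAddress"] or e["Address"], e["ServicePort"]) for e in entries]

if __name__ == "__main__":
    print(service_endpoints("web"))
```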
Gardner: Reynold, tell us about DigitalOcean, and why you are interested in supporting
organizations like StackPointCloud and HashiCorp as they better provide services and
value to their customers.
Harbin: DigitalOcean is a very intuitive cloud services platform on which to run
applications. We are designed to help developers and businesses build their
applications, deploy them, and scale them faster, more efficiently, and more cost
effectively. Our products basically are cloud services with various configurations to
maximize CPU or memory available in our data centers around the world.
We also have storage, including object storage for unlimited scale, and block storage
that lets you attach a volume of any size, depending on your needs. And then we also
include networking services for securing and scaling -- from firewalling to load balancing
your applications.
All of these products are designed to be controlled, either through a simplified UI or
through a very simple API, a RESTful API, so that tools like Terraform or Kubernetes
orchestration through StackPointCloud can all be done through the single pane of glass
of your choice. And the infrastructure that underlies it is all controlled via the API.
The reason we are leaning to these kinds of
partnerships and tooling is because that’s what
our users want, what developers want. They want
easier ways to provision and manage
infrastructure. So if you want to use an
orchestration tool, then we want to make that as
easy and as seamless as possible.
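As an illustration of driving that infrastructure through the RESTful API, here is a hedged sketch that creates a Droplet with DigitalOcean's v2 API; the access token is read from the environment, and the region, size, and image slugs are illustrative placeholders rather than recommendations.

```python
# Sketch: provision a Droplet through DigitalOcean's RESTful API.
# The token is a placeholder and the region/size/image slugs are illustrative;
# check the API docs for the slugs available to your account.
import os
import requests

API = "https://api.digitalocean.com/v2"
TOKEN = os.environ["DO_TOKEN"]  # personal access token, supplied by you

def create_droplet(name: str) -> dict:
    body = {
        "name": name,
        "region": "nyc3",             # illustrative slug
        "size": "s-1vcpu-1gb",        # illustrative slug
        "image": "ubuntu-16-04-x64",  # illustrative slug
    }
    resp = requests.post(
        f"{API}/droplets",
        json=body,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["droplet"]

if __name__ == "__main__":
    droplet = create_droplet("example-node")
    print(droplet["id"], droplet["status"])
```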
Gardner: The infatuation with containers has moved into the full love affair level, at least
based on what I see in the market. But how do we keep this from going off the rails? We
have seen other cases where popularity can lead to some complexity. For example, with
the way virtual machines (VMs) were adopted to a point where sprawl became such an
issue.
What are the challenges we are facing, and how can organizations better prepare
themselves for a world of far more containers, and perhaps a world of more Serverless?
Container complexity
Baldwin: Containers are going to introduce a lot of complexity. I will just dig into one
level of complexity, which is security. How do you protect one host talking to another host?
You need to figure out how to protect one service talking to another service. How do you
secure that, how do you encrypt that traffic, how do you ensure that identity is handled?
Then you begin looking at other pieces of the puzzle, things like service mesh. We look
at things like Kubernetes and Istio as complementary, because you are going to need to
be able to observe all of these environments. You are going to have to do all the things
that you would have done with VMs, but there’s just an abundance of these things.
That’s kind of what we are seeing, and that’s the level of complexity.
The tooling is still trying to catch up, and a lot of the open source tools are still in
development, with some of the components still in alpha. There is a lot of need for ease-
of-use around these tools, a lot of need for better user interfaces. We are at the
beginning where, yes, we are trying to handle containers, and lots of containers all over
the place, and trying to figure out how these things are talking to each other, and being
able to just troubleshoot that.
How do you trace when your application starts to have an issue? How do you figure out
where in that environment the issue is showing up? You start to learn how to use
tools like Zipkin, or you introduce OpenTracing into your stack, things like that.
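A minimal sketch of what introducing OpenTracing into a stack can look like: it uses the vendor-neutral opentracing package and assumes a concrete tracer (for example one reporting to Zipkin or Jaeger) is registered elsewhere at startup; the operation and tag names are made up for illustration.

```python
# Sketch: wrapping a request handler with OpenTracing spans.
# A concrete tracer (e.g. one that reports to Zipkin or Jaeger) would be configured
# at startup -- that setup is omitted here, and the names below are made up.
import opentracing

tracer = opentracing.global_tracer()  # no-op tracer unless one is registered

def fetch_profile(user_id: str) -> dict:
    with tracer.start_active_span("fetch_profile") as scope:
        scope.span.set_tag("user.id", user_id)
        with tracer.start_active_span("db.query"):
            # ... call the datastore here; the child span times just this step
            profile = {"id": user_id, "name": "example"}
        return profile

if __name__ == "__main__":
    print(fetch_profile("42"))
```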
Gardner: Matt, what would you encourage people to do now? Experiment with more
tools, acquaint themselves with those tools, make demands on tools? How do they head
this off from a user perspective?
Tip-toe through the technology
Baldwin: I would begin by stepping into the water, going into the shallow end of the
pool by just starting to explore the technology.
I have seen organizations jump into these technologies. Take Kubernetes as an
example. I have seen organizations adopt Kubernetes really early, and then they started
to build their own Platform as a Service (PaaS) on top of it without actually being
involved in the project and being aware of what’s happening in the project.
So there is the danger of duplicating things that are happening in the roadmap --
duplicating something that the project will deliver in six months anyway.
And now you are stuck on Kubernetes version 1.2, and how do you move to the next
version of Kubernetes?
So I think there is a danger there with too early of an adoption, if you start to build too
much. But at the same time there is a need to conduct proof of concepts (POCs), to start
to shift some of your smaller services into new areas.
I think you need to introduce Istio into test environments and start to look at what that
does for you, and start looking at all the use cases around it, things like traffic shifting.
There are issues like how to do A/B deployments; service meshes can actually give
you that. So start to play with that and start to plan for the future, but maybe don’t
completely customize whatever you just built, because there is always a threat
that the project isn’t fully baked yet.
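To make the traffic-shifting example concrete, here is a hedged sketch of a weighted Istio VirtualService applied through the Kubernetes Python client; the service name, subsets, and 90/10 split are hypothetical, and Istio is assumed to already be installed in the test cluster.

```python
# Sketch: a weighted Istio VirtualService for A/B-style traffic shifting,
# applied with the Kubernetes Python client. The service name, subsets, and
# weights are hypothetical; Istio must already be installed in the cluster.
from kubernetes import client, config

virtual_service = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "VirtualService",
    "metadata": {"name": "reviews"},
    "spec": {
        "hosts": ["reviews"],
        "http": [{
            "route": [
                {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
            ]
        }],
    },
}

if __name__ == "__main__":
    config.load_kube_config()
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="networking.istio.io",
        version="v1alpha3",
        namespace="default",
        plural="virtualservices",
        body=virtual_service,
    )
```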
Gardner: Sounds like it might be time to be thinking strategically, as well as tactically in
how you approach these things. Maybe even get some enterprise architects involved so
that you don’t get too bogged down before the standards are cooked.
Nic, what do you see as the challenges with bringing containers to use in a multi-cloud
environment? What should people be thinking about to hedge against those challenges?
Sensible speed
Jackson: Look at just how fast things have
moved. I mean, Kubernetes as a product
practically didn’t exist two years ago. Nomad
didn’t really exist two years ago; I think it was only launched by HashiCorp in 2015. And
those products are still evolving.
And I think it was a really good comment that
you have to be careful about building on top of
these things, and then stray too far away from the stable branch. You could end up in a
situation where you can’t follow an upgrade path -- because one thing that’s for certain,
the speed of evolution isn’t going to slow down.
Always try to keep abreast of where the technology is, and always make sure you have
a clear upgrade path. You can do that through being sensible about abstraction. In the same way
that you would not necessarily depend on a concrete implementation in your code, you
would depend on interfaces. You have to take a similar approach to your infrastructure,
so we should be looking at depending upon interfaces, so that if a new component
comes along -- something that’s better than Kubernetes -- you can actually hot-swap
them out without having to go through years of re-platforming.
Gardner: Reynold, how do you see solving complexity in the evolution of these
technologies, and ways that early-adopters can resist getting bogged down as they
continue to mature?
Harbin: The two main points that Matt and Nic have brought up are really good ones.
Certainly visibility and security of these applications and these environments is really
important from a functionality perspective.
As Nic mentioned, the pace at which new technologies are being developed is intense.
You have to have an environment where you can test out these various tools, see what
works for you, do it in a way that you can get these ideas and run them and test them
and see how this technology can help your particular business. And a lot of this
infrastructure in many ways is almost disposable, because you can spin it up as you
need to, test it and then spin it down -- and it might only need to live for an hour or for a
couple of days.
Being aware of the tools, what’s happening in terms of new functionality, and then being
able to test that either locally or in a cloud environment is really going to be important.
Gardner: I was expecting at least one of you to bring up DevOps. That thinking about
development in conjunction with production, and making this more of a seamless
process would help. Am I off base? Matt, should DevOps be part of this solution set?
Shared language
Baldwin: Yes, it should be part of it. I guess my personal opinion on DevOps is that we
are moving more toward where Ops needs to become more and more invisible. It’s more
about shipping, and it’s more about focusing on the apps versus the infrastructure. And
so I just see more like the capital O going to lowercase o.
What I do think is interesting right now is that
developers and operators are now speaking
the same language. If you are looking at
Kubernetes, developers and operators are
now speaking the same language. They are
speaking in Kubernetes, and so that’s a very
big deal. So now the developer is building it in
the same way that the operator is going to
understand it. The operator is going to
understand how the microservice is built; the
developer is going to understand how it’s built. They are all going to understand
everything.
And then with multi-cloud, you could also do things like have your staging environment in
one cloud and promote your code so that your operators are running it in production on
another provider. You can promote that code across the network -- you can do things
like that, too.
I don’t think some of the traditional DevOps tooling -- things like Chef and Puppet -- has
as much of a future as it used to, because those tools did a lot of app management on
the hosts, and now that the apps are not living on the host anymore, there is not a lot for
them to do. You just build out a host at Amazon AWS, deploy Kubernetes, and let
Kubernetes take over from there. Some of those tools will lessen in importance; you
won’t have to know Puppet as much -- you likely won’t ever need to know Puppet.
Gardner: Nic, are you in the same camp, more Dev, less Ops, lowercase o?
More Dev, less Ops?
Jackson: I think it depends on two things. The first thing is the scale of your
organization. When you look at a lot of tools, and you look at a lot of information that’s
out there, it makes an assumption that everybody is operating at a fixed scale, and I don’t
think that’s the case. Pretty much any business that’s operating in a digital world -- which
is pretty much any business these days -- can take advantage of modern development
techniques. And depending on the scale, it also shifts who is potentially going to be doing
the infrastructure side of things.
In smaller companies, I think you are going to get more Dev than you will Ops, because
that may not be a scale that can support a dedicated operations team. But in larger
enterprise organizations, you may have more of a platform team, more of an operations
person who is using code to manage infrastructure.
In either case, there’s a requirement that developers have to have an appreciation and
an understanding of the platform to which they are deploying their code. They need to
have that because they need to have an understanding of how things like service
discovery works. How are the volumes working for persistent storage, how are things
going to work in terms of scale and scalability? So if you are going to be load testing it,
what are sort of the operational thresholds in terms of I/O for CPU or disk, and things like
that?
I think DevOps is a really powerful concept. I certainly love working in a world where I
can interact and work with the operations and the infrastructure teams. I benefit as a
software engineer, and I think the infrastructure engineers benefit because those sorts of
skills that we both have, we can share. So I really hope DevOps doesn’t go away, but I
think the level at which that interaction occurs does very much depend on scale of your
organization.
Shop around
Gardner: Are there examples of some organizations, large or small, that have embraced
containers, have multi-cloud in their sights, are maybe thinking about Serverless?
Baldwin: I have an example. This customer was a full-on Amazon shop, and they had
not migrated to microservices. Their first step was to move to Docker, and then we
moved them up to Kubernetes. These guys were an adtech firm and they had, as you
can imagine, ingress traffic that had a high charge to it, and that was billed by Amazon.
So they spent a lot of time negotiating a better cloud price-point with Google. What they
were able to do is stand up a Kubernetes cluster on Google Cloud and then shift the
workload that was needed at that better price-point. At the same time, they kept the rest
of the workload at Amazon because they were still relying on some of the other
underlining services of Amazon, things like Amazon Relational Database Service
(Amazon RDS).
So they didn’t want to completely move to Google, but they wanted to move something
that they were taking a really large hit on, on cost, and move that to Google. So I think
you are going to see multi-cloud first get used as a vendor tactic against the cloud
providers to try and negotiate a better price point. So if you are doing adtech, now you
are in a position where you can actually negotiate with Amazon, Google or whomever,
and get a better price and just move your workload to whomever gives it to you.
So that makes it a lot more competitive. That was an early example, one of the earlier
federation examples we have.
Gardner: The economic paybacks from that could be very significant, if you can
leverage better deals from your cloud providers. That could be a very significant portion
of your overall expenses.
Baldwin: It’s giving the power back to the consumer. We basically have a cloud
monopoly, and then smaller ones. So we have Amazon AWS, and so how do you work
against Amazon to reduce the price points, how do you try to break that?
And once you start to get power back to the consumer, that starts to weaken the
vendor’s hold on the end-user.
Gardner: Nic, an example that we can look to perhaps in a different way, one that
provides a business advantage?
Go public
Jackson: One of the things that we see for a lot of enterprise customers is the cloud
adoption phase. So I can’t give you the exact numbers, but the total market in terms of
compute for the big four cloud providers is about 30 percent. There is something like 60
percent to 70 percent of all of the existing compute still running in private data centers. A
lot of organizations are looking at moving that forward. They want to be able to adopt
cloud, for whatever reason. They want better tooling to be able to do that.
You can create a federated Kubernetes cluster, or a federated Nomad cluster, and you
can begin shifting your workload away from the private data center and into the cloud.
You can gain that clear migration path. It allows you to run both of those platforms side
by side, the distinct platform that the organization understands but also the modern
platform that requires learning in terms of tooling and behavior.
That’s going to be a typical approach for a lot of the
large enterprises. We are going to see a lot of the
shift from private data centers into public clouds. A
lot of the cloud providers are offering pretty attractive
reasons in terms of licensing to do that rather than
renew your license for your physical infrastructure.
Why don’t you just move it off into your cloud
provider?
But if you’re running tens of billions of dollars worth of
business, then any downtime is incredibly expensive. So you will want to ensure that you
have the maximum high availability.
Baldwin: You can see that Microsoft is converting a lot of their enterprise agreements to
move people over to Azure.
Jackson: Well, it’s not just Microsoft. I mean, Dell/EMC is one of the most aggressive. I
could imagine a great sales strategy for them is to say, “Well, hey, rather than buying a
new Dell server, why don’t you just lease one of these servers in the Dell cloud and we
will manage it for you.” And basically you are just shifting from a capital expenditure
(CapEx) to an operational expenditure (OpEx) model.
I think Oracle has a similar strategy, the Oracle cloud is up and coming. So the potential
is rather than paying for an Oracle database license you could just move that database
into the Oracle cloud and save yourself a lot of trouble around the maintenance of the
physical data center.
Gardner: Reynold, any thoughts on examples of how orchestration of containers may be
moving more toward Serverless models that have great benefits for your end users? As
a public cloud, where do you see a good example of how this all works to everyone’s
advantage?
No more either/or
Harbin: As developers move toward containers and orchestration, they can begin
looking at cloud providers not as a choice of either/or but as, “I get to use all of them,
and I get to use the products and services that are best for my particular application.”
An example of that would be a customer who was hosting their application and their
storage on Amazon AWS, and a month ago DigitalOcean released our new object
storage product called Spaces. Essentially it provides all the benefits of AWS S3
object storage, but the cost is 10 times lower, at least for bandwidth.
If this particular customer could containerize their application, which basically publishes
and posts content to object storage and delivers a lot of that to end users, they would
have the flexibility to take advantage of new products like Spaces that are being rolled
out all the time by various cloud providers. In this case, they could have easily moved
their application to DigitalOcean, taken advantage of our new object storage product,
and essentially lowered the total cost.
But it’s not just DigitalOcean products. New
technologies that can make your applications
better are being released all the time, as open
source projects and commercial products.
Companies will gain agility if their applications
are containerized, as they will be able to use
new technologies much more easily.
Baldwin: There are some great abstraction layers -- things like Minio -- so that you don’t
necessarily need to interact with the underlying object storage directly. You have a layer
that allows you to be ignorant of that, and such de-coupling is super-useful.
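A small sketch of that kind of de-coupling at the client level: because Spaces speaks the S3 API, the same boto3 code can target AWS S3 or a Spaces endpoint by swapping the endpoint URL; the credentials, bucket name, and region-specific endpoint here are placeholders.

```python
# Sketch: the same S3-style client code pointed at either AWS S3 or
# DigitalOcean Spaces by swapping the endpoint. Keys and bucket names are
# placeholders; Spaces speaks the S3 API, which is what makes the swap possible.
import os
import boto3

def object_store(endpoint_url=None):
    """Return an S3-compatible client; endpoint_url=None means AWS S3 itself."""
    return boto3.client(
        "s3",
        endpoint_url=endpoint_url,
        aws_access_key_id=os.environ["OBJ_KEY"],
        aws_secret_access_key=os.environ["OBJ_SECRET"],
    )

if __name__ == "__main__":
    # Same call against Spaces (region-specific endpoint) or plain S3.
    spaces = object_store("https://nyc3.digitaloceanspaces.com")
    spaces.put_object(Bucket="my-bucket", Key="hello.txt", Body=b"hello")
```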
Gardner: I’m afraid we are about out of time, but I wanted to give each of you an
opportunity to tell us how to learn more about your organization.
Matt Baldwin, how could people follow you and also learn more about StackPointCloud?
Baldwin: If you wanted to give Kubernetes a shot, we provide a turnkey marketplace
and management platform. So you just hit the site, log in with social credentials like
GitHub, and then you can start to build clusters. You can check it out via our blog on
Stackpoint.io. We also run all of the major meetups for the Kubernetes community, up
and down the West and East Coasts. So you can engage with us at any of the
Kubernetes events in Seattle, San Francisco, New York, and wherever. Also, just drop
into any Kubernetes Slack channel and ping me at baldwinmathew, or @baldwinmathew
on Twitter.
Gardner: Nic, same thing, how can people follow you and learn more about HashiCorp?
Jackson: HashiCorp.com is a great landing site because you can bounce out to the
various product sites from there. We also have a blog, which we are pretty active with.
We are generally publishing at least a couple of pieces ourselves on there every week,
but we are also syndicating other stuff that we find -- not necessarily always related to
HashiCorp, but just interesting technology things. So you can get access to the blog
through there, and on Twitter by following HashiCorp and myself; I am @sheriffjackson,
and I try to share stuff that I find interesting.
Gardner: And Reynold, learning more about DigitalOcean as well as following you or
other evangelists that you think are worthy?
Harbin: The community site on DigitalOcean has 1,700 really well-curated articles, so
do.co/community would be a good start. We have several technology-agnostic articles
about containerization, as well as articles on specific technologies like Kubernetes. They
are well written and will teach you how to get started.
And then of course, the DigitalOcean website is a good resource just for our own
product.
Gardner: I’m afraid we’ll have to leave it there. You’ve been listening to a sponsored
BriefingsDirect discussion on container orchestration and automation solutions as a
means to encourage broader adoption of containers and multi-cloud use.
We’ve learned about new ways to gain container control, and we’ve also heard about
Serverless and discussed some of the models around DevOps in order to grease the
skids toward more competitive cloud deployments and development.
So thanks to our guests, Matt Baldwin, Founder and CEO of StackPointCloud,
Nic Jackson, Developer Advocate at HashiCorp, and Reynold Harbin, Director of
Product Marketing at DigitalOcean.
I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for
this ongoing series of BriefingsDirect discussions. A big thank you to our sponsor
DigitalOcean for supporting these presentations.
Follow me on Twitter @Dana_Gardner and find more podcasts at BriefingsDirect.com.
Thanks again for joining! Please pass this content along to your IT community and do
come back next time.
Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript.
Sponsor: DigitalOcean.
Transcript of a discussion on new ways to gain container orchestration, use Serverless
models, and employ inclusive management to keep the container love alive and well.
Copyright Interarbor Solutions, LLC, 2005-2017. All rights reserved.
You may also be interested in:
• As enterprises face mounting hybrid IT complexity, new management solutions beckon
• How mounting complexity, multi-cloud sprawl, and need for maturity hinder hybrid IT's ability to
grow and thrive
• Get ready for the Post-Cloud World
• Inside story on HPC’s AI role in Bridges 'strategic reasoning' research at CMU
• Philips teams with HPE on ecosystem approach to improve healthcare informatics-driven
outcome
• Inside story: How Ormuco abstracts the concepts of private and public cloud across the globe
• How Nokia refactors the video delivery business with new time-managed IT financing models
• IoT capabilities open new doors for Miami telecoms platform provider Identidad IoT
• Inside story on developing the ultimate SDN-enabled hybrid cloud object storage environment
• How IoT and OT collaborate to usher in the data-driven factory of the future
Mais conteúdo relacionado

Mais procurados

Hitchhiker's Guide to Open Source Cloud Computing
Hitchhiker's Guide to Open Source Cloud ComputingHitchhiker's Guide to Open Source Cloud Computing
Hitchhiker's Guide to Open Source Cloud Computing
Mark Hinkle
 

Mais procurados (20)

Future of Open Source in a Cloudy World
Future of Open Source in a Cloudy WorldFuture of Open Source in a Cloudy World
Future of Open Source in a Cloudy World
 
Private Cloud with Open Stack, Docker
Private Cloud with Open Stack, DockerPrivate Cloud with Open Stack, Docker
Private Cloud with Open Stack, Docker
 
Fossetcon: Crash Course on Open Source Cloud Computing
Fossetcon: Crash Course on Open Source Cloud ComputingFossetcon: Crash Course on Open Source Cloud Computing
Fossetcon: Crash Course on Open Source Cloud Computing
 
Hitchhiker's Guide to Open Source Cloud Computing
Hitchhiker's Guide to Open Source Cloud ComputingHitchhiker's Guide to Open Source Cloud Computing
Hitchhiker's Guide to Open Source Cloud Computing
 
Crash Course in Open Source Cloud Computing
Crash Course in Open Source Cloud ComputingCrash Course in Open Source Cloud Computing
Crash Course in Open Source Cloud Computing
 
Cloud Computing Expo West - Crash Course in Open Source Cloud Computing
Cloud Computing Expo West - Crash Course in Open Source Cloud ComputingCloud Computing Expo West - Crash Course in Open Source Cloud Computing
Cloud Computing Expo West - Crash Course in Open Source Cloud Computing
 
Interop - Crash Course In Open Source Cloud Computing
Interop - Crash Course In Open Source Cloud ComputingInterop - Crash Course In Open Source Cloud Computing
Interop - Crash Course In Open Source Cloud Computing
 
Docker, cornerstone of cloud hybridation ? [Cloud Expo Europe 2016]
Docker, cornerstone of cloud hybridation ? [Cloud Expo Europe 2016]Docker, cornerstone of cloud hybridation ? [Cloud Expo Europe 2016]
Docker, cornerstone of cloud hybridation ? [Cloud Expo Europe 2016]
 
Microservices and docker
Microservices and dockerMicroservices and docker
Microservices and docker
 
Kubernetes - An introduction
Kubernetes - An introductionKubernetes - An introduction
Kubernetes - An introduction
 
Cloud Native Architectures for Devops
Cloud Native Architectures for DevopsCloud Native Architectures for Devops
Cloud Native Architectures for Devops
 
OpenCloudConf: It takes an (Open Source) Village to Build a Cloud
OpenCloudConf: It takes an (Open Source) Village to Build a CloudOpenCloudConf: It takes an (Open Source) Village to Build a Cloud
OpenCloudConf: It takes an (Open Source) Village to Build a Cloud
 
Build a Cloud Day SF - Crash Course on Open Source Cloud Computing
Build a Cloud Day SF - Crash Course on Open Source Cloud ComputingBuild a Cloud Day SF - Crash Course on Open Source Cloud Computing
Build a Cloud Day SF - Crash Course on Open Source Cloud Computing
 
InteropNY/CloudConnect 2014 - Quick Crash Course in Open Source Cloud Computing
InteropNY/CloudConnect 2014 - Quick Crash Course in Open Source Cloud ComputingInteropNY/CloudConnect 2014 - Quick Crash Course in Open Source Cloud Computing
InteropNY/CloudConnect 2014 - Quick Crash Course in Open Source Cloud Computing
 
prodops.io k8s presentation
prodops.io k8s presentationprodops.io k8s presentation
prodops.io k8s presentation
 
Crash Course in Open Source Cloud Computing
Crash Course in Open Source Cloud ComputingCrash Course in Open Source Cloud Computing
Crash Course in Open Source Cloud Computing
 
Open Source Tool Chains for Cloud Computing
Open Source Tool Chains for Cloud ComputingOpen Source Tool Chains for Cloud Computing
Open Source Tool Chains for Cloud Computing
 
Linux Foundation Collaboration Summit: Hitchhiker's Guide to the Cloud
Linux Foundation Collaboration Summit: Hitchhiker's Guide to the CloudLinux Foundation Collaboration Summit: Hitchhiker's Guide to the Cloud
Linux Foundation Collaboration Summit: Hitchhiker's Guide to the Cloud
 
FLUX - Crash Course in Cloud 2.0
FLUX - Crash Course in Cloud 2.0 FLUX - Crash Course in Cloud 2.0
FLUX - Crash Course in Cloud 2.0
 
LinuxFest Northwest: Crash Course in Open Source Cloud Computing
LinuxFest Northwest: Crash Course in Open Source Cloud Computing LinuxFest Northwest: Crash Course in Open Source Cloud Computing
LinuxFest Northwest: Crash Course in Open Source Cloud Computing
 

Semelhante a Infatuation Leads to Love — How Container Orchestration and Federation Enables More Multi-Cloud Competition

OpenNebulaConf 2014 - Cloud Automation for OpenNebula - Kishorekumar Neelamegam
OpenNebulaConf 2014 - Cloud Automation for OpenNebula - Kishorekumar NeelamegamOpenNebulaConf 2014 - Cloud Automation for OpenNebula - Kishorekumar Neelamegam
OpenNebulaConf 2014 - Cloud Automation for OpenNebula - Kishorekumar Neelamegam
OpenNebula Project
 

Semelhante a Infatuation Leads to Love — How Container Orchestration and Federation Enables More Multi-Cloud Competition (20)

Decision Makers Guide: Nomad vs Kubernetes
Decision Makers Guide: Nomad vs KubernetesDecision Makers Guide: Nomad vs Kubernetes
Decision Makers Guide: Nomad vs Kubernetes
 
Cloud foundry
Cloud foundryCloud foundry
Cloud foundry
 
Cloud Introduction .pptx
Cloud Introduction .pptxCloud Introduction .pptx
Cloud Introduction .pptx
 
HPE’s Erik Vogel on Key Factors for Driving Success in Hybrid Cloud Adoption ...
HPE’s Erik Vogel on Key Factors for Driving Success in Hybrid Cloud Adoption ...HPE’s Erik Vogel on Key Factors for Driving Success in Hybrid Cloud Adoption ...
HPE’s Erik Vogel on Key Factors for Driving Success in Hybrid Cloud Adoption ...
 
Kubernetes Vs. Docker Swarm: Comparing the Best Container Orchestration Tool ...
Kubernetes Vs. Docker Swarm: Comparing the Best Container Orchestration Tool ...Kubernetes Vs. Docker Swarm: Comparing the Best Container Orchestration Tool ...
Kubernetes Vs. Docker Swarm: Comparing the Best Container Orchestration Tool ...
 
Kubernetes: A Top Notch Automation Solution
Kubernetes: A Top Notch Automation SolutionKubernetes: A Top Notch Automation Solution
Kubernetes: A Top Notch Automation Solution
 
Cloud1 Computing 01
Cloud1 Computing 01Cloud1 Computing 01
Cloud1 Computing 01
 
Containerization Report
Containerization ReportContainerization Report
Containerization Report
 
How Consistent Data Services Deliver Simplicity, Compatibility, And Lower Cost
How Consistent Data Services Deliver Simplicity, Compatibility, And Lower CostHow Consistent Data Services Deliver Simplicity, Compatibility, And Lower Cost
How Consistent Data Services Deliver Simplicity, Compatibility, And Lower Cost
 
Introduction to cloud computing
Introduction to cloud computingIntroduction to cloud computing
Introduction to cloud computing
 
CLOUD_COMPUTING_PRESENTATION.pptx
CLOUD_COMPUTING_PRESENTATION.pptxCLOUD_COMPUTING_PRESENTATION.pptx
CLOUD_COMPUTING_PRESENTATION.pptx
 
The biggest constraint to devops in the cloud has a solution
The biggest constraint to devops in the cloud has a solutionThe biggest constraint to devops in the cloud has a solution
The biggest constraint to devops in the cloud has a solution
 
How Containers are Becoming The New Basic Currency For Pay as You Go Hybrid IT
How Containers are Becoming The New Basic Currency For Pay as You Go Hybrid ITHow Containers are Becoming The New Basic Currency For Pay as You Go Hybrid IT
How Containers are Becoming The New Basic Currency For Pay as You Go Hybrid IT
 
OpenNebula Conf 2014 | Cloud Automation for OpenNebula by Kishorekumar Neelam...
OpenNebula Conf 2014 | Cloud Automation for OpenNebula by Kishorekumar Neelam...OpenNebula Conf 2014 | Cloud Automation for OpenNebula by Kishorekumar Neelam...
OpenNebula Conf 2014 | Cloud Automation for OpenNebula by Kishorekumar Neelam...
 
OpenNebulaConf 2014 - Cloud Automation for OpenNebula - Kishorekumar Neelamegam
OpenNebulaConf 2014 - Cloud Automation for OpenNebula - Kishorekumar NeelamegamOpenNebulaConf 2014 - Cloud Automation for OpenNebula - Kishorekumar Neelamegam
OpenNebulaConf 2014 - Cloud Automation for OpenNebula - Kishorekumar Neelamegam
 
How docker & kubernetes can optimize the cost of hosting
How docker & kubernetes can optimize the cost of hostingHow docker & kubernetes can optimize the cost of hosting
How docker & kubernetes can optimize the cost of hosting
 
Open source based container solution in Azure - May Docker Meetup
Open source based container solution in Azure - May Docker MeetupOpen source based container solution in Azure - May Docker Meetup
Open source based container solution in Azure - May Docker Meetup
 
Final White Paper_
Final White Paper_Final White Paper_
Final White Paper_
 
PaaS with Docker
PaaS with DockerPaaS with Docker
PaaS with Docker
 
What is Docker & Why is it Getting Popular?
What is Docker & Why is it Getting Popular?What is Docker & Why is it Getting Popular?
What is Docker & Why is it Getting Popular?
 

Último

Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
vu2urc
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
Enterprise Knowledge
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
Joaquim Jorge
 

Último (20)

Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Tech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdfTech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdf
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Evaluating the top large language models.pdf
Evaluating the top large language models.pdfEvaluating the top large language models.pdf
Evaluating the top large language models.pdf
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century education
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
 

Infatuation Leads to Love — How Container Orchestration and Federation Enables More Multi-Cloud Competition

  • 1. Infatuation Leads to Love— How Container Orchestration and Federation Enables More Multi-Cloud Competition Transcript of a discussion on new ways to gain container orchestration, use Serverless models, and employ inclusive management to keep the container love alive and well. Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: DigitalOcean. Dana Gardner: Welcome to the next edition of BriefingsDirect. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator. The use of containers by developers -- and now increasingly IT operators -- has grown from infatuation to deep and abiding love. But as with any long-term affair, the honeymoon soon leads to needing to live well together -- and maybe even getting some relationship help along the way. And so it goes with container orchestration and automation solutions, which are rapidly emerging as the means to maintain the bliss between rapid container adoption and broad container use among multiple cloud hosts. This BriefingsDirect cloud services maturity discussion focuses on new ways to gain container orchestration, to better use Serverless models, and employ inclusive management to keep the container love alive. Here to help unpack insights into the new era of using containers to gain ease with multi-cloud deployments are our panelists, Matt Baldwin, Founder and CEO at StackPointCloud, based in Seattle. Welcome, Matt. Matt Baldwin: How are you? Gardner: I’m great. We’re also here with Nic Jackson, Developer Advocate at HashiCorp, based in San Francisco. Hello, Nic. Nic Jackson: Hey, how are you doing? Gardner: Doing well. We are here too with Reynold Harbin, Director of Product Marketing at DigitalOcean, based in New York. Hello, Reynold. Reynold Harbin: Hi, Dana. Thanks for having us. Baldwin
  • 2. Gardner: Delighted to have you with us. Nic, let’s start with you. HashiCorp has gone a long way to enable multi-cloud provisioning. What are some of the trends now driving the need for multi-cloud? And how does container management and orchestration fit into the goal of obtaining functional multi-cloud use, or even interoperability? Jackson: What we see mainly from our enterprise customers is that people are looking for a number of different ways so that they don’t get locked into one particular cloud provider. They are looking for high-availability and redundancy across cloud providers. They are looking for a migration path from private cloud to a public cloud. Or they want a burstable capacity, which means that they can take that private cloud and burst it out into public cloud, if need be. Containers -- and orchestration platforms like Kubernetes, Nomad and Swarm -- are providing standard interfaces to developers. So once you have the platform set up, the running of an application can be mostly cloud-agnostic. Gardner: There’s a growing need for container management and orchestration for not only cloud-agnostic development, but potentially as a greasing of the skids, if you will, to a multi-cloud world. Harbin: Yes. If you make the investment now to architect and package your applications with containers and intelligent orchestration, you will have much better agility to move your application across cloud providers. This will also enable you to quickly leverage any new products on any cloud provider. For example DigitalOcean recently upgraded our High CPU Droplet plans, providing some of the best values for accessing the latest chipsets from Intel. For users with containerized applications and orchestration, they could easily improve application performance by moving workloads over to that new product. Gardner: And, Matt, at StackPointCloud you have created a universal control plane for Kubernetes. How does that help in terms of ease of deployment choice and multi-cloud use? Ease-of-use increases flexibility Baldwin: We’ve basically built a management control plane for Kubernetes that gives you a single pane of glass across all your cloud providers. We deal with the top four, so Amazon, Microsoft Azure, Google and DigitalOcean. Because we provide that single pane of glass, you can build the clusters you need with those providers and you can stand up federation. In Kubernetes, multi-cloud is done via that federation. The federation control plane connects all of those clusters together. We are also managing workloads to balance Jackson Harbin
  • 3. workloads across, say, some on Amazon Web Services (AWS) and some on DigitalOcean, if you like. That’s what we have been doing with our star product. We are still on that journey, still building more things. Because it’s moving quite fast, federation is shifting and changing. We are keeping pace and trying to make it all easier to use. Our whole point is usability. We think that all this tooling needs to become really, really easy to use. You need to be able to manage multi-cloud as if it’s a single cloud. Gardner: Reynold, with DigitalOcean being one of the major cloud providers that Matt mentioned, why is it important for you to enable this level of multi-cloud use? Is it a matter of letting the best public cloud services values win? Why do you want to see the floodgates open for public cloud choice and interoperability? Harbin: Thousands of businesses and over a million developers use DigitalOcean -- primarily because of the ease in provisioning and of being able to spin up and manage their infrastructure. This next step of having orchestration tools and containers puts even more flexibility into the hands of developers and businesses. For customers who want to use data centers on DigitalOcean, or data centers on other providers, we want to enable flexibility. We want developers to more easily burst into public clouds as they need, and gain all the visibility they want in a common way across the various infrastructure providers that they want to use. Serverless pros and cons Gardner: Developers are increasingly interested in a Serverless model, where they let the clouds manage the allocation of machine resources. This also helps in cost optimization. How do the container orchestration and management tools help? How does Serverless, and the demand for it, also fit in? Jackson: Serverless adds an extra layer of complexity, because the different cloud providers have different approaches to doing Serverless. A Serverless function running on Google or Azure or AWS -- they all have different interfaces. They have different ways of deploying, and the underlying code has to be abstracted enough so that it can run across all the different providers. You have to really think about that from a software architectural problem, from that perspective. In my opinion, you would allow yourself to get locked in if you use things like the Native Queuing or Pub/Sub, which works really well with a particular cloud provider’s Serverless platform. One of the recent projects I’m super-excited about is OpenFaaS, by Alex Ellis. What OpenFaaS tries to do is provide that cloud-agnostic method of running functions-as-a- Introducing Simple and Reliable Cloud Object Storage
  • 4. service (FaaS). This is not necessarily Serverless, you still have to manage the underlying servers, but it does allow you to take advantage of your existing Kubernetes, Nomad, or Docker Swarm Clusters. It then gives you the developer workflow, which I think is the ultimate end-goal, rather than thinking about decoupling the complexity of the infrastructure. Gardner: Reynold, any thoughts on Serverless? Harbin: I agree. We are on this road of making it easier for the application developer so they don’t have to worry about the underlying infrastructure. For certain applications, Serverless can help in that goal, but at the same time, you’re adding complexity. You have to think about the application, the architecture, and which services are going to be most useful in terms of applying Serverless. We want to enable our developers to use whatever technologies will help them the most. And for certain applications, Serverless will be relevant. OpenFaaS is really interesting, because it makes it easier to write to one standard, and not have to worry about the underlying virtual servers or cloud providers. Jackson: The other neat thing about OpenFaaS is the maintainability. When you look at application lifecycle management (ALM), which not enough people pay enough attention to, Serverless is so new that ALM is still unknown. But with OpenFaaS -- and one of the things that I love about that platform -- you are baking functions into Docker containers so you can run those as standard microservices outside of the OpenFaaS platforms, if you want. So you can see that kind of maintainability. It gives you an upgrade path, despite being completely decoupled from any particular cloud provider’s platform. So you gain flexibility. If you want to go multi-cloud, you can run OpenFaaS on a federated Nomad or federated Kubernetes cluster and you have your own private multi-cloud FaaS approach, which I think is super cool. Gardner: It sounds as if we would like to see the same trajectory we saw with containers take place with Serverless, there is just a bit of a lag there in terms of the interoperability and the extensibility. Baldwin: There is also the Serverless framework they can use that helps to abstract out the Serverless endpoints. So abstract at Lambda or Kubeless or any other, Fission; Kubeless and Fission are just two other projects that are more geared toward Kubernetes than others. Gardner: Nic, tell us about your organization, HashiCorp. What are you up to? Simplify, simplify Jackson: We are all about delivering developer tooling to enable modern applications. We have products like Nomad, which is a scheduler; Terraform, for infrastructure-as- code; Consul, which you can use for key value configurations and service discovery; You have to think about the application, the architecture, and which services are going to be most useful in terms of applying Serverless.
Packer, for creating gold master images; and Vault, which is becoming very popular for managing "secrets" and things like that. We are putting together a suite of products that can make integration super-easy, but they actually work well standalone, too. You could just run Terraform if you want to, or maybe you are just going to use Nomad and Consul, or maybe Consul and Vault. But the aim is that we want to simplify a lot of the problems that people have when they start building highly available, highly distributed, and scalable infrastructures.

Gardner: Reynold, tell us about DigitalOcean, and why you are interested in supporting organizations like StackPointCloud and HashiCorp as they provide better services and value to their customers.

Harbin: DigitalOcean is a very intuitive cloud services platform on which to run applications. We are designed to help developers and businesses build their applications, deploy them, and scale them faster, more efficiently, and more cost-effectively. Our products are basically cloud services with various configurations to maximize the CPU or memory available in our data centers around the world. We also have storage -- object storage for unlimited scale, or block storage volumes of any size that you can attach, depending on your needs. And we also include networking services for securing and scaling, from firewalling to load balancing your applications. All of these products are designed to be controlled either through a simplified UI or through a very simple RESTful API, so that tools like Terraform, or Kubernetes orchestration through StackPointCloud, can all be used through the single pane of glass of your choice. And the infrastructure that underlies it is all controlled via the API.

The reason we are leaning into these kinds of partnerships and tooling is because that's what our users want, what developers want. They want easier ways to provision and manage infrastructure. So if you want to use an orchestration tool, then we want to make that as easy and as seamless as possible.

Gardner: The infatuation with containers has moved into the full love affair level, at least based on what I see in the market. But how do we keep this from going off the rails? We have seen other cases where popularity can lead to complexity. For example, virtual machines (VMs) were adopted to the point where sprawl became a serious issue. What are the challenges we are facing, and how can organizations better prepare themselves for a world of far more containers, and perhaps a world of more Serverless?

Container complexity

Baldwin: Containers are going to introduce a lot of complexity. I will just dig into one level of complexity, which is security. How do you protect one host talking to another host? You need to figure out how to protect one service talking to another service. How do you secure that, how do you encrypt that traffic, how do you ensure that identity is handled?
Then you begin looking at other pieces of the puzzle, things like a service mesh. We look at things like Kubernetes and Istio as complementary, because you are going to need to be able to observe all of these environments. You are going to have to do all the things that you would have done with VMs, but there's just an abundance of these things. That's what we are seeing, and that's the level of complexity.

The tooling is still trying to catch up, and a lot of the open source tools are still in development, with some of the components still in alpha. There is a lot of need for ease-of-use around these tools, a lot of need for better user interfaces. We are at the beginning where, yes, we are trying to handle containers -- lots of containers all over the place -- and trying to figure out how these things are talking to each other, and to be able to troubleshoot that. How do you trace when your application starts to have an issue? How do you figure out where in that environment the issue is showing up? You start to learn how to use tools like Zipkin, or you introduce OpenTracing into your stack, things like that.

Gardner: Matt, what would you encourage people to do now? Experiment with more tools, acquaint themselves with those tools, make demands on the tools? How do they head this off from a user perspective?

Tip-toe through the technology

Baldwin: I would begin by stepping into the water, going into the shallow end of the pool by just starting to explore the technology. I have seen organizations jump into these technologies. Take Kubernetes as an example. I have seen organizations adopt Kubernetes really early, and then start to build their own Platform as a Service (PaaS) on top of it without actually being involved in the project and being aware of what's happening in it. So there is the danger of duplicating something that's already on the roadmap and will be done in the project within six months. And now you are stuck on Kubernetes version 1.2, and how do you move to the next version of Kubernetes? So I think there is a danger in adopting too early, if you start to build too much.

But at the same time, there is a need to conduct proofs of concept (POCs), to start to shift some of your smaller services into new areas. I think you need to introduce Istio into test environments, start to look at what that does for you, and start looking at all the use cases around it, things like traffic shifting. There are issues like how to do A/B deployments; service meshes can actually give you that. So start to play with that and start to plan for the future, but maybe don't completely customize whatever you have just built, because there is always a threat that the project isn't fully baked yet.
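Picking up on the tracing point above, here is a minimal, hypothetical sketch of what introducing OpenTracing into a Go HTTP service might look like, using the opentracing-go API. The service path, span name, and port are made up, and a concrete Zipkin- or Jaeger-backed tracer would still need to be registered with opentracing.SetGlobalTracer; until then, the default no-op tracer keeps the code runnable.

```go
package main

import (
	"log"
	"net/http"

	opentracing "github.com/opentracing/opentracing-go"
)

func checkoutHandler(w http.ResponseWriter, r *http.Request) {
	// If an upstream service already started a trace, continue it;
	// otherwise start a fresh one for this request.
	var opts []opentracing.StartSpanOption
	if wireCtx, err := opentracing.GlobalTracer().Extract(
		opentracing.HTTPHeaders,
		opentracing.HTTPHeadersCarrier(r.Header)); err == nil {
		opts = append(opts, opentracing.ChildOf(wireCtx))
	}

	span := opentracing.StartSpan("checkout.handle", opts...)
	defer span.Finish()

	span.SetTag("http.method", r.Method)
	// ... call downstream services here, propagating the span context ...
	w.Write([]byte("ok\n"))
}

func main() {
	// A concrete tracer (Zipkin, Jaeger, etc.) would be registered here
	// via opentracing.SetGlobalTracer(...). Without one, the default
	// no-op tracer is used and the service still runs unchanged.
	http.HandleFunc("/checkout", checkoutHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```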
Gardner: Sounds like it might be time to think strategically, as well as tactically, in how you approach these things. Maybe even get some enterprise architects involved so that you don't get too bogged down before the standards are cooked. Nic, what do you see as the challenges of bringing containers into use in a multi-cloud environment? What should people be thinking about to hedge against those challenges?

Sensible speed

Jackson: Look at just how fast things have moved. I mean, Kubernetes as a product practically didn't exist two years ago. Nomad didn't really exist two years ago; I think it was only launched at HashiCorp in 2015. And those products are still evolving. And I think it was a really good comment earlier that you have to be careful about building too much on top of these things and straying too far away from the stable branch. You could end up in a situation where you can't follow an upgrade path -- because one thing is for certain, the speed of evolution isn't going to slow down.

Always try to keep abreast of where the technology is, and always make sure you have an upgrade path. You can do that by being sensible about abstraction. In the same way that you would not necessarily depend on a concrete implementation in your code -- you would depend on interfaces -- you have to take a similar approach to your infrastructure. We should be looking at depending upon interfaces, so that if a new component comes along -- something that's better than Kubernetes -- you can actually hot-swap it without having to go through years of re-platforming.

Gardner: Reynold, how do you see organizations solving for complexity as these technologies evolve, and what are some ways that early adopters can resist getting bogged down as the tools continue to mature?

Harbin: The two main points that Matt and Nic have brought up are really good ones. Certainly visibility and security of these applications and environments are really important from a functionality perspective. As Nic mentioned, the pace at which new technologies are being developed is intense. You have to have an environment where you can test out these various tools, see what works for you, and do it in a way where you can take these ideas, run them, test them, and see how the technology can help your particular business.

And a lot of this infrastructure, in many ways, is almost disposable, because you can spin it up as you need to, test it, and then spin it down -- and it might only need to live for an hour or for a couple of days. Being aware of the tools, and what's happening in terms of new functionality, and then being able to test that either locally or in a cloud environment, is really going to be important.
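To make that "disposable infrastructure" idea concrete, here is a rough sketch using DigitalOcean's Go client, godo. It assumes an API token in a DIGITALOCEAN_TOKEN environment variable, and the Droplet name, region, size, and image slug are purely illustrative.

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/digitalocean/godo"
)

func main() {
	ctx := context.Background()
	client := godo.NewFromToken(os.Getenv("DIGITALOCEAN_TOKEN"))

	// Spin up a small, short-lived Droplet to act as a test environment.
	// The name, region, size, and image slug are illustrative; pick
	// whatever your account and test suite actually need.
	req := &godo.DropletCreateRequest{
		Name:   "ci-scratch-node",
		Region: "nyc3",
		Size:   "s-1vcpu-1gb",
		Image:  godo.DropletCreateImage{Slug: "ubuntu-22-04-x64"},
	}

	droplet, _, err := client.Droplets.Create(ctx, req)
	if err != nil {
		log.Fatalf("create droplet: %v", err)
	}
	log.Printf("created droplet %d, running tests...", droplet.ID)

	// ... run the experiment or test suite against the new host ...

	// Tear the environment down again; the infrastructure only needs
	// to live for the length of the test run.
	if _, err := client.Droplets.Delete(ctx, droplet.ID); err != nil {
		log.Fatalf("delete droplet: %v", err)
	}
	log.Println("environment disposed")
}
```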
Gardner: I was expecting at least one of you to bring up DevOps. I would think that considering development in conjunction with production, and making this more of a seamless process, would help. Am I off base? Matt, should DevOps be part of this solution set?

Shared language

Baldwin: Yes, it should be part of it. My personal opinion on DevOps is that we are moving more toward a point where Ops needs to become more and more invisible. It's more about shipping, and it's more about focusing on the apps versus the infrastructure. So I see the capital "O" in DevOps going to a lowercase "o."

What I do think is interesting right now is that developers and operators are speaking the same language. If you are looking at Kubernetes, they are speaking in Kubernetes, and that's a very big deal. The developer is building it in the same way that the operator is going to understand it. The operator is going to understand how the microservice is built; the developer is going to understand how it's built. They are all going to understand everything.

And then with multi-cloud, you can also do things like have your staging environment in one cloud, promote your code so that your operators are running it in production on another provider, and promote that code across the network -- you can do things like that, too.

I don't think some of the traditional DevOps tooling -- things like Chef and Puppet -- has as much of a future as it used to have, because those tools did a lot of application management on the hosts, and now that the apps are not living on the host anymore, there is not a lot for them to do. You just build out a host at Amazon AWS, deploy Kubernetes, and let Kubernetes take over from there. The importance of some of those tools will lessen; you won't have to know Puppet as much -- you likely won't ever need to know Puppet.

Gardner: Nic, are you in the same camp -- more Dev, less Ops, lowercase "o"?

More Dev, less Ops?

Jackson: I think it depends on two things. The first is the scale of your organization. When you look at a lot of the tools, and a lot of the information that's out there, it makes the assumption that everybody is operating at a fixed scale, and I don't think that's the case. Pretty much any business operating in a digital world -- which is pretty much any business these days -- can take advantage of modern development techniques. But depending on the scale, it also shifts who is potentially going to be doing the infrastructure side of things.
In smaller companies, I think you are going to get more Dev than Ops, because they may not be at a scale that can support a dedicated operations team. But in larger enterprise organizations, you may have more of a platform team, more of an operations person who is using code to manage infrastructure.

In either case, there's a requirement that developers have an appreciation and an understanding of the platform to which they are deploying their code. They need that because they need to understand how things like service discovery work. How do the volumes work for persistent storage? How are things going to work in terms of scale and scalability? If you are going to be load testing, what are the operational thresholds in terms of I/O for CPU or disk, and things like that?

I think DevOps is a really powerful concept. I certainly love working in a world where I can interact and work with the operations and infrastructure teams. I benefit as a software engineer, and I think the infrastructure engineers benefit, because we can share the sorts of skills that we both have. So I really hope DevOps doesn't go away, but I think the level at which that interaction occurs depends very much on the scale of your organization.

Shop around

Gardner: Are there examples of organizations, large or small, that have embraced containers, have multi-cloud in their sights, and are maybe thinking about Serverless?

Baldwin: I have an example. This customer was a full-on Amazon shop, and they had not migrated to microservices. Their first step was to move to Docker, and then we moved them up to Kubernetes. These guys were an adtech firm, and they had, as you can imagine, ingress traffic that had a high charge to it, billed by Amazon. So they spent a lot of time negotiating a better cloud price-point with Google.

What they were able to do is stand up a Kubernetes cluster on Google Cloud and then shift the workload that was needed at that better price-point. At the same time, they kept the rest of the workload at Amazon, because they were still relying on some of Amazon's other underlying services, things like Amazon Relational Database Service (Amazon RDS). So they didn't want to completely move to Google, but they wanted to move the piece they were taking a really large cost hit on over to Google.

So I think you are going to see multi-cloud first get used as a vendor tactic against the cloud providers to try to negotiate a better price point. If you are doing adtech, now you are in a position where you can actually negotiate with Amazon, Google, or whomever, get a better price, and just move your workload to whoever gives it to you. So that makes it a lot more competitive. That was an early example, one of the earlier federation examples we have.
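The pattern behind that example -- the same Kubernetes API sitting in front of two different providers -- is what makes this kind of workload shifting practical. As a rough sketch using a recent version of client-go (the kubeconfig paths and cluster labels below are hypothetical), the code that inspects each cluster is identical no matter which cloud is behind it:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// countNodes connects to one cluster via its kubeconfig file and
// reports how many nodes it currently has.
func countNodes(ctx context.Context, kubeconfig string) (int, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return 0, err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return 0, err
	}
	nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return 0, err
	}
	return len(nodes.Items), nil
}

func main() {
	ctx := context.Background()
	// Two hypothetical kubeconfigs, one per provider. The calling code is
	// the same regardless of which cloud hosts the API server.
	clusters := map[string]string{
		"aws":          "/home/ci/.kube/aws-cluster.yaml",
		"digitalocean": "/home/ci/.kube/do-cluster.yaml",
	}
	for name, path := range clusters {
		n, err := countNodes(ctx, path)
		if err != nil {
			log.Printf("%s: %v", name, err)
			continue
		}
		fmt.Printf("%s cluster: %d nodes\n", name, n)
	}
}
```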
Gardner: The economic paybacks from that could be significant if you can leverage better deals from your cloud providers. Infrastructure can be a very significant portion of your overall expenses.

Baldwin: It's giving the power back to the consumer. We basically have one dominant cloud in Amazon AWS, and then smaller ones. So how do you push back against Amazon to reduce the price points -- how do you try to break that? Once you start to get power back to the consumer, that starts to weaken that hold on the end-user.

Gardner: Nic, is there an example we can look to, perhaps in a different way -- one that provides a business advantage?

Go public

Jackson: One of the things that we see with a lot of enterprise customers is the cloud adoption phase. I can't give you the exact numbers, but the total market in terms of compute for the big four cloud providers is about 30 percent. Something like 60 to 70 percent of all existing compute is still running in private data centers. A lot of organizations are looking at moving that forward. They want to be able to adopt cloud, for whatever reason, and they want better tooling to be able to do that.

You can create a federated Kubernetes cluster, or a federated Nomad cluster, and begin shifting your workload away from the private data center and into the cloud. You gain that clear migration path. It allows you to run both of those platforms side by side: the existing platform that the organization understands, and the modern platform that requires new learning in terms of tooling and behavior. That's going to be a typical approach for a lot of large enterprises. We are going to see a lot of the shift from private data centers into public clouds.

A lot of the cloud providers are offering pretty attractive licensing incentives to do that: rather than renew the licenses for your physical infrastructure, why not just move it off to your cloud provider? But if you're running tens of billions of dollars' worth of business, then any downtime is incredibly expensive, so you will want to ensure that you have maximum high availability.

Baldwin: You can see that Microsoft is converting a lot of their enterprise agreements to move people over to Azure.

Jackson: Well, it's not just Microsoft. Dell/EMC is one of the most aggressive. I could imagine a great sales strategy for them is to say, "Well, hey, rather than buying a new Dell server, why don't you just lease one of these servers in the Dell cloud and we will manage it for you."
You're basically just shifting from a capital expenditure (CapEx) model to an operational expenditure (OpEx) model. I think Oracle has a similar strategy; the Oracle cloud is up and coming. So the potential is that rather than paying for an Oracle database license, you could just move that database into the Oracle cloud and save yourself a lot of trouble around maintaining the physical data center.

Gardner: Reynold, any thoughts on examples of how orchestration of containers may be moving more toward Serverless models that have great benefits for your end users? As a public cloud, where do you see a good example of how this all works to everyone's advantage?

No more either/or

Harbin: As developers move toward containers and orchestration, they can begin looking at cloud providers not as a choice of either/or but as, "I get to use all of them, and I get to use the products and services that are best for my particular application."

An example of that would be a customer who was hosting their application and their storage on Amazon AWS. A month ago, DigitalOcean released our new object storage product called Spaces, which essentially provides the benefits of AWS S3 object storage, but at a cost that is 10 times lower, at least for bandwidth. If this particular customer could containerize their application -- which basically publishes and posts content to object storage and delivers a lot of it to end users -- they would have the flexibility to take advantage of new products like Spaces that are being rolled out all the time by various cloud providers. In this case, they could easily have moved their application to DigitalOcean, taken advantage of our new object storage product, and essentially lowered their total cost.

But it's not just DigitalOcean products. New technologies that can make your applications better are being released all the time, as open source projects and commercial products. Companies will gain agility if their applications are containerized, as they will be able to use new technologies much more easily.

Baldwin: There are also some great abstraction layers -- things like Minio -- so that you don't necessarily need to interact with the underlying object storage directly. You have a layer that allows you to be ignorant of that, and such de-coupling is super-useful.
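As one illustration of that de-coupling, here is a hedged sketch using the MinIO Go client against an S3-compatible endpoint. The endpoint, bucket, object name, and credential environment variables are made up for the example; swapping between Spaces, S3, or a self-hosted MinIO server comes down to changing the endpoint and credentials, while the upload code stays the same.

```go
package main

import (
	"context"
	"log"
	"os"
	"strings"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	ctx := context.Background()

	// The only provider-specific detail is the endpoint. Point it at
	// DigitalOcean Spaces, AWS S3, or a self-hosted MinIO server and
	// the rest of the code does not change.
	endpoint := "nyc3.digitaloceanspaces.com" // illustrative endpoint

	client, err := minio.New(endpoint, &minio.Options{
		Creds:  credentials.NewStaticV4(os.Getenv("STORAGE_KEY"), os.Getenv("STORAGE_SECRET"), ""),
		Secure: true,
	})
	if err != nil {
		log.Fatalf("connect: %v", err)
	}

	// Upload a small object to a pre-existing bucket.
	body := strings.NewReader("hello from a portable object store client")
	_, err = client.PutObject(ctx, "my-bucket", "greeting.txt", body, int64(body.Len()),
		minio.PutObjectOptions{ContentType: "text/plain"})
	if err != nil {
		log.Fatalf("upload: %v", err)
	}
	log.Println("object uploaded")
}
```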
Gardner: I'm afraid we are about out of time, but I wanted to give each of you an opportunity to tell us how to learn more about your organization. Matt Baldwin, how can people follow you and also learn more about StackPointCloud?

Baldwin: If you want to give Kubernetes a shot, we provide a turnkey marketplace and management platform. You just hit the site, log in with social credentials like GitHub, and then you can start to build clusters. You can check it out via our blog on Stackpoint.io. We also run Kubernetes community meetups in all of the major markets, up and down the West and East Coasts, so you can engage with us at any of the Kubernetes events in Seattle, San Francisco, New York, and elsewhere. You can also drop into any Kubernetes Slack channel and ping us -- ping me at baldwinmathew -- or follow @baldwinmathew on Twitter.

Gardner: Nic, same thing -- how can people follow you and learn more about HashiCorp?

Jackson: HashiCorp.com is a great landing site, because you can bounce out to the various product sites from there. We also have a blog, which we are pretty active with. We generally publish at least a couple of pieces ourselves every week, and we also syndicate other things we find -- not necessarily always related to HashiCorp, just interesting technology. You can get to the blog through there, and on Twitter you can follow HashiCorp or me, @sheriffjackson; I try to share stuff that I find interesting.

Gardner: And Reynold, how can people learn more about DigitalOcean, as well as follow you or other evangelists that you think are worthy?

Harbin: The community site on DigitalOcean has 1,700 really well-curated articles, so do.co/community would be a good start. We have several technology-agnostic articles about containerization, as well as articles on specific technologies like Kubernetes. They are well-written articles, and they will teach you how to get started. And then, of course, the DigitalOcean website is a good resource for our own products.

Gardner: I'm afraid we'll have to leave it there. You've been listening to a sponsored BriefingsDirect discussion on container orchestration and automation solutions as a means to encourage broader adoption of containers and multi-cloud use. We've learned about new ways to gain container control, heard about Serverless, and discussed some of the models around DevOps, all in order to grease the skids toward more competitive cloud deployments and development.

So thanks to our guests: Matt Baldwin, Founder and CEO of StackPointCloud; Nic Jackson, Developer Advocate at HashiCorp; and Reynold Harbin, Director of Product Marketing at DigitalOcean.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing series of BriefingsDirect discussions. A big thank you to our sponsor, DigitalOcean, for supporting these presentations. Follow me on Twitter @Dana_Gardner and find more podcasts at BriefingsDirect.com. Thanks again for joining! Please pass this content along to your IT community, and do come back next time.

Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: DigitalOcean.
Transcript of a discussion on new ways to gain container orchestration, use Serverless models, and employ inclusive management to keep the container love alive and well. Copyright Interarbor Solutions, LLC, 2005-2017. All rights reserved.

You may also be interested in:

• As enterprises face mounting hybrid IT complexity, new management solutions beckon
• How mounting complexity, multi-cloud sprawl, and need for maturity hinder hybrid IT's ability to grow and thrive
• Get ready for the Post-Cloud World
• Inside story on HPC's AI role in Bridges 'strategic reasoning' research at CMU
• Philips teams with HPE on ecosystem approach to improve healthcare informatics-driven outcome
• Inside story: How Ormuco abstracts the concepts of private and public cloud across the globe
• How Nokia refactors the video delivery business with new time-managed IT financing models
• IoT capabilities open new doors for Miami telecoms platform provider Identidad IoT
• Inside story on developing the ultimate SDN-enabled hybrid cloud object storage environment
• How IoT and OT collaborate to usher in the data-driven factory of the future