Who, if Anyone, is in Charge of
Multi-Cloud Business Optimization?
Transcript of a discussion on how changes in business organization and culture demand a new
approach to leadership over such functions as hybrid and multi-cloud procurement and
optimization.
Listen to the podcast. Find it on iTunes. Download the transcript.
Sponsor: Hewlett Packard Enterprise.
Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of
the Analyst podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions,
your host and moderator for this ongoing discussion on the latest insights into successful
digital transformation.
This composable cloud strategies interview explores how changes in business
organization and culture demand a new approach to leadership over such functions as
hybrid and multi-cloud procurement and optimization.
We’ll now hear from an IT industry analyst about the forces reshaping the consumption
of hybrid cloud services and why the procurement model must be accompanied by an
updated organizational approach -- perhaps even a new office or category of officer in
the business.
Here to help us explore who -- or what -- should be in
charge of spurring effective change in how companies
acquire, use, and refine their new breeds of IT is John
Abbott, Vice President of Infrastructure and Co-Founder
of The 451 Group. Welcome, John.
John Abbott: Thank you very much for inviting me.
Gardner: What has changed about the way that IT is
being consumed in companies? Is there some gulf
between how IT was acquired and the way it is being
acquired now?
Cloud control controls costs
Abbott: I think there is, and it’s because of the rate of technology change. The whole
cloud model has taken over from traditional IT and is being adopted in a way that we
probably didn’t foresee just 10 years ago. So, CAPEX to OPEX, operational agility,
complexity, and costs have all been big factors.
But now, it’s not just cloud, it's multi-cloud as well. People are beginning to say, “We
can’t rely on one cloud if we are responsible citizens and want to keep our IT up and
running.” There may be other reasons for going to multi-cloud as well, such as cost and
suitability for particular applications. So that’s added further complexity to the cloud
model.
Also, on-premises deployments continue to remain a critical function. You can’t just get
rid of your existing infrastructure investments that you have made over many, many
years. So, all of that has upended everything. The cloud model is basically simple, but
it's getting more complex to implement as we speak.
Gardner: Not surprisingly, costs have run away from organizations that haven’t been
able to stay on top of a complex mixture of IT infrastructure-as-a-service (IaaS),
platform-as-a-service (PaaS), and software-as-a-service (SaaS). So, this is becoming an
economic imperative. It seems to me that if you don't control this, your runaway costs will
start to control you.
Abbott: Yes. You need to look at the cloud models of consumption, because that really
is the way of the future. Cloud models can significantly reduce cost, but only if you
control it. Instance sizes, time slices, time increments, and things like that all have a
huge effect on the total cost of cloud services.
Also, if you have multiple people in an organization ordering particular services from
their credit cards, that gets out of control as well. So you have to gain control over your
spending on cloud. And with services complexity -- I think Amazon Web Services (AWS)
alone has hundreds of price points -- things are really hard to keep track of.
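The kind of spend control Abbott describes can be illustrated with a minimal sketch. All of the field names, teams, and dollar figures below are hypothetical; real billing exports from each provider would have their own schemas, but the aggregation step is the same idea:

```python
# A minimal sketch (hypothetical data and field names): aggregating
# per-team spend across multiple cloud providers and flagging teams
# whose combined spend exceeds their budget.
from collections import defaultdict

# Each record stands in for a line item from a provider's billing export.
line_items = [
    {"team": "analytics", "provider": "aws",   "service": "ec2", "cost": 1200.0},
    {"team": "analytics", "provider": "azure", "service": "vm",  "cost": 300.0},
    {"team": "web",       "provider": "aws",   "service": "s3",  "cost": 90.0},
]

budgets = {"analytics": 1000.0, "web": 500.0}  # monthly budget per team

def spend_by_team(items):
    """Total multi-cloud spend per team, regardless of provider."""
    totals = defaultdict(float)
    for item in items:
        totals[item["team"]] += item["cost"]
    return dict(totals)

def over_budget(totals, budgets):
    """Teams whose aggregated spend exceeds their budget."""
    return {t: c for t, c in totals.items() if c > budgets.get(t, float("inf"))}

totals = spend_by_team(line_items)
print(totals)                         # {'analytics': 1500.0, 'web': 90.0}
print(over_budget(totals, budgets))   # {'analytics': 1500.0}
```

The point of the sketch is that no single provider's console can show the analytics team is over budget; only the cross-provider rollup reveals it.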
Gardner: When we are thinking about who -- or what -- has the chops to know enough
about the technology, understand the economic implications, be in a position to forecast
cost, budget appropriately, and work with the powers that be who are in charge of
enterprise financial functions, that's not your typical IT director or administrator.
IT Admin role evolves in cloud
Abbott: No. The new generation of generalist IT administrators – the people who grew
up with virtualization -- don't necessarily look at the specifics of a storage platform, or
compute platform, or a networking service. They look at it on a much higher level, and
those virtualization admins are the ones I see as probably being the key to all of this.
But they need tools that can help them gain command of this. They need, effectively, a
single pane of glass -- or at least a single control point -- for these multiple services, both
on-premises and in the cloud.
Also, as the data centers become more distributed, going toward the edge, that adds
even further complexity. The admins will need new tools to do all of that, even if they
don't need to know the specifics of every platform.
Gardner: I have been interested and intrigued by what Hewlett Packard Enterprise
(HPE) has been doing with such products as HPE OneSphere, which, to your point,
provides more tools, visibility, automation, and composability around infrastructure,
cloud, and multi-cloud.
But then, I wonder, who actually will best exploit these tools? Who is the target
consumer, either as an individual or a group, in a large enterprise? Or is this person or
group yet to be determined?
Abbott: I think they are evolving. There are skill shortages, obviously, for managing
specialist equipment, and organizations can’t replace some of those older admin types.
So, they are building up a new level of expertise that is more generalist. It’s those newer
people coming up, who are used to the mobile world, who are used to consumer
products a bit more, that we will see taking over.
We are going toward everything-as-a-service
and cloud consumption models. People have
greater expectations on what they can get out of
a system as well.
Also, you want the right resources to be applied to your application -- the best, most
cost-effective resources. It might be in the cloud, it might be a particular cloud service
from AWS or from Microsoft Azure or from Google Cloud Platform, or it might be a
specific in-house platform that you have. No one is likely to have all of that specific
knowledge in the future, so it needs to be automated.
We are looking at the developers and the systems architects to pull that together with
the help of new automation tools, management consoles, and control planes, such as
HPE OneSphere and HPE OneView. That will pull it together so that the admin people
don’t need to worry so much. A lot of it will be automated.
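The placement decision Abbott describes can be reduced to a tiny sketch. The platform names, capacities, and prices below are all hypothetical; a real placement engine would weigh far more dimensions, but the core logic -- filter to what fits, then pick the cheapest -- looks like this:

```python
# A minimal sketch (hypothetical platforms and prices): choose the
# cheapest platform, cloud or in-house, that satisfies a workload's
# requirements -- the decision an automated placement tool makes.
platforms = [
    {"name": "aws-m5",      "vcpus": 8,  "gpu": False, "price_hr": 0.38},
    {"name": "azure-nc6",   "vcpus": 6,  "gpu": True,  "price_hr": 0.90},
    {"name": "onprem-pool", "vcpus": 16, "gpu": False, "price_hr": 0.25},
]

def place(workload, platforms):
    """Return the cheapest platform meeting the workload's needs, or None."""
    fits = [p for p in platforms
            if p["vcpus"] >= workload["vcpus"]
            and (p["gpu"] or not workload["needs_gpu"])]
    return min(fits, key=lambda p: p["price_hr"]) if fits else None

print(place({"vcpus": 4, "needs_gpu": True},  platforms)["name"])  # azure-nc6
print(place({"vcpus": 8, "needs_gpu": False}, platforms)["name"])  # onprem-pool
```

Note that the second workload lands on the in-house pool because it is cheapest, which matches the point that "the best, most cost-effective resource" is sometimes not a public cloud at all.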
Gardner: Are we getting to a point where we will look for an outsourced approach to
overall cloud operations, the new IT procurement function? Would a systems integrator,
or even a vendor in a neutral position, be able to assert themselves on best making
these decisions? What do you think comes next when it comes to companies that can't
quite pull this off by themselves?
People and AI partnership prowess
Abbott: The role of partners is very important. A lot of the vertically oriented systems
integrators and value-added resellers, as we used to call them, with specific application
expertise are probably the people in the best position.
We saw recently at HPE Discover the announced acquisition of BlueData, which allows
you to configure a particular pool within your infrastructure for things like big data and
analytics applications. And that’s sort of application-led.
The experts in data analysis and in artificial
intelligence (AI), the data scientists coming up,
are the people that will drive this. And they
need partners with expertise in vertical sectors
to help them pull it together.
Gardner: In the past when there has been a skills vacuum, not only have we seen a
systems integration or a professional services role step up, we have also seen
technology try to rise to the occasion and solve complexity.
Where do you think the concept of AIOps, or using AI and machine learning (ML) to help
better identify IT inefficiencies, will fit in? Will it help make predictions or
recommendations as to how you run your IT?
Abbott: There is huge potential there. I don’t think we have actually seen that really
play out yet. But IT tools are in a great position to gather a huge amount of data -- from
sensors, usage data, logs, and the like -- pull it together, see what the patterns are, and
recommend and optimize based on that in the future.
I have seen some startups doing system tuning, for example. Experts who optimize the
performance of a server usually have a particular area of expertise, and they can't really
go beyond that because each area is huge in itself. There are around 100 “knobs” on a
server that you can tweak to up the speed. I think you can only do that in an automated
fashion now. And we have seen some startups use AI modeling, for instance, to pull
those things together. That will certainly be very important in the future.
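The automated knob-tuning Abbott mentions can be sketched in its simplest form as a search loop: try configurations, measure each, keep the best. Everything below is hypothetical -- the knob names, the scoring function (a stand-in for a real benchmark run), and the search strategy (plain random search rather than the ML models those startups use):

```python
# A minimal sketch of automated tuning: random search over configuration
# "knobs", scoring each candidate with a stand-in benchmark function.
import random

KNOBS = {
    "io_scheduler": ["none", "mq-deadline", "bfq"],
    "swappiness": [0, 10, 30, 60],
    "tcp_window": [64, 128, 256],
}

def benchmark(config):
    """Hypothetical stand-in for measuring one configuration's performance."""
    score = 100 - config["swappiness"]          # pretend lower swappiness helps
    score += config["tcp_window"] / 8           # pretend larger windows help
    score += {"none": 5, "mq-deadline": 3, "bfq": 0}[config["io_scheduler"]]
    return score

def tune(trials=50, seed=42):
    """Try random knob combinations; return the best config found."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        config = {k: rng.choice(v) for k, v in KNOBS.items()}
        score = benchmark(config)
        if score > best_score:
            best, best_score = config, score
    return best, best_score

best, score = tune()
print(best, score)
```

With 100 real knobs the combination space explodes far beyond what random search handles, which is exactly why the startups Abbott describes reach for AI models to guide the search instead.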
Gardner: It seems to me a case of the cobbler’s children having no shoes. The IT
department doesn’t seem to be on the forefront of using big data to solve their problems.
Abbott: I know. It's really surprising because they are the people best able to do that.
But we are seeing some AI coming together. Again, at the recent HPE Discover
conference, HPE InfoSight made news as a tool that’s starting to do that analysis more.
It came from the Nimble acquisition and began as a storage-specific product. Now it’s
broadening out, and it seems they are going to be using it quite a lot in the future.
Gardner: Perhaps we have been looking for a new officer or office of leadership to solve
multi-cloud IT complexity, but maybe it's going to be a case of the machines running the
machines.
Faith in future automation
Abbott: A lot of automation will be happening in the future, but that takes trust. We
have seen AI waves [of interest] over the years, of course, but the new wave of AI still
has a trust issue. It takes a bit of faith for users to hand over control.
But as we have talked about, with multi-cloud, the edge, and things like microservices
and containers -- where you split up applications into smaller parts -- all of that adds to
the complexity and requires a higher level of automation that we haven’t really quite got
to yet but are going toward.
Gardner: What recommendations can we conjure for enterprises today to start them on
the right path? I’m thinking about the economics of IT consumption, perhaps getting
more of a level playing field or a common denominator in terms of how one acquires an
operating basis using different finance models. We have heard about the use of these
plans by HPE, HPE GreenLake Flex Capacity, for example.
What steps would you recommend that organizations take to at least get them on the
path toward finding a better way to procure, run, and optimize their IT?
Abbott: I actually recently wrote a research paper for HPE on the eight essentials of
edge-to-cloud and hybrid IT management. The first thing we recommended was a
proactive cloud strategy. Think out your cloud strategy, of where to put your workloads
and how to distribute them around to different clouds, if that’s what you think is
necessary.
Then modernize your existing technology. Try and use automation tools on that
traditional stuff and simplify it with hyperconverged and/or composable infrastructure so
that you have more flexibility about your resources.
Make the internal stuff more like a cloud. Take out some of that complexity. It has to be
quick to implement. You can’t spend six months doing this, or something like that.
Some of these tools we are seeing, like HPE OneView and HPE OneSphere, for
example, are a better bet than some of the traditional huge management frameworks
that we used to struggle with.
Make sure it's future-proof. You have to be able to use operating system and
virtualization advances [like containers] that we are used to now, as well as public cloud
and open APIs. This helps accelerate things that are coming into the systems
infrastructure space.
Then strive for everything-as-a-service, so use cloud consumption models. You want
analytics, as we said earlier, to help understand what's going on and where you can best
distribute workloads -- from the cloud to the edge or on-premises, because it's a hybrid
world and that’s what we really need.
And then make sure you can control your spending and utilization of those services,
because otherwise they will get out of control and you won't save any money at all.
Lastly, be ready to extend your control beyond the data center to the edge as things get
more distributed. A lot of the computing will increasingly happen close to the edge.
Gardner: Micro data centers at the edge?
Computing close to the edge
Abbott: Yes. That has to be something you start working on now. If you have
software-defined infrastructure, that's going to be easier to distribute than if you are still
wedded to particular systems, as in the old, traditional model.
Gardner: We have talked about what companies should do. What about what they
shouldn't do? Do you just turn off the spigot and say no more cloud services until you get
control?
It seems to me that that would stifle innovation, and developers would be particularly
angry or put off by that. Is there a way of finding a balance between creative innovation
that uses cloud services, but within the confines of an economic and governance model
that provides oversight, cost controls, and security and risk controls?
Abbott: The best way is to use some of
these new tools as bridging tools. So,
with hybrid management tools, you can
keep your existing mission-critical
applications running and make sure that
they aren't disrupted. Then, gradually you
can move over the bits that make sense
onto the newer models of cloud and
distributed edge.
You don't do it in one big bang. You don’t lift-and-shift from one to another, or react, as
some people have, by reversing back out of the cloud when it hasn't worked out. It's
about keeping both worlds going in a controlled way. You must make sure you measure
what you are doing, and you know what the consequences are, so it doesn't get out of
control.
Gardner: I’m afraid we’ll have to leave it there. We have been exploring how changes in
business organization and culture have demanded a new approach to oversight and
management of total IT assets, resources, and services. And we have learned about
how consumption of hybrid and multi-cloud services is a starting point for regaining
control over a highly heterogeneous IT landscape.
Please join me in thanking our guest, John Abbott, Vice President of Infrastructure and
Co-Founder of The 451 Group. Thanks so much, John.
Abbott: Thank you very much, indeed. I enjoyed it.
Gardner: And a big thank you to our audience as well for joining this BriefingsDirect
Voice of the Analyst hybrid IT management strategies interview. I’m Dana Gardner,
Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett
Packard Enterprise-sponsored discussions.
Thanks again for listening. Please pass this on to your IT community and do come back
next time.
Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.