A New Status Quo for Data Centers --
Seamless Communication
From Core to Cloud to Edge
A discussion with two leading IT and critical infrastructure executives on how the state of data
centers in 2020 demands better speed, agility, and efficiency from IT resources wherever they
reside.
Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Vertiv.
Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect podcast
series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and
moderator for this ongoing discussion on the latest insights into data center strategies.
As 2020 ushers in a new decade, the forces shaping data center decisions are
extending compute resources to new places. With the challenging goals of speed, agility,
and efficiency, enterprises and service providers alike will be seeking new balance
between the need for low latency and optimal utilization of workload placement.
Hybrid models will therefore include more distributed, confined, and modular data
centers at or near the edge.
These are but a few of the top-line predictions on the future state of modern data
center design. Stay with us as we examine, with two leading IT and critical infrastructure
executives, how these data center variations nonetheless must also interoperate
seamlessly from core to cloud to edge.
Here to help us learn more about the state of data
centers in 2020 is Peter Panfil, Vice President of Global
Power at VertivTM. Welcome, Peter.
Peter Panfil: How are you, Dana?
Gardner: I’m doing great. We’re also here with Steve
Madara, Vice President of Global Thermal at Vertiv.
Welcome, Steve.
Steve Madara: Thank you, Dana.
Gardner: The world is rapidly changing in 2020.
Organizations are moving past the debate around hybrid
deployments, from on-premises to public clouds. Why do we need to also think about IT
architectures and hybrid computing differently, Peter?
Moving to the edge, with momentum
Panfil: We noticed a trend at Vertiv in our customer base. That trend is toward a new
generation of data centers. We have been living with distributed IT, client-server data
centers moving to cloud, either a public cloud or a private cloud.
But what we are seeing is the evolution of an edge-to-core, near-real-time data center
generation. And it’s being driven by devices everywhere, the “connected-all-the-time”
model that all of us seem to be going to.
And so, when you are in a near-real-time
world, you have to have infrastructure
that supports your near-real-time
applications. And that is what the
technology folks are facing. I refer to it as
a pack of dogs chasing them -- the
amount of data that’s being generated,
the applications running remotely, and
the demand for availability, low latency,
and driving cost down as much as you possibly can. This is what’s changing how they
approach their critical infrastructure space.
Gardner: And so, a new equilibrium is emerging. How is this different from the past?
Madara: If we go back 20 years, everything was
centralized at enterprise data centers. Then we decided
to move to decentralized, and then back to centralized.
We saw a move to colocation as people decided that’s
where they could get lower cost to run their apps. And
then things went to the cloud, as Peter said earlier.
And now, we have a huge number of devices connected
locally. Cisco says that by late 2020 there will be 23
billion connected devices, and over half of those are
going to be machine-to-machine communications, which,
as Peter mentioned earlier, the latency is going to be
very, very critical.
An interesting read is Michael Lewis’s book Flash Boys about the arbitrage that’s taking
place with the low latency that you have in stock market trading. I think we are going to
see more of that moving to the edge. The edge is more like a smart rack or smart row
deployment in an existing facility. It's going to be multi-tenant, because it's going to be
spread throughout large cities. There could be 20 or 30 of these edge data center
sites hosting different applications for customers.
This move to the edge is also going to provide IT resources in a lot of underserved
markets that don’t yet have pervasive compute, especially in emerging countries.
Gardner: Why is speed so important? We have been talking about this now for years,
but it seems like the need for speed to market and speed to value continues to ramp up.
What’s driving that?
Panfil: There is more than one kind of speed. There is speed of response of the
applications; that's something that all of us demand. I have to have low latency in
the transactions I am performing with my
data or with my applications. So there is the
speed of the actual data being transmitted.
There is also speed of deployment. When Steve talked earlier about centralized cloud
deployments in these core data centers, your data might be going over a significant
distance, hopping along the way. Well, if you can’t live with that latency that gets
inserted, then you have to take the IT application and put it closer to the source and
consumer of the data. So there is a speed of deployment, from core to edge, that
happens.
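As a rough illustration of the placement decision Panfil describes, here is a minimal sketch in Python; the site names and latency figures are hypothetical, not from the discussion. It simply picks the most centralized location that still fits the application's latency budget.

# Hypothetical round-trip latencies, in milliseconds, from the data source to
# each candidate hosting location. All names and numbers are illustrative only.
candidate_sites = {
    "core_cloud_region": 45.0,   # centralized cloud, several network hops away
    "regional_colo": 12.0,       # metro colocation facility
    "on_prem_edge_pod": 2.0,     # micro data center next to the data source
}

def place_workload(latency_budget_ms):
    # Prefer the most centralized (highest-latency, typically cheapest) site
    # that still satisfies the application's latency requirement.
    feasible = {site: ms for site, ms in candidate_sites.items() if ms <= latency_budget_ms}
    if not feasible:
        return None  # nothing meets the budget; rethink the architecture
    return max(feasible, key=feasible.get)

print(place_workload(50.0))  # core_cloud_region: a latency-tolerant workload stays central
print(place_workload(5.0))   # on_prem_edge_pod: a near-real-time workload moves to the edge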
And the third type of speed is having low first cost, high asset utilization, and rapid
scalability. So that's a speed of infrastructure adaptation to the demands of the IT
applications.
So when we talk about speed, I often say it's speed, speed, and speed. First, it's the speed of
the data and IT. Once I have that speed, how did I achieve it? I did it by deploying fast, at the
scale needed for the applications, and lastly at a cost and reliability that makes it tolerable for
the business.
Gardner: So I guess it’s speed-cubed, right?
Panfil: At least, speed-cubed. Steve, if we had a nickel for every time one of our
customers said “speed,” we wouldn’t have to work anymore. They are consumed with
the different speeds that they have to deal with -- and it’s really the demands of their
customers.
Gardner: Vertiv for years has been looking at the data center of the future and making
some predictions around what to expect. You have been rather prescient. To continue,
you have now identified several areas for 2020, too. Let’s go through those trends.
Steve, Vertiv predicts that “hybrid architectures will go mainstream.” Why did you identify
that, and what do you mean?
The future is hybrid
Madara: If we look at the history of going from centralized to decentralized, and going
to colocation and cloud applications, it shows the ongoing evolution of Internet of Things
(IoT) sensors, 5G networks, smart cities, autonomous cars, and how more and more of
that data is generated and will need to be processed locally. A lot of that is from
machine-to-machine applications.
So when we now talk about hybrid, we have to get very, very close to the source, as far
as the processing is concerned. That’s going to be a large-scale evolution that’s going to
drive the need for hybrid applications. There is going to be processing at the edge as
well as centralized applications -- whether it’s in a cloud or hosted in colocation-based
applications.
Panfil: Steve, you and I both came up through the ranks. I remember when the data
closet down the hall was basically a communications matrix. Its intent was to get
communications from wherever we were to wherever our core data center was.
Well, the cloud is not going away. Number two, enterprise IT is not going away. What the
enterprise is saying is, “Okay, I am going to take my secret sauce and I am going to put
it in an edge data center. I am going to put the compute power as close to my consumer
of that data and that application as I possibly can. And then I am going to figure out
where the rest of it’s going to go.”
“If I can live with the latency I get out of a core data center, I am going to stay in the
cloud. If I can’t, I might even break up my enterprise data center into small or micro data
centers that give me even better responses.”
Dana, it’s interesting, there was a recent wholesale market summary published that said
the difference between the smaller and the larger wholesale deals widened. So what that
says is the large wholesale deals are getting bigger, the small wholesale deals are
getting smaller, and that the enterprise-based demand, in deployments under 600
kilowatts, is focused on low-latency and multi-cloud access.
That tells us that our customers, the
users of that critical space, are trying
to place their IT appliances as close
as they can to their customers,
eliminating the latency, responding
with speed, and then figuring out how
to mesh that edge deployment with
their core strategy.
Gardner: Our second trend gets back to the speed-cubed notion. I have heard people
describe this as a new arms race, because while it might be difficult to differentiate
yourself when everyone is using the same public cloud services, you can really
differentiate yourself on how well you can conduct yourself at speed.
What kinds of capabilities across your technologies will make differentiation around
speed work to an advantage as a company?
The need for speed
Panfil: Well, I was with an analyst recently, and I said the new reality is not that the big
will eat the small -- it’s that the fast will eat the slow. And any advantage that you can get
in speed of applications, speed of deployment, deploying those IT assets -- or morphing
the data center infrastructure or critical space infrastructure – helps improve capital
efficiency. What many customers tell us is that they have to shorten the period of time
between deciding to spend money on IT assets and the time that those assets start
creating revenue.
They want help being creative in lowering their first-cost, in increasing asset utilization,
and in maintaining reliability. If, holy cow, my application goes down, I am out of
business. And then they want to figure out how to manage things like supply chains and
forecasting, which is difficult to do in this market, and to help them be as responsive as
they can to their customers.
Madara: As they forecast and understand the new applications -- whether it's artificial
intelligence (AI) or 5G -- the CIOs need to decide where to put those applications,
whether they should be in the cloud or at the edge. Technology is changing so fast that
nobody can predict far out into the future where I will need that capacity and what type
of capacity I will need.
So, it comes down to being able to put that
capacity in the place where I need it, right when I
need it, and not too far in advance. Again, I don’t
want to spend the capital, because I may put it in
the wrong place. So it’s got to be about tying the
demand with the supply, and that’s what’s key as
far as the infrastructure.
And the other element I see is technology is changing fast, even on the infrastructure
side. For our equipment, we are constantly making improvements every day, making it
more efficient, lower cost, and with more capability. And if you put capacity in today that
you don’t need for a year or two down the road, you are not taking advantage of the
latest, greatest technology. So really it’s coupling the demand to the actual supply of the
infrastructure -- and that’s what’s key.
Another consideration is that many of these large companies, especially in the
colocation market, have their financial structure as a real estate investment trust (REIT).
As a result, they need to tie revenue with expenses tighter and tighter, along with capital
spending.
Panfil: That’s a good point, Steve. We redesigned our entire large power portfolio at
Vertiv specifically to be able to address this demand.
In previous generations, for example, the uninterruptible power supply (UPS) was built
as a complete UPS. The new generation is built as a power converter, plus an I/O
section, plus an interface section that can be rapidly configured to the customer, or, in
some cases, put into a vendor-managed inventory program. This approach allows us to
respond to the market and customers quicker.
We were forced to change our business model in such a way that we can respond in real
time to these kinds of capacity-demand changes.
Madara: And to add to that, we have to put
together more and more modules and
solutions where we are bundling the
equipment to deliver it faster, so that you don’t
have to do testing on site or assembly on site.
Again, we are putting together solutions that
help the end-user address the speed of the
construction of the infrastructure.
I also think that this ties into the relationship that the person who owns the infrastructure
has with their supplier base. Those relationships have to build in, as Peter mentioned
earlier, the ability to stock inventory and have parts available on-site to go fast.
Gardner: In summary so far, we have this need for speed across multiple dimensions.
We are looking at more hybrid architectures, up and down the scale -- from edge to core,
on-premises to the cloud. And we are also looking at crunching more data and making
real-time analytics part of that speed advantage. That means being able to have
intelligence brought to bear on our business decisions and making that as fast as
possible.
So what’s going on now with the analytics efficiency trend? Even if average rack density
remains static due to a lack of space, how will such IT developments as high
performance computing (HPC) help make this analysis equation work to the business
outcome’s advantage?
High performance computing in high density pods
Madara: AI applications, machine learning (ML), and what could be called deep
learning are all evolving. Many applications require these HPC
systems. We see this in the areas of defense, gaming, the banking industry, and people
doing advanced analytics and tying it to a lot of the sensor data we talked about for
manufacturing.
It’s not yet widespread, it’s not across the whole enterprise or the entire data center, and
these are often unique applications. What I hear in large data centers, especially from
the banks, is that they will need to put these AI applications up on 30-, 40-, 50- or 60-kW
racks -- but they only have three or four of these racks in the whole data center.
The end-user will need to decide how to tune or adjust facilities to accommodate these
small but growing pods of high-density compute. And if they are in their own facility, if it's
an enterprise that has its own data center, they will need to decide how they are going to
facilitize for that type of equipment.
A lot of the colocation hosting facilities have customers saying, “Hey, I am going to be
bringing in a couple of racks in the future that are very high density.” And these multi-
tenant data centers are asking, “How do I provision for these, because my data
center was laid out for an average of maybe 8 kW per rack? How do I manage that,
especially in data centers that didn’t previously have chilled water to provide liquid to
the rack?”
We are now seeing a need to provide chilled water cooling that would go to a rear door
heat exchanger on the back of the rack. It could be chilled water that would go to a rack
for chip cooling applications. And again, it’s not the whole data center; it’s a small
segment of the data center. But it raises questions of how I do that without overkill on the
infrastructure needed.
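To make the provisioning question concrete, here is a back-of-the-envelope sketch; all rack counts, densities, and the rear-door unit capacity are assumed figures for illustration only. It shows how a small AI pod changes the cooling picture without re-engineering the whole room.

import math

# Hypothetical room: mostly average-density racks plus a small AI pod.
avg_racks, avg_kw_per_rack = 200, 8.0    # the "average of maybe 8 kW per rack" case
hpc_racks, hpc_kw_per_rack = 4, 50.0     # a few 50 kW AI racks, as in the example above

room_load_kw = avg_racks * avg_kw_per_rack   # 1,600 kW handled by the existing air cooling
pod_load_kw = hpc_racks * hpc_kw_per_rack    # 200 kW concentrated in just four racks

# Size rear-door heat exchangers for the pod only, rather than re-engineering
# the whole room. The capacity per door is an assumption for illustration.
door_capacity_kw = 55.0
doors_per_rack = math.ceil(hpc_kw_per_rack / door_capacity_kw)

print(f"Pod adds {pod_load_kw:.0f} kW on top of {room_load_kw:.0f} kW of ordinary load")
print(f"{doors_per_rack} rear-door unit(s) per HPC rack handles the pod locally")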
Gardner: Steve, do you expect those small pods of HPC in the data center to make their
way out to the edge when people do more data crunching for the low-latency
requirements, where you can’t move the data to a data center? Do you expect to have
this trend grow more distributed?
Madara: Yes, I expect this will be for more
than the enterprise data center and cloud
data centers. I think you are going to see
analytics applications developed that are
going to be out at the edge because of the
requirements for latency.
When you think about the autonomous car, none of us knows what's going to be required
there for that high-performance processing, but I would expect there is going to be a
need for that down at the edge.
Gardner: Peter, looking at the power side of things when we look at the batteries that
help UPS and systems remain mission-critical regardless of external factors, what’s
going on with battery technology? How will we be using batteries differently in the
modern data center?
Battery-powered savings
Panfil: That’s a great question. Battery technology has been evolving at an incredibly
fast rate. It’s being driven by the electric vehicles. That growth is bringing to the market
batteries that have a size and weight advantage. You can’t put a big, heavy pack of
batteries in a car and hope to have it perform well.
It also gives a long-life expectation. Data centers used to have to decide between
long-life, high-maintenance wet cells and shorter-life, lower-maintenance valve-
regulated lead-acid (VRLA) batteries. With the arrival of lithium-ion batteries (LIBs) and thin
plate pure lead (TPPL) batteries, the total cost of ownership (TCO) has started to become
very advantageous for these newer chemistries.
Our sales lead sent me the most recent TCO comparison of TPPL and LIBs
versus traditional VRLA batteries, and the TCO is a winner for the LIBs and the TPPL
batteries. In some cases, over a 10-year period, the TCO is a factor of two lower for LIB
and TPPL.
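As a rough illustration of that kind of 10-year comparison, here is a minimal sketch with made-up prices, service lives, and maintenance costs; the actual TCO depends on chemistry, cycling, temperature, and service contracts.

import math

# Hypothetical 10-year TCO for a UPS battery string. Prices, service lives,
# and maintenance costs are illustrative assumptions, not vendor figures.
def battery_tco(first_cost, service_life_years, annual_maintenance, horizon_years=10):
    purchases = math.ceil(horizon_years / service_life_years)  # initial buy plus replacements
    return purchases * first_cost + horizon_years * annual_maintenance

vrla = battery_tco(first_cost=40_000, service_life_years=4, annual_maintenance=3_000)
lib  = battery_tco(first_cost=70_000, service_life_years=10, annual_maintenance=1_000)

print(f"VRLA 10-year TCO: ${vrla:,}")  # $150,000: three purchases plus heavier upkeep
print(f"LIB  10-year TCO: ${lib:,}")   # $80,000: one purchase, lighter upkeep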
Whereas the cloud generation of data centers was all about lowest first cost, in
this edge-to-core mentality it's about TCO. There are other
levers that they can start to play with, too.
So, for example, they have life cycle and
operating temperature variables. That used to be a real limitation. Nobody in the data
center wanted their systems to go on batteries. They tried everything they could to not
have their systems go on the battery because of the potential for shortening the life of
their batteries or causing an outage.
Today we are developing IT systems infrastructure that takes advantage of not only
LIBs, but also pure lead batteries that can increase the number of [discharge/recharge]
cycles. Once you increase the number of cycles, you can think about deploying smart
power configurations. That means using batteries in the critical infrastructure not only for
a very short period of time when the utility power fails, but also to help offset cost.
If I can reduce utility use at peak demand periods, for example, or I can reduce stress on
the grid at specified times, then batteries are not only a reliability play – they are also a
revenue-offset play. And so, we’re seeing more folks talking to us about how they can
apply these new energy storage technologies to change the way they think about using
their critical space.
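A minimal sketch of that revenue-offset idea, using a hypothetical peak-shaving rule: a real system would coordinate with the UPS controls and utility tariffs, and the thresholds and capacities here are assumptions.

# Hypothetical peak-shaving logic: discharge the energy storage when facility
# demand exceeds a threshold, but always keep enough reserve for the UPS
# bridge function. All numbers are illustrative assumptions.
PEAK_THRESHOLD_KW = 900.0     # demand level that triggers peak charges
UPS_RESERVE_KWH = 50.0        # energy that must stay available for outages

def dispatch(demand_kw, battery_soc_kwh, max_discharge_kw=200.0):
    # Return how many kW to draw from the battery in this interval.
    if demand_kw <= PEAK_THRESHOLD_KW:
        return 0.0                               # no peak, no discharge
    usable_kwh = max(battery_soc_kwh - UPS_RESERVE_KWH, 0.0)
    if usable_kwh <= 0.0:
        return 0.0                               # protect the reliability role first
    return min(demand_kw - PEAK_THRESHOLD_KW, max_discharge_kw)

print(dispatch(demand_kw=1_050.0, battery_soc_kwh=180.0))  # shave 150 kW off the peak
print(dispatch(demand_kw=1_050.0, battery_soc_kwh=45.0))   # below reserve: 0 kW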
Also, folks used to think that the longer the battery time, the better off they were because
it gave more time to react to issues. Now, folks know what they are doing; they are going
with runtimes that are tuned to their operations team’s capabilities. So, if my operations
team can do a hot swap over an IT application -- either to a backup critical space
application or to a redundant data center -- then all of a sudden, I don’t need 5 to 12
minutes of runtime, I just need the bridge time. I might only need 60 to 120 seconds.
Now, if I can have these battery times tuned to the operations’ capabilities -- and I can
use the batteries more often or in higher temperature applications -- then I can really
start to impact my TCO and make it very, very cost-effective.
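To illustrate the sizing logic, here is a minimal sketch with hypothetical load and failover times; real sizing would also account for battery derating, aging, and end-of-discharge limits.

# Hypothetical bridge-time sizing: energy the battery must supply until the
# operations team completes a failover, plus a safety margin.
def bridge_energy_kwh(it_load_kw, failover_seconds, margin=1.25):
    hours = failover_seconds / 3600.0
    return it_load_kw * hours * margin

# Traditional sizing: 12 minutes of runtime for a 500 kW load.
print(round(bridge_energy_kwh(500.0, 12 * 60), 1))   # 125.0 kWh
# An ops team that can fail over in 90 seconds needs far less stored energy.
print(round(bridge_energy_kwh(500.0, 90), 1))        # 15.6 kWh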
Gardner: It’s interesting; there is almost a power analog to hybrid computing. We can
either go to the cloud or the grid, or we can go to on-premises or the battery. Then we
can start to mix and match intelligently. That’s really exciting. How does lessening
dependence on the grid impact issues such as sustainability and conserving energy?
Sustainability surges forward
Panfil: We are having such conversations with our key accounts virtually every day.
What they are saying is, “I am eventually not going to make smoke and steam. I want to
limit the number of times my system goes on a generator. So, I might put in more
batteries, more LIBs or TPPL batteries, in certain applications because if my TCO is half
the amount of the old way, I could potentially put in twice as much, and have the same
cost basis and get that economic benefit.”
And so from a sustainability perspective, they are saying, “Okay, I might need at some
point in the useful life of that critical space to not draw what I think I need to draw from
my utility. I can limit the amount of power I draw from that utility.”
This is not a criticism, I love all of you out there in data center design, but most data
centers are designed for peak usage. What these changes allow them to do is design
more for the norm of the requirements. That means they can put in less infrastructure
and potentially less battery capacity. They can right-size their generators; same thing on
the cooling side, right-sizing the cooling to what they need and not for the extremes of
what that data center is going to see.
From a sustainability perspective, we used to talk
about the glass as half-full or half-empty. Now,
we say there is too much of a glass. Let’s right-
size the glass itself, and then all of the other
things you have to do in support of that
infrastructure are reduced.
Madara: As we look at the edge applications, many will not have backup generators. We
will have alternate energy sources, and we will probably be taking more hits to the
batteries. Is the LIB the better solution for that?
Panfil: Yes, Steve, it sure is. We will see customers with an expectation of sustainability,
a path to an energy source that is not fossil fuel-based. That could be a renewable
energy source. We might not be able to deploy that today, but they can now deploy what
I call foundational technologies that allow them to take advantage of it. If I can have a
LIB, for example, that stores excess energy and allows me to absorb energy when I’m
creating more than I need -- then I can consume that energy on the other side. It’s better
for everybody.
Gardner: We are entering an era where we have the agility to optimize utilization and
reduce our total costs. The thing is that it varies from region to region. There are some
areas where compliance is a top requirement. There are others where energy issues are
a top requirement because of cost.
What’s going on in terms of global cross-pollination? Are we seeing different markets
react to their power and thermal needs in different ways? How can we learn from that?
Global differences, normalized
Madara: If you look at the size of data centers around the world, the data centers in the
U.S. are generally much larger than in Europe. And what’s in Europe is much larger than
what we have in other developed countries. So, there are a couple of things, as you
mentioned, energy availability, cost of energy, the size of the market and the users that it
serves. We may be looking at more edge data centers in underserved markets,
particularly in developing countries.
So, you are going to see the size of the data centers and the technology used differ to
better fit the needs of specific markets and applications. Across the globe,
certain regions will have different requirements with regard to security and sustainability.
Even though we have these potential
differences, we can meet the end-user needs
to right-size the IT resources in that region.
We are all more common than we are different
in many respects. We all have needs for
security, we all have needs for efficiency, it
may just be to different degrees.
Panfil: There are different regional agency requirements, different governmental
regulations that companies have to comply with. And so what we find, Dana, is that what
our customers are trying to do is normalize their designs. I won’t say they are
standardizing their design because standardization says I am going to deploy exactly the
same way everywhere in the world. I am a fan of Kit Kats, and Kit Kats are not the same
globally; they vary by region. The same is true for data centers.
So, when you look at how the customers are trying to deal with the regional and agency
differences that they have to live with, what they find themselves doing is trying to
normalize their designs as much as they possibly can globally, realizing that they might
not be able to use exactly the same power configuration or exactly the same thermal
configuration. But we also see pockets where different technologies are moving to the
forefront. For example, China has data centers running at high-voltage DC, 240 volts DC,
while we have always had 48-volt DC IT applications in the Americas and in Europe.
Customers are looking at three things -- speed, speed, and speed.
And so when we look at the application, for example, of DC, there used to be a debate,
is it AC or DC? Well, it’s not an “or” it’s an “and.” Most of the customers we talk to, for
example, in Asia are deploying high-voltage DC and have some form of hybrid AC plus
DC deployment. They are doing it so that they can speed their applications deployments.
In the Americas, the Open Compute Project (OCP) deploys either 12 or 48 volts to the
rack. I look at it very simply. We have been seeing a move from 2N architecture to N+1
architecture in the power world for a decade; this is nothing more than adopting the
N+1 architecture at the rack level rather than the 2N architecture at the rack level.
And so what we see is when folks are trying to, number one, increase the speed;
number two, increase their utilization; number three, lower their total cost, they are going
to deploy infrastructures that are most advantageous for either the IT appliances that
they are deploying or for the IT applications that they are running, and it’s not the same
for everybody, right Steve?
You and I have been around the planet
way too many times, you are a million
miler, so am I. It’s amazing how a city
might be completely different in a
different time zone, but once you walk
into that data center, you see how very
consistent they have gotten, even
though they have done it completely
independently from anybody else.
Madara: Correct!
Consistency lowers costs and risks
Gardner: A lot of what we have talked about boils down to a need to preserve speed-to-
value while managing total cost of ownership. What is there about these multiple trends
that people can consider when it comes to getting the right balance, the right equilibrium,
between TCO and that all important speed-to-value?
Madara: Everybody strives to drive cost down. The more you can drive the cost down of
the infrastructure, the more you can do to develop more edge applications.
I think we are seeing a very large rate of change of driving cost down. Yet we still have a
lot of stranded capacity out there in the marketplace. And people are making decisions
to take that down without impacting risk, but I think they can do it faster.
Peter mentioned standardization. Standardization helps drive speed, whether it’s
normalization or similarity. What allows people to move fast is to repeat what they are
doing instead of snowflake data centers, where every new one is different.
Repeating allows you to build a supply
base ecosystem where everybody has
the same goal, knows what to do, and
can be partners in driving out cost and in
driving speed. Those are some of the key
elements as we go forward.
Gardner: Peter, when we look at that standardization, you also allow for more seamless
communication from core to cloud to edge. Why is that important, and how can we better
add intelligence and seamless communication among and between all these different
distributed data centers?
Panfil: When we normalize designs globally, we take a look at the regional differences,
sort out what the regional differences have to be, and then put in a proof-of-concept
deployment. And out of that comes a consistent method of procedure.
When we talk about managing the data center effectively and efficiently, first of all, you
have to know what you have. And second, you have to know what it’s doing. And so, we
are seeing more folks normalizing their designs and getting consistency. They can then
start looking at how much of their available capacity from a design perspective they are
actually using, both on a normal basis and on a peak basis, and then determine how
much of that they are willing to use.
We have some customers who are very risk-averse. They stay in the 2N world, which is
a 50 percent maximum utilization. We applaud them for it because they are not going to
miss a transaction.
There are others who will say, “I can live with the availability that an N+1 architecture
gives me. I know I am going to have to be prepared for more failures. I am going to have
to figure out how to mitigate those failures.”
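A small sketch of the arithmetic behind those two postures, assuming equal-sized power modules; the module count is hypothetical.

# Maximum safe utilization of installed power capacity under two redundancy
# schemes, assuming equal-sized modules. Module counts are illustrative.
def max_utilization_2n():
    # Fully duplicated path: only half the installed capacity can ever be used,
    # so a failure of either side still carries the full load.
    return 0.5

def max_utilization_n_plus_1(n_modules_needed):
    # N modules carry the load and one spare is installed; utilization of the
    # installed (N + 1) modules can safely approach N / (N + 1).
    return n_modules_needed / (n_modules_needed + 1)

print(f"2N:          {max_utilization_2n():.0%}")           # 50% -- the risk-averse case
print(f"N+1 (N = 4): {max_utilization_n_plus_1(4):.0%}")    # 80% of installed capacity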
So they are working constantly at figuring out how to monitor what they have and figure
out what the equipment is doing, and how they can best optimize the performance. We
talked earlier about battery runtimes, for example. Sometimes they might get short or
sometimes they might be long.
As these companies get into this step and repeat function, they are going to get
consistency of their methods of procedure. They’re going to get consistency of how their
operations teams run their physical infrastructure. They are going to think about running
their equipment in ways that are nontraditional today but will become the norm in the next
generation of data centers. And then they are going to look at us and say, “Okay, now
that I have normalized my design, can I use rapid deployment configuration? Can I put it
on a skid, in a container? Can I drop it in place as the complete data center?”
Well, we build it one piece of equipment at a time and stitch it all together. The question
you asked about monitoring is interesting, because we talked to a major company just
last month. Steve and I were visiting them at their site. And they said, “You know
what? We spend an awful lot of time figuring out how our building management system
and our data exchange happens at the site. Could Vertiv do some of that in the factory?
Could you configure our data acquisition systems? Could you test them there in the
factory? Could we know that when the stuff shows up on site that it’s doing the things
that it’s supposed to be doing instead of us playing hunt and peck to figure out what the
issues are?”
We said, “Of course.” So we are adding that capability now into our factory testing
environment. What we see is a move up the evolutionary scale. Instead of buying
separate boxes, we are seeing them buying solutions -- and those solutions include both
monitoring and controls.
Steve didn’t even get a chance to mention the industry-leading Vertiv Liebert® iCOM™
control for thermal. These controls and monitoring systems allow them to increase their
utilization rates because they know what they have and what it’s doing.
Gardner: It certainly seems to me, with all that we have said today, that the data center
status quo just can’t stand. Change and improvement are inevitable. Let’s close out with
your thoughts on why people shouldn’t be standing still; why it’s just not acceptable.
Innovation is inevitable
Madara: At the end of the day, the IT world is changing rapidly. Whether in the cloud or
down at the edge, the infrastructure needs to adjust to those needs. They need to be
able to cut enough out of the cost structure. There is always a demand to drive cost
down.
If we don’t change with the world around us, if we don’t meet the requirements of our
customers, things aren’t going to work out – and somebody else is going to take it and
go for it.
Panfil: Remember, it’s not the big that eats the
small, it’s the fast that eats the slow.
Madara: Yes, right.
Panfil: And so, what I have been telling folks is, you got to go. The technology is there.
The technology is there for you to cut your cost, improve your speed, and increase
utilization. Let’s do it. Otherwise, somebody else is going to do it for you.
Gardner: I’m afraid we’ll have to leave it there. We have been exploring the forces
shaping data center decisions and how that’s extending compute resources to new
places with the challenging goals of speed, agility, and efficiency.
And we have learned how enterprises and service providers alike are seeking new
balance between the need for low latency and optimal utilization of workload placement.
So please join me in thanking our guests, Peter Panfil, Vice President of Global Power at
Vertiv. Thank you so much, Peter.
Panfil: Thanks for having me. I appreciate it.
Gardner: And we have also been joined by Steve Madara, Vice President of Global
Thermal at Vertiv. Thanks so much, Steve.
Madara: You’re welcome, Dana.
Gardner: And a big thank you as well to our audience for joining us for this sponsored
BriefingsDirect data centers strategies interview. I’m Dana Gardner, Principal Analyst at
Interarbor Solutions, your host for this ongoing series of Vertiv-sponsored discussions.
Thanks again for listening. Please pass this along to your community, and do come back
next time.
Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Vertiv.
A discussion with two leading IT and critical infrastructure executives on how the state of data
centers in 2020 demands better speed, agility, and efficiency from IT resources wherever they
reside. Copyright Interarbor Solutions, LLC, 2005-2020. All rights reserved.
You may also be interested in:
• How smart IT infrastructure has evolved into the era of data centers-as-a-service
• The next line of defense—How new security leverages virtualization to counter
sophisticated threats
• Expert Panel Explores the New Reality for Cloud Security and Trusted Mobile Apps
Delivery
• How IT innovators turn digital disruption into a business productivity force multiplier
• Cerner’s lifesaving sepsis control solution shows the potential of bringing more AI-
enabled IoT to the healthcare edge
• How containers are the new basic currency for pay as you go hybrid IT
• How rapid machine learning at the racing edge accelerates Venturi Formula E Team to
top-efficiency wins
• Data-driven and intelligent healthcare processes improve patient outcomes while making
the IT increasingly invisible
• Citrix and HPE team to bring simplicity to the hybrid core-cloud-edge architecture
 
Google AI Hackathon: LLM based Evaluator for RAG
Google AI Hackathon: LLM based Evaluator for RAGGoogle AI Hackathon: LLM based Evaluator for RAG
Google AI Hackathon: LLM based Evaluator for RAGSujit Pal
 

Último (20)

Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptx
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | DelhiFULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptx
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
Google AI Hackathon: LLM based Evaluator for RAG
Google AI Hackathon: LLM based Evaluator for RAGGoogle AI Hackathon: LLM based Evaluator for RAG
Google AI Hackathon: LLM based Evaluator for RAG
 

This move to the edge is also going to provide IT resources in a lot of underserved markets that don't yet have pervasive compute, especially in emerging countries.

Gardner: Why is speed so important? We have been talking about this now for years, but it seems like the need for speed to market and speed to value continues to ramp up. What's driving that?

Panfil: There is more than one kind of speed. There is speed of response of the application, that's something that all of us demand -- speed of response of the applications. I have to have low latency in the transactions I am performing with my data or with my applications. So there is the speed of the actual data being transmitted.

There is also speed of deployment. When Steve talked earlier about centralized cloud deployments in these core data centers, your data might be going over a significant distance, hopping along the way. Well, if you can't live with that latency that gets inserted, then you have to take the IT application and put it closer to the source and consumer of the data. So there is a speed of deployment, from core to edge, that happens.

And the third type of speed is you have to have low first cost, high asset utilization, and rapid scalability. So that's a speed of infrastructure adaptation to what the demands for the IT applications are.

So when we mean speed, I often say it's speed, speed, and speed. First, it's the data IT. Once I have data IT speed, how did I achieve that? I did it by deploying fast, in the scale needed for the applications, and lastly at a cost and reliability that makes it tolerable for the businesses.

Gardner: So I guess it's speed-cubed, right?

Panfil: At least, speed-cubed. Steve, if we had a nickel for every time one of our customers said “speed,” we wouldn't have to work anymore. They are consumed with the different speeds that they have to deal with -- and it's really the demands of their customers.

Gardner: Vertiv for years has been looking at the data center of the future and making some predictions around what to expect. You have been rather prescient. To continue, you have now identified several areas for 2020, too. Let's go through those trends. Steve, Vertiv predicts that “hybrid architectures will go mainstream.” Why did you identify that, and what do you mean?
The future is hybrid

Madara: If we look at the history of going from centralized to decentralized, and going to colocation and cloud applications, it shows the ongoing evolution of Internet of Things (IoT) sensors, 5G networks, smart cities, autonomous cars, and how more and more of that data is generated and will need to be processed locally. A lot of that is from machine-to-machine applications.

So when we now talk about hybrid, we have to get very, very close to the source, as far as the processing is concerned. That's going to be a large-scale evolution that's going to drive the need for hybrid applications. There is going to be processing at the edge as well as centralized applications -- whether it's in a cloud or hosted in colocation-based applications.

Panfil: Steve, you and I both came up through the ranks. I remember when the data closet down the hall was basically a communications matrix. Its intent was to get communications from wherever we were to wherever our core data center was.

Well, the cloud is not going away. Number two, enterprise IT is not going away. What the enterprise is saying is, “Okay, I am going to take my secret sauce and I am going to put it in an edge data center. I am going to put the compute power as close to my consumer of that data and that application as I possibly can. And then I am going to figure out where the rest of it's going to go.”

“If I can live with the latency I get out of a core data center, I am going to stay in the cloud. If I can't, I might even break up my enterprise data center into small or micro data centers that give me even better responses.”

Dana, it's interesting, there was a recent wholesale market summary published that said the difference between the smaller and the larger wholesale deals widened. So what that says is the large wholesale deals are getting bigger, the small wholesale deals are getting smaller, and that the enterprise-based demand, in deployments under 600 kilowatts, is focused on low latency and multi-cloud access.

That tells us that our customers, the users of that critical space, are trying to place their IT appliances as close as they can to their customers, eliminating the latency, responding with speed, and then figuring out how to mesh that edge deployment with their core strategy.

Gardner: Our second trend gets back to the speed-cubed notion. I have heard people describe this as a new arms race, because while it might be difficult to differentiate yourself when everyone is using the same public cloud services, you can really differentiate yourself on how well you can conduct yourself at speed. What kinds of capabilities across your technologies will make differentiation around speed work to an advantage as a company?
The need for speed

Panfil: Well, I was with an analyst recently, and I said the new reality is not that the big will eat the small -- it's that the fast will eat the slow. And any advantage that you can get in speed of applications, speed of deployment, deploying those IT assets -- or morphing the data center infrastructure or critical space infrastructure -- helps improve capital efficiency.

What many customers tell us is that they have to shorten the period of time between deciding to spend money on IT assets and the time that those assets start creating revenue. They want help being creative in lowering their first cost, in increasing asset utilization, and in maintaining reliability. If, holy cow, my application goes down, I am out of business.

And then they want to figure out how to manage things like supply chains and forecasting, which is difficult to do in this market, and to help them be as responsive as they can to their customers.

Madara: Forecasting and understanding the new applications -- whether it's artificial intelligence (AI) or 5G -- means the CIOs need to decide where to put those applications, whether in the cloud or at the edge. Technology is changing so fast that nobody can predict far out into the future where I will need that capacity and what type of capacity I will need.

So, it comes down to being able to put that capacity in the place where I need it, right when I need it, and not too far in advance. Again, I don't want to spend the capital, because I may put it in the wrong place. So it's got to be about tying the demand with the supply, and that's what's key as far as the infrastructure.

And the other element I see is technology is changing fast, even on the infrastructure side. For our equipment, we are constantly making improvements every day, making it more efficient, lower cost, and with more capability. And if you put capacity in today that you don't need for a year or two down the road, you are not taking advantage of the latest, greatest technology. So really it's coupling the demand to the actual supply of the infrastructure -- and that's what's key.

Another consideration is that many of these large companies, especially in the colocation market, have their financial structure as a real estate investment trust (REIT).
As a result, they need to tie revenue with expenses tighter and tighter, along with capital spending.

Panfil: That's a good point, Steve. We redesigned our entire large power portfolio at Vertiv specifically to be able to address this demand. In previous generations, for example, the uninterruptible power supply (UPS) was built as a complete UPS. The new generation is built as a power converter, plus an I/O section, plus an interface section that can be rapidly configured to the customer, or, in some cases, put into a vendor-managed inventory program.

This approach allows us to respond to the market and customers quicker. We were forced to change our business model in such a way that we can respond in real time to these kinds of capacity-demand changes.

Madara: And to add to that, we have to put together more and more modules and solutions where we are bundling the equipment to deliver it faster, so that you don't have to do testing on site or assembly on site. Again, we are putting together solutions that help the end-user address the speed of the construction of the infrastructure.

I also think that this ties into the relationship that the person who owns the infrastructure has with their supplier base. Those relationships have to build in, as Peter mentioned earlier, the ability to do stocking of inventory, of having parts available on-site to go fast.

Gardner: In summary so far, we have this need for speed across multiple dimensions. We are looking at more hybrid architectures, up and down the scale -- from edge to core, on-premises to the cloud. And we are also looking at crunching more data and making real-time analytics part of that speed advantage. That means being able to have intelligence brought to bear on our business decisions and making that as fast as possible.

So what's going on now with the analytics efficiency trend? Even if average rack density remains static due to a lack of space, how will such IT developments as high performance computing (HPC) help make this analysis equation work to the business outcome's advantage?

High performance computing in high density pods

Madara: The development of AI applications, machine learning (ML), and what could be called deep learning are evolving. Many applications are requiring these HPC systems. We see this in the areas of defense, gaming, the banking industry, and people doing advanced analytics and tying it to a lot of the sensor data we talked about for manufacturing.
It's not yet widespread, it's not across the whole enterprise or the entire data center, and these are often unique applications. What I hear in large data centers, especially from the banks, is that they will need to put these AI applications up on 30-, 40-, 50- or 60-kW racks -- but they only have three or four of these racks in the whole data center. The end-user will need to decide how to tune or adjust facilities to accommodate these small but growing pods of high-density compute. And if they are in their own facility, if it's an enterprise that has its own data center, they will need to decide how they are going to facilitize for that type of equipment.

A lot of the colocation hosting facilities have customers saying, “Hey, I am going to be bringing in the future a couple of racks that are very high density.” A lot of these multi-tenant data centers are saying, “Oh, how do I provision for these, because my data center was laid out for an average of maybe 8 kW per rack? How do I manage that, especially for data centers that didn't previously have chilled water to provide liquid to the rack?”

We are now seeing a need to provide chilled water cooling that would go to a rear door heat exchanger on the back of the rack. It could be chilled water that would go to a rack for chip cooling applications. And again, it's not the whole data center; it's a small segment of the data center. But it raises questions of how I do that without overkill on the infrastructure needed.
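To put rough numbers on that provisioning question, here is an illustrative back-of-the-envelope sketch. It is not from the discussion and not a Vertiv sizing method; the rack counts, densities, and water temperature rise are all assumed values, chosen only to show the arithmetic a facility planner might walk through when a few high-density pods land in a room designed around an 8 kW-per-rack average.

```python
# Illustrative sketch only -- hypothetical numbers, not Vertiv guidance.
# Estimates the extra power and cooling a few high-density AI racks add to a
# room that was provisioned around a lower per-rack average.

def incremental_pod_load_kw(avg_rack_kw: float,
                            pod_rack_kw: float,
                            pod_rack_count: int) -> float:
    """Extra load (kW) beyond what the 'average' rack budget already covers."""
    return (pod_rack_kw - avg_rack_kw) * pod_rack_count

# Example: a room laid out for 8 kW per rack receives four 50 kW AI racks.
extra_kw = incremental_pod_load_kw(avg_rack_kw=8, pod_rack_kw=50, pod_rack_count=4)
print(f"Incremental load to power and cool: {extra_kw:.0f} kW")  # 168 kW

# Rough chilled-water flow for a rear-door heat exchanger absorbing that heat,
# assuming a 6 degC water temperature rise across the coil (Q = m_dot * c_p * dT).
delta_t_c = 6.0          # assumed water temperature rise, degC
cp_kj_per_kg_c = 4.186   # specific heat of water, kJ/(kg*degC)
flow_kg_per_s = extra_kw / (cp_kj_per_kg_c * delta_t_c)
print(f"Approximate chilled-water flow: {flow_kg_per_s:.1f} kg/s "
      f"(about {flow_kg_per_s * 60:.0f} L/min)")
```

Even under these modest assumptions, four AI racks add roughly as much extra heat as 20 additional average racks, which is why the panel frames the challenge as accommodating small pods without over-building the whole room.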
Gardner: Steve, do you expect those small pods of HPC in the data center to make their way out to the edge when people do more data crunching for the low-latency requirements, where you can't move the data to a data center? Do you expect to have this trend grow more distributed?

Madara: Yes, I expect this will be for more than the enterprise data center and cloud data centers. I think you are going to see analytics applications developed that are going to be out at the edge because of the requirements for latency. When you think about the autonomous car, none of us know what's going to be required there for that high-performance processing, but I would expect there is going to be a need for that down at the edge.

Gardner: Peter, looking at the power side of things, when we look at the batteries that help UPS and systems remain mission-critical regardless of external factors, what's going on with battery technology? How will we be using batteries differently in the modern data center?

Battery-powered savings

Panfil: That's a great question. Battery technology has been evolving at an incredibly fast rate. It's being driven by electric vehicles. That growth is bringing to the market batteries that have a size and weight advantage. You can't put a big, heavy pack of batteries in a car and hope to have it perform well. It also gives a long-life expectation.

So data centers used to have to decide between long-life, high-maintenance wet cells and the shorter-life, high-maintenance, valve-regulated lead-acid (VRLA) batteries. In step with the lithium-ion batteries (LIBs) and thin plate pure lead (TPPL) batteries, what's happened is the total cost of ownership (TCO) has started to become very advantageous for these batteries.

Our sales leadership sent me the most recent TCO comparison between either TPPL or LIBs versus traditional VRLA batteries, and the TCO is a winner for the LIBs and the TPPL batteries. In some cases, over a 10-year period, the TCO is a factor of two lower for LIB and TPPL. Where the cloud generation of data centers was all about lowest first cost, in this edge-to-core mentality of data centers, it's about TCO.

There are other levers that they can start to play with, too. So, for example, they have life cycle and operating temperature variables. That used to be a real limitation. Nobody in the data center wanted their systems to go on batteries. They tried everything they could to not have their systems go on the battery because of the potential for shortening the life of their batteries or causing an outage.

Today we are developing IT systems infrastructure that takes advantage of not only LIBs, but also pure lead batteries that can increase the number of [discharge/recharge] cycles. Once you increase the number of cycles, you can think about deploying smart power configurations. That means using batteries not only in the critical infrastructure for a very short period of time when the power grid utility fails, but also using them in the critical infrastructure to help offset cost.

If I can reduce utility use at peak demand periods, for example, or I can reduce stress on the grid at specified times, then batteries are not only a reliability play -- they are also a revenue-offset play. And so, we're seeing more folks talking to us about how they can apply these new energy storage technologies to change the way they think about using their critical space.

Also, folks used to think that the longer the battery time, the better off they were because it gave more time to react to issues. Now, folks know what they are doing, they are going with runtimes that are tuned to their operations team's capabilities.
So, if my operations team can do a hot swap over an IT application -- either to a backup critical space application or to a redundant data center -- then all of a sudden, I don't need 5 to 12 minutes of runtime, I just need the bridge time. I might only need 60 to 120 seconds.

Now, if I can have these battery times tuned to the operations' capabilities -- and I can use the batteries more often or in higher temperature applications -- then I can really start to impact my TCO and make it very, very cost-effective.
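As a rough, illustrative sketch of the arithmetic behind those two ideas (bridge time tuned to operations, and a TCO comparison across battery chemistries), the snippet below uses entirely hypothetical loads, prices, and service lives; it is not Vertiv data, and the real crossover depends on actual pricing, cycle counts, and operating conditions.

```python
import math

# Illustrative sketch only -- loads, prices, and service lives are assumptions,
# not Vertiv figures. Shows why ops-tuned bridge times and longer-lived
# chemistries change the battery economics described above.

def required_energy_kwh(load_kw: float, runtime_s: float) -> float:
    """Energy a battery string must deliver to bridge a given runtime."""
    return load_kw * runtime_s / 3600.0

LOAD_KW = 500.0  # assumed critical load
for runtime_s in (12 * 60, 5 * 60, 120, 60):
    kwh = required_energy_kwh(LOAD_KW, runtime_s)
    print(f"{runtime_s:>4} s runtime -> {kwh:6.1f} kWh of stored energy")

# Very simple 10-year cost per kWh of installed storage: initial purchase plus
# replacements only (ignores maintenance, cooling, footprint, and salvage).
def ten_year_cost_per_kwh(capex_per_kwh: float, service_life_yr: float) -> float:
    purchases = math.ceil(10 / service_life_yr)  # includes the initial install
    return capex_per_kwh * purchases

vrla_cost = ten_year_cost_per_kwh(capex_per_kwh=300, service_life_yr=4)   # assumed
lib_cost = ten_year_cost_per_kwh(capex_per_kwh=600, service_life_yr=10)  # assumed
print(f"10-year cost per kWh -- VRLA: ${vrla_cost:.0f}, Li-ion: ${lib_cost:.0f}")
```

Under these assumed numbers, trimming runtime from 12 minutes to 120 seconds cuts the required stored energy by a factor of six, and the lithium-ion string ends up cheaper over 10 years despite double the first cost, which is the general direction Panfil describes.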
Gardner: It's interesting; there is almost a power analog to hybrid computing. We can either go to the cloud or the grid, or we can go to on-premises or the battery. Then we can start to mix and match intelligently. That's really exciting. How does lessening dependence on the grid impact issues such as sustainability and conserving energy?

Sustainability surges forward

Panfil: We are having such conversations with our key accounts virtually every day. What they are saying is, “I am eventually not going to make smoke and steam. I want to limit the number of times my system goes on a generator. So, I might put in more batteries, more LIBs or TPPL batteries, in certain applications because if my TCO is half the amount of the old way, I could potentially put in twice as much, and have the same cost basis and get that economic benefit.”

And so from a sustainability perspective, they are saying, “Okay, I might need at some point in the useful life of that critical space to not draw what I think I need to draw from my utility. I can limit the amount of power I draw from that utility.”

This is not a criticism, I love all of you out there in data center design, but most of them are designed for peak usage. So what these changes allow them to do is to design more for the norm of the requirements. That means they can put in less infrastructure, the potential to put in less battery. They have the potential to right-size their generators; same thing on the cooling side, to right-size the cooling to what they need and not for the extremes of what that data center is going to see.

From a sustainability perspective, we used to talk about the glass as half-full or half-empty. Now, we say there is too much of a glass. Let's right-size the glass itself, and then all of the other things you have to do in support of that infrastructure are reduced.

Madara: As we look at the edge applications, many will not have backup generators. We will have alternate energy sources, and we will probably be taking more hits to the batteries. Is the LIB the better solution for that?

Panfil: Yes, Steve, it sure is. We will see customers with an expectation of sustainability, a path to an energy source that is not fossil fuel-based. That could be a renewable energy source. We might not be able to deploy that today, but they can now deploy what I call foundational technologies that allow them to take advantage of it. If I can have a LIB, for example, that stores excess energy and allows me to absorb energy when I'm creating more than I need -- then I can consume that energy on the other side. It's better for everybody.

Gardner: We are entering an era where we have the agility to optimize utilization and reduce our total costs. The thing is that it varies from region to region. There are some areas where compliance is a top requirement. There are others where energy issues are a top requirement because of cost. What's going on in terms of global cross-pollination? Are we seeing different markets react to their power and thermal needs in different ways? How can we learn from that?

Global differences, normalized

Madara: If you look at the size of data centers around the world, the data centers in the U.S. are generally much larger than in Europe. And what's in Europe is much larger than what we have in other developed countries. So, there are a couple of things, as you mentioned: energy availability, cost of energy, the size of the market and the users that it serves.

We may be looking at more edge data centers in very underserved markets that have been in underdeveloped countries. So, you are going to see the size of the data center and the technology used potentially differ to better fit the needs of the specific markets and applications.

Across the globe, certain regions will have different requirements with regard to security and sustainability. Even though we have these potential differences, we can meet the end-user needs to right-size the IT resources in that region. We are all more common than we are different in many respects. We all have needs for security, we all have needs for efficiency, it may just be to different degrees.

Panfil: There are different regional agency requirements, different governmental regulations that companies have to comply with. And so what we find, Dana, is that what our customers are trying to do is normalize their designs. I won't say they are standardizing their design, because standardization says I am going to deploy exactly the same way everywhere in the world. I am a fan of Kit Kats, and Kit Kats are not the same globally; they vary by region. The same is true for data centers.

So, when you look at how the customers are trying to deal with the regional and agency differences that they have to live with, what they find themselves doing is trying to normalize their designs as much as they possibly can globally, realizing that they might not be able to use exactly the same power configuration or exactly the same thermal configuration.
But we also see pockets where different technologies are moving to the forefront. For example, China has data centers that are running at high-voltage DC, 240 volts DC, while we have always had 48-volt DC IT applications in the Americas and in Europe.

Customers are looking at three things -- speed, speed, and speed. And so when we look at the application, for example, of DC, there used to be a debate: is it AC or DC? Well, it's not an “or,” it's an “and.” Most of the customers we talk to, for example, in Asia are deploying high-voltage DC and have some form of hybrid AC plus DC deployment. They are doing it so that they can speed their applications deployments.

In the Americas, the Open Compute Project (OCP) deploys either 12 or 48 volts to the rack. I look at it very simply. We have been seeing a move from 2N architecture to N+1 architecture in the power world for a decade; this is nothing more than adopting the N+1 architecture at the rack level versus the 2N architecture at the rack level.

And so what we see is when folks are trying to, number one, increase the speed; number two, increase their utilization; number three, lower their total cost, they are going to deploy infrastructures that are most advantageous for either the IT appliances that they are deploying or for the IT applications that they are running -- and it's not the same for everybody, right, Steve?

You and I have been around the planet way too many times, you are a million miler, so am I. It's amazing how a city might be completely different in a different time zone, but once you walk into that data center, you see how very consistent they have gotten, even though they have done it completely independently from anybody else.

Madara: Correct!

Consistency lowers costs and risks

Gardner: A lot of what we have talked about boils down to a need to preserve speed-to-value while managing total cost of utilization. What is there about these multiple trends that people can consider when it comes to getting the right balance, the right equilibrium, between TCO and that all-important speed-to-value?

Madara: Everybody strives to drive cost down. The more you can drive down the cost of the infrastructure, the more you can do to develop more edge applications. I think we are seeing a very large rate of change of driving cost down. Yet we still have a lot of stranded capacity out there in the marketplace. And people are making decisions to take that down without impacting risk, but I think they can do it faster.
Peter mentioned standardization. Standardization helps drive speed, whether it's normalization or similarity. What allows people to move fast is to repeat what they are doing instead of snowflake data centers, where every new one is different. Repeating allows you to build a supply base ecosystem where everybody has the same goal, knows what to do, and can be partners in driving out cost and in driving speed. Those are some of the key elements as we go forward.

Gardner: Peter, when we look at that standardization, you also allow for more seamless communication from core to cloud to edge. Why is that important, and how can we better add intelligence and seamless communication among and between all these different distributed data centers?

Panfil: When we normalize designs globally, we take a look at the regional differences, sort out what the regional differences have to be, and then put in a proof-of-concept deployment. And out of that comes a consistent method of procedure.

When we talk about managing the data center effectively and efficiently, first of all, you have to know what you have. And second, you have to know what it's doing. And so, we are seeing more folks normalizing their designs and getting consistency. They can then start looking at how much of their available capacity from a design perspective they are actually using, both on a normal basis and on a peak basis, and then they can determine how much of that they are willing to use.

We have some customers who are very risk-averse. They stay in the 2N world, which is a 50 percent maximum utilization. We applaud them for it because they are not going to miss a transaction. There are others who will say, “I can live with the availability that an N+1 architecture gives me. I know I am going to have to be prepared for more failures. I am going to have to figure out how to mitigate those failures.” So they are working constantly at figuring out how to monitor what they have and figure out what the equipment is doing, and how they can best optimize the performance.
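The 50 percent figure follows directly from the redundancy arithmetic. As a minimal illustrative sketch (hypothetical module counts, not a Vertiv sizing tool), the usable share of installed capacity under 2N and N+1 schemes works out as follows:

```python
# Illustrative sketch only -- hypothetical module counts, not a sizing tool.
# Maximum design utilization of installed power capacity under the redundancy
# schemes discussed above.

def max_utilization(modules_needed: int, scheme: str) -> float:
    """Fraction of installed capacity usable while keeping the redundancy."""
    if scheme == "2N":        # two full, independent paths
        installed = 2 * modules_needed
    elif scheme == "N+1":     # one spare module shared across the system
        installed = modules_needed + 1
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return modules_needed / installed

# Example: a critical load that needs four UPS modules.
for scheme in ("2N", "N+1"):
    print(f"{scheme:>4}: {max_utilization(4, scheme):.0%} of installed capacity usable")
```

With four modules of load, 2N caps design utilization at 50 percent while N+1 allows 80 percent, which is the utilization-versus-failure-tolerance trade-off being weighed here.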
We talked earlier about battery runtimes, for example. Sometimes they might get short or sometimes they might be long. As these companies get into this step-and-repeat function, they are going to get consistency of their methods of procedure. They're going to get consistency of how their operations teams run their physical infrastructure. They are going to think about running their equipment in ways that are nontraditional today but will become the norm in the next generation of data centers.

And then they are going to look at us and say, “Okay, now that I have normalized my design, can I use rapid deployment configuration? Can I put it on a skid, in a container? Can I drop it in place as the complete data center?” Well, we build it one piece of equipment at a time and stitch it all together.

The question that you asked about monitoring, it's interesting because we talked to a major company just last month. Steve and I were visiting them at their site. And they said, “You know what? We spend an awful lot of time figuring out how our building management system and our data exchange happens at the site. Could Vertiv do some of that in the factory? Could you configure our data acquisition systems? Could you test them there in the factory? Could we know that when the stuff shows up on site that it's doing the things that it's supposed to be doing, instead of us playing hunt and peck to figure out what the issues are?”

We said, “Of course.” So we are adding that capability now into our factory testing environment. What we see is a move up the evolutionary scale. Instead of buying separate boxes, we are seeing them buying solutions -- and those solutions include both monitoring and controls. Steve didn't even get a chance to mention the industry-leading Vertiv Liebert® iCOM™ control for thermal. These controls and monitoring systems allow them to increase their utilization rates because they know what they have and what it's doing.

Gardner: It certainly seems to me, with all that we have said today, that the data center status quo just can't stand. Change and improvement are inevitable. Let's close out with your thoughts on why people shouldn't be standing still; why it's just not acceptable.

Innovation is inevitable

Madara: At the end of the day, the IT world is changing rapidly every day. Whether in the cloud or down at the edge, the IT world needs to adjust to those needs. They need to be able to cut out enough of the cost structure. There is always a demand to drive cost down. If we don't change with the world around us, if we don't meet the requirements of our customers, things aren't going to work out -- and somebody else is going to take it and go for it.

Panfil: Remember, it's not the big that eats the small, it's the fast that eats the slow.

Madara: Yes, right.

Panfil: And so, what I have been telling folks is, you've got to go. The technology is there. The technology is there for you to cut your cost, improve your speed, and increase utilization. Let's do it. Otherwise, somebody else is going to do it for you.
Gardner: I'm afraid we'll have to leave it there. We have been exploring the forces shaping data center decisions and how that's extending compute resources to new places with the challenging goals of speed, agility, and efficiency. And we have learned how enterprises and service providers alike are seeking new balance between the need for low latency and optimal utilization of workload placement.

So please join me in thanking our guests, Peter Panfil, Vice President of Global Power at Vertiv. Thank you so much, Peter.

Panfil: Thanks for having me. I appreciate it.

Gardner: And we have also been joined by Steve Madara, Vice President of Global Thermal at Vertiv. Thanks so much, Steve.

Madara: You're welcome, Dana.

Gardner: And a big thank you as well to our audience for joining us for this sponsored BriefingsDirect data center strategies interview. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Vertiv-sponsored discussions. Thanks again for listening. Please pass this along to your community, and do come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Vertiv.

A discussion with two leading IT and critical infrastructure executives on how the state of data centers in 2020 demands better speed, agility, and efficiency from IT resources wherever they reside.

Copyright Interarbor Solutions, LLC, 2005-2020. All rights reserved.

You may also be interested in:

• How smart IT infrastructure has evolved into the era of data centers-as-a-service
• The next line of defense—How new security leverages virtualization to counter sophisticated threats
• Expert Panel Explores the New Reality for Cloud Security and Trusted Mobile Apps Delivery
• How IT innovators turn digital disruption into a business productivity force multiplier
• Cerner's lifesaving sepsis control solution shows the potential of bringing more AI-enabled IoT to the healthcare edge
• How containers are the new basic currency for pay as you go hybrid IT
• How rapid machine learning at the racing edge accelerates Venturi Formula E Team to top-efficiency wins
• Data-driven and intelligent healthcare processes improve patient outcomes while making the IT increasingly invisible
• Citrix and HPE team to bring simplicity to the hybrid core-cloud-edge architecture