3. Virtualization to Cloud Maturity Model: Separate → Consolidate → Aggregate → Automate → Liberate (server consolidation; test and development; capacity on demand; enterprise computing clouds; self-managing datacenters, on and off premise). Enterprise objective: an IT services on-demand platform, private and public. [Diagram: “You Are Here” markers at each stage, with stacks of virtualized OS/application instances.]
4. VIRTUALIZATION is PERVASIVE: virtual private network, virtual LAN (VLAN), storage virtualization, OS virtualization, network virtualization, service virtualization, virtual network access.
7. Adaptability: events trigger reactions (attacks, policies, demand, movement, changes). Reminder: please fill out your evaluation form.
8. The Need for Dynamic Infrastructure: server virtualization, legacy applications and infrastructure, Software as a Service (SaaS), social computing, multi-protocols, cloud services (internal/external), mobility. People and budgets sit on one side of a growing services infrastructure gap; network infrastructure and resources must serve employees, partners, customers, branch offices, and outsourcers. Adapted from Seeking Alpha, October 2008, Greg Ness.
9. Shift the Burden to Bridge the Gap. [Diagram: a balance tipping the load from people to technology.]
10. The Evolution of the Web Application. [Timeline: 1993, 1998, 2003, 2008, moving from connected to collaborative.]
11. The Evolution of Infrastructure. [Timeline: 1993, 1998, 2003, 2008, moving from connected to merely more connected; stacks of web, application, and database VMs.]
14. Adaptation. dy·nam·ic: characterized by continuous change, activity, or progress. in·fra·struc·ture: the basic, underlying framework or features of a system or organization.
21. What’s Needed. Dynamic Services Model: reusable services that understand context and can provide control regardless of application, virtualization, user, device, platform, or location. [Diagram: users connecting through the services model to physical and virtual resources across multi-site datacenters and private/public clouds.]
22. Thank You! Contact Info: Lori MacVittie [email_address] @lmacvittie http://devcentral.f5.com/weblogs/macvittie/
Editor’s Notes
The “new network” needs to support elasticity not just of applications but of other network components. It must therefore be elastic itself. But if you stretch an elastic band too far, what happens? It breaks, snaps back, and is really, really hard to fix.
Enable the means by which a dynamic infrastructure supporting multi-component/tier applications can be discovered, rapidly provisioned, scaled, secured, managed, modified, and migrated across disparate locations.
It’s an exciting time for IT and business. The promise of virtualization and cloud computing can greatly improve efficiency and the speed of bringing applications and services to market. The problem is that at each step along the virtualization-to-cloud maturity model, the enterprise is often asked to deploy technology that doesn’t match its existing infrastructure, is dissimilar to it, or is a poor derivative of something it already owns. Creating different delivery models depending on where the applications, users, or resources reside, or on whether they’re virtualized, is the opposite of agility and flexibility. Many organizations today are at the consolidate phase. They want to take virtualization to the next level, to aggregate and automate and give users (internal constituents) the ability to self-service provision applications in real time, but they know that current network management and operational processes aren’t going to scale well because they rely too much on people. Moving toward a truly “cloud” based infrastructure necessarily introduces a wide range of complexity that needs to be managed.
Every tier, every data center, everywhere. This is the foundation that enables cloud computing, but is also the root cause of many of the obstacles facing implementers today.
Virtualization and cloud computing ultimately require new kinds of interoperability to reduce the burdens imposed by these technologies. What’s really problematic is that this can occur many times over a short period of time, and in a public cloud environment can be occurring for thousands of customers at the same time. Massive amounts of information about policies, locations, and infrastructure must be shared over the same networks, at the same time. Devices must be updated, configurations changed, policies applied, and it has to happen in the right order.
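The ordering requirement above can be sketched concretely. A minimal example in Python, with purely illustrative step names (no real product API is implied): the configuration changes for a new instance are applied in dependency order, so nothing is put into service before the pieces it depends on are ready.

```python
# Hypothetical sketch of "it has to happen in the right order":
# express each configuration step's prerequisites, then apply the
# steps in a safe (topological) order. Step names are illustrative.
from graphlib import TopologicalSorter

def provision_order(dependencies):
    """Return an order in which steps can safely be applied.

    dependencies maps each step to the set of steps that must finish first.
    """
    return list(TopologicalSorter(dependencies).static_order())

# A new VM can't take traffic until DNS, firewall policy, and the
# load balancer all know about it -- and those have their own ordering.
steps = {
    "assign_ip": set(),
    "update_dns": {"assign_ip"},
    "apply_firewall_policy": {"assign_ip"},
    "add_to_load_balancer": {"update_dns", "apply_firewall_policy"},
}
order = provision_order(steps)
print(order)
```

The exact position of independent steps may vary, but the constraints always hold: the IP assignment comes first and the load-balancer change comes last.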
And when we decommission services/exit a cloud, we have to reverse the process. Again, simultaneously and amidst other resources doing the same thing.
It’s not just about starting up and shutting down. A dynamic infrastructure must be able to react on-demand to events and changes in the environment automatically. As policies governing the delivery of applications change, as the applications change themselves, when attacks and demands threaten availability and security, the underlying infrastructure must be capable of adapting and responding with as little manual intervention as possible.
The problem is that there’s this gap. People and budgets are flat even as the demands on applications and networks are increasing.
How do we bridge that gap given the current restrictions? With technology. Shift the burden to the infrastructure – to the network. To understand how to shift the burden we need to look at web application evolution, the reasons for which will become clear very shortly.
The network didn’t get more collaborative as applications evolved, it just got more connected. There’s still very little collaboration and communication that goes on between devices/components in the network. There is a lot of communication between the components and people, which is part of what causes the gap and makes moving forward a difficult proposition.
What happened is that application developers recognized that applications couldn't adapt fast enough to the rapid changes occurring and addressed that gap with technology. Strategic points of control began to emerge that allowed for automation of sharing and feedback across web sites and, ultimately, people.
That's what we're missing with current infrastructure implementations. It's not able to adapt because it isn't receiving actionable data at the time it needs it. For example, consider what happens when an application spins up in a virtual machine. An IP address is assigned, either dynamically or statically. Then what? Nothing. The rest of the infrastructure is likely unaware of the event. Unless something was monitoring the network traffic and watching for the instance to appear, every other piece of the infrastructure has to be manually informed of the new instance. Sure, that may be via scripts or a button in an administrative console, but it could have, and should have, happened automatically. And what about when users are actually accessing the application? You have service level agreements around applications, and they can include variables like the user, the time of day, and so on. How does the entire infrastructure, from the routers that decide which ISP link to use to the application delivery infrastructure that has to decide which server to send the request to and whether or not to apply compression, decide what to do *for that user and each individual request* if it isn't aware of the conditions on the network, on the servers, who the user is, and what application is being used?
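The "then what? Nothing." problem is essentially the absence of a notification channel. A minimal publish/subscribe sketch, with entirely hypothetical names (not any vendor's API), shows the alternative: components register interest in an event, and the new instance is announced once instead of each device being informed by hand.

```python
# Minimal pub/sub sketch of sharing actionable data: when a VM comes
# up, interested components are told automatically. All names here
# are illustrative, not a real infrastructure API.
class EventBus:
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, event, handler):
        self._subscribers.setdefault(event, []).append(handler)

    def publish(self, event, **details):
        # Fan the event out to every registered handler.
        for handler in self._subscribers.get(event, []):
            handler(**details)

bus = EventBus()
notified = []

# Each component registers interest instead of waiting to be told.
bus.subscribe("vm.started", lambda ip, app: notified.append(("dns", ip)))
bus.subscribe("vm.started", lambda ip, app: notified.append(("load_balancer", ip)))

# The hypervisor's management layer announces the new instance once.
bus.publish("vm.started", ip="10.0.0.12", app="web")
print(notified)
```

One announcement, every subscriber updated: that is the automatic behavior the manual scripts-and-consoles approach lacks.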
We need a dynamic infrastructure. But that term is used by a lot of folks today and is one square in the buzzword bingo game. What does it really mean?
A dynamic infrastructure is a new intelligent fabric that can react based on the stream of interactions between users and resources without impacting performance or availability. It provides an important new vantage point to see and report on these interactions. It understands a vast array of variables that put the interactions in context – user profile, location, interface device, application, network, file meta-data, etc. And just as importantly, it *shares* that information, providing actionable data to the rest of the infrastructure that must also take the variables it knows about into consideration. Like "collaborative software" it uses APIs, alerts, notifications, and event-based triggering mechanisms to share information across the entire infrastructure - from layer 2 all the way up to layer 7. When necessary, it can collaborate with layer 8 to instigate human intervention. Because we all know that sometimes it's going to require a human being to deal with a situation. The goal is to get to the point where human capital is leveraged effectively, where the mundane operational details are handled by the infrastructure and people only intervene when necessary.
Visibility is one of the key components of a dynamic infrastructure.
But just "seeing" the data isn't enough, you have to understand it and be able to correlate actions to data.
You have to provide feedback so decisions can be made. Devices can't assume they are an island; they have to provide feedback themselves to other pieces of the infrastructure in order to effectively share the information necessary for making real-time (dynamic) decisions.
Even if individual components are endowed with the aforementioned abilities, without the ability to communicate there is no collaboration. And that's going to be enabled through integration. It may be device-to-device, or it may be through a third-party management or orchestration system. There are key strategic points of control in every architecture and it is at these points which synchronization must occur. This is accomplished via standards-based APIs that are open to the public at large to be used for integration and coordination. Authoritative sources of information must be designated, based on the ability of the infrastructure component to provide visibility, control, and communication with the rest of the infrastructure.
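The "authoritative source" idea can be sketched in a few lines. In this hypothetical example (the Registry and IPAM classes are illustrative, not any real product), one component is designated authoritative for a kind of information, and every other device queries it through a single integration point instead of keeping private, stale copies.

```python
# Sketch: designate an authoritative source per topic; other
# components look it up rather than guessing. Names are illustrative.
class Registry:
    def __init__(self):
        self._authorities = {}

    def register(self, topic, component):
        # Exactly one component is authoritative for each topic.
        self._authorities[topic] = component

    def lookup(self, topic):
        return self._authorities[topic]

class IPAM:
    """Hypothetical IP address management component."""
    def current_address(self, hostname):
        return {"web-01": "10.0.0.12"}.get(hostname)

registry = Registry()
registry.register("ip-assignments", IPAM())

# Any other device asks the designated authority for the answer.
authority = registry.lookup("ip-assignments")
print(authority.current_address("web-01"))
```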
Once we've got all the disparate pieces of the infrastructure able to see and communicate with each other and necessary management systems, we can effectively fill in the technology gap and tip the scales of responsibility toward technology taking over the mundane day-to-day operations that are consuming time and budgets today. This is infrastructure 2.0 – the adaptation layer needed to move the network from the static, brittle implementations of yesterday toward the dynamic, flexible implementation that will provide the economy of scale we’re looking for and need.
The final frontier of virtualization is the architecture. It’s about putting together the right pieces of infrastructure in the right places and leveraging the services they provide to adapt dynamically, both at design time and at run time. It’s about recognizing that the needs of a mobile client are different from those of a laptop user. It’s about understanding that some data being delivered is highly sensitive and may need additional security, not only to protect the data but to protect the reputation of the business. It’s about making all those moving pieces work together as seamlessly and efficiently as possible without abandoning the control necessary for security and the integrity of the operational architecture. Many pundits have posited that a dynamic infrastructure requires the dynamism that can only be offered by a virtualized network infrastructure. While we agree that in many cases a virtual network infrastructure is the right choice, there are also times when the performance, reliability, and even capacity of a hardware infrastructure are necessary. In some situations the best option is a virtual appliance; in others the costs of management, scale, and security unbalance the equation and make hardware more appropriate. In both cases, however, the dynamic services model relies on just that: services, and these services should be available as the building blocks of a dynamic infrastructure regardless of the form factor of its components. It’s about trade-offs and balancing needs. The key is collaboration across the infrastructure, the sharing of actionable data and context, so the infrastructure can start taking the burden off people and reducing the gap that’s essentially causing the diseconomy of scale associated with cloud computing and virtualization.
In order to achieve this collaboration we need to enable the network with services that can be consumed in an open, standards-based way and that can be easily integrated into automation and orchestration systems – whether those be open source (Chef, Puppet) or proprietary (VMware, Microsoft, HP). This is the guts of infrastructure 2.0 – a collaborative, intelligent, dynamic infrastructure that bridges the gap and moves the burden of managing operational processes from people onto technology.
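What "consumable in a standards-based way" might look like can be sketched as a service that exposes its operations through plain JSON messages, so that any orchestration system, open source or proprietary, can drive it the same way. The PoolService class and its operations below are hypothetical, a sketch of the pattern rather than any vendor's interface.

```python
# Hedged sketch: a network function wrapped as a service with a
# uniform JSON request/response interface. All names are illustrative.
import json

class PoolService:
    """Exposes load-balancer pool membership as simple operations."""
    def __init__(self):
        self._members = set()

    def handle(self, request_json):
        # An orchestrator sends a JSON request; we reply with the
        # resulting pool state, also as JSON.
        req = json.loads(request_json)
        if req["op"] == "add_member":
            self._members.add(req["address"])
        elif req["op"] == "remove_member":
            self._members.discard(req["address"])
        return json.dumps({"members": sorted(self._members)})

svc = PoolService()
svc.handle(json.dumps({"op": "add_member", "address": "10.0.0.12"}))
reply = svc.handle(json.dumps({"op": "add_member", "address": "10.0.0.13"}))
print(reply)
```

Because the interface is just structured messages over a well-known format, the same service could be called from Chef, Puppet, or a proprietary orchestrator without special-case integration code for each.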