Cloud Computing Principles and Paradigms: 5 virtual machines provisioning and migration services
1. November
Cloud Computing - Part II
1
5. Virtual Machines Provisioning and Migration Services
Cloud Computing
Principles and Paradigms
Presented by
Majid Hajibaba
Public Infrastructure Cloud
• Examples
• Amazon Elastic Compute Cloud (EC2)
• GoGrid, Joyent Accelerator, Rackspace
• AppNexus, FlexiScale, Manjrasoft Aneka
• EC2
• Leveraged via:
• Web services (SOAP or REST)
• Web-based AWS (Amazon Web Services) management console
• EC2 command line tools
• AMIs (Amazon Machine Images)
• Different instance sizes:
• Resource needs (small, large, and extra large)
• High-CPU (medium and extra large high-CPU instances)
• High-memory (extra large, double extra large, quadruple extra large)
Private Infrastructure Cloud
• Meets security and governance requirements
• Runs inside the organization's firewall
• May also reside within a cloud vendor's data center
• Characteristics
• Allows service provisioning and compute capability for an organization's users in a self-service manner
• Automates and provides well-managed virtualized environments
• Optimizes computing resources and server utilization
• Supports specific workloads
• Examples
• Eucalyptus
• OpenNebula
• Hybrid Cloud
Virtualization Standardization
• DMTF (Distributed Management Task Force)
• VMAN (Virtualization Management)
• VMAN's OVF (Open Virtualization Format)
• OGF (Open Grid Forum)
• OCCI-WG (Open Cloud Computing Interface Working Group)
• The new API allows:
• Consumers to interact with cloud computing infrastructure on an ad hoc basis
• Integrators to offer advanced management services
• Aggregators to offer a single common interface to multiple providers
• Providers to offer a standard interface that is compatible with the available tools
• Vendors of grids/clouds to offer standard interfaces for dynamically scalable service delivery in their products
Migration and SLA
• Match resource demand conditions
• Avoid SLA violations
• Integrate virtualization management tools with SLA management tools
• Migrate between different platforms
• VMware Converter
ConVirt
• An open source framework for managing open source virtualization platforms such as Xen and KVM
• ConVirt Workstation
• Basic configuration (local machine)
• Advanced configuration (remote server)
Amazon EC2
• Elastic Compute Cloud
• A Web service that allows users to provision new machines into Amazon's virtualized infrastructure in a matter of minutes
• Root access to the AMI
• An EC2 instance is typically a virtual machine with a certain amount of RAM, CPU, and storage capacity
• Purchasing Model
• On-Demand
• Reserved
• Spot
• Provisioning Services
• Auto Scaling
• CloudWatch
• Elastic Load Balancer
Infrastructure Enabling Technology
• Offering infrastructure as a service
• Requires software and platforms for management
• Manages the infrastructure that is being shared and dynamically provisioned
• Three noteworthy technologies to be considered:
• Eucalyptus
• OpenNebula
• Aneka
Eucalyptus
• Open source cloud tool
• "Elastic utility computing architecture for linking your programs to useful systems"
• Private cloud and hybrid cloud
• Amazon EC2 and S3 interfaces
• Features
• Interface compatibility with EC2 and S3
• Simple installation and deployment
• Support for most Linux distributions
• Support for running VMs atop the Xen or KVM hypervisors (VMware only in the commercial edition)
• Secure internal communication using SOAP with WS-Security
• Administrative tools for system management and user accounting
• Ability to configure multiple clusters, each with private internal network addresses, into a single cloud
• Research lines
• Service provisioning, scheduling, SLA formulation, hypervisor portability
UEC (Ubuntu Enterprise Cloud)
• A tool to provision, deploy, configure, and use cloud infrastructures
• Based on Eucalyptus
• Brings Amazon EC2-like infrastructure capabilities inside the firewall
• Simplest way to install and try Eucalyptus
• First open source project
• Lets you create cloud services in your local environment
• Lets you leverage the power of cloud computing
OpenNebula
• Open source tool
• Virtualization tool to manage your virtual infrastructure
• Private cloud and Hybrid cloud
• Research lines
• Advance reservation of capacity
• Probabilistic admission control
• Placement optimization
• Resource models for the efficient management of groups of virtual
machines
• Elasticity support
Aneka
• .NET-based platform and framework
• Building and Deploying distributed applications on clouds
• Private, Public, Hybrid
• EC2 interface
• Management Studio
Research Direction
• Self-adaptive and dynamic data centers
• Performance evaluation and workload characterization
• Fundamental tools and techniques that facilitate the integration and provisioning of hybrid clouds
• High-performance data scaling in private and public clouds
• Performance and high availability through live migration
• VM scheduling algorithms
• Accelerating VM live migration time
• Cloud-wide VM migration and memory de-duplication
• Live migration security
• Extending migration algorithms to allow for priorities
Virtualization can be defined as the abstraction of the four computing resources: storage, processing power, memory, and network (I/O). It is conceptually similar to emulation, where a system pretends to be another system; in virtualization, a system pretends to be two or more of the same system. The virtualization layer partitions the physical resources of the underlying physical server into multiple virtual machines with different workloads. These machines can be scaled up and down on demand with a high level of resource abstraction. Virtualization enables highly reliable and agile deployment mechanisms and management of services, providing on-demand cloning and live migration services, which improve reliability.
Resources are dynamically provisioned via publicly accessible Web applications and Web services (SOAP or RESTful interfaces). These services can be leveraged via Web services (SOAP or REST), the Web-based AWS (Amazon Web Services) management console, or the EC2 command line tools. Amazon provides hundreds of pre-made AMIs (Amazon Machine Images) with a variety of operating systems (e.g., Linux, OpenSolaris, or Windows) and pre-loaded software. Amazon offers different instance sizes according to (a) resource needs (small, large, and extra large), (b) high CPU needs (medium and extra large high-CPU instances), and (c) high-memory needs (extra large, double extra large, and quadruple extra large instances).
A private cloud aims at providing public cloud functionality on private resources, while maintaining control over an organization's data and resources to meet security and governance requirements. It may also be a private space dedicated to your company within a cloud vendor's data center, designed to handle the organization's workloads. In a "hybrid cloud," a combination of private/internal and external cloud resources exists together, enabling the outsourcing of noncritical services and functions to the public cloud while keeping the critical ones internal.
Standardization is important to ensure interoperability between virtualization management vendors, the virtual machines produced by each of them, and cloud computing platforms. The DMTF has produced standards for almost all aspects of virtualization technology. VMAN delivers broadly supported interoperability and portability standards for managing the virtual computing life cycle. VMAN's OVF (Open Virtualization Format) was developed in collaboration with industry key players: Dell, HP, IBM, Microsoft, XenSource, and VMware. The OVF specification provides a common format to package and securely distribute virtual appliances across multiple virtualization platforms. The OGF organized an official new working group, the OCCI-WG, to deliver a standard API for cloud IaaS. This group is dedicated to delivering an API specification for the remote management of cloud computing infrastructure and to allowing the development of interoperable tools for common tasks including deployment, autonomic scaling, and monitoring.
To summarize, server provisioning is defining a server's configuration, both hardware and software components, based on the organization's requirements. Provisioning from a template is an invaluable feature, because it reduces the time required to create a new virtual machine; it enables the administrator to quickly provision a correctly configured virtual server on demand. This ease and flexibility bring with them the problem of virtual machine sprawl, where virtual machines are provisioned so rapidly that documenting and managing the virtual machine life cycle becomes a challenge.
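Template-based provisioning with life-cycle tracking can be sketched in a few lines; all names here (`VMTemplate`, `Provisioner`, the owner accounts) are hypothetical and not taken from any real cloud toolkit:

```python
import copy
import itertools
from dataclasses import dataclass

@dataclass
class VMTemplate:
    """A reusable server configuration: hardware plus software components."""
    cpus: int
    ram_gb: int
    packages: list

@dataclass
class VM:
    vm_id: int
    config: VMTemplate
    owner: str
    state: str = "running"

class Provisioner:
    """Provisions VMs from templates while recording ownership, so the
    VM life cycle stays documented and sprawl can be audited."""
    def __init__(self):
        self.inventory = []
        self._ids = itertools.count(1)

    def provision(self, template, owner):
        # Cloning the template yields a correctly configured server on demand.
        vm = VM(next(self._ids), copy.deepcopy(template), owner)
        self.inventory.append(vm)
        return vm

    def sprawl_report(self):
        """Count running VMs per owner; rapid provisioning without this
        kind of accounting is exactly how sprawl becomes unmanageable."""
        report = {}
        for vm in self.inventory:
            if vm.state == "running":
                report[vm.owner] = report.get(vm.owner, 0) + 1
        return report

web_tpl = VMTemplate(cpus=2, ram_gb=4, packages=["nginx"])
p = Provisioner()
p.provision(web_tpl, "alice")
p.provision(web_tpl, "alice")
p.provision(web_tpl, "bob")
print(p.sprawl_report())   # {'alice': 2, 'bob': 1}
```

The point of the `sprawl_report` step is that every template clone stays attributed to an owner, which is what quick on-demand provisioning tends to lose.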
Migration service, in the context of virtual machines, is the process of moving a virtual machine from one host server or storage location to another. There are different techniques of VM migration: hot/live migration, cold/regular migration, and live storage migration of a virtual machine. In this process, all key machine components, such as CPU, storage disks, networking, and memory, are completely virtualized, thereby enabling the entire state of a virtual machine to be captured by a set of easily moved data files.
Live migration (also called hot or real-time migration) can be defined as the movement of a virtual machine from one physical host to another while it is powered on. When properly carried out, this process takes place without any noticeable effect from the end user's point of view (a matter of milliseconds). Live migration facilitates proactive maintenance in case of imminent failure, and it can be used for load balancing, in which work is shared among computers in order to optimize the utilization of available CPU resources. This approach to failure management ensures that at least one host has a consistent VM image at all times during migration. It depends on the assumption that the original host remains stable until the migration commits and that the VM may be suspended and resumed on that host with no risk of failure. Citrix XenServer's XenMotion and VMware VMotion are examples of live migration implementations.
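The iterative pre-copy mechanism commonly used for live migration can be pictured with a toy simulation; the page counts, dirty rate, and thresholds below are made-up parameters, not measurements of any real hypervisor:

```python
def live_migrate(num_pages=1000, dirty_rate=0.08, stop_threshold=20, max_rounds=30):
    """Toy model of iterative pre-copy live migration.

    Round 1 copies every memory page while the VM keeps running; each
    later round re-copies only the pages dirtied during the previous
    copy.  When the dirty set is small enough (or rounds run out), the
    VM is briefly suspended for a final stop-and-copy - that suspension
    is the only downtime the end user can notice.
    """
    dirty = num_pages                      # first round: all pages are "dirty"
    rounds = 0
    while dirty > stop_threshold and rounds < max_rounds:
        copied = dirty                     # transfer the current dirty set
        dirty = int(copied * dirty_rate)   # pages re-dirtied during that copy
        rounds += 1
    return rounds, dirty                   # dirty = pages copied while suspended

rounds, downtime_pages = live_migrate()
print(f"pre-copy rounds: {rounds}, pages in final stop-and-copy: {downtime_pages}")
```

Compared with a naive stop-and-copy of all 1000 pages, only a handful of pages remain for the suspended phase, which is why the observable downtime shrinks to milliseconds.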
This simple example demonstrates that a highly loaded server can be migrated with both controlled impact on live services and a short downtime.
Cold migration is the migration of a powered-off virtual machine. With cold migration, you have the option of moving the associated disks from one data store to another, and the virtual machines are not required to be on shared storage. The two main differences between live migration and cold migration are that (a) live migration needs shared storage for the virtual machines in the server pool, whereas cold migration does not, and (b) when live-migrating a virtual machine between two hosts, certain CPU compatibility checks are applied, whereas in cold migration these checks do not apply. In the cold migration process, the configuration files, including the NVRAM file (BIOS settings), log files, and the disks of the virtual machine, are moved from the source host to the destination host's associated storage area; the virtual machine is registered with the new host; and after the migration is completed, the old version of the virtual machine is deleted from the source host.
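The three cold-migration steps can be sketched with ordinary file operations standing in for data-store moves; the directory layout, file extensions, and the `registry` dict are illustrative assumptions, not the format any real hypervisor uses:

```python
import shutil
import tempfile
from pathlib import Path

def cold_migrate(vm_name, src_store, dst_store, registry):
    """Cold-migrate a powered-off VM: move its configuration files
    (including the NVRAM file), log files, and disks to the destination
    storage area, register the VM with the new host, then delete the
    old version from the source host."""
    src = Path(src_store) / vm_name
    dst = Path(dst_store) / vm_name
    shutil.copytree(src, dst)      # 1. move files to the destination storage area
    registry[vm_name] = str(dst)   # 2. register the VM with the new host
    shutil.rmtree(src)             # 3. delete the old version from the source

# Demo with throwaway directories standing in for the two data stores.
root = Path(tempfile.mkdtemp())
src_store = root / "host_a"
dst_store = root / "host_b"
src_store.mkdir()
dst_store.mkdir()
vm_dir = src_store / "vm01"
vm_dir.mkdir()
for name in ("vm01.vmx", "vm01.nvram", "vm01.vmdk", "vm01.log"):
    (vm_dir / name).write_text("...")

registry = {}
cold_migrate("vm01", src_store, dst_store, registry)
print(sorted(p.name for p in (dst_store / "vm01").iterdir()))
```

Because the VM is powered off, nothing here needs shared storage or CPU compatibility checks: the whole state is just a set of files to copy and re-register.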
Virtual machine migration plays an important role in data centers by making it easy to adjust resource priorities to match resource demand conditions. To achieve such goals, there should be an integration between virtualization management tools (with their migration and performance-monitoring capabilities) and SLA management tools, so as to balance resources by migrating and monitoring the workloads and, accordingly, to meet the SLA.
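One way to picture that integration is a loop that triggers migrations whenever a host's aggregate load breaches an SLA ceiling; the 80% limit, host names, and CPU shares below are invented for the example:

```python
SLA_CPU_LIMIT = 0.80   # hypothetical SLA ceiling: host CPU must stay below 80%

def rebalance(hosts):
    """Migrate VMs off hosts whose aggregate load violates the SLA limit.

    `hosts` maps host name -> {vm name: cpu share}.  The cheapest VM on an
    overloaded host is moved to the least-loaded host, and the loop repeats
    until no migration can fix a remaining violation."""
    def load(h):
        return sum(hosts[h].values())

    moves = []
    changed = True
    while changed:
        changed = False
        for h in sorted(hosts):
            if load(h) > SLA_CPU_LIMIT and len(hosts[h]) > 1:
                vm, cost = min(hosts[h].items(), key=lambda kv: kv[1])
                target = min(hosts, key=load)
                if target != h and load(target) + cost <= SLA_CPU_LIMIT:
                    del hosts[h][vm]            # migrate the workload...
                    hosts[target][vm] = cost    # ...onto the least-loaded host
                    moves.append((vm, h, target))
                    changed = True
    return moves

# host_a runs at 90% CPU and therefore violates the hypothetical SLA.
hosts = {"host_a": {"vm1": 0.50, "vm2": 0.40},
         "host_b": {"vm3": 0.20}}
print(rebalance(hosts))   # [('vm2', 'host_a', 'host_b')]
```

In a real deployment the `load()` figures would come from the performance-monitoring side of the virtualization management tools, and the migrations themselves would be live migrations rather than dictionary updates.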
Amazon EC2 (Elastic Compute Cloud) is a Web service that allows users to provision new machines into Amazon's virtualized infrastructure in a matter of minutes. Setting up an EC2 instance is quite easy: once you create your AWS (Amazon Web Services) account, you can use the online AWS console, or simply download the offline command line tools, to start provisioning your instances. Amazon Auto Scaling [30] is a set of command line tools that allows scaling Amazon EC2 capacity up or down automatically, according to conditions the end user defines. CloudWatch [31] is a monitoring service for AWS cloud resources and their utilization. Amazon Elastic Load Balancer [32] is another service that helps in building fault-tolerant applications by automatically distributing incoming application workload across available Amazon EC2 instances, in multiple availability zones.
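The Auto Scaling idea, scale out on high load and scale in on low load, reduces to a threshold policy like the following sketch; the thresholds and the simulated metric samples are invented and are not actual AWS defaults:

```python
def autoscale(current, avg_cpu, scale_out=0.70, scale_in=0.25,
              min_instances=1, max_instances=10):
    """Threshold policy in the spirit of Amazon Auto Scaling: add an
    instance when the averaged CPU metric (the kind of figure CloudWatch
    reports) exceeds the scale-out threshold, remove one when it drops
    below the scale-in threshold, and otherwise leave the fleet alone."""
    if avg_cpu > scale_out and current < max_instances:
        return current + 1
    if avg_cpu < scale_in and current > min_instances:
        return current - 1
    return current

# Feed the policy a few simulated monitoring samples: a load spike, then calm.
fleet = 2
for cpu in (0.90, 0.85, 0.40, 0.10, 0.10):
    fleet = autoscale(fleet, cpu)
print(fleet)   # fleet grows during the spike, then shrinks back to 2
```

The real services wire the same loop together: CloudWatch supplies the metric, Auto Scaling applies the user-defined conditions, and the Elastic Load Balancer spreads the workload over whatever instances currently exist.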
Eucalyptus [11] is an open-source infrastructure for the implementation of cloud computing on computer clusters. Its name is an acronym for "elastic utility computing architecture for linking your programs to useful systems."
UEC can be set up quickly on two machines; this is a simple configuration. One computer hosts the cloud controller (CLC), Walrus, the cluster controller (CC), and the storage controller (SC); the other computer acts as the node controller (NC).
Manjrasoft Aneka [10] is a .NET-based platform and framework designed for building and deploying distributed applications on clouds. Aneka also provides support for deploying and managing clouds: by using its Management Studio and a set of Web interfaces, it is possible to set up either public or private clouds, monitor their status, update their configuration, and perform basic management operations.
Fabric services directly interact with the node through the platform abstraction layer (PAL) and perform hardware profiling and dynamic resource provisioning. Foundation services identify the core system of the Aneka middleware, providing a set of basic features to enable Aneka containers to perform specialized and specific sets of tasks. Execution services directly deal with the scheduling and execution of applications in the cloud.
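The three service layers can be pictured as a stack in which each level only talks to the one below it; the classes here are an illustrative mapping of that description, not Aneka's actual (.NET) API:

```python
import os

class PAL:
    """Platform abstraction layer: hides the host platform behind one interface."""
    def hardware_profile(self):
        return {"cpus": os.cpu_count()}

class FabricService:
    """Interacts with the node through the PAL: hardware profiling
    and dynamic resource provisioning."""
    def __init__(self, pal):
        self.pal = pal
    def provision(self):
        return self.pal.hardware_profile()

class FoundationService:
    """Core middleware features built on top of the fabric layer."""
    def __init__(self, fabric):
        self.fabric = fabric
    def describe_node(self):
        return {"resources": self.fabric.provision()}

class ExecutionService:
    """Schedules and executes application tasks via the foundation layer."""
    def __init__(self, foundation):
        self.foundation = foundation
    def run(self, task):
        return task(self.foundation.describe_node())

# Wire the layers into one container-like stack and run a trivial task.
container = ExecutionService(FoundationService(FabricService(PAL())))
print(container.run(lambda node: sorted(node["resources"])))
```

The design point this mirrors is that execution services never touch the hardware directly: everything reaches the node through the fabric layer and, ultimately, the PAL.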