EXECUTIVE SUMMARY
It has been such a marvel to watch the birth and development of the technologies that will enable
a modernized smart grid. At times, the fits and starts along the way made it seem as though the
transformation would always move at a snail’s pace, if at all, even for the IOUs with deep
pockets and generous resources. Over the past three to five years, utility executives have been
discovering that the ad hoc nature of their systems threatened to block their forward progress in
achieving smart grid business goals. Utilities also found that they were unable to store and sift
through the exponential flood of the terabytes of raw data produced by smart technology to
uncover useful actionable intelligence in a way that advances utility planning, operating and
maintenance practices. Further headaches ensued when utilities realized the size of the capital-
investment that would be required and the risk associated with making long-term technology
decisions. In addition, they learned that deployments are very complex when it comes to
integrating the disparate software applications associated with delivering the desired level of grid
automation. Utilities were further anxious about whether investments would pay for themselves
over time, or whether mistakes in near-term smart grid decisions would increase costs and limit
benefits. Eventually, it was federal and state regulations that finally forced their hand toward
modernization. But for small and mid-sized utilities, including co-ops and munis, the tasks at
hand seemed Herculean. Thanks to the development of cloud hosting, SaaS
(Software-as-a-Service) and managed services, all utilities, regardless of size, can now realize
a fully integrated smart grid infrastructure and the benefits that accompany it.
This report will explore the steps along the journey that have led to this realization.
OPPORTUNITIES AND RECOMMENDATIONS FOR UTILITIES
• Major cloud technology developers continue to invest billions a year in cloud R&D. In
2011, for example, Microsoft committed 90% of its $9.6 billion R&D budget to cloud.
• Cloud computing is a rapidly accelerating revolution within IT and will become the
default method of IT delivery moving into the future.
• When shopping for a SaaS solution, above all, make sure you own your data, with no
time limit. You need to know your security and compliance needs. Transparency,
compliance controls, certifications, and auditability are some of the key criteria to
evaluate. You also need to ensure it has all the features you want as some hosted versions
are not identical to their desktop counterparts. Look for service-oriented architectures
(SOA), web services standards and web application frameworks as they are easier to
integrate. Lastly, don’t be afraid to ask whether service levels are negotiable.
OPPORTUNITIES AND RECOMMENDATIONS FOR UTILITY VENDORS
• According to ABI Research, spending on big data and analytics in the energy industry
will amount to $7 billion in 2014, representing over 15 percent of the overall cross-
industry spending. By 2019, spending on energy analytics will grow to more than $21
billion.
• Cloud vendors are experiencing growth rates of 50% per annum.
• By 2015, SaaS revenue will be more than double its 2010 level, reaching a projected
$21.3 billion.
• By 2017, 70% of AMI Systems will integrate with a Distribution Control System.
• Navigant Research recently estimated that the managed services market for smart grid is
at $1.7 billion and is expected to reach $7 billion worldwide by 2020.
THE HISTORY
The underlying concept of cloud computing dates to the 1950’s when large-scale mainframe
computers were seen as the future of computing. These computers were referred to as "static
terminals" because they were used for communications but had no internal processing capacities.
To make more efficient use of costly mainframes and eliminate periods of inactivity, a practice
known as time-sharing evolved which allowed multiple users to share both the physical access to
the computer from multiple terminals as well as the Central Processing Unit (CPU) time. This
eliminated periods of inactivity on the mainframe and allowed for a greater return on the
investment. During the mid-70’s, time-sharing became popularly known as Remote Job Entry
(RJE), a nomenclature mostly associated with large vendors such as IBM and DEC.
In the 1960’s and 1970’s, most enterprise-based organizations employed a centralized
computing model consisting of supercomputers, software, storage devices, printers and the like,
all centrally located within a temperature-controlled data center. These systems typically cost
millions of dollars and were very expensive to operate, so in the 1980’s demand burgeoned for
more powerful and less expensive microprocessors and personal computers.
In the early 1990’s, the internet and the world-wide web moved into the general computing
world and centralized, client-server models evolved into internet-based computing, and thus, grid
and utility computing came into play. Grid computing gave individuals from different
organizations the opportunity to work together on common projects, and utility computing
allowed people to essentially rent computing services such as internet access. As computers
began to proliferate, technology scientists researched ways to make large-scale computing power
available to more users through time-sharing, and algorithms were developed to optimize the
infrastructure, platform, and applications to prioritize CPUs and increase efficiency for end
users. Around the same time, telecommunications companies, which had traditionally offered
primarily dedicated point-to-point data circuits, began offering virtual private network (VPN)
services of comparable quality at lower cost. By switching traffic to balance server use, overall
network bandwidth could be used more effectively.
In the late 1990’s, Application Service Providers (ASPs) created the first wave of internet-
enabled applications, and this development was really a precursor to SaaS. An ASP would
license a commercial software application to multiple customers which made it possible for
businesses to outsource some of their server and software IT needs.
One of the first milestones in cloud computing history was the arrival of Salesforce.com in 1999,
which pioneered the concept of delivering enterprise applications via a simple website. The next
development was Amazon Web Services in 2002, which provided a suite of cloud-based services
including storage, computation and even human intelligence through the Amazon Mechanical
Turk. In 2006, Amazon launched its Elastic Compute Cloud (EC2) as a commercial web
service that allows small companies and individuals to rent computers on which to run their own
computer applications.
In early 2008, Eucalyptus (Elastic Utility Computing Architecture for Linking Your Programs
To Useful Systems) became the first open-source, AWS (Amazon Web Services) API-
compatible platform for deploying private clouds. (Note: Eucalyptus was acquired by Hewlett-
Packard in September 2014). During the same time frame, OpenNebula, a cloud computing
platform for managing heterogeneous distributed data center infrastructures, became the first
open-source software for deploying private and hybrid clouds, and for the federation of clouds.
Another big milestone came in 2009, as Web 2.0 hit its stride, and Google and others started to
offer browser-based enterprise applications through services such as Google Apps.
In July 2010, Rackspace Hosting and NASA jointly launched an open-source cloud-software
initiative known as OpenStack. The OpenStack project was intended to help organizations offer
cloud-computing services running on standard hardware. The early code came from
NASA's Nebula platform as well as from Rackspace's Cloud Files platform. On March 1, 2011,
IBM announced the IBM SmartCloud framework to support Smarter Planet, and on June 7,
2012, Oracle announced the Oracle Cloud. While aspects of the Oracle Cloud are still in
development, its cloud offering is poised to be the first to provide users with access to an
integrated set of IT solutions, including the Applications (SaaS), Platform (PaaS), and
Infrastructure (IaaS) layers.
CLOUD COMPUTING/HOSTING
The term "moving to cloud" generally refers to an organization shifting away from a
traditional CAPEX model wherein their company buys dedicated hardware and depreciates it
over a period of time to an OPEX model to use a shared cloud infrastructure on a pay-as-you-go
basis.
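As a rough, purely illustrative comparison of the two models (every dollar figure below is an invented assumption, not data from this report), consider:

```python
# Illustrative CAPEX-vs-OPEX comparison; all figures are made-up assumptions.

def capex_annual_cost(hardware_cost, years_depreciation, annual_ops):
    """Annualized cost of owned hardware: straight-line depreciation plus
    operating costs (power, cooling, administration)."""
    return hardware_cost / years_depreciation + annual_ops

def opex_annual_cost(hourly_rate, hours_per_year):
    """Pay-as-you-go cloud cost for the hours actually consumed."""
    return hourly_rate * hours_per_year

if __name__ == "__main__":
    owned = capex_annual_cost(hardware_cost=500_000, years_depreciation=5,
                              annual_ops=60_000)                    # $160,000/yr
    cloud = opex_annual_cost(hourly_rate=12.0, hours_per_year=8_760)  # $105,120/yr
    print(f"owned: ${owned:,.0f}/yr  cloud: ${cloud:,.0f}/yr")
```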
The most prominent enabling technology for cloud computing is virtualization. Virtualization
software separates a physical computing device into one or more "virtual" devices, each of which
can be easily used and managed to perform computing tasks. With operating system–level
virtualization essentially creating a scalable system of multiple independent computing devices,
idle computing resources can be allocated and used more efficiently.
The National Institute of Standards and Technology's (NIST) definition of cloud computing
identifies five essential characteristics:
• On-demand self-service
A consumer can unilaterally and automatically provision computing capabilities, such as
server time and network storage, as needed without requiring human interaction with
each service provider.
• Broad network access
Capabilities are available over the network and accessed through standard mechanisms
that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones,
tablets, laptops, and workstations).
• Resource pooling
The provider's computing resources are pooled to serve multiple consumers using a
multi-tenant model, with different physical and virtual resources dynamically assigned
and reassigned according to consumer demand.
• Rapid elasticity
Capabilities can be elastically provisioned and released, in some cases automatically, to
scale rapidly outward and inward commensurate with demand. To the consumer, the
capabilities available for provisioning often appear unlimited and can be appropriated in
any quantity at any time.
• Measured service
Cloud systems automatically control and optimize resource use by leveraging a metering
capability at some level of abstraction appropriate to the type of service (e.g., storage,
processing, bandwidth, and active user accounts). Resource usage can be monitored,
controlled, and reported, providing transparency for both the provider and consumer of
the utilized service.
There are three major cloud deployment models which utilities can consider implementing. The
first is the Private cloud, which is operated solely for a single organization. It can be
hosted internally or externally, and it can be managed internally or by a third party.
The second deployment model is a Public cloud in which services are rendered over a network
that is open for public use. Public cloud services may be free or offered on a pay-per-usage
model. Generally, public cloud service providers like Amazon AWS, Microsoft and Google own
and operate the infrastructure at their data center and access is generally via the Internet. AWS
and Microsoft also offer direct connect services respectively called "AWS Direct Connect" and
"Azure ExpressRoute". These types of providers require customers to purchase or lease a private
connection to a peering point offered by the cloud provider.
The third deployment is known as a Hybrid cloud because it comprises two or more
(private, community or public) cloud services which remain distinct entities but are bound
together and offer the benefits of multiple deployment models. For example, an organization
may store sensitive client data in-house on a private cloud application and interconnect that
application to a business intelligence application provided on a public cloud as a software
service. Another example of hybrid cloud is one where IT organizations use public cloud
computing resources to meet temporary capacity needs that cannot be met by the private
cloud. This capability enables hybrid clouds to employ “cloud bursting” for scaling across
clouds. Cloud bursting is an application deployment model wherein an application runs in a
private cloud or data center and "bursts" to a public cloud when the demand for computing
capacity increases. A primary advantage of cloud bursting in the hybrid cloud model is that an
organization only pays for extra computing resources when they are needed.
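A minimal sketch of that bursting decision follows; the capacity figure and function names are hypothetical:

```python
# Hypothetical cloud-bursting dispatcher: run jobs in the private cloud until
# demand exceeds its capacity, then send the overflow ("burst") to a public cloud.

PRIVATE_CAPACITY = 100  # assumed number of concurrent jobs the private cloud can handle

def dispatch(jobs):
    """Split a batch of jobs between the private cloud and the public cloud."""
    private = jobs[:PRIVATE_CAPACITY]
    burst = jobs[PRIVATE_CAPACITY:]  # overflow incurs public-cloud charges only while needed
    return private, burst

private, burst = dispatch(list(range(130)))
print(f"{len(private)} jobs in-house, {len(burst)} burst to the public cloud")
```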
SaaS
Big data and predictive analytics are the transformative drivers of the new utility. Data analytics
provide utilities with an opportunity to better manage their enterprise based on data-driven
decisions. By using analytics, they are able to improve customer satisfaction through
segmentation and communication personalization, improve operational reliability through
monitoring and predictive maintenance, and expand operational efficiencies through improved
planning and execution.
Utilities are beginning to change how they manage their analytics
capabilities. Much like the way they approached transactional applications like customer
information systems (CIS) two decades ago, utilities are now moving from in-house, custom-
built data analytics systems to buying pre-packaged, in-application analytics tools and software-
as-a-service (SaaS) analytics applications. Unlike conventional software which is sold as a
perpetual license with an up-front cost and optional ongoing support fees, SaaS providers
generally price applications using a subscription fee, most commonly a monthly fee or an annual
fee. As a result, the initial setup cost for SaaS is typically lower than equivalent enterprise
software. SaaS vendors typically price their applications based on some usage parameters, such
as the number of users using the application. Since customers' data reside with the SaaS vendor,
opportunities also exist to charge per transaction, event, or other unit of value.
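As a sketch of the pricing contrast described above (all rates are invented for illustration):

```python
# Illustrative SaaS pricing: a per-user monthly subscription plus an optional
# per-transaction charge, set against a one-time perpetual license.

def saas_annual_cost(users, per_user_monthly, transactions=0, per_txn=0.0):
    """Subscription model: recurring per-user fee, optionally metered per event."""
    return users * per_user_monthly * 12 + transactions * per_txn

def perpetual_first_year(license_fee, support_rate=0.20):
    """Conventional model: large up-front license plus an assumed ~20% annual support fee."""
    return license_fee * (1 + support_rate)

print(saas_annual_cost(users=50, per_user_monthly=99))   # 59,400 -- low initial outlay
print(perpetual_first_year(license_fee=250_000))         # 300,000 in year one
```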
Software as a Service (SaaS) makes use of a cloud computing infrastructure to deliver one
application to many users, regardless of their location, rather than the traditional model of one
application per desktop. Activities are managed from central locations in a one-to-many
model, which shapes SaaS architecture, pricing, partnering, and management characteristics.
As it relates to utilities, SaaS technology combines the benefits of Advanced Metering
Infrastructure (AMI), Outage Management Systems (OMS), Interactive Voice Response (IVR),
Demand Optimization Systems (DOS) and Geospatial Information Systems (GIS) into simple,
hosted, subscription-based packages.
SaaS is the next iteration in the evolution of cloud computing. Rather than installing software
directly on the client's hardware, third-party providers deliver it as a service. SaaS vendors
work with cloud hosting companies to deliver what would otherwise be on-premise software via
a cost-effective web-based application, relieving the client of installing updates, patching
software and maintaining hardware.
There are three distinct categories of cloud computing “stacks”: Software-as-a-Service
(SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS).
• SaaS applications are designed for end-users and are delivered via the web
• PaaS is a set of tools and services designed to make coding and deploying applications
quicker and more efficient
• IaaS is the hardware and software that powers it all and includes servers, storage, networks, and
operating systems
SaaS may be the best-known aspect of cloud computing, but developers around the world are
leveraging PaaS as it combines the simplicity of SaaS with the power of IaaS.
PaaS is especially useful in any situation where multiple developers will be working on a
development project or where other external parties need to interact with the development
process. Some basic PaaS characteristics include:
• Services to develop, test, deploy, host and maintain applications in the same integrated
development environment.
• Web-based user interface tools to create, modify, test and deploy different UI
scenarios.
• Multi-tenant architecture where multiple concurrent users utilize the same development
application.
• Built-in scalability of deployed software.
• Integration with web services and databases via common standards.
Some examples of PaaS include the Google App Engine and Microsoft Azure Services.
Infrastructure as a Service (IaaS) is a way of delivering cloud computing infrastructure as an on-
demand service. Rather than purchasing servers, software, datacenter space or network
equipment, utilities instead buy those resources as a fully outsourced service on demand. IaaS
can be obtained as public or private infrastructure or a combination of the two.
IaaS is most suitable for situations where demand is volatile, with significant spikes and
troughs in demand on the infrastructure; for new organizations without the capital to invest in
hardware; where an organization is growing rapidly and scaling hardware would be
problematic; and for trial or temporary infrastructure needs.
Just like SaaS and PaaS, IaaS is a rapidly developing field. There are some core characteristics
which describe what IaaS is:
• Resources are distributed as a service
• Allows for dynamic scaling
• Has a variable cost utility pricing model
• Generally includes multiple users on a single piece of hardware
There is a plethora of IaaS providers, from the largest cloud players like Amazon Web
Services and Rackspace to regionalized niche players.
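To make the on-demand model concrete, here is a minimal sketch of renting and releasing a virtual server on Amazon Web Services with the boto3 library; the AMI ID is a placeholder, and credentials are assumed to be configured in the environment:

```python
# Minimal on-demand IaaS sketch using AWS EC2 via boto3.
# The AMI ID below is a placeholder; supply one valid for your region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t2.micro",          # small, metered, pay-by-the-hour instance
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}")

# Releasing the resource ends the metered charge -- the essence of IaaS.
ec2.terminate_instances(InstanceIds=[instance_id])
```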
THE CHALLENGES
• Since data are being stored on the vendor’s servers, data security becomes an issue.
• SaaS applications are hosted in the cloud, far away from the application users. The
resulting latency makes the environment unsuitable for applications that demand
response times in the milliseconds.
• Multi-tenant architectures limit customization of applications for large clients.
• Some business applications require access to end users’ personal information and/or
other sensitive data. Integrating this type of data can be risky and may violate data
governance regulations.
• Organizations that adopt SaaS may be forced into adopting new versions, resulting in
unplanned training costs or an increased probability of user error.
• Data are transferred to and from a SaaS vendor at Internet speeds rather than the higher
speeds of a firm’s internal network.
WHAT CLOUD AND SaaS SERVICES CAN DO FOR THE UTILITY INDUSTRY
SaaS and cloud hosting are, by far, the greatest solution for small-to-midsize utilities. Selected
data streams are sent to the “cloud,” where experts analyze them using a set of leading-edge
technologies, and the data are mined for common indicators of problems like slow meters, high
usage, or failing feeders. Sophisticated algorithms are applied to identify, for instance, the causes
of load inefficiencies or missed opportunities for conservation program outreach. Then a series
of recommendations are returned that enable action items such as modeling, forecasting and
planning with quantifiable results. Transactions take just microseconds, and complex analytics
queries happen in real-time.
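A toy version of that mining step, using a simple statistical screen in place of the vendors' proprietary algorithms (the readings and the cutoff are invented for illustration):

```python
# Toy meter-data screen: flag meters whose daily usage deviates sharply from
# the population mean -- a stand-in for the sophisticated algorithms described
# above. Readings are invented kWh values.
from statistics import mean, stdev

readings = {"meter_01": 28.4, "meter_02": 30.1, "meter_03": 2.2,   # possibly slow/stopped
            "meter_04": 29.5, "meter_05": 91.7}                    # possibly high usage

mu, sigma = mean(readings.values()), stdev(readings.values())

for meter, kwh in readings.items():
    z = (kwh - mu) / sigma
    if abs(z) > 1.0:  # arbitrary cutoff chosen for this sketch
        label = "high usage" if z > 0 else "slow or stopped meter"
        print(f"{meter}: {kwh} kWh (z={z:+.1f}) -> investigate: {label}")
```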
There are five ways grid analytics help utilities improve distribution planning:
• Integrating distributed energy resources with the distribution grid
• Managing peak demand
• Fixing the distribution system before it breaks
• Identifying and fixing outages from a severe weather event in hours rather than days
• Improving efficiency by reducing line loss
There are also five ways grid analytics can help utilities improve transmission planning and
operation:
• Enhance infrastructure planning under various load growth scenarios
• Test-run “what-if” scenarios to avoid outages
• Optimize generation resources to ensure that the best mix of generation resources is
serving the grid at any given time
• Optimize voltage profile by reducing line transmission loss
• Better integrate utility-scale renewables
UTILITIES CAN REAP FURTHER BENEFITS BY COMBINING CLOUD AND SaaS
WITH A MANAGED SERVICES VENDOR
While SaaS and cloud services were the first developments to open the door for small-to-midsize
utilities to enjoy the same smart grid benefits as their big brothers, it is managed services
technology that enabled them to get that foot in the door.
Managed services build on the concept of SaaS but provide for the full operational execution of
systems like AMI, MDM and their associated setup, integration, business process alignment,
daily operations and system management. They focus on core business objectives and
operational outcomes such as meter data collection completeness, two-way command execution
availability, and billing-data quality and timeliness.
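A hedged sketch of how one such outcome metric, meter data collection completeness, might be checked against a contractual target (the names and the 99.5% figure are assumptions, not terms from this report):

```python
# Hypothetical SLA check: compare actual meter-read completeness for a
# billing day against an assumed contractual target.
SLA_COMPLETENESS = 0.995  # assumed target, not taken from the report

def read_completeness(reads_received, reads_expected):
    """Fraction of expected meter reads actually collected."""
    return reads_received / reads_expected

actual = read_completeness(reads_received=49_620, reads_expected=50_000)
print(f"completeness: {actual:.2%} -> SLA {'met' if actual >= SLA_COMPLETENESS else 'missed'}")
```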
But, more than anything, managed services take the noose from around the necks of small-to-
midsize utilities, managing the core risks of smart grid investment by providing:
• Fixed-price: A managed service provides fixed costs and a guaranteed outcome backed
by Service Level Agreements (SLAs).
• Risk Transfer: Much of the risk associated with technology gaps is transferred to the
managed service partner through business and engineering SLAs and is borne by the
vendor, not the utility.
• Rapid Benefits Realization: Allows utilities to implement significantly pre-integrated
solutions so benefits start to accrue rapidly.
• Utility Focus: Utilities can keep focus on their core business throughout the design, build-
out and operations stages.
THE KEY PLAYERS
DATA MANAGEMENT AND MOVEMENT LAYER
SAS, Teradata, IBM, EMC/Greenplum, Oracle, Cisco, SAP, Hortonworks, Versant, OSIsoft,
Cloudera, Hadapt, Microsoft, RackSpace, Amazon, Google
ANALYTICS AND APPLICATIONS LAYER
SAS, IBM, Opower, Space-Time Insights, EcoFactor, GE, eMeter (a Siemens Company),
Accenture, ABB/Ventyx, Ecologic Analytics (a Landis+Gyr Company), Aclara, Tendril,
Silver Spring Networks, Echelon, DataRaker, Telvent (a Schneider Electric Company),
EnerNOC, Itron, Tableau Software, Energate, Grid Net, Power Analytics, ECOtality
IMPLEMENTATIONS IN THE SPACE
City of Fort Collins
The Elster EnergyAxis Smart Grid solution will power an advanced metering infrastructure
(AMI) system for Fort Collins Utilities' electric and water customers, delivering system and
operational improvements and providing consumers with more flexibility in electric and water
usage.
The Elster EnergyAxis solution, running on the Tropos Network communications system, will
enable Fort Collins Utilities to analyze meter data to maintain high system reliability, make
utility operations more cost effective, provide more information to customers and better prepare
the utility and the community for emerging technologies. The Fort Collins Smart Grid
deployment will also improve customer service by enabling two-way digital communication
between the advanced meters and system managers, and reduce operational costs and related
truck rolls.
Wake Electric Membership
Wake Electric sought to build out its Smart Grid strategy. In 2010, a SCADA system was
established to allow central monitoring and remote control of all the substations around the
system. Public networks worked well for the SCADA proof of concept, but the utility wanted to
ultimately move from public to fixed communications networks to optimize reliability. In
addition, it had already chosen an AMI and began rolling out the updated digital meters in 2011.
With additional Smart Grid applications on the horizon, Wake Electric needed to verify its
communications strategy. Rather than consulting specialists in individual technologies, it sought
a single source capable of comparing multiple available technologies. When Siemens introduced
its SG-CAT tool as a means to holistically study the various options, Wake Electric chose the
company to perform the communications study in addition to a feeder automation pilot with
FLIR. The study would involve modeling, simulating and documenting the precise network
environment, including the service area’s terrain, asset deployment topology and cross-cell
interference in relation to specific application requirements.
The Siemens SG-CAT study kicked off at Wake Electric in November 2011 and it was
completed in two phases. The first study was based on general locations and assumptions, and
the findings were presented by Siemens in February 2012. Ultimately, Wake Electric learned that
its existing AMI network is capable of also supporting fault current indicator monitoring and
configurations of distribution automation in certain locations. Based on the study, the utility
chose to use the AMI network for portions of its distribution automation initiatives, and WiMAX
for the FLIR pilot and backhaul communications.
Central Lincoln People’s Utility District
Central Lincoln People’s Utility District (Central Lincoln PUD) is deploying advanced metering
infrastructure (AMI) and distribution automation assets as part of their Smart Grid Team 2020.
The AMI project consists of a system-wide deployment of smart meters to its customers as well
as a communications infrastructure to gather the smart meter data. The two-way communication
provided by the AMI will allow Central Lincoln PUD to deploy direct load control devices and
pricing programs in the future and a customer energy management web portal in the near term.
In addition to the AMI, Central Lincoln PUD is also upgrading its electric infrastructure with an
enhanced SCADA system, installation of an Outage Management System, fiber optic cable, and
automated distribution feeder controls, regulators and fault indicators.
THE OUTLOOK
Cloud-based web services for virtual power plants will be a boon for small-to-midsize utilities.
In February of 2014, Siemens Smart Grid Division announced that the company can provide
utilities with a cloud-based Web service for virtual power plants. This service enables utilities
to interconnect their customers' small distributed energy resources and offer the bundled power
to operators of a large virtual power plant for marketing. Because the service builds on the
standard functions of the Siemens decentralized energy management system (DEMS), it is
adequate for setting up a small virtual power plant and software license costs are reduced.
Another advantage of the
cloud-based Web service is that there are no costs incurred for the computer hardware that is
otherwise required. Benefits include communication interfaces for the distributed power
generation plants, generation forecasts, and aggregation functions as well as a Web portal
through which owners of distributed plants can release their generated power for marketing in
the virtual power plant network. Siemens began offering this service in the early summer of 2014.
The company will provide the service jointly with RWE Deutschland AG, which operates one of
the major virtual power plant networks.
Landis+Gyr also moved further into cloud-based services earlier this year to integrate outage
detection, meter data and conservation voltage reduction for smaller utilities.