1. Amazon Web Services
What is new on AWS and what it
means to you
Kingsley Wood
Business Development Manager
Amazon Web Services
Created by Joe Ziegler
2. Housekeeping
• This presentation will run for 45-50 minutes.
• We will have time to answer some questions at
the end.
• Please answer the survey that pops up at the
end to receive $25 in AWS Credits.
6. How did we innovate?
• Iterated based on customer feedback and
enhanced current features.
• Created new services to leverage the
cloud across more business challenges.
• Added new instance types for the EC2
service.
7. How did we innovate?
• Costs are reduced as we continue to
scale and harness efficiencies.
• Together this creates new solutions for IT
use cases.
9. AWS Elastic Beanstalk
• New Runtimes
– PHP
– .NET
– Python
– Ruby
• New Regions
– US West
– EU
– Singapore
– Australia
• Simplified Command Line Interface
• Configuration Enhancements
• RDS Integration
10. Virtual Private Cloud (VPC)
Features
• T1 Micro
• Static Routing
• Route Propagation
• Multiple IP Addresses with ENI
12. Relational Database Service (RDS)
• Increased Snapshot Retention Period
• Multi-AZ Option for Oracle
• RDS for SQL Server
• Oracle Enterprise Manager 11g Database
Control and Data Pump
13. Relational Database Service (RDS)
• SQL Server Database Engine Tuning
Advisor and Agent
• Free Tier
• Promote Read Replicas
• Endpoint Renaming
• Provisioned IOPS Volumes
14. CloudFront
• Maximum file size increased from 5 GB to 20 GB
• Streaming with Windows IIS Media
Service
• Support for Dynamic Content
• Price Classes
15. Elastic Map Reduce (EMR)
• New Metrics
• Latest Version of Hadoop and Pig
• cc2.8xlarge Support
• Apache HBase
16. CloudWatch
• EC2 Instance Status Checks and
Reporting
• Monitoring Scripts (Linux and Windows)
• Billing Alerts
• Alarm Actions
17. Simple Storage Service (S3)
• The First Trillion Objects
• Cross Origin Resource Sharing Support
• Root Domain Website Hosting
19. Storage Gateway
• Available in All Public AWS Regions
• Gateway-Cached Volumes
• Storage Gateway Amazon Machine Image
(AMI)
20. Global Infrastructure
• 15 New Edge Locations for a Total of 39
• New Australian Region – ap-southeast-2
• Many Services Rolled out to Regions
As of February 2013
23. Simple Workflow Service
• Fully Managed Service
• Highly Available and Distributed
• Focus on Application Logic, not Workflow
Code
24. Amazon CloudSearch
• Easy to Configure
• Fully Managed Service
• Automatic Scaling for Data & Traffic
• Low Latency, High Throughput
• Easy Administration
• Rich Search Features
• Low Costs
25. Amazon Glacier
• Cold Storage
• As low as $0.01 / GB / month
• Durability of 99.999999999%
• Move data to and from S3 using data lifecycle
policies
26. AWS Data Pipeline
• Transform and Process at Scale
• Data processing workloads that are fault
tolerant, repeatable, and highly available
• Transfer to Amazon S3, Amazon RDS,
Amazon DynamoDB, Amazon Elastic
MapReduce and on premises systems
27. Amazon Redshift
• Data Warehouse in the Cloud
• Fast, Fully Managed, Petabyte-scale
• Standard ODBC and JDBC connections and
Postgres drivers
• Integrates with Amazon S3, Amazon RDS,
Amazon DynamoDB, and AWS Data Pipeline
28. Amazon Elastic Transcoder
• Video Transcoding in the Cloud
• Familiar Development in AWS SDKs for
Python, Node.js, Java, .NET, PHP, and Ruby
• Supports H.264, AAC, MP4, MPEG-2, FLV,
3GP and AVI
• Free Usage Tier
44. Amazon Web Services
Thank You
Kingsley Wood
Business Development Manager
Amazon Web Services
Editor's Notes
Innovating more every year
Please note, this presentation will not cover all of the enhancements
Runtimes: In addition to the existing Java support.
New Regions: All regions are now supported; the year started with just US East.
CLI: Eb (pronounced "ee-bee") joins the family of Elastic Beanstalk command line tools. Eb simplifies development and deployment tasks from the terminal on Linux, Mac OS, and Microsoft Windows.
Configuration Enhancements: The configuration files now allow you to declaratively install packages and libraries, configure software components (such as Apache Tomcat or the Apache Web Server), and run commands on the Amazon EC2 instances in your Elastic Beanstalk environment. You can also set environment variables across your fleet of EC2 instances, create users and groups, and start or stop daemons. Why is this big news? In the past, making changes to an Elastic Beanstalk environment meant creating and maintaining custom Amazon Machine Images (AMIs). Now, a change as small as adding a font library or as involved as installing and configuring an agent takes only a few lines of YAML.
RDS: Supports MySQL for open-source tools and SQL Server for .NET.
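To make the configuration-file enhancement concrete, here is a minimal sketch of an Elastic Beanstalk configuration, expressed as a Python dict that mirrors the YAML which would live under a project's configuration directory. The key names follow the configuration-file format described above; the package, script path, and variable names are illustrative assumptions, not values from the release.

```python
# Sketch of an Elastic Beanstalk configuration file as a Python dict.
# Package names, script paths, and env var names below are made up.
ebextensions = {
    "packages": {                      # declaratively install OS packages
        "yum": {"fontconfig": []},     # e.g. adding a font library
    },
    "commands": {                      # run commands on each EC2 instance
        "01_install_agent": {"command": "/opt/install-agent.sh"},
    },
    "option_settings": {               # set environment variables fleet-wide
        "aws:elasticbeanstalk:application:environment": {
            "APP_ENV": "production",
        },
    },
}

def env_vars(cfg):
    """Collect the fleet-wide environment variables from a config dict."""
    return cfg.get("option_settings", {}).get(
        "aws:elasticbeanstalk:application:environment", {})

print(env_vars(ebextensions))  # {'APP_ENV': 'production'}
```

The point of the sketch is the shape: previously this kind of customization required baking a custom AMI; now it is a few declarative lines deployed with the application.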
Static Routing: We added the static routing option for a number of reasons. First, BGP can be difficult to set up and manage, and we don't want to ask you to go to all of that trouble if all you want to do is set up a VPN connection to a VPC. Second, some firewalls and entry-level routers support IPsec but not BGP; these devices are very popular in corporate branch offices. This change dramatically increases the number of VPN devices that can be used to connect to a VPC. We have tested the static routing ("no BGP") option with devices from Cisco, Juniper, Yamaha, Netgear, and Microsoft, and have assembled a list of VPN devices tested for dynamically and statically routed VPN connections.
Route Propagation: You can automatically propagate your VPN connection routes (whether statically entered or advertised via BGP) to your VPC route table. To enable this option for a particular route table, you establish an association between the table and a gateway. You can also update multiple route tables from the same virtual private gateway.
Multiple IP Addresses: When we launched the Elastic Network Interface (ENI) feature last December, you were limited to a maximum of two ENIs per EC2 instance, each with a single IP address. We are now raising these limits, allowing up to 30 IP addresses per interface and 8 interfaces per instance on the m2.4xlarge and cc2.8xlarge instances, with proportionally smaller limits for the less powerful instance types. Inspect the limits with care if you plan to use lots of interfaces or IP addresses and expect to switch between different instance sizes from time to time.
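The ENI limits quoted above imply a simple upper bound on addressable private IPs per instance, which is worth keeping in mind when planning to switch instance sizes:

```python
def max_private_ips(interfaces_per_instance, ips_per_interface):
    """Upper bound on private IP addresses an instance can hold via ENIs."""
    return interfaces_per_instance * ips_per_interface

# Limits quoted in the notes for the largest instance types
# (m2.4xlarge, cc2.8xlarge); smaller types have smaller limits.
print(max_private_ips(8, 30))  # 240
```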
Increased Snapshot Retention Period: Raised from 8 days to 35 days. http://aws.typepad.com/aws/2012/03/relational-database-service-increased-snapshot-retention-period.html
Multi-AZ Option for Oracle: http://aws.typepad.com/aws/2012/05/multi-az-option-for-amazon-rds-oracle.html
RDS for SQL Server: Express, Web, Standard, and Enterprise editions; SQL Server 2008 R2 and 2012. http://aws.typepad.com/aws/2012/05/net-support-for-aws-elastic-beanstalk-amazon-rds-for-sql-server-.html
Oracle Enterprise Manager 11g Database Control & Data Pump: OEM Database Control is pre-installed and available at no additional charge for new and existing Amazon RDS for Oracle DB instances and all supported Oracle editions (Enterprise Edition, Standard Edition, and Standard Edition One), for both License Included and Bring Your Own License customers. http://aws.typepad.com/aws/2012/05/oracle-enterprise-manager-oem-for-for-oracle-db-instances.html Customers have asked us to make it easier to import existing databases into Amazon RDS. Oracle Data Pump makes it easy to move data on and off of DB instances. Supported scenarios include transfer between an on-premises Oracle database and an RDS DB instance, transfer between an Oracle database running on an EC2 instance and an RDS DB instance, and transfer between two RDS DB instances. http://aws.typepad.com/aws/2012/09/amazon-rds-news-oracle-datapump-.html
SQL Server Database Engine Tuning Advisor and Agent: The Advisor helps you select and create an optimal set of indexes, indexed views, and partitions even if you don't have an expert-level understanding of the structure of your database or the internals of SQL Server. http://aws.typepad.com/aws/2012/07/database-engine-tuning-advisor-for-amazon-rds.html SQL Server Agent takes some of the manual heavy lifting of tuning and maintaining database services off database administrators' shoulders; for example, you can schedule regular index builds and data integrity checks as part of your regular maintenance program. http://aws.typepad.com/aws/2012/08/amazon-rds-for-sql-server-supports-sql-server-agent-.html
Free Tier: New AWS customers (see the AWS Free Usage Tier FAQ for eligibility details) can use the MySQL, Oracle (BYOL licensing model), or SQL Server database engines on a Micro DB instance for up to 750 hours per month, along with 20 GB of database storage, 10 million I/Os, and 20 GB of backup storage. http://aws.typepad.com/aws/2012/10/amazon-rds-now-available-in-the-aws-free-usage-tier.html
Promote Read Replicas: You can now convert a MySQL Read Replica into a standalone RDS database instance using the Promote Read Replica function. http://aws.typepad.com/aws/2012/10/amazon-rds-for-mysql-promote-read-replica.html
Endpoint Renaming: Simplified data recovery: Amazon RDS gives you multiple options for data recovery, including Point in Time Recovery, Read Replica Promotion, and Restore from DB Snapshot. Now that you can change the name and endpoint of a newly created DB instance, you can have it assume the identity of the original instance, eliminating the need to update your application with a new endpoint. Simplified architectural evolution: as your RDS-powered applications grow in size, scope, and complexity, the roles of individual instances may evolve; you can now rename instances to keep their names in sync with their new roles. http://aws.typepad.com/aws/2013/01/endpoint-renaming-for-amazon-rds.html
20 GB Objects: http://aws.typepad.com/aws/2011/12/amazon-cloudfront-support-for-20-gb-objects.html
Media Services: Live Smooth Streaming offers adaptive bit rate streaming of live content over HTTP. Your live content is delivered to clients as a series of MPEG-4 (MP4) fragments encoded at different bit rates, with each individual fragment cached by CloudFront edge servers. As clients play these video fragments, network conditions may change (for example, increased congestion in the viewer's local network) or streaming may be affected by other applications running on the client. Smooth Streaming-compatible clients use heuristics to dynamically monitor current local network and PC conditions, so clients can seamlessly switch video quality by requesting that CloudFront deliver the next fragment from a stream encoded at a different bit rate. This helps provide your viewers with the best playback experience possible for their local network conditions. http://aws.typepad.com/aws/2012/04/smooth-streaming-with-cloudfront-and-windows-media-services.html
Dynamic Content: Persistent TCP connections, support for multiple origins, support for query strings, variable time-to-live (TTL), large TCP window, and cookie support. http://aws.typepad.com/aws/2012/05/amazon-cloudfront-support-for-dynamic-content.html
Price Classes: http://aws.typepad.com/aws/2012/09/amazon-cloudfront-cookies-and-more.html
Metrics: Job flow progress (the number of map and reduce tasks running and remaining in your job flow, and the number of bytes read and written to S3 and HDFS); job flow contention (HDFS utilization, map and reduce slots open, jobs running, and the ratio between map tasks remaining and map slots); and job flow health (whether your job flow is idle, whether there are missing data blocks, and whether there are any dead nodes). http://aws.typepad.com/aws/2012/01/new-elastic-mapreduce-features-metrics-updates-vpc-and-cluster-compute-support-guest-post.html
Versions of Hadoop and Pig: EMR now supports running your job flows with Hadoop 0.20.205 and Pig 0.9.1. To simplify the upgrade process, we have also introduced the concept of AMI versions: you can provide a specific AMI version at job flow launch, or specify our "latest" AMI to ensure you are always using our most up-to-date features. The following AMI versions are now available: Version 2.0.x (Hadoop 0.20.205, Hive 0.7.1, Pig 0.9.1, Debian 6.0.2 "Squeeze") and Version 1.0.x (Hadoop 0.18.3 and 0.20.2, Hive 0.5 and 0.7.1, Pig 0.3 and 0.6, Debian 5.0 "Lenny").
HBase: You can now use Apache HBase to store and process extremely large amounts of data (think billions of rows and millions of columns per row) on AWS. HBase offers a number of powerful features, including strictly consistent reads and writes, high write throughput, automatic sharding of tables, efficient storage of sparse data, low-latency data access via in-memory operations, direct input and output to Hadoop jobs, and integration with Apache Hive for SQL-like queries over HBase tables, joins, and JDBC support. http://aws.typepad.com/aws/2012/06/apache-hbase-on-emr.html
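The job flow contention metric mentioned above, the ratio between map tasks remaining and open map slots, is just a division, but making it explicit clarifies how to read it:

```python
def map_contention(map_tasks_remaining, map_slots_open):
    """Ratio of map tasks remaining to open map slots, one of the new
    EMR job flow contention metrics. A value well above 1 suggests the
    cluster is a bottleneck for the current workload; the interpretation
    here is the author's reading of the metric, not AWS guidance."""
    return map_tasks_remaining / map_slots_open

print(map_contention(40, 20))  # 2.0: twice as many pending tasks as slots
```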
EC2 Instance Status Checks and Reporting: System status checks detect problems with the underlying EC2 systems used by each individual instance. The first system status check we are introducing is a reachability check: the System Reachability check confirms that we are able to get network packets to your instance. System status problems require AWS involvement to repair. We work hard to fix each one as soon as it arises, and we are continually driving down their occurrence, but we also want you to have enough visibility to decide whether to wait for our systems to fix the issue or resolve it yourself (by restarting or replacing an instance). Instance status checks detect problems within your instance, typically problems that you as a customer can fix, for example by rebooting the instance or making changes in your operating system. There is currently one instance status check: the Instance Reachability check confirms that we are able to deliver network packets to the operating system hosted on your instance. Over time, we will add to these checks as we continue to improve our detection methods. We are also introducing a reporting system that lets you provide us with additional information on the status of your EC2 instances. You can access this functionality from the new DescribeInstanceStatus and ReportInstanceStatus APIs, the AWS Management Console, and the command-line tools; the status of each of your instances is displayed in the instance list in the console. http://aws.typepad.com/aws/2012/01/ec2-instance-status-checks.html
Monitoring Scripts (Linux and Windows): You can run these scripts on your instances and configure them to report memory and disk space usage metrics to Amazon CloudWatch. Once the metrics are submitted, you can view graphs, calculate statistics, and set alarms on them in the CloudWatch console or via the CloudWatch API. Available metrics include memory utilization (%), memory used (MB), memory available (MB), swap utilization (%), swap used (MB), disk space utilization (%), disk space used (GB), and disk space available (GB). http://aws.typepad.com/aws/2012/03/new-amazon-cloudwatch-monitoring-scripts.html
Billing Alerts: The following variants of the billing metrics are stored in CloudWatch: total estimated charges; estimated charges by service; estimated charges by linked account (if you are using Consolidated Billing); and estimated charges by linked account and service (if you are using Consolidated Billing). http://aws.typepad.com/aws/2012/05/monitor-estimated-costs-using-amazon-cloudwatch-billing-metrics-and-alarms.html
Alarm Actions: The ability to stop or terminate your EC2 instances when a CloudWatch alarm is triggered. http://aws.typepad.com/aws/2013/01/amazon-cloudwatch-alarm-actions.html
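A billing alert boils down to a threshold rule over the estimated-charges metric. This toy sketch only illustrates the decision; real alarms are evaluated server-side by CloudWatch, and the threshold value is an arbitrary example:

```python
def billing_alarm_state(estimated_charges, threshold):
    """Toy evaluation of a CloudWatch billing alarm: the alarm fires once
    the month-to-date estimated charge exceeds the configured threshold."""
    return "ALARM" if estimated_charges > threshold else "OK"

print(billing_alarm_state(estimated_charges=120.50, threshold=100.0))  # ALARM
print(billing_alarm_state(estimated_charges=80.25, threshold=100.0))   # OK
```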
Cross Origin Resource Sharing Support: The CORS specification gives you the ability to build web applications that make requests to domains other than the one that supplied the primary content. http://aws.typepad.com/aws/2012/08/amazon-s3-cross-origin-resource-sharing.html
Root Domain Website Hosting: You can now host your website at the root of your domain (e.g. http://mysite.com), and you can use redirection rules to redirect website traffic to another domain. http://aws.typepad.com/aws/2012/12/root-domain-website-hosting-for-amazon-s3.html
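A CORS configuration on a bucket is essentially a list of rules like the one sketched below. The field names loosely mirror the elements of an S3 CORS rule (allowed origins, methods, headers, cache age) and the origin value is an example, so treat the exact shape as illustrative:

```python
# One illustrative CORS rule; the origin is an example domain.
cors_rule = {
    "AllowedOrigins": ["http://www.example.com"],
    "AllowedMethods": ["GET", "PUT"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000,
}

def allows(rule, origin, method):
    """Would this rule permit a cross-origin request from `origin`
    using HTTP `method`? (Simplified: ignores headers and wildcards
    other than a literal '*' origin.)"""
    origin_ok = "*" in rule["AllowedOrigins"] or origin in rule["AllowedOrigins"]
    return origin_ok and method in rule["AllowedMethods"]

print(allows(cors_rule, "http://www.example.com", "GET"))   # True
print(allows(cors_rule, "http://other.example", "GET"))     # False
```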
EBS Volume Status: Status checks and events: the new DescribeVolumeStatus API reflects the status of the volume and lists an event when a potential inconsistency is detected. The event tells you why a volume's status is impaired and when the impairment started. By default, when we detect a problem, we disable I/O on the volume to prevent application exposure to potential data inconsistency. Re-enabling I/O: the "IO Enabled" status check fails when I/O is blocked; you can re-enable I/O by calling the new EnableVolumeIO API. Automatically enable I/O: using the ModifyVolumeAttribute and DescribeVolumeAttribute APIs, you can configure a volume to automatically re-enable I/O, for cases where you favor immediate volume availability over consistency. For example, on an instance's boot volume where you are only writing logging information, you might accept possible inconsistency of the latest log entries in order to get the instance back online as quickly as possible. http://aws.typepad.com/aws/2012/03/the-next-type-of-ec2-status-check-ebs-volume-status.html
Provisioned IOPS: A new type of EBS volume, Provisioned IOPS, gives you the ability to dial in the level of performance that you need (currently up to 2,000 IOPS per volume). You can stripe (RAID 0) two or more volumes together to reach multiple thousands of IOPS (20,000). You can also launch EBS-Optimized instances, which feature dedicated throughput between these instances and their EBS volumes. http://aws.typepad.com/aws/2012/08/fast-forward-provisioned-iops-ebs.html
EBS Snapshot Copy (Between Regions): You can now copy EBS snapshots from one AWS Region to another. You can copy any accessible snapshot in the "completed" status, including snapshots you created, snapshots shared with you, and snapshots from the AWS Marketplace, VM Import/Export, and Storage Gateway. If you copy a Marketplace product to a new Region, make sure the product is supported in the destination Region. You can initiate copies from the AWS Management Console or the command line, or use the new CopySnapshot function from your own code. http://aws.typepad.com/aws/2012/12/ebs-snapshot-copy.html
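Given the 2,000 IOPS per-volume ceiling quoted in the notes, sizing a RAID 0 stripe for a target IOPS figure is simple arithmetic:

```python
import math

def volumes_for_iops(target_iops, per_volume_limit=2000):
    """Number of Provisioned IOPS volumes to stripe (RAID 0) to reach a
    target, given the 2,000 IOPS per-volume limit quoted in the notes."""
    return math.ceil(target_iops / per_volume_limit)

print(volumes_for_iops(20_000))  # 10 volumes for the 20,000 IOPS example
print(volumes_for_iops(2_500))   # 2 (round up: one volume tops out at 2,000)
```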
Gateway-Cached Volumes: With gateway-cached volumes, your storage volume data is stored encrypted in Amazon S3 and is visible within your enterprise's network via an iSCSI interface. Recently accessed data is cached on-premises for low-latency local access, so you get low-latency access to your active working set and seamless access to your entire data set stored in Amazon S3. http://aws.typepad.com/aws/2012/10/aws-storage-gateway-new-gateway-cached-volume-model.html
AMI: An Amazon Machine Image that runs Storage Gateway as a gateway-cached volume for your EC2 instances. http://docs.aws.amazon.com/storagegateway/latest/userguide/EC2Gateway.html
Automatic Scaling for Data & Traffic: Amazon CloudSearch scales up and down seamlessly as the amount of data or query volume changes, handling the operational footprint and provisioning search instances for you.
Low Latency, High Throughput: Amazon CloudSearch always stores your index in RAM to ensure low-latency, high-throughput performance even at large scale. Amazon CloudSearch was created from the same A9 technology that powers search on Amazon.com.
Easy Administration: Amazon CloudSearch is a fully managed service; hardware and software provisioning, setup and configuration, software patching, and data partitioning are handled for you.
Rich Search Features: Amazon CloudSearch indexes and searches both structured data and plain text. It includes most search features that developers have come to expect from a search engine, such as faceted search, free-text search, Boolean search, customizable relevance ranking, query-time rank expressions, field weighting, and sorting of results by any field. Amazon CloudSearch also provides near-real-time indexing of document updates.
Low Costs: Amazon CloudSearch is designed to be cost-efficient. You pay low hourly rates, and only for the resources you consume, giving a low total cost of ownership compared to operating a search environment on your own.
AWS Data Pipeline is a web service that provides a simple management system for data-driven workflows. Using AWS Data Pipeline, you define a pipeline composed of the "data sources" that contain your data, the "activities" or business logic such as EMR jobs or SQL queries, and the "schedule" on which your business logic executes. For example, you could define a job that, every hour, runs an Amazon Elastic MapReduce (Amazon EMR)-based analysis on that hour's Amazon Simple Storage Service (Amazon S3) log data, loads the results into a relational database for future lookup, and then automatically sends you a daily summary email. AWS Data Pipeline handles the scheduling, execution, and retry of these activities for you.
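The three-part decomposition above (data sources, activities, schedule) can be sketched as plain data. The field and activity names below are illustrative, not the actual AWS Data Pipeline definition format:

```python
# Sketch of the hourly log-analysis pipeline described above.
# Bucket name and activity names are made up for illustration.
pipeline = {
    "data_sources": ["s3://my-log-bucket/hourly/"],
    "activities": ["emr-log-analysis", "load-into-rds", "send-summary-email"],
    "schedule": "hourly",
}

def validate(p):
    """A pipeline definition needs all three parts the notes call out."""
    return all(p.get(k) for k in ("data_sources", "activities", "schedule"))

print(validate(pipeline))  # True
```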
Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse service that makes it simple and cost-effective to efficiently analyze all your data using your existing business intelligence tools. It is optimized for datasets ranging from a few hundred gigabytes to a petabyte or more and costs less than $1,000 per terabyte per year, a tenth the cost of most traditional data warehousing solutions.
Traditionally, transcoding has been complex for customers in three significant ways. First, customers need to buy and manage transcoding software, which can be expensive and challenging to maintain and configure. Second, producing transcoded output for different kinds of devices often involves trial and error to find the right transcoding settings that play properly and look good to the end user; this trial-and-error process wastes compute resources. Third, traditional encoding solutions don't scale up and down with customers' business needs. Instead, customers need to guess how much capacity to provision ahead of time, which inevitably means either wasted money (if they provision too much and leave capacity underutilized) or delay to their business (if they provision too little and need to wait to run their encoding jobs). With Amazon Elastic Transcoder, developers simply use the web-based console or APIs to create a transcoding job that specifies an input file, the transcoding settings, and the output file. This eliminates all three complexities. First, there is no need to buy, configure, or manage underlying transcoding software. Second, Amazon Elastic Transcoder has pre-defined presets for various devices that remove the need to find the right settings through trial and error; the system also supports custom presets that let customers tune output to specific requirements, such as a unique size or bit rate. Finally, Amazon Elastic Transcoder automatically scales up and down to handle customers' workloads, eliminating wasted capacity and minimizing time spent waiting for jobs to complete. It also enables customers to process multiple files in parallel and organize their transcoding workflow using a feature called transcoding pipelines.
With Amazon Elastic Transcoder's pipelines feature, customers set up pipelines for these various scenarios and ensure that their files are transcoded when and how they want, allowing them to scale efficiently for spiky workloads. For example, a news organization may want a "high priority" transcoding pipeline for breaking news stories, and a user-generated-content website may want separate pipelines for low, medium, and high resolution outputs to target different devices.
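The pipeline pattern just described can be sketched as routing jobs into named queues. The job fields and pipeline names below are illustrative; the real service identifies pipelines by ID via its API:

```python
from collections import defaultdict

def assign_jobs(jobs):
    """Group transcoding jobs into named pipelines, mimicking the
    'high priority' vs. default pipeline pattern from the notes."""
    pipelines = defaultdict(list)
    for job in jobs:
        pipelines[job.get("pipeline", "default")].append(job["input"])
    return dict(pipelines)

jobs = [
    {"input": "breaking-news.avi", "pipeline": "high-priority"},
    {"input": "cat-video.mp4"},  # no pipeline given: use the default
]
print(assign_jobs(jobs))
```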
ECU: One EC2 Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.
m3.xlarge: The Extra Large Instance has 15 GB of memory and 13 ECU spread across 4 virtual cores, with moderate I/O performance.
m3.2xlarge: The Double Extra Large Instance has 30 GB of memory and 26 ECU spread across 8 virtual cores, with high I/O performance.
cr1.8xlarge: Our new High Memory Cluster Eight Extra Large instance type is designed to host applications that have a voracious need for compute power, memory, and network bandwidth, such as in-memory databases, graph databases, and memory-intensive HPC. It offers two Intel E5-2670 processors running at 2.6 GHz with Intel Turbo Boost and NUMA support; 244 GiB of RAM; two 120 GB SSDs for instance storage; 10 Gigabit networking with support for Cluster Placement Groups; HVM virtualization only; and support for EBS-backed AMIs only.
hs1.8xlarge: The High Storage Eight Extra Large instances are a great fit for applications that require high storage depth and high sequential I/O performance. Each instance includes 117 GiB of RAM, 16 virtual cores (providing 35 ECU of compute performance), and 48 TB of instance storage across 24 hard disk drives capable of delivering up to 2.4 GB per second of I/O performance.
m1 and m3 generation Standard instances for general applications; Micro instances for lower-throughput applications; High Memory instances for memory-bound applications; High-CPU instances for scaled-out compute-intensive applications; Cluster Compute instances for compute-intensive applications; Cluster GPU instances for highly parallelized processing; and High I/O instances for random I/O performance.
EC2: We are reducing Linux On-Demand prices for First Generation Standard (M1) instances, Second Generation Standard (M3) instances, High Memory (M2) instances, and High CPU (C1) instances in all regions. All prices are effective from February 1, 2013. These reductions vary by instance type and region, but typically average 10-20%.
Data Transfer: We are reducing prices for data transfer between AWS locations. The new lower pricing applies to data transfer between all 9 global AWS regions, and from AWS regions to all global CloudFront edge locations. Previously, we charged normal internet bandwidth prices for data transfer; we are now lowering these charges significantly, allowing you to even more cost-effectively move data between regions for serving customers in local geographies, for disaster recovery, and for many other use cases. The new prices are effective February 1, 2013, and you don't need to do anything to take advantage of them.
Multi-AZ RDS: No longer costs the same as two RDS instances. Old price / new price / savings:
US East (Northern Virginia): $0.180 / $0.153 / 15%
US West (Northern California): $0.230 / $0.167 / 27%
US West (Oregon): $0.180 / $0.153 / 15%
AWS GovCloud (US): $0.240 / $0.187 / 22%
Europe (Ireland): $0.230 / $0.167 / 27%
Asia Pacific (Singapore): $0.230 / $0.196 / 15%
Asia Pacific (Tokyo): $0.240 / $0.204 / 15%
Asia Pacific (Sydney): $0.230 / $0.196 / 15%
South America (Sao Paulo): $0.300 / $0.204 / 32%
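As a quick check, the savings percentages in the Multi-AZ RDS price list follow directly from the old and new hourly prices:

```python
def savings_pct(old_price, new_price):
    """Percentage saved, rounded to whole percent as in the price list."""
    return round((old_price - new_price) / old_price * 100)

print(savings_pct(0.180, 0.153))  # US East row: 15
print(savings_pct(0.300, 0.204))  # Sao Paulo row: 32
```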
Many customers have a requirement to retain digital information for long periods of time (e.g., 7 years, 21 years, the life of the patient, or an indeterminate duration) in a format from which it can be retrieved when needed, albeit infrequently. This presents a challenge: storing large (and continually growing) volumes of information in a manner that is durable, economical, and low-maintenance. The Amazon Glacier service is designed to enable customers to efficiently and reliably store unlimited amounts of archival data at low cost, with high durability (designed to provide average annual durability of 99.999999999%), and for long periods of time. You can choose to retrieve your data anytime within a 3 to 5 hour window, rather than instantaneously. This enables you to meet the dual (and often conflicting) goals of cost-effective long-term storage and near-real-time data retrieval.
In Amazon Glacier, data is stored as archives that are uploaded to Amazon Glacier and organized into vaults, which customers can control access to using the AWS Identity and Access Management (IAM) service. You retrieve data by scheduling a job, which typically completes within 3 to 5 hours. Amazon Glacier integrates seamlessly with other AWS services such as Amazon S3 and the AWS storage and database services: Amazon S3 lets you create lifecycle policies that archive data to Glacier (and allow retrieval) automatically.
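Two things from the Glacier notes are easy to make concrete: the headline storage rate from the slide, and the shape of an S3-to-Glacier lifecycle rule. The lifecycle field names below loosely mirror an S3 lifecycle configuration and, like the prefix and day count, should be treated as illustrative:

```python
def glacier_monthly_cost(gb, price_per_gb=0.01):
    """Storage cost at the headline rate of $0.01 / GB / month."""
    return gb * price_per_gb

# Illustrative lifecycle rule: archive objects under a prefix to Glacier
# after 30 days. Field names are a sketch, not the exact S3 schema.
lifecycle_rule = {
    "Prefix": "logs/",
    "Transition": {"Days": 30, "StorageClass": "GLACIER"},
}

print(glacier_monthly_cost(10_000))  # a ~10 TB archive costs about $100/month
```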
Promoting RDS Read Replicas to primary is designed to help shard existing RDS servers that are getting overloaded; doing this manually would be difficult without a lot of downtime. Basically, the customer sets up as many as five read replicas and waits for them to sync. Then they break the replication, make each replica a master, and update their applications to point to the different masters based on how they want to split the data. After that, they can delete the tables or databases that are not used on each new master. This takes far less downtime than the alternative of stopping everything and launching new masters from a snapshot. Initially we won't be able to rename the databases, but that will follow a month or so after this release.
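The sharding flow above ends with a decision about which tables live on which promoted replica. A simple round-robin split illustrates one way to plan it; the table and replica names are made up:

```python
def shard_tables(tables, replicas):
    """Round-robin assignment of tables to promoted read replicas.
    Each replica keeps its assigned tables and drops the rest, as in
    the sharding pattern described in the notes."""
    plan = {r: [] for r in replicas}
    for i, table in enumerate(tables):
        plan[replicas[i % len(replicas)]].append(table)
    return plan

plan = shard_tables(["users", "orders", "events"], ["replica-1", "replica-2"])
print(plan)  # {'replica-1': ['users', 'events'], 'replica-2': ['orders']}
```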
By optimising your application to take advantage of the dynamic content delivery of CloudFront, you gain several advantages: your server load goes down, as you only have to serve the CloudFront origin, and latency is no longer a factor in delivering the content, which means you could move the origin servers to the least expensive region and save costs. Complex applications such as tourist information systems and real estate sites have been completely cached by CloudFront, alleviating the need to scale up complex search servers such as Solr and Lucene. Costly investment in infrastructure can be minimised, needing only enough compute power to serve as an origin for CloudFront. Imagine being able to scale up to serve customers around the globe without having to worry about server capacity.
This year saw the release of DynamoDB. In addition to being a highly scalable NoSQL database, DynamoDB can also act as a Hadoop-compatible file system for EMR. This means that, with the use of Pig and Hive, DynamoDB can be mapped as an external table for extract, transform and load (ETL) operations. Combined with the new release of AWS Data Pipeline, customers have many options for moving data around and into the cloud, with a wide range of tools to address the major challenges of Big Data.