3. Agenda “What’s New”
Oracle Servers
Solaris 11 and Virtualization
Storage
Engineered Systems
SuperCluster
4. SPARC/Solaris at OOW
50+ SPARC, Solaris and SuperCluster Sessions
30+ demos running on SPARC Solaris
New this year:
– Dedicated “Systems Venue” at Westin Market St.
– “Meet the Experts” discussions following sessions
– Lunches and “Recharging Stations” on premises for charging laptops and mobile devices
– Latest products on display
For all the details: Focus on Oracle Servers
New Systems Venue
5. SPARC/Solaris at OOW
Larry Ellison’s Welcoming Keynote, Sunday night
John Fowler/Thomas Kurian Keynote, Tues., 8 am
John Fowler on Engineered Systems (SuperCluster), Tues., 10:30 am
Masood Heydari, SPARC Systems Roadmap and Update, Mon., 1:45 pm
Markus Flierl, Solaris Strategy and Update, Mon., 12:15 pm
Executive Session Highlights
24. Availability End to End
Security
Performance
Oracle Solaris 11. Oracle Database. Oracle Java.
Engineered to Work Together
Compliance
25. New Solaris/RAC Kernel Mode
Acceleration
Allows Solaris to respond
directly to lock requests
Saves lock state in memory
shared by database and kernel
Best UNIX for Oracle RAC
New with Solaris 11.1
30-40%
lower latency lock grants
Up to 20%
higher throughput
Consistent, predictable
RAC performance
[Diagram: two cluster nodes, each running Oracle Database/RAC on Solaris, sharing lock state between database and kernel]
27. New Optimized Shared Memory
interface (OSM)
Works with Oracle DB
Automatic Memory
Management (AMM)
Dynamic, NUMA-aware, granule-based shared memory
Increased Availability
New Database Technology
Dynamically resize your
Database SGA online
without a reboot
Bring Oracle Database
instances up 2x faster
[Diagram: Oracle DB SGA]
31. Zones support for Exadata stack on
SPARC SuperCluster
RDSv3 Exclusive IP-typed Zones
Advanced IPoIB packet security
InfiniBand Limited membership
Pkey support
High speed, secure low latency
database deployments
Secure Multitenant Database Consolidation
New in Solaris 11.1
Simplify and consolidate database platforms
Domain Architecture Optimized for Application Workloads
[Diagram: two T4-4 nodes, each running Oracle Solaris 10 and Oracle Solaris 11 with a DB Domain and a general-purpose (GP) Domain hosting Solaris Zones, connected over an InfiniBand network to three Exadata Storage cells and a ZFS Storage Appliance]
36. Auto-offloading of CPU-intensive
security functions onto T4
crypto accelerator
Hardware acceleration for Oracle
DB Advanced Security Transparent
Data Encryption (TDE)
Turbo charged JRE security
End to End Data Encryption
Solaris 11 and SPARC T4
No compromise,
No tradeoffs
No additional costs
Use encryption
pervasively to
reduce risk
[Diagram: end-to-end encryption across the stack: ZFS file system crypto on storage, SSL for Oracle Fusion Middleware/WebLogic and SOAP traffic, tablespace encryption and SSL for the Oracle Database, IPsec (VPN) on the network, all under unified key management]
ZFS file system crypto: 4x faster vs. x86
10 Gb/s SSL: T4 uses 50% fewer threads to saturate 10GbE
OpenSSL: 4.3x faster single-thread security vs. POWER7
39. Oracle Solaris 11: First Fully Virtualized OS
Reduce costs. Increase agility. Infrastructure as a Service.
[Diagram: virtualized Web, Application, and Database tiers across server, network, and storage, with Finance, HR, and Sales Zones, each backed by its own dataset]
43. Edge Virtual Bridging
– Making the network “virtualization aware”
– Offload bandwidth control on switches
Data Center Bridging
– Convergence of storage and networking
– Enabler for low latency RDMA over
Ethernet
– Multiple lanes of traffic on the same link
Software Defined Networking
New with Solaris 11.1
Save cost by leveraging Ethernet for storage
Prioritize bandwidth for key applications
45. Federated File System support for a
single unified namespace
A collection of machines can be
bound into a FedFS unified
namespace using a private location
database
Clients are seamlessly redirected
when looking up or modifying (NFS)
data.
Cloud-Scale Data Management
New with Solaris 11.1
Share data easily across
cloud clients
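The FedFS lookup flow above can be sketched with a toy location database. This is a hedged illustration only (paths, server names, and the dict-based lookup are all invented for the example; real FedFS uses NFS referrals backed by an LDAP-style location service):

```python
# Toy "private location database": junctions in the unified namespace
# map to the fileserver that actually holds the data.
LOCATION_DB = {
    "/export/home": "nfs://server-a/home",
    "/export/projects": "nfs://server-b/projects",
}

def resolve(path):
    """Redirect a unified-namespace path to its owning server."""
    for junction, target in LOCATION_DB.items():
        if path.startswith(junction):
            # Client is transparently referred to the owning server,
            # keeping the remainder of the path.
            return target + path[len(junction):]
    raise FileNotFoundError(path)

assert resolve("/export/home/alice") == "nfs://server-a/home/alice"
```

Clients see one namespace; which machine serves a given subtree is a detail the location database can change without touching the clients.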
46. Zone updates now execute
in parallel
Time savings multiply across the datacenter
Fast Zone Updates
New with Solaris 11.1
4x less downtime
during maintenance windows
[Chart: minutes to update a T4 system with 20 zones, Solaris 11.1 vs. Solaris 11]
Increase consolidation ratios without
increasing maintenance windows
47. Zones on shared storage
Zones framework automatically
manages
• Configuration/un-configuration of
storage services
• Attach/detach of storage devices
• zpool creation, import, export
• For SAN and iSCSI
Easy Mobility for Zones
New with Solaris 11.1
Move zones around between systems
quickly and easily
[Diagram: Zones A, B, and C moving between systems on shared storage]
48. Safe deployment of mission critical
Solaris 10 applications in Solaris 10
Zone Clusters
More control and flexibility with
support of exclusive IP
Improved resource and priority
management for zone clusters
Simplified Zone cluster set-up
through configuration wizard
Zone Clusters for Solaris 10 Applications
New with Solaris Cluster 4.1
[Diagram: Web, Application, and Database tiers, each running a mix of Solaris 11 and Solaris 10 Zones in zone clusters]
Protect application investment
Take advantage of the
latest server platforms
50. Total Cloud Control: Solaris IaaS
Complete Lifecycle Management
Integrated Cloud Stack Management
Business-Driven Application Management
Self-Service IT | Simple and Automated | Business Driven
92. Discover Oracle SuperCluster
as a single system
Hardware event interface view
of Oracle SuperCluster as a
single system
Monitoring and active
management tasks separated
by role
Total Systems Management
Oracle Enterprise Manager
Ops Center 12c
Oracle produced a world record single-server SPECjEnterprise2010 benchmark result of 27,843.57 SPECjEnterprise2010 EjOPS using one of Oracle's SPARC T5-8 servers for both the application and the database tier. This result directly compares the 8-chip SPARC T5-8 (8 SPARC T5 processors) to the 8-chip IBM Power 780 (8 POWER7+ processors). The 8-chip SPARC T5 is 2.6x faster than the 8-chip IBM POWER7+ server. Both Oracle and IBM used virtualization to provide 4 chips for the application and 4 chips for the database.
The server cost/performance for the SPARC T5 server was 7.1x better than the IBM POWER7+: $10.72 for the SPARC T5-8 compared to $76.64 for the IBM Power 780. The total configuration cost/performance (hardware plus software) for the SPARC T5 was 3.6x better than the IBM POWER7+: $56.21 for the SPARC T5-8 compared to $199.43 for the IBM Power 780. The IBM system had 1.6x better performance per core, but this did not reduce the total hardware-plus-software cost. This shows performance-per-core is a poor predictor of characteristics relevant to customers.
IBM also has a non-virtualized result (one server for the application and one server for the database). The IBM PowerLinux 7R2 achieved 13,161.07 SPECjEnterprise2010 EjOPS, which means it was 2.1x slower than the SPARC T5-8 server. The total configuration cost/performance (hardware plus software) for the SPARC T5 server was 11% better than the IBM POWER7+ server: $56.21 for the SPARC T5-8 compared to $62.26 for the IBM PowerLinux 7R2. The total IBM hardware-plus-software cost was $2,174,152 versus $1,565,092 for Oracle. IBM could only provide 768 GB of memory, while Oracle was able to deliver 2 TB in the SPARC T5-8. The SPARC T5-8 requires only 8 rack units, the same space as the IBM Power 780, yet IBM's hardware core density of 4 cores per rack unit contrasts with 16 cores per rack unit for the SPARC T5-8. Again, performance-per-core is a poor predictor of characteristics relevant to customers.
The virtualized SPARC T5 server ran the application tier on 4 chips using Oracle Solaris Zones and the database tier in a 4-chip Oracle Solaris Zone; the virtualized IBM POWER7+ server ran the application in a 4-chip LPAR and the database in a 4-chip LPAR. The SPARC T5-8 ran the Oracle Solaris 11.1 operating system and used Oracle Solaris Zones to consolidate eight Oracle WebLogic application server instances and one database server instance to achieve this result. The IBM system used LPARs and AIX V7.1. This result demonstrated less than 1 second average response times for all SPECjEnterprise2010 transactions and represents JEE 5.0 transactions generated by 227,500 users.
Oracle's SPARC T5-4 server delivered world record single-server performance of 409,721 QphH@3000GB with price/performance of $3.94/QphH@3000GB on the TPC-H @3000GB benchmark. The 4-chip SPARC T5 is significantly faster than both 8-chip IBM POWER7 and 8-chip HP Intel x86 processor-based servers. This result demonstrates a complete data warehouse solution, showing the performance of both individual and concurrent query processing streams along with faster loading and refresh of the data during business operations. The SPARC T5-4 server delivers superior performance and cost efficiency when compared to IBM's POWER7 solution.
The SPARC T5-4 with four SPARC T5 processors is 2.1 times faster than the IBM Power 780 server with eight POWER7 processors and 2.5 times faster than the HP ProLiant DL980 G7 server with eight x86 processors on the TPC-H @3000GB benchmark. The SPARC T5-4 also delivered better performance per core than these eight-processor systems from IBM and HP.
Against IBM, the SPARC T5-4 is 38% better in price/performance than the IBM Power 780, 2.8 times faster for data loading, and up to 7.6 times faster for the refresh function. Against HP, the SPARC T5-4 is 2.5 times faster than the HP ProLiant DL980 G7 server with the same number of cores, 4.1 times faster for data loading, and up to 8.9 times faster for the refresh function.
The SPARC T5-4 delivered 6% better performance than the SPARC Enterprise M9000-64 server and 2.1 times better than the SPARC Enterprise M9000-32 server on the TPC-H @3000GB benchmark.
Oracle's SPARC T5-8 server, equipped with eight 3.6 GHz SPARC T5 processors, obtained a result of 8,552,523 tpmC on the TPC-C benchmark, a world record for single servers, with price/performance of $0.55/tpmC running Oracle Database 11g Release 2 Enterprise Edition with Partitioning. This configuration is available 09/25/13.
The SPARC T5-8 has 2.8x better performance than the 4-processor IBM x3850 X5 system equipped with Intel Xeon processors, and delivers 1.7x the performance of the next best 8-chip result. Per chip, the SPARC T5-8 delivers 2.4x the performance of the IBM Power 780 3-node cluster result, 1.8x the non-clustered IBM Power 780, 1.4x the IBM Flex x240 Xeon result, and 1.7x the Sun Server X2-8 system equipped with Intel Xeon processors.
The SPARC T5-8 demonstrated over 3.1 million 4 KB IOPS with 76% idle, demonstrating its ability to process a large I/O workload with plenty of processing headroom. This result showed that Oracle's integrated hardware and software stacks provide industry-leading performance. The Oracle solution used Oracle Solaris 11.1 with Oracle Database 11g Enterprise Edition with Partitioning, demonstrating the stability and performance of this highly secure operating environment in producing the world record TPC-C benchmark result.
Oracle, using Oracle Solaris and Oracle JDK, delivered a two-socket server world record result on the SPECjbb2013 benchmark, Multi-JVM metric. This benchmark is designed to showcase Java performance. SPECjbb2013 is the replacement for SPECjbb2005, which will soon be retired by SPEC.
Oracle's SPARC T5-2 server achieved 75,658 SPECjbb2013-MultiJVM max-jOPS and 23,268 SPECjbb2013-MultiJVM critical-jOPS, a two-chip world record. (Oracle has submitted this result for review by SPEC.) There are no IBM POWER7 or POWER7+ based server results on the SPECjbb2013 benchmark; IBM has published POWER7+ based results only on the soon-to-be-retired SPECjbb2005.
The SPARC T5-2 server is 1.9x faster than the 2-chip HP ProLiant ML350p server (2.9 GHz E5-2690, Sandy Bridge-based) based on SPECjbb2013-MultiJVM max-jOPS, and the 2-chip SPARC T5-2 is 15% faster than the 4-chip HP ProLiant DL560p server (2.7 GHz E5-4650, Sandy Bridge-based). The Sun Server X3-2 system running Oracle Solaris 11 is 5% faster than the HP ProLiant ML350p Gen8 server running Windows Server 2008 based on SPECjbb2013-MultiJVM max-jOPS.
Oracle's SPARC T4-2 server achieved 34,804 SPECjbb2013-MultiJVM max-jOPS and 10,101 critical-jOPS; Oracle's Sun Server X3-2 system achieved 41,954 max-jOPS and 13,305 critical-jOPS; and Oracle's Sun Server X2-4 system achieved 65,211 max-jOPS and 22,057 critical-jOPS on the SPECjbb2013 benchmark. (Oracle has submitted these results for review by SPEC.)
These results were obtained using Oracle Solaris 11 along with Java Platform, Standard Edition, JDK 7 Update 17 on the SPARC T5-2 server, SPARC T4-2 server, Sun Server X3-2 and Sun Server X2-4. From SPEC's press release, "SPECjbb2013 replaces SPECjbb2005. The new benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is expected to be used widely by all those interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community."
The cryptography benchmark suite was internally developed by Oracle to measure the maximum throughput of in-memory, on-chip encryption operations that a system can perform. Multiple threads are used to achieve the maximum throughput. Systems powered by Oracle's SPARC T5 processor show outstanding performance on the tested encryption operations, beating Intel processor-based systems. A SPARC T5 processor running Oracle Solaris 11.1 runs from 2.4x to 4.4x faster on AES 256-bit key encryption than the Intel E5-2690 processor running Oracle Linux 6.3 for in-memory encryption of 32 KB blocks using CFB128, CBC, CCM and GCM modes, fully hardware subscribed. AES CFB mode is used by Oracle Database 11g for Transparent Data Encryption (TDE), which secures database storage.
Performance Landscape
Presented below are results for running encryption using the AES cipher with the CFB, CBC, CCM and GCM modes for key sizes of 128, 192 and 256 bits. Decryption performance was similar and is not presented. Results are presented as MB/sec (10**6).
Encryption Performance, AES-CFB
AES-CFB is the mode used in the Oracle Database for tablespace encryption. Performance is presented for in-memory AES-CFB128 mode encryption, for key sizes of 256, 192 and 128 bits. The encryption was performed on 32 KB of pseudo-random data (the same data for each run).
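The shape of such a throughput benchmark (fixed 32 KB block, multiple threads, MB/sec reported base 10) can be sketched as below. This is an assumption-laden stand-in, not Oracle's suite: Python's standard library has no AES, so SHA-256 substitutes for the cipher purely to show the harness structure.

```python
# Multi-threaded in-memory throughput harness over 32 KB blocks.
# SHA-256 stands in for AES-CFB, which is not in the stdlib.
import hashlib
import os
import time
from concurrent.futures import ThreadPoolExecutor

BLOCK = os.urandom(32 * 1024)           # same 32 KB block for every run
ITERS = 200                             # iterations per thread

def worker(_):
    for _ in range(ITERS):
        hashlib.sha256(BLOCK).digest()  # stand-in for the encrypt op
    return ITERS * len(BLOCK)           # bytes processed by this thread

THREADS = 4
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=THREADS) as pool:
    total_bytes = sum(pool.map(worker, range(THREADS)))
elapsed = time.perf_counter() - start

mb_per_sec = total_bytes / elapsed / 1e6   # MB/sec, base 10 as in the notes
```

A real measurement would swap in the hardware-accelerated cipher and scale the thread count until throughput plateaus ("fully hardware subscribed").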
With Oracle Solaris 10, we raised the bar in terms of the innovation included in the operating system. Oracle Solaris is a trusted platform for enterprise deployments among our customer base, providing a highly available, mission-critical foundation that they can rely on. Larry Ellison has often said, “If it must work, it runs on Solaris”. With features such as our predictive self-healing capability (through the Service Management Facility and Fault Management Architecture), we can continue to run in the event of software and hardware failures, either by restarting services or by isolating hardware components until they can be replaced later. ZFS, the next-generation file system, provides tremendous scale and data integrity when managing the high volumes of data in today's computing industry. Built-in data services like snapshots and cloning, encryption, RAID and others mean that customers no longer need to purchase additional expensive software to secure their data. With Oracle Solaris Cluster, customers benefit from unique integration with the fastest levels of failover for application clustering. Through technologies like Oracle Solaris auditing and Role-Based Access Control (RBAC), customers can meet compliance requirements while gaining unprecedented observability of live production systems, safely, through the DTrace dynamic tracing framework.
Oracle Solaris 11 brings all of the attributes of a mission-critical enterprise operating system together, with a focus on also being an agile and flexible environment for cloud computing at large scale. With built-in server, storage and network virtualization, no other operating system provides as complete a virtualized solution for the demands of cloud computing in a scalable way. We've done a lot of work to ensure we can provision the operating system quickly and consistently, and manage it for the complete lifecycle in a fail-safe environment.
One of the technology foundations of this approach is Oracle Solaris Zones, which provide OS virtualization at extremely low overhead – the perfect vehicle in deploying applications within a multi-tenant cloud environment.
What it could be
Key elements of an optimized datacenter
In order to have an optimized data center that can become a platform for innovation, you need to work through three steps.
Performance: Improve overall application performance, not just individual components
Simplicity: Simple to deploy, manage, support, and upgrade
Risk: Reduce deployment risk, downtime risk, security risk, and vendor lock-in risk
The Oracle Real Application Clusters (RAC) distributed database product includes the Lock Management System (LMS), a user-level distributed lock protocol which mediates requests for database blocks between processes on the nodes of a database cluster. Fulfilling a request requires traversing and copying data across the user/kernel boundary on the requesting and serving nodes, even for the significant number of requests for blocks with uncontended locks. We have created a kernel accelerator (KA), which filters database block requests destined for LMS processes and directly grants requests for blocks with uncontended locks, thereby eliminating user/kernel context switches, the associated data copying, and LMS application-level processing for those requests.
The KA exports shared memory in which the LMS locking daemon places its lock table. The KA intercepts DBMS block requests over the RDSv3 communications protocol used between cluster nodes and calls into a DBMS-provided kernel accelerator run-time (KA RT) module, which consults the shared-memory lock table. If the lock is available, the KA replies from the kernel, granting the request directly to the requesting node; if the lock is not available, the KA passes the request up to the LMS user process, which handles the request in the same fashion as when no KA is present.
This not only speeds up the process of granting locks, it also frees up CPU cycles, allowing for better throughput on the order of 30-40% depending on the workload.
Note: The next-generation Oracle Database technology plans to include Oracle RAC Kernel Mode Acceleration. The kernel accelerator is available for Oracle Solaris 11 and Oracle Linux.
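The KA fast path described above can be modeled in a few lines. This is a hedged conceptual sketch only (all names are hypothetical, not Oracle's implementation): consult the shared-memory lock table, grant uncontended requests directly from the "kernel", and pass contended ones up to the user-level LMS daemon.

```python
# Conceptual model of the kernel accelerator (KA) fast path.
# shared_lock_table stands in for the lock table the LMS daemon
# places in memory shared with the kernel.
shared_lock_table = {"block-1": "free", "block-2": "held"}

def lms_handle(block):
    # Stand-in for the user-level LMS slow path, which costs a
    # user/kernel context switch and data copy per request.
    return f"{block}: queued by LMS"

def ka_request(block):
    """Grant uncontended lock requests directly; defer the rest to LMS."""
    if shared_lock_table.get(block) == "free":
        shared_lock_table[block] = "held"
        return f"{block}: granted in kernel"   # no user/kernel crossing
    return lms_handle(block)

assert ka_request("block-1") == "block-1: granted in kernel"
assert ka_request("block-2") == "block-2: queued by LMS"
```

The saving comes entirely from the first branch: every uncontended grant skips the context switches and copies that the LMS path would incur.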
As industry trends have driven memory prices down, it is now feasible to deliver very large memory systems with more than 64 TB of memory. This unprecedented availability of memory leads to new opportunities, such as running enterprise applications entirely in memory. Large memory configurations introduce problems for current operating systems because most virtual memory subsystems were designed in the 1980s, or at best the 1990s, and were not designed to deal with the capacity or the unique problems that very large memory introduces.
Oracle engineers foresaw this opportunity and, working with hardware designers, designed a new virtual memory system for Solaris that not only scales but also optimizes the placement of resources near heavily used components. The result is a high-performance virtual memory system that scales with the size of system memory.
Very large memory pages are now possible with the Solaris 11.1 virtual memory system, to better match application needs such as the database. Many database records can now be stored in these large memory pages, improving overall database performance by reducing multi-step memory operations.
Another innovation is the new built-in memory predictor, which monitors large memory page use and adjusts the size of memory pages to better match the demand for those pages. The predictor looks at memory demand across the system. For the database, this means the predictor will try to ensure that the large pages the database requests at startup are kept available if at all possible. This optimization also helps smaller memory systems by more closely matching the size of memory pages with application needs.
Note: This feature is only available in Oracle Solaris 11.
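The idea of matching page size to demand can be illustrated with a toy policy. This is only a sketch under assumptions of my own (the page sizes and the "largest page that fits" rule are illustrative, not the Solaris predictor's actual algorithm):

```python
# Toy policy: back each mapping with the largest supported page size
# that does not exceed the mapping, so a huge SGA gets huge pages
# while small mappings stay on base pages.
PAGE_SIZES = [4 * 2**10, 2 * 2**20, 256 * 2**20, 2 * 2**30]  # example sizes

def pick_page_size(mapping_bytes):
    """Return the largest page size no bigger than the mapping."""
    candidates = [p for p in PAGE_SIZES if p <= mapping_bytes]
    return candidates[-1] if candidates else PAGE_SIZES[0]

assert pick_page_size(8 * 2**30) == 2 * 2**30   # big SGA -> huge pages
assert pick_page_size(64 * 2**10) == 4 * 2**10  # small mapping -> base pages
```

Fewer, larger pages mean fewer TLB entries and fewer multi-step translations for the same amount of mapped memory, which is where the database win described above comes from.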
The System Global Area (SGA) is a group of shared memory areas dedicated to an Oracle instance (an instance is your database programs and RAM). All Oracle processes use the SGA to hold information. The SGA stores incoming data (the data buffers as defined by the db_cache_size parameter) and internal control information needed by the database. You control the amount of memory allocated to the SGA by setting some of the Oracle initialization parameters.
Optimized Shared Memory (OSM), a new shared memory interface in Solaris, allows dynamic resizing of the SGA without having to reserve memory and reboot the database, as is required on Linux or AIX. This also improves the startup and shutdown of database instances: using one (or fewer) OSM segments is faster and simpler than using many Intimate Shared Memory (ISM) segments.
OSM has the advantages of Dynamic Intimate Shared Memory (DISM) but resolves DISM's issues. With OSM, memory can be allocated as needed (rather than all at once), and the traditional memcntl() interface can be used to adjust allocation policy. The new interfaces are "optimized" for real-world usage: they offer flexibility without compromising performance, and functionality without onerous user requirements. The interfaces are based on the concept of a "granule size", which is set once as part of creating the segment. This approach was adopted because it reflects an important real-world usage pattern, and it allows the application to communicate a very relevant bit of usage information (the granule size) to the operating system.
Note: The next-generation Oracle Database technology will use OSM. This feature is only available in Oracle Solaris (S11.1, S11 and S10U10). The next-generation Oracle Database will also have fast instance startup, about 2x faster depending on SGA size. Fast instance startup doesn't allocate the entire SGA at startup: parts of the SGA are allocated up front and the rest is allocated over time. With OSM, this simply locks the granules that you need.
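The granule concept behind OSM can be sketched as follows. This is a minimal illustration under my own assumptions (class and method names are invented, not the actual Solaris interfaces, which are C-level and use memcntl()): the SGA is a reservation of fixed-size granules, and only the granules currently needed are locked down, so resizing is an online operation.

```python
# Granule-based segment sketch: the segment reserves address space up
# front, but only locks (backs with memory) as many granules as the
# current SGA target requires. Resizing adjusts the locked count only.
class OsmSegment:
    def __init__(self, max_bytes, granule_bytes):
        self.granule = granule_bytes              # fixed at creation
        self.max_granules = max_bytes // granule_bytes
        self.locked = 0                           # granules actually backed

    def resize(self, target_bytes):
        """Grow or shrink the locked region online, in granule units."""
        want = -(-target_bytes // self.granule)   # ceiling division
        self.locked = min(want, self.max_granules)
        return self.locked * self.granule         # bytes actually in use

sga = OsmSegment(max_bytes=16 * 2**30, granule_bytes=256 * 2**20)
assert sga.resize(4 * 2**30) == 4 * 2**30         # grow to 4 GB, no reboot
assert sga.resize(1 * 2**30) == 1 * 2**30         # shrink back online
```

Because the granule size is communicated once at segment creation, the operating system can size its bookkeeping to match the database's real allocation pattern instead of tracking individual pages.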
An 11.2.0.3 as well as a 12.1.0.1 instance can use a 30 TB SGA. A 12c DB instance with a 30 TB SGA can start in 130 seconds.
Solaris optimizations for fast DB startup:
• Scalable kernel memory allocator for the SGA: benefits the 11g Database more than 12c; directly affects no-mount time during startup
• Parallelized metadata initialization in the kernel for ISM: benefits both 11g and 12c Database instances; improves the 'open' phase of startup
Database optimizations for fast DB startup:
• Faster spawning of background processes (use of the posix_spawn() interface); available in both 11g and 12c
• Deferred SGA allocation (only available in 12c)
• Deferred creation of selected background processes such as PQ slaves, until required (only available in 12c)
Since the acquisition of Sun, Oracle Solaris engineering teams have been working closely with the Oracle Database team on how innovations in the OS can help database deployments. Oracle Solaris DTrace is a dynamic tracing capability in the operating system that allows administrators to safely observe what's happening in real time on live production systems. This technology has powered the ZFS Storage Appliance analytics, and administrators can use it to ask questions about application performance right across a system.
With Oracle's next-generation database we can look for outlier I/O events through integration with DTrace. We can check the v$kernel_io_outlier table to extract information about time spent in the kernel for I/Os whose end-to-end latency exceeds a given threshold (500 ms by default, tunable via the '_io_outlier_threshold' parameter; the example below was on an instance with this set to 200 ms):

SQL> desc v$kernel_io_outlier
 Name                  Null?    Type
 --------------------- -------- --------------
 TIMESTAMP                      NUMBER
 IO_SIZE                        NUMBER
 IO_OFFSET                      NUMBER
 DEVICE_NAME                    VARCHAR2(513)
 PROCESS_NAME                   VARCHAR2(64)
 TOTAL_LATENCY                  NUMBER
 SETUP_LATENCY                  NUMBER
 QUEUE_TO_HBA_LATENCY           NUMBER
 TRANSFER_LATENCY               NUMBER
 CLEANUP_LATENCY                NUMBER
 PID                            NUMBER
 CON_ID                         NUMBER

SQL> select IO_SIZE, PID, TOTAL_LATENCY, SETUP_LATENCY, QUEUE_TO_HBA_LATENCY, TRANSFER_LATENCY, CLEANUP_LATENCY from v$kernel_io_outlier;

DEVICE_NAME: sd@3,0:a,raw
IO_SIZE: 64  PID: 0  TOTAL_LATENCY: 402554  SETUP_LATENCY: 2020  QUEUE_TO_HBA_LATENCY: 107  TRANSFER_LATENCY: 400361  CLEANUP_LATENCY: 64

This example shows that a single 64 KB write to a SCSI target had an end-to-end latency of just over 400 milliseconds, broken down as follows:
SETUP_LATENCY: time in microseconds spent during initial I/O setup before sending to the SCSI target device driver (2020 microseconds)
QUEUE_TO_HBA_LATENCY: time in microseconds spent in the SCSI target device driver before being sent to the Host Bus Adaptor (107 microseconds)
TRANSFER_LATENCY: time spent transferring (DMA) to the physical device (~400 milliseconds)
CLEANUP_LATENCY: time in microseconds spent freeing resources used by the completed I/O (64 microseconds)
Note: The next-generation Oracle Database technology uses DTrace to provide I/O observability. This feature is only available on Oracle Solaris.
Other databases have row and column formats, but you must choose one format for a given table, so you get either fast OLTP or fast analytics on that table, but not both. Oracle's unique dual-format architecture allows data to be stored in both row and column format simultaneously, eliminating the tradeoffs required by others.
Until now, this could only be achieved by keeping a second copy of the table (a data mart, reporting DB, or operational data store), which adds cost and complexity to the environment, requires additional ETL processing, and incurs time delays. With Oracle's unique approach there is a single copy of the table on storage, so there are no additional storage costs, synchronization issues, and so on.
The Oracle optimizer is in-memory aware: it has been optimized to automatically route analytic queries to the column store and OLTP queries to the row store.
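The routing idea above can be shown with a toy example. This is a deliberately minimal sketch, not Oracle internals: one logical table held in both a row layout and a column layout, with a stand-in "optimizer" sending point lookups to the row store and aggregates to the column store.

```python
# One logical table, two physical layouts kept in sync.
rows = [{"id": 1, "amt": 10}, {"id": 2, "amt": 30}]   # row store (OLTP)
cols = {"id": [1, 2], "amt": [10, 30]}                # column store (analytics)

def run_query(kind):
    """Toy optimizer: route each query type to the suitable format."""
    if kind == "oltp":                 # point lookup -> row store
        return next(r for r in rows if r["id"] == 2)
    if kind == "analytic":             # column aggregate -> column store
        return sum(cols["amt"])

assert run_query("oltp") == {"id": 2, "amt": 30}
assert run_query("analytic") == 40
```

The point of the dual format is exactly this division of labor: the row layout touches one record per lookup, while the column layout scans one contiguous column per aggregate, and neither workload pays for the other's layout.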
The Oracle SPARC SuperCluster is a general-purpose engineered system that combines the computing power of the SPARC T4 processor, the performance and scalability of Oracle Solaris 11, the optimized database performance of Oracle Exadata storage, and the accelerated middleware processing of the Oracle Exalogic Elastic Cloud. One of the limitations of the Exadata Database Machine was the inability to host virtualized environments and carve up the system. With SPARC SuperCluster, Oracle Solaris Zones are supported, and all the protocols that accelerate network performance on the system's InfiniBand backplane have been implemented in non-global zones. This provides an additional level of consolidation opportunity for administrators wanting the benefits of the Exadata Database Machine together with some of the flexibility of virtualization.
Oracle's Transparent Data Encryption (TDE) feature simplifies the encryption of data within datafiles, preventing unauthorized access to it from the operating system. Tablespace encryption allows encryption of the entire contents of a tablespace. Data is transparently encrypted when written to disk and transparently decrypted after an application user has successfully authenticated and passed all authorization checks. Authorization checks include verifying that the user has the necessary select and update privileges on the application table and checking Database Vault, Label Security and Virtual Private Database enforcement policies. Oracle's SPARC T4 processor with hardware cryptography acceleration can greatly improve performance over software implementations. This should greatly expand the use of TDE for many customers.
Performance on Oracle TDE (Transparent Data Encryption):
• SPARC T4: 44% faster secure queries than x86 Westmere (AES-NI)
• Combination of fast query processing and TDE
• Tests of 8 different queries on 2-socket servers
• Consistent SPARC T4 query time from 128-bit to 256-bit ciphers
Oracle Advanced Security TDE column encryption was introduced in Oracle Database 10g Release 2, enabling encryption of application table columns containing credit card or social security numbers. Oracle Advanced Security TDE tablespace encryption and support for hardware security modules (HSM) were introduced with Oracle Database 11gR1. A Hardware Security Module (HSM) is a device used to secure keys and perform cryptographic operations; these devices can be standalone network-based appliances or pluggable PCI cards, and in the context of TDE they can create and store the TDE master key. Advanced Encryption Standard (AES) is a symmetric cipher algorithm defined in Federal Information Processing Standards (FIPS) publication 197; AES provides three approved key lengths: 256, 192, and 128 bits. PKCS#11 is a standard developed by RSA for communicating with cryptographic devices.
Transparent Data Encryption is one of the three components of the Oracle Advanced Security option for Oracle Database 11g Release 2 Enterprise Edition; it provides transparent encryption of stored data to support your compliance efforts. Applications do not have to be modified and will continue to work seamlessly as before. Data is automatically encrypted when written to disk and automatically decrypted when accessed by the application. Key management is built in, eliminating the complex task of creating, managing and securing encryption keys.
Oracle Solaris DTrace is a comprehensive dynamic tracing facility built into Oracle Solaris that can be safely used by administrators and developers on live production systems to examine the behavior of both user programs and the operating system itself. DTrace enables you to explore your system to understand how it works, track down performance problems across many layers of software, or locate the cause of aberrant behavior.
The following Oracle Solaris DTrace probes have been added to the Java HotSpot VM:
• VM lifecycle probes
• Thread lifecycle probes
• Class loading probes
• Garbage collection probes
• Monitor probes
This instrumentation, available with Java Mission Control, allows users to closely monitor their Java applications, uncover performance bottlenecks, and troubleshoot runtime issues.
Organizations worldwide are scrambling to secure sensitive information in response to regulatory pressure to protect data privacy and integrity, as well as to defend against increasingly sophisticated attacks targeting this data. Encrypting data in applications, however, requires costly and complex code changes, often with disastrous performance consequences. Fortunately, these pitfalls can be avoided.

Oracle Advanced Security is an option for Oracle Database Enterprise Edition providing three main features: Transparent Data Encryption (TDE), network encryption, and strong authentication. Transparent Data Encryption (TDE) provides the ability to encrypt sensitive application data on storage media completely transparently to the application itself; no application modifications are needed, and applications continue to work seamlessly as before. Data is automatically encrypted when it is written to disk and automatically decrypted when accessed by the application. Key management is built in, eliminating the complex task of creating, managing and securing encryption keys. TDE addresses encryption requirements associated with public and private privacy and security mandates such as PCI and California SB1386.

Oracle's Transparent Data Encryption feature simplifies the encryption of data within data files, preventing unauthorized access to it from the operating system. Tablespace encryption allows encryption of the entire contents of a tablespace. Data is transparently encrypted when written to disk and transparently decrypted after an application user has successfully authenticated and passed all authorization checks.
Authorization checks include verifying that the user has the necessary select and update privileges on the application table and checking Database Vault, Label Security and Virtual Private Database enforcement policies. Through integration with Oracle's SPARC T4 processor, hardware cryptographic acceleration can greatly improve performance over software implementations. This should greatly expand the use of TDE for many customers.

Note: Oracle Advanced Security TDE column encryption was introduced in Oracle Database 10g Release 2, enabling encryption of application table columns containing credit card or social security numbers. Oracle Advanced Security TDE tablespace encryption and support for hardware security modules (HSM) were introduced with Oracle Database 11gR1.
– Hardware Security Module (HSM) – a device used to secure keys and perform cryptographic operations. These devices can be standalone network-based appliances or pluggable PCI cards. In the context of TDE, these devices can create and store the TDE master key.
– Advanced Encryption Standard (AES) – a symmetric cipher algorithm defined in Federal Information Processing Standards (FIPS) publication 197. AES provides three approved key lengths: 256, 192, and 128 bits.
– PKCS#11 – a standard developed by RSA for communicating with cryptographic devices.
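As an illustrative sketch of how transparent this is in practice, the commands below create an encrypted tablespace using Oracle Database 11g TDE syntax. The tablespace name, datafile path and wallet password are placeholders, and a wallet location must already be configured in sqlnet.ora.

```shell
# Hypothetical sketch: set the TDE master key, then create a tablespace
# whose contents are encrypted on disk with AES-256. Applications using
# tables in this tablespace need no changes at all.
sqlplus / as sysdba <<'EOF'
-- Create/open the wallet and set the TDE master encryption key
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_password";

-- All data written here is encrypted before it reaches the datafile
CREATE TABLESPACE secure_ts
  DATAFILE '/u01/oradata/secure01.dbf' SIZE 100M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);
EOF
```

On a SPARC T4 system the AES operations behind this tablespace are offloaded to the on-chip cryptographic units, which is where the performance numbers above come from.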
Let’s take a closer look now at clouds and some of the features we’ve integrated into the operating system to provide huge value for cloud environments.
Oracle Solaris 11 is the first fully virtualized OS in the industry, delivering full server, storage and networking virtualization. This gives Oracle Solaris the unique ability to provide a full solution for an Infrastructure-as-a-Service offering.

As we’ve mentioned before, Oracle Solaris Zones is the cornerstone of our server virtualization, offering application environments with extremely low overhead. This provides the ability to consolidate multiple services onto a single server – whether it’s your web tier, application tier, or database tier. Oracle Solaris Zones scales well because only a single kernel instance is used.

Our next-generation file system, ZFS, is the cornerstone of our storage virtualization. With today’s demands for storing larger amounts of data, ZFS provides an integrated file system and volume manager with extreme data integrity, scalability, and nearly zero administrative overhead. ZFS takes advantage of traditional disk storage and solid-state disk storage with a hybrid storage pool approach, providing extremely efficient read and write caches to make accessing and writing data fast. With ZFS, volumes can be created on remote storage and shared out through iSCSI and Fibre Channel as a block device. This gives the ability to host Oracle Solaris Zones on shared storage in the cloud.

And now with Oracle Solaris 11, administrators can create a fully virtualized network for more efficient sharing of network resources. This previously missing functionality completes the virtualization solution, allowing administrators to create virtualized network interfaces with isolated and dedicated network stacks between an application and the physical network interfaces. Network resource management allows organizations to meet quality-of-service goals for networking by specifying specific CPU resources or bandwidth limits.
Network virtualization is integrated into Oracle Solaris Zones. All of these technologies bring significant cost reduction through consolidation, but also increased agility and flexibility, allowing you to quickly deploy new services in the cloud, respond to increased workloads and ensure necessary service level agreements are met.
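As a minimal sketch of the network virtualization model described above (the link and VNIC names are hypothetical), an administrator might carve a virtual NIC out of a physical link and cap its bandwidth like this:

```shell
# Create a virtual NIC on top of the physical link net0; a zone or
# application can then be given vnic0 as its own dedicated network stack.
dladm create-vnic -l net0 vnic0

# Cap the VNIC at 100 Mbps to enforce a quality-of-service limit
dladm set-linkprop -p maxbw=100M vnic0

# Verify the VNIC and its bandwidth property
dladm show-vnic vnic0
dladm show-linkprop -p maxbw vnic0
```

Assigning such a VNIC to an exclusive-IP zone gives that zone an isolated, resource-managed network stack without any additional hardware.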
This large European city required a highly available online services platform for city residents. Oracle Solaris 11 packaging and update technology has greatly shortened the time they spend updating systems. Each week the customer automatically pulls any new packages from the Oracle support repository to their local IPS repository. About once a month, administrators review the READMEs for the updates and initiate the update process. They no longer have to assemble the latest patches and run the new configuration through several days of testing before updating the systems; previously it was not uncommon for these systems to be updated only once or twice a year. With this new model the customer is able to keep the systems in tighter compliance with the latest updates and spend much less time updating them.
– Solaris Zones allow them to securely stack both the SAP services and the Oracle Database on the same nodes while restricting access to the respective environments to only those who need it.
– Zone Cluster provides a flexible high-availability solution layered on top of the virtualization layer, giving the application administrators control over high availability. Zone Cluster met all the ease-of-use needs they were used to with Solaris 10 and Solaris Cluster 3.3.
– Rapid deployment of new Zones and the appropriate Cluster agent brings new services online quickly.
– Delegated administration allows the SAP and database administrators to safely control the state of their own Zones without endangering the other Zones and services on the systems.
Software-defined networking is an emerging architecture in today’s data center designs, separating the control plane from the data plane of typical switches and routers. This gives increased agility in cloud environments for modeling and optimizing network topologies, and provides a level of management control that was previously proprietary within switches and routers. Through a variety of IEEE network standards, Oracle Solaris 11.1 has kept pace by providing support for Edge Virtual Bridging and Data Center Bridging.

Edge Virtual Bridging offloads the decisions and control from virtual switching (using network virtualization in Oracle Solaris 11) to the physical switch network as a standard protocol. This may be done for a variety of reasons, including:
– Management – virtual switches are normally set up and controlled by server administrators. You may want to make sure the network administrators have control over your entire network, both physical and virtual.
– Monitoring – a common network monitoring tool will be able to see all the traffic and give a centralized view (as opposed to traffic hidden away through virtual switching).
– Security – the physical switches may have better functionality to ensure an improved security model.

Data Center Bridging provides converged storage and networking over Ethernet by implementing lossless Ethernet (critically important for storage when enabling Fibre Channel or RDMA over Ethernet, and when trying to avoid the performance degradation of implementing these over TCP) and by allocating bandwidth control on links.
New to Oracle Solaris 11 is the addition of Federated File System support. Oracle Solaris 11.1 is one of the first commercially available operating systems to provide this. Essentially this allows administrators to seamlessly provide a unified namespace through a series of NFSv4 namespace referrals. NFS referrals are a way for an NFSv4 server to point to file systems located on other NFSv4 servers, as a way of connecting multiple NFSv4 servers into a uniform namespace. NFSv2, NFSv3 and other clients can follow a referral because it appears to them to be a symbolic link. NFS referrals are useful when you want to create what appears to be a single set of filenames across multiple servers, and you prefer not to use autofs to do this. Note that only NFSv4 servers may be used, and that the servers must be running the Oracle Solaris 11 release or later to host a referral.
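A referral of this kind can be created with the nfsref(1M) utility on Oracle Solaris 11.1; a minimal sketch, where the server and path names are hypothetical:

```shell
# On nfs-server1: clients browsing /export/apps are transparently
# referred to the file system actually hosted on nfs-server2, stitching
# both servers into one uniform NFSv4 namespace.
nfsref add /export/apps nfs-server2:/export/data

# Confirm where the referral points
nfsref lookup /export/apps
```

NFSv4 clients follow the referral natively; older clients see what looks like a symbolic link, as noted above.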
As customers deploy more and more virtual environments, the complexity of managing them increases. With the new packaging system, IPS, zones can now be updated in parallel, leading to significant performance gains. For customers with hundreds if not thousands of systems to manage, this can be a significant cost saving. In this example we can see nearly 30 minutes’ difference when updating a SPARC T4 based system with 20 Oracle Solaris Zones. Package updates are cached in the global zone, meaning that each zone does not need to re-download these packages over the wire. With IPS, we also ensure that all zones installed on the system are consistent with the global zone, and each has its own respective zone ZFS boot environment, allowing an unprecedented level of flexibility and control when updating a system.
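A sketch of this update flow from the global zone (the boot environment names are illustrative):

```shell
# Update the global zone and all installed non-global zones in one pass;
# IPS clones the current ZFS boot environment rather than patching in place.
pkg update --be-name s11u1-update

# List boot environments; the new one is marked active on next boot
beadm list

# If the update misbehaves, roll back by reactivating the old environment
beadm activate s11u1-baseline
```

Because each zone also gets its own boot environment, a rollback restores the global zone and every zone to a mutually consistent state.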
While customers could have placed zones on shared storage (SAN) themselves, the configuration was tricky and the various steps were all left to the customer. This resulted in differing implementations, and often customers were not taking advantage of ZFS as their backend storage file system. With Oracle Solaris Zones on Shared Storage, we reduce the complexity of this commonly requested functionality, so customers can easily leverage their backend storage environment. This has become more important as customers look for mobility in their virtualization. With Oracle Solaris Zones on shared storage, a move takes only the time to shut down, detach, attach and boot the zone – this can be a very fast operation depending on the applications within the zone.

Note: Zones on NFS are still not supported.
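A hedged sketch of the Solaris 11.1 zonecfg syntax for a zone rooted on shared storage (the zone name and iSCSI storage URI are placeholders):

```shell
# Configure a zone whose root zpool lives on a shared iSCSI LUN, so the
# whole zone can later be detached here and attached on another host.
zonecfg -z webzone <<'EOF'
create
set zonepath=/zones/webzone
add rootzpool
add storage iscsi://storage-host/luname.naa.600144F0DBF8AF19000012345678
end
commit
EOF

# Install the zone; its root file system is created on the shared LUN
zoneadm -z webzone install
```

The move described above then becomes `zoneadm halt` / `detach` on the source host and `attach` / `boot` on the target, with the data never copied because it already lives on the SAN.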
Oracle Solaris Cluster is an additional solution that provides further layers of high availability for Oracle Solaris 11. Oracle Solaris Cluster is engineered in lock step with Oracle Solaris 11 and, through integration at the kernel layer, provides consistently faster failover times than any other third-party product. The latest release, Oracle Solaris Cluster 4.1, is required for Oracle Solaris 11.1. It includes a number of new features that help you protect your application investments and take advantage of some of the new features of Oracle Solaris 11.1.

Oracle Solaris 10 Zone Clusters – Oracle Solaris 10 can now be deployed inside an Oracle Solaris Zone cluster in addition to Oracle Solaris 11. This new feature enables the deployment of Oracle Solaris 10 applications in a protective clustering environment within an Oracle Solaris 11 based system. Customers can thus leverage Oracle Solaris 11 best-of-breed features such as network virtualization and enhanced installation tools while minimizing the risk to new application environments by deploying tested and mature Oracle Solaris 10 solutions. The resulting benefits are protected customer investments and lower TCO.

Integrated network virtualization – Oracle Solaris Zone clusters can now be configured with “Exclusive IP,” which is the Oracle Solaris 11 default. This enables the use of all the advantages of Oracle Solaris network virtualization within a Zone cluster. In addition, Oracle Solaris Cluster can take advantage of some of the network resource and priority management features of network virtualization.

Configuration Wizards – The configuration wizards guide users through resource configuration step by step. They automatically discover default values and auto-select them to minimize manual interaction, present possible options for multiple choices, allow manual entry of values and verify the validity of the choices.
The wizards help avoid configuration errors, lower the need for training for beginners and save time for advanced users.
So what exactly do we mean by a complete, integrated and business-driven cloud management solution? There are three key aspects of Oracle Enterprise Manager 12c that help accomplish this:

Complete Cloud Lifecycle Solution – To start with, Oracle Enterprise Manager 12c contains solutions to manage all phases of building, managing, and consuming an enterprise cloud. Using Oracle Enterprise Manager 12c you can build and manage a rich catalog of cloud services – whether Infrastructure-as-a-Service, Database-as-a-Service, or Platform-as-a-Service – all from a single product.

Integrated Cloud Stack Management – Secondly, Oracle Enterprise Manager 12c enables integrated management of the entire cloud stack, all the way from application to disk. Oracle Enterprise Manager 12c therefore eliminates much of the integration pain and cost that customers would otherwise incur by trying to create a cloud environment from multiple point solutions.

Business-Driven Clouds – Finally, Oracle Enterprise Manager 12c enables the creation of application-aware and business-driven clouds with deep insight into applications, business services and transactions. Applications – whether packaged or home-grown – power your business, and therefore it is critical that an enterprise cloud platform is not only able to run these applications but also has deep business insight and visibility. As the leading provider of business applications and of the middleware that many of your custom applications are built on top of, we are able to offer you a cloud solution that is optimized for business services.
And finally, let’s take a look at Oracle Solaris 11 in action at some of our customer sites.
The guarantee debuted in May 2000 and covers applications that ran on Solaris 2.6 (released 1997) and forward. We guarantee any app that runs successfully on Solaris 2.6 – but apps built as far back as 1992 may qualify as well! Source code compatibility means that your SPARC Solaris application can be recompiled for Solaris on x86 platforms and vice versa; we guarantee that as well. For customers who want to quickly move their older environments to the latest Solaris release to take advantage of the latest hardware (think consolidation play), we offer zones for older Solaris environments: use physical-to-virtual tools to easily move those environments forward.
Oracle's SPARC T5-1B server module with Oracle VM Server for SPARC running Oracle Solaris 11 shows that the SPARC T5 processor is 3.1x faster than an x86 system with Sandy Bridge processors when run in a popular virtualized environment. The test showed this performance difference with only two VMs. The test was a heavy OLTP workload based on a customer workload.
– On a per-processor basis, the SPARC T5 processor running the iGen OLTP workload on two Oracle VM Server for SPARC domains is 3.1x faster than an Intel Xeon E5-2690 processor running the same workload on two virtual machines with Red Hat Linux.
– An Intel Xeon E5-2690 based server using a popular virtualization software product lost 23% of its performance in a virtualized environment with only two VMs. The SPARC T5 processor lost only 0.4% of its performance in a virtualized environment.
– The SPARC T5 processor also offers better native performance than the x86 system: on a per-processor basis, the SPARC T5 processor running the iGen OLTP workload is 2.5x faster than an Intel Xeon E5-2690 processor running the same workload with Red Hat Linux.
– Increasing the number of logical domains on the SPARC T5 processor system, Oracle Database 11g shows no loss in stability or performance. Oracle VM Server for SPARC has the lowest overhead of any virtualization technology available.
Paravirtualized I/O (virtio) was used on both the SPARC T5 processor and the Intel Xeon E5-2690 processor servers.
Oracle Solaris and Oracle VM Server for SPARC lead the industry in high-performance, efficient virtualization. This test measures the effect of virtualization on the performance of CPU-intensive enterprise workloads. Virtualization also affects network and storage performance; those are measured in other tests.
– A Java enterprise workload showed a 13% performance loss on an x86 server running two Red Hat VMs using a leading virtualization software product. The same workload on Oracle's SPARC T4-2 server running two Oracle Solaris 11 VMs using Oracle VM Server for SPARC showed no performance loss.
– The performance loss for the Java enterprise workload grew to 26% on the x86 server when the number of Red Hat VMs was increased to four using the same leading virtualization software product.

Performance landscape: Overheads of the different virtualization methods were measured using a compute- and memory-intensive Java workload, distributed evenly, to measure the maximum Java throughput performance of the hardware. Performance loss is presented as the percentage difference between running without virtualization (native) and running with virtualization. The Java workload is modeled on a real-world Java application, namely a supply chain, but it has many elements generally found in enterprise applications and Java SE applications. The Java workload can easily take advantage of additional hardware threads for compute-intensive operations, and it is architected so that it can be run fairly on a wide range of chips and system architectures.
Oracle's SPARC T5-2 server using Oracle VM Server for SPARC exhibits dramatically lower network latency under virtualization than a popular virtualization solution. Network latency was measured using the Netperf benchmark.
TCP network latency results for Oracle VM Server for SPARC compared to the popular virtualization solution:
– TCP network latency between two Oracle VM Server for SPARC guests running on separate SPARC T5-2 servers, each using paravirtual I/O, is 27% lower than between two Red Hat 6.1 guests hosted with a popular virtualization solution on separate x86 based servers, each through a paravirtual I/O interface.
– TCP network latency between two Oracle VM Server for SPARC guests running on separate SPARC T5-2 servers, each using direct I/O, is 53% lower than between two Red Hat 6.1 guests hosted by a popular virtualization solution on separate x86 based servers, each using direct I/O.
The UDP network latency results are very similar to the TCP results:
– UDP network latency between two Oracle VM Server for SPARC guests running on separate SPARC T5-2 servers, each using paravirtual I/O, is 20% lower than between two Red Hat 6.1 guests hosted with a popular virtualization solution on separate x86 based servers, each using paravirtual I/O.
– UDP network latency between two Oracle VM Server for SPARC guests running on separate SPARC T5-2 servers, each using direct I/O, is 45% lower than between two Red Hat 6.1 guests hosted with a popular virtualization solution on separate x86 based servers, each using direct I/O.
TCP and UDP network latencies between two Oracle VM Server for SPARC guests running on separate SPARC T5-2 servers, each using SR-IOV, were significantly lower than when using paravirtual I/O.*
Terminology notes:
– VM – virtual machine
– guest – encapsulated operating system instance, typically running in a VM
– direct I/O – network hardware driven directly and exclusively by guests
– paravirtual I/O – network hardware driven by hosts, and indirectly by guests via paravirtualized drivers
– SR-IOV – single root I/O virtualization; virtualized network interfaces provided by network hardware, driven directly by guests
– LDom – logical domain (previous name for Oracle VM Server for SPARC)
Four new products have joined Exadata and Exalogic in the Engineered Systems family. These new products adhere to the same technology principles as Exadata and Exalogic, and extend the Engineered Systems value proposition. We have been in the business of delivering Engineered Systems since 2008, with the introduction of Exadata followed closely by Exalogic. The success of these products has been truly amazing, outpacing even our own goals for adoption. We have over 1,200 systems sold to date and expect that number to reach 4,000 later this year. Exalogic has already surpassed Exadata as the fastest-growing product in Oracle’s history, and the demand for our new products leads us to believe we will see even greater adoption, as they were designed to be “Better Together.”

With all of our products, deployment time is minimized, manageability is maximized, the lowest total cost of ownership in comparison to our competitors is delivered to you, implementation risk is minimized, and as always you have one single point of contact for support. These core principles apply to our existing products, our new products, and certainly to the innovation we continue to deliver.
As a simple and high-level overview, SuperCluster can be thought of as a combination of Exadata and Exalogic.
SC datasheet is at http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/t-series/sparc-supercluster-ds-496616.pdf
Dynamic Domains isolate the hardware.
Oracle VM Server for SPARC isolates the OS.
Zones isolate the process.
The new release of Oracle SuperCluster software features several significant enhancements:
– Greater configuration flexibility, including the ability to configure multiple domains of any type
– Full support for Solaris Zones in all domains, including Database Domains; this allows multiple RAC clusters to be configured in a single Database Domain
– Greater flexibility around CPU and memory configuration
– More usable disk space for zone root file systems
Multiple domains of any type can be configured in each T5-8 node, including multiple Database Domains. If no domain types other than Database Domains are present, the SuperCluster will operate much like a SPARC version of Exadata. The control domain must boot Solaris 11, so up to 7 Solaris 10 domains are possible per T5-8 node.

The Exadata Storage Servers are managed directly from the Database Domains. The disks and flash storage associated with Exadata Storage Servers are available only to Database Domain database instances – they cannot be used for general-purpose storage.

The ZFS Storage Appliance offers shared storage to all domains on a SuperCluster; the way it is configured depends on the type of domain. Exalogic software runs in a Solaris 11 Application Domain, and the ZFS Storage Appliance is configured appropriately in those domains to support it.

Fibre Channel HBAs can be added to any type of domain. SAN support on SuperCluster is the same as for T5 servers generally.
Key points to highlight:
– Initial products offered will have full configs; later in 2013 lower memory densities will be allowed
– New hot-plug I/O carriers – different from the EMs in the T5-4 – allow the use of standard LP cards and F40 flash acceleration
Comparable to 10,000 disks on 100 array frames.
20x more writes than the previous Exadata version.
While not high in IOPS due to a single tray, it is redundant with dual controllers.
Manageability through Ops Center – Transforming Complexity Into Simplicity
– Oracle handles it; it doesn’t become your problem
– Included with Oracle Premier Support – no financial barrier to effective management
– Always there, allowing Oracle to leverage it to deliver simplicity
Simple and Flexible Management
– Manages all Oracle SuperCluster components
– Supports multiple levels of virtualization: domains based on Oracle VM Server for SPARC, plus Oracle Solaris Zones and Oracle Solaris Containers
Ops Center is included with every SuperCluster. Ops Center simplifies monitoring of SuperCluster and provides Oracle with a foundation to build upon for future manageability enhancements.
Oracle hardware and software are not only engineered to work together, they are engineered to be maintained, updated, and supported together. We are uniquely qualified to provide optimized performance at every level of the integrated stack, delivering the essential services and resources your business needs to maintain high availability, increase operational efficiency, and gain competitive advantage.

Oracle Premier Support provides fully integrated system support with a single point of accountability:
– 24/7 support with access to Engineered Systems experts
– 2-hour onsite response
– Updates, upgrades and support for the Oracle operating system, database and integrated server and storage hardware
– Access to the My Oracle Support portal, which contains a one-million-article database and many proactive tools to help you keep systems running at peak performance
– Oracle Automated Service Request, where your system phones home to Oracle to let us know if there is a problem with the hardware
…and now, qualifying customers can also receive the enhanced coverage of Oracle Platinum Services at no additional cost.

Oracle Platinum Services is a special entitlement under Oracle Premier Support, delivered at no additional cost. It is exclusively available on Oracle Exadata, Exalogic and SuperCluster based on certified configurations. It provides 24/7 Oracle remote fault monitoring backed by extremely fast response times:
– 5-minute fault notification
– 15-minute restoration or escalation to development
– 30-minute joint debugging with development
– Quarterly patching deployed by Oracle
Oracle Platinum Services takes standard support to a whole new level with additional, no-cost services targeted at delivering high availability. To learn more about Oracle Platinum Services go to: www.oracle.com/goto/platinumservices
Data Sheet – http://www.oracle.com/us/products/servers-storage/servers/sparc/supercluster/supercluster-t5-8/oracle-supercluster-t5-8-ds-1964480.pdf
This is the FIRST slide that must be shown with each presentation that shows benchmark results!
This is the SECOND slide that must be shown with each presentation that shows benchmark results!