John Campbell
Distinguished Engineer
IBM DB2 for z/OS Development
campbelj@uk.ibm.com
DB2 11 for z/OS: Migration Planning
and Early Customer Experiences
Disclaimer:
Information regarding potential future products is intended to outline our general
product direction and it should not be relied on in making a purchasing decision.
The Information mentioned regarding potential future products is not a
commitment, promise, or legal obligation to deliver any material, code or
functionality. Information about potential future products may not be incorporated
into any contract. The development, release, and timing of any future features or
functionality described for our products remains at our sole discretion.
Performance Disclaimer:
This document contains performance information based on measurements done in
a controlled environment. The actual throughput or performance that any user will
experience will vary depending upon considerations such as the amount of
multiprogramming in the user’s job stream, the I/O configuration, the storage
configuration, and the workload processed. Therefore, no assurance can be given
that an individual user will achieve throughput or performance improvements
equivalent to the numbers stated here.
2
Objectives
• Share lessons learned, surprises, pitfalls
• Provide hints and tips
• Address some myths
• Provide additional planning information
• Provide usage guidelines and positioning on new enhancements
• Help customers migrate as fast as possible, but safely
3
Agenda
• Introduction
• ESP Highlights
• Migration Considerations
• Availability
• Utilities
• Performance and Scalability
• Other Enhancements
• Summary
4
5
Introduction
DB2 11 Major Themes
6
• Out-of-the-box CPU Savings
– Improving efficiency, reducing costs, no application changes
– Up to 10% for complex OLTP
– Up to 10% for update intensive batch
– Up to 40% for queries
– Additional performance improvements through use of new DB2 11 features
• Enhanced Resiliency and Continuous Availability
– Improved autonomics which reduces costs and improves availability
– Making more online changes without affecting applications
– Online REORG improvements, less disruption
– DROP COLUMN, online change of partition limit keys
– Extended log record addressing capacity - 1 yottabyte (or 1B petabytes)
– BIND/REBIND, DDL, Online REORG to break into persistent threads
• Enhanced business analytics
– Expanded SQL, XML, and analytics capabilities
– Temporal and SQLPL enhancements
– Hadoop integration, NoSQL and JSON support
– Transparent archiving
• Simpler, faster DB2 version upgrades
– Improved product quality/reliability – through iterative approach on 3 monthly cycle (1:N rallies) to FVT, SVT, Performance testing, and
stabilization phase ahead of start of ESP
– Application changes divorced from DB2 system upgrade (APPLCOMPAT)
– Access path stability improvements
Announce
Oct. 1, 2013
GA
Oct. 25 2013
7
ESP Highlights
8
Core - 21 WW Customers
Geography
– 11 EMEA
– 9 NA
– 1 SA
Industry
– 7 Banking
– 5 Insurance
– 3 Healthcare
– 2 Financial Markets
– 1 Automotive
Extended - 6 WW Customers
Geography
– 3 EMEA
– 2 NA
– 1 SA
Industry
– 3 Banking
– 2 Computer Services
– 1 Professional Services
ESP Start February 2013
First Code Drop March 2013
“Regular” service process July 2013
GA October 25, 2013
DB2 11 ESP Highlights
DB2 11 ESP Client Feedback
9
• Very much improved quality and reliability at this early stage in the release cycle
• Good performance and CPU savings
– DRDA workload: up to 20% CPU reduction
– CICS workload: up to 18% CPU reduction
– Batch workload: up to 20% CPU reduction
• Greatest hits
– BIND, REBIND, DDL, Online REORG break in
– Transparent archiving
– IFI 306 filtering by object (Qreplication)
– Online schema change
– Utility improvements particularly Online REORG
– Extended LRBA/LRSN
– Optimizer and migration improvements
– GROUP BY Grouping Sets
DB2 11 Early Support Program (ESP)
10
“Overall we are very satisfied and astonished about the
system stability of DB2 V11. In V10 we experienced this in
another way.” – European Insurance
“We have seen very few problems in [Installation, Migration, and Performance]. Overall,
it has been a very pleasant experience!!…The quality of the code is clearly much
higher than for the ESP for DB2 10…” - European Banking/FSS
“Good code stability, no outages, no main failures, only a few
PMRs….” – European Banking
“We have been involved in several DB2 for z/OS ESP’s. This one will rank
as one of, if not the smoothest one yet.” – Large NA retailer
DB2 11 Early Support Program (ESP) …
11
“I saw a significant performance improvement in recovery of catalog and directory.
(V10 5:53 minutes, V11 2:50 minutes) That rocks! … DB2 11 is the best version I
have ever seen.” - European Gov’t
“Overall, we have been impressed with the new version of
DB2.” – NA Manufacturer
“ Higher availability, performance, lower CPU consumption amongst other new
features were the benefits perceived by Banco do Brasil with DB2 11 for z/OS.
During our testing with DB2 11 we noticed improved performance, along with
stability. ” - Paulo Sahadi, IT Executive, Banco do Brasil
“We have seen some incredible performance results with DB2 11, a
major reduction of CPU time, 3.5% before REBIND and nearly 5%
after REBIND. This will significantly bring down our operating costs”
– Conrad Wolf, Golden Living
12
Migration Considerations
13
Prerequisites – Hardware & Operating System
• Processor requirements:
– zEC12, z196, z10 processors supporting z/Architecture
– Will probably require increased real storage for a workload
compared to DB2 10 for z/OS (up to 15%)
• Software Requirements:
– z/OS V1.13 Base Services (5694-A01) at minimum
– DFSMS V1 R13 – DB2 Catalog is SMS managed
– Language Environment Base Services
– z/OS Version 1 Release 13 Security Server (RACF)
– IRLM Version 2 Release 3 (Shipped with DB2 11 for z/OS)
– z/OS Unicode Services and appropriate conversion definitions
are required
– IBM InfoSphere Data Replication (IIDR) 10.2.1
– For DB2 Connect – please see the next slides
Prerequisites – DB2 Connect
• DB2 for z/OS V11 in all modes should operate with existing versions of DB2
Connect in place, even back to DB2 Connect V8
– DB2 for z/OS Development will investigate any connectivity related issues with
existing applications using older versions of DB2 Connect and try to provide a fix
– If any issues cannot be resolved within the DB2 for z/OS server, DB2 Connect will
have to be upgraded to an in-service level to obtain a fix
• For continuous availability during the migration process the minimum
recommended level before leaving DB2 10 is V9.7 FP6 or V10.1 FP2
– This is the level that provides continuous availability for a given application server as
a customer goes from V10 NFM base -> V11 CM -> V11 NFM
• The minimum level for full DB2 11 for z/OS exploitation is currently V10.5 FP2
– Required for specific new function: array support for stored procedures, WLB support
with global variables, autocommit performance improvements, improved client info
– This recommended level could and probably will change and go up over time as we
gain more customer experiences, roll through best practices, and provide defect fixes
into newer driver levels
Prerequisites – DB2 Connect ...
• Most DB2 for z/OS engine features in NFM are supported with any version of
DB2 Connect
• DB2 for z/OS Development is being proactive in recommending that customers move
from the client or runtime client packages to the data server (ds) driver instead
• For "evergreen" and/or new function the general upgrade path is the following:
1. DB2 for z/OS Server
2. DB2 Connect Server (if present – we are encouraging direct connect)
3. Drivers installed on application servers (push from client, runtime client -> ds driver)
4. End user workstations (also push from client, runtime client -> ds driver)
• We do have customers that will push out the drivers first - those are generally
driven by the need for specific application enhancements e.g.,
– The most common example is in the .NET arena - wanting the latest tooling and
driver support in the MS arena
16
Pre-migration planning
• Run DSNTIJPM (DSNTIJPB) pre-migration job
• Check for situations needing attention before migration
– Take the actions recommended by the report headers
• Run DSNTIJPM or DSNTIJPB to identify them
– DSNTIJPM ships with DB2 11 and should be run on DB2 10 to identify pre-migration
catalog clean-up requirements
• DSNTIJPM may provide DDL or utility statements for the clean-up
– DSNTIJPB is the same job and is shipped for DB2 10 to maximize prepare time
17
Important preparation
• Old plans and packages before V9 -> REBIND
• Views, MQTs, and Table functions with Period Specification -> DROP
– Those created in V10 are not supported
– Period Specification must be on base table
18
Items deprecated in earlier versions – Now eliminated
• Password protection for active log and archive log data sets
• DSNH CLIST NEWFUN values of V8 and V9 – Use V10 or V11
• Some DB2 supplied routines
– SYSPROC.DSNAEXP –> Use the EXPLAIN Privilege and issue EXPLAIN directly
– AMI-based DB2 MQ (DB2MQ) functions –> use the MQI-based functions in the
following schemas (see APAR PK37290 for guidance)
• DB2MQ1C.*, DB2MQ2C.*
• DB2MQ1N.*,DB2MQ2N.*
• CHARSET application programming default value (KATAKANA) – use CCSIDs
• BIND PACKAGE options ENABLE and DISABLE (REMOTE): the form REMOTE (location-
name,...,<luname>,...) is eliminated – specific names can no longer be specified
• Sysplex Query Parallelism – Single member parallelism is still supported
• DSN1CHKR – There are no longer any links in the Catalog or Directory
19
APPLCOMPAT – Application Compatibility
• Requirements
– De-couple the need for application program changes to deal with incompatible SQL
DML and XML changes from the actual DB2 system migration to the new DB2 release
which introduced the incompatible SQL DML and XML changes
– Provide a mechanism to identify application programs affected by incompatible SQL
DML and XML changes
– Provide a mechanism to introduce changes at an individual application program
(package) level
• Enable support so that application program changes can be phased in over much longer time
• Enable support for mixed DB2 release co-existence in data sharing
• Enable support for up to two back level releases of DB2 (N-2)
• Solution
– APPLCOMPAT which separates DB2 system migration to the new DB2 release from
application program migration to deal with incompatible SQL DML and XML
introduced by the new release
20
APPLCOMPAT – Application Compatibility ...
• APPLCOMPAT zparm provides default for BIND/REBIND
– V10R1 for DB2 10 SQL DML behaviour
– V11R1 for DB2 11 SQL DML behaviour
– Default is V11R1 for new installs, V10R1 for migration
• APPLCOMPAT option on BIND/REBIND to override zparm default
• CURRENT APPLICATION COMPATIBILITY special register and
DSN_PROFILE_ATTRIBUTES for DDF
– For dynamic SQL
• Does not address issues with new reserved words or other incompatibilities that could
only be resolved by having multiple levels of the DB2 parser
• BIF_COMPATIBILITY zparm is independent of APPLCOMPAT
• New SQL functionality available in V11 NFM cannot be used until the package is bound
with an APPLCOMPAT value of V11R1
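For illustration only, a minimal DSN subcommand sketch (hypothetical collection and package names MYCOLL.MYPKG):
REBIND PACKAGE(MYCOLL.MYPKG) APPLCOMPAT(V10R1)
– keeps the package at DB2 10 SQL DML behaviour after the system migration
REBIND PACKAGE(MYCOLL.MYPKG) APPLCOMPAT(V11R1)
– switches the package to DB2 11 behaviour once the application has been verified (requires NFM)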
21
APPLCOMPAT – Application Compatibility ...
• Migration automatically sets V10R1 prior to NFM … otherwise
– DSNT225I -DSN BIND ERROR FOR PACKAGE location.collid.member
APPLCOMPAT(V11R1) OPTION IS NOT SUPPORTED
– IFCID376 – Summary of V10 function usage
– IFCID366 – Detail of V10 function usage, identifies packages
– We expect changes necessary to avoid V10R1 usage to happen after reaching
NFM
• Workaround to distinguish packages which absolutely have to run as V10R1 until they
are corrected
– Annotate the package using SQL COMMENT ON PACKAGE colid.name.”version” IS
‘V10R1’
• If version is a pre-compiler timestamp then the double quotes are necessary
– Stored in the REMARKS column in SYSIBM.SYSPACKAGE table
• Can be queried and be exploited by housekeeping
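As an illustration of the workaround (hypothetical collection, package and version names):
COMMENT ON PACKAGE MYCOLL.PGM1."2013-10-25-09.30.00.123456" IS 'V10R1';
SELECT COLLID, NAME, VERSION FROM SYSIBM.SYSPACKAGE WHERE REMARKS = 'V10R1';
– the second statement is the kind of housekeeping query that can drive later correction work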
22
APPLCOMPAT vs. BIF_COMPATIBILITY
• BIF_COMPATIBILITY=V9|V9_DECIMAL_VARCHAR is still honored in all modes
of V11
– The ‘undocumented’ timestamp support is back again with
APPLCOMPAT(V11R1) e.g.,
• EUR date format concatenated to the TIME (and microseconds)
23
Migration Overview DB2 10 -> DB2 11
Flow (from the slide): DB2 10 NFM with fallback SPE (DB2 10 libraries, DB2 10 Catalog)
-> DSNTIJTC (CATMAINT UPDATE) -> DB2 11 Conversion Mode (CM) with DB2 11 libraries
-> DSNTIJEN (CATENFM START) -> DB2 11 Enabling New Function Mode (ENFM)
-> DSNTIJNF (CATENFM COMPLETE) -> DB2 11 New Function Mode (NFM) with DB2 11 Catalog
Data sharing coexistence applies while in CM; the Catalog is converted during ENFM
Indicative durations shown on the slide: 1 – 2 months (CM), 1 week (ENFM), minutes (CATENFM COMPLETE)
Use APPLCOMPAT(V10R1) up to NFM; in NFM use APPLCOMPAT(V10R1) or APPLCOMPAT(V11R1)
24
Migration and Fallback Paths
• With DB2 11, you can always drop back to the previous stage
• Cannot fallback to DB2 10 after entry to DB2 11 (ENFM), but can return to DB2
11 (CM*)
Modes in the diagram: DB2 10 NFM -> DB2 11 CM -> DB2 11 ENFM -> DB2 11 NFM, with
DB2 11 CM* and DB2 11 ENFM* as the re-entered forms after dropping back
From a CM* entered from ENFM, you can only return to ENFM
From a CM* entered after reaching NFM, you can go to NFM or ENFM*
Jobs driving the transitions (numbered arrows in the diagram):
1. DSNTIJTC
2. DSNTIJEN
3. DSNTIJNF
4. DSNTIJCS
5. DSNTIJES
25
Preparing your current DB2 10 NFM for Migration to DB2 11 CM
• Apply the Fallback SPE APAR, PM31841 and any prerequisite fixes
– Your DB2 10 system MUST be at the proper service level
– See Info APAR II14660
• Non-Data Sharing
– Current DB2 10 must be started with the SPE applied, or migration to DB2 11 will
terminate
• Data Sharing
– Before migrating a member to DB2 11, all other started DB2 10 members must
have the fallback SPE applied
– The fallback SPE must be on all active DB2 10 group members for DB2 11 to start
Important – Apply SPE to ALL Data Sharing
Members Before Starting Migration!
26
Other recommendations
• Run Online REORGs against Catalog and Directory objects prior to the
ENFM/NFM migration
– Check that REORG can break in
– Check data consistency of Catalog and Directory
– Improve the performance of the ENFM process
• CATMAINT and ENFM will not execute if entries found in SYSUTILX
– DB2 will no longer blindly re-initialize it
27
Availability
28
BIND/REBIND/DDL/Online REORG breaking into persistent
thread running packages bound with RELEASE(DEALLOCATE)
• Persistent threads with RELEASE(DEALLOCATE) were previously blocking
– e.g., IMS Pseudo WFI, CICS Protected ENTRY threads, etc.
• REORGs which invalidate packages were previously blocked
– REORG REBALANCE
– Materializing REORG
• The 'break-in' behavior is ON by default (zparm PKGREL_COMMIT =YES)
• Break-in is performed on a “best efforts” basis
• Break-in mechanism can handle idling threads at a transaction boundary (i.e.,
where commit or abort is the last thing performed)
29
BIND/REBIND/DDL/Online REORG breaking into persistent
thread running packages bound with RELEASE(DEALLOCATE) …
• Several factors come into play for a successful break-in
– Persistent thread must COMMIT
– The timing of the COMMIT and the frequency of the COMMITs are both key
– Increasing the zparm for IRLM resource timeout (IRLMRWT) helps to keep the
BIND/REBIND/DDL/Online REORG operation waiting to increase the chances of a
successful break-in
• The break-in mechanism does not apply when
– Running packages bound KEEPDYNAMIC(YES), or
– OPEN cursors defined WITH HOLD at the time of COMMIT, or
– If the COMMIT happens inside a stored procedure
• RELEASE(COMMIT) would also not break-in for the above conditions
30
BIND/REBIND/DDL/Online REORG break in - How does it work
1. BIND/REBIND/DDL/Online REORG is initiated and waits on a package lock
– Will timeout after 3x IRLM timeout limit (IRLMRWT)
2. At 1/2 of the IRLM timeout limit, DB2 will get notified by IRLM that someone is stuck on
a package lock
– If DB2 has an S-holder, DB2 will post a system task to take further action
3. DB2 system task is awakened and checks to see if a ‘recycle’ of locally attached threads
has been done in the last 10 seconds
– If not, the break-in operation will proceed
– DB2 is trying to avoid a battering of the system via BIND/REBIND/DDL/Online REORG
4. Send broadcast to all DB2 members to perform a recycle of locally attached threads
5. If task proceeds, it will loop through all locally attached threads (not DIST!) and see if
they were last committed/aborted in > 1/2 of the IRLM timeout limit
– If so, the BIND/REBIND/DDL/Online REORG is likely waiting on them
6. The next test is to see whether DB2 can do anything about it
– Each thread must be at a transaction boundary (i.e., commit or abort is the last thing)
– If so, DB2 can process the thread
31
BIND/REBIND/DDL/Online REORG break in - How does it work …
7. DB2 will fence the API for the thread, grab the agent structure and drive a ‘dummy
COMMIT’
– The commit is transactionally OK since we are at a transaction boundary
– DB2 will be the coordinator as this is single-phase commit and get out
– On the COMMIT, RDS sees that there is a waiter for a package lock held by this agent and
will switch to RELEASE(COMMIT) for this commit cycle
– The lock is freed and DB2 is one step closer to the BIND/REBIND/DDL/Online REORG
breaking in
8. Repeat for all qualifying threads
9. BIND/REBIND/DDL/Online REORG should break-in provided there are no blockers
that had to be excluded e.g., long running read only application process without a
commit
10. If the application starts using the thread during the recycle processing, it will be
blocked at the API level
– DB2 will spin the thread in a timed wait loop until the recycle is done
– DB2 will wait a millisecond approximately between polls
– DB2 has also taken care to fence end-of-task (cancel application TCB), end-of-memory
(force down the home ASID during recycle), associate, dissociate, etc
BIND Break-in – Simple customer test
(Timeline diagram) One CICS protected ENTRY thread with thread reuse, one transaction at a time,
with a concurrent BIND
BIND waits approximately 30 sec before it breaks into an idle thread
30 sec is half the transaction time out interval
32
BIND break in – additional customer testing
Action / Threads / Result:
– BIND / Batch, no Commit / No break-in
– BIND / Batch, frequent Commit / Break-in
– BIND / 50 * CICS ENTRY / Break-in
DDL:
– Create Index / CICS ENTRY / Break-in
– Drop Index / CICS ENTRY / Break-in
– Alter Table Add Column / CICS ENTRY / Break-in
– Alter Index (NO) cluster / CICS ENTRY / Break-in
– Alter Tablespace to UTS / CICS ENTRY / Break-in
– Alter Partition / CICS ENTRY / Break-in
33
34
ALTER LIMITKEY enhancement
• Behavior is different depending on how the table partitioning is controlled
• With table-controlled table partitioning, this is a pending alteration
– Dropping of these alters can occur at any time
• With index-controlled table partitioning
– If alter is done via ALTER INDEX ALTER PARTITION
• Partition goes into ‘hard’ reorg pending (REORP)!
• Tablespace remains index-controlled
• Alter cannot be withdrawn!
– If the alter is done by ALTER TABLE ALTER PARTITION
• If the partition is not empty
– Partition goes into ‘hard’ reorg pending (REORP)!
– Tablespace is converted to table-controlled partitioning!
– Alter cannot be withdrawn
• If the partition is empty
– Alter is executed immediately
– Tablespace is converted to table-controlled partitioning
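For illustration, a hedged sketch against a hypothetical table-controlled partitioned table SALES with a numeric limit key; as described above this is a pending alteration, materialized by a subsequent REORG of the affected partition:
ALTER TABLE SALES ALTER PARTITION 3 ENDING AT (20000);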
35
ALTER LIMITKEY enhancement …
• Two new zparms introduced
– PREVENT_ALTERTB_LIMITKEY
• ALTER TABLE ALTER PARTITION leads to SQLCODE -876
• ALTER INDEX ALTER PARTITION is still possible – do not use it because of REORP
– PREVENT_NEW_IXCTRL_PART
• Can no longer create new index-controlled partitioned tablespaces
• Materializing REORG can now break-in to a persistent thread running
RELEASE(DEALLOCATE) package
• REORG REBALANCE
– Not possible for partitions with pending ALTER LIMITKEY changes
– Will work for the other partitions
– Will work for partitions which are ‘hard’ reorg pending (REORP)
36
DROP COLUMN
• Works well
– Can convert to UTS and concurrently DROP COLUMN
– Materializing REORG can be run at the partition level – if all partitions are covered
– All packages touching the table will be invalidated
• Restrictions
– Cannot use DROP COLUMN in classic tablespace type (SQLCODE -650)
– Cannot drop a column contained in an index or view (SQLCODE -478)
– Cannot add a dropped column before the materializing REORG (SQLCODE -20385)
– Cannot create a view with a dropped column (SQLCODE -20385)
– Cannot drop the same column a second time before the materializing REORG
(SQLCODE -205)
– Cannot unload from an image copy taken before the materializing REORG
(DSNU1227I)
– Cannot recover to a PIT before the materializing REORG (DSNU556I)
37
Utilities
38
REORG Enhancements
• SWITCHTIME option avoids the need for multiple jobs to control the start of the
drain
• Part level COPY when reorganizing a subset of partitions
– Tape support added, but no support yet for STACK YES
– Changes required to existing jobs
• REORG SORTDATA NO RECLUSTER YES|NO
– RECLUSTER NO will bypass sort (and speed up conversion to extended format)
• Good idea, but it only saves time if the data is already in clustering sequence!
• Do not use on huge tables which are not already clustered, as it will run for a long time
– RECLUSTER NO is enforced for SHRLEVEL CHANGE with SORTDATA NO
• Specify SORTDATA to get reclustering
– DSNU2904I DATA RECORDS WILL BE UNLOADED VIA unload-method
• CLUSTERING INDEX
• TABLE SCAN
• TABLE SPACE SCAN
39
REORG Enhancements …
• Automated building of the mapping table with new 10 byte LRBA/LRSN
– V11 CM behavior
• An existing mapping table in V10 or V11 format will be reused
• If mapping table does not exist, mapping table will be automatically temporarily created
– V11 NFM behavior
• Existing mapping table if in V11 format will be reused
• If mapping table exists but in V10 format, a new mapping table will be automatically created
in the same database as the original mapping table
• If mapping table does not exist, mapping table will be automatically created in database
specified by zparm, or in declared database or in DSNDB04
– Recommendations
• Predefine and keep mapping tables around for regularly scheduled REORG jobs to avoid SQL
DDL contention on the Catalog
• Use single specific database as specified by zparm for all mapping tables
• Modify schema of existing mapping tables to V11 format as part of migration process to NFM
i.e., ALTER TABLE TBMAP ALTER COLUMN LRSN SET DATA TYPE CHAR(10) NOT NULL;
• Wait for APAR PI08339 if you want automated building of mapping tables
40
REORG Enhancements …
• Use of DRAIN_ALLPARTS YES option (not default) has the potential to
significantly reduce the ‘outage’
– Avoid deadlocks between drains and claims across NPIs and partitions when
reorganizing subset of partitions
– Solution is to momentarily drain all partitions being reorganized
– More likely to be successful in getting successful DRAIN to make the SWITCH
– Big reductions seen in the elapsed time to complete DRAIN and SWITCH
– REORGs should run with fewer problems using this feature
• REORG message output - DSNU1138I provides drain begin / end information
• PARALLEL (maximum number of subtasks) option to control the number of
subtasks
• Be aware of changed defaults e.g., NOPAD YES for REORG DISCARD
• LOGRANGES NO option should only be used when SYSLGRNX is known to be
logically corrupted and/or has to be reset
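For illustration, a hedged REORG control statement sketch combining several of the options above (hypothetical object names DBX.TSX; verify keyword formats against the DB2 11 utility documentation for your service level):
REORG TABLESPACE DBX.TSX PART 1:3
SHRLEVEL CHANGE
DRAIN_ALLPARTS YES
SWITCHTIME 2013-11-30-02.00.00
COPYDDN(SYSCOPY)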
41
REORG Enhancements …
• REORG REBALANCE
– Now supports SHRLEVEL(CHANGE) – big step forward for 24*7
– Can now deal with partitions that were empty (or did not contain enough
data for a compression dictionary to be built during the UNLOAD phase)
before the REORG
• Will now build a single compression dictionary that will get applied to all target partitions
• There is no longer a need for subsequent REORG to gain compression
– Can now break-in on persistent threads running RELEASE(DEALLOCATE) packages
• Partition pruning for UTS PBG tablespaces
– Option to physically remove and contract the number of UTS PBG partitions
– Only performed when zparm REORG_DROP_PBG_PARTS=ENABLE
– Disabled by default
– There is no support for PIT recovery to a point in time prior to SWITCH phase for a
pruned tablespace
42
RUNSTATS and RTS Enhancements
• Inline statistics are rough estimates and should not be compared against a
separate RUNSTATS
• Now possible to avoid DSNU602I STATISTICS ARE NOT COLLECTED FOR
NONPARTITIONED INDEX on REORG PART operation
– When SORTNPSI option on REORG job or REORG_PART_SORT_NPSI zparm set to
AUTO or YES, and
– REORG sorted all of the non-partitioned index keys because the amount of data that
was being reorganized relative to the size of objects exceeded internal thresholds
• New RESET ACCESSPATH option
– Reset missing and/or conflicting access path statistics in the Catalog
– Does not affect space statistics in the Catalog or RTS
• Avoid DSNU1363I THE STATS PROFILE FOR TABLE table-name NOT FOUND
– Will use fixed defaults
• No support for USE PROFILE with inline statistics in REORG and LOAD
• Can externalize RTS in-memory blocks via the following command:
– ACCESS DATABASE (DB) SP(TS) MODE(STATS)
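For illustration, a hedged RUNSTATS sketch of the new RESET ACCESSPATH option described above (hypothetical object name DBX.TSX; check the exact keyword placement in the DB2 11 utility documentation):
RUNSTATS TABLESPACE DBX.TSX RESET ACCESSPATH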
43
RECOVER enhancements
• Fast Log Apply (FLA) now implemented for RECOVER INDEX
– Previously DB2 would wait until a log record was to be applied before reading the
associated index page into the local bufferpool where it would then be cached
– Now DB2 will use list prefetch to read all the index pages that log records need to be
applied to, before applying any log record
– Potential for significant savings in elapsed time
– Should now reconsider decision: run RECOVER INDEX in parallel with RECOVER
TABLESPACE [PART] vs. wait for RECOVER TABLESPACE [PARTs] to complete and then
run REBUILD INDEX
– Enhancement taken back to V9 and V10 via APAR PI07694
• Optimization to point-in-time RECOVER list of objects
– Objects are recovered only when necessary when performing PIT recovery with
TOLOGPOINT or TORBA specified
– It does not apply to log only recoveries, RECOVER BACKOUT, and recovers to current
– DIAGNOSE TYPE(607) is required to activate this behavior
44
Performance and Scalability
45
Performance Enhancements - no REBIND needed (CM)
• DDF performance improvements
– Reduced SRB scheduling on TCP/IP receive using new CommServer capabilities
– Improved autocommit OLTP performance
• xProcs above the bar
• zIIP enablement for all SRB-mode DB2 system agents that are not response time
critical
• Avoid cross-memory overhead for writing log records
• Data decompression performance improvement
• INSERT performance
– Latch contention reduction
– CPU reduction for Insert column processing and log record creation
– Data sharing LRSN spin avoidance
– Page fix/free avoidance in GBP write
46
Performance Enhancements - no REBIND needed (CM) ...
• Sort performance improvements
• DPSI performance improvements for merge
• Performance improvements with large number of partitions
• XML performance improvements
• Optimize RELEASE(DEALLOCATE) execution so that it is consistently better
performing than RELEASE(COMMIT)
• IFI 306 filtering capabilities to improve QReplication capture performance
• Utilities performance improvements
• Automatic index pseudo delete clean-up
• ODBC/JDBC Type 2 performance improvements
• Java stored procedures
– Multi threaded JVMs, 64-bit JVM – more efficient
47
Performance Enhancements – no REBIND needed (CM) ...
• ACCESS DATABASE command performance
• DGTT performance improvement
– Avoid incremental binds for reduced CPU overhead
• P-procs for LIKE predicates against Unicode tables
• Improved performance for ROLLBACK TO SAVEPOINT
• zEC12 exploitation
• Latch contention reduction and other high n-way scalability improvements
• Data sharing performance improvements
48
Performance Enhancements requiring REBIND
(CM with or without APREUSE)
• Most In-memory techniques
• Non correlated subquery with mismatched length
• Select list do-once
• Column processing improvements
• RID overflow to workfile handled for Data Manager set functions
• Performance improvements for common operators
• DECFLOAT data type performance improvements
49
Performance Enhancements requiring REBIND (CM without
APREUSE)
• Query transformation improvements – less expertise required to write
performant SQL
• Enhanced duplicate removal
• DPSI and page range performance improvements
• Optimizer CPU and I/O cost balancing improvements
50
Performance Enhancements - DBA or application effort required
(NFM)
• Suppress-null indexes
• New PCTFREE FOR UPDATE attribute to reduce indirect references
• DGTT performance improvements
• Global variables
• Optimizer externalization of missing/conflicting statistics
• Extended optimization - selectivity overrides (filter factor hints)
• Open data set limit raised to 200K
51
Optional Enhancements need NFM + DBA effort
• DSNTIJCB – Optional – Convert BSDS for extended 10-byte RBAs
– -STOP DB2 MODE(QUIESCE)
• DSNTIJCV – Optional – Convert Catalog and Directory table and index spaces to
extended 10-byte RBA format
– Reorgs all Catalog and Directory table spaces SHRLEVEL CHANGE
– Can be split up to run reorgs in parallel
DB2 Lab Measurement Summary
52
(Chart of lab measurements for Query, Batch, OLTP and XML workloads)
Example of Customer Performance Testing
• DB2 10 NFM baseline
• DB2 11 CM before REBIND
• DB2 11 CM after REBIND
• DB2 11 NFM (no need for further REBIND)
• DB2 11 NFM after REORG (to migrate object to extended LRSN)
• DB2 11 NFM Extended LRSN
53
Example of Customer Performance Testing ...
• Make sure that the CPU numbers are normalized across those intervals i.e., use
CPU milliseconds per commit
• Easy to combine statistics and accounting by stacking the various components of
CPU resource consumption:
– MSTR TCB / (commits + rollbacks)
– MSTR SRB / (commits + rollbacks)
– MSTR IIP SRB / (commits + rollbacks)
– DBM1 TCB / (commits + rollbacks)
– DBM1 SRB / (commits + rollbacks)
– DBM1 IIP SRB / (commits + rollbacks)
– IRLM TCB / (commits + rollbacks)
– IRLM SRB / (commits + rollbacks)
– Average Class 2 CP CPU * occurrences / (commits + rollbacks)
– Average Class 2 SE CPU * occurrences / (commits + rollbacks)
54
CICS Test Transaction Profile
Avg. DML
• 3 Insert
• 7 Select
• 5 Open
• 103 Fetch
Avg. Buffer pool
• 65 Getpages
• 13 Sync Read
CPU consumption
• Class 1: 3.2 msec
• Class 2: 2.3 msec
With CP-Speed 802 MIPS
Fetch Intensive
• Transaction types
– E-Bank logon
– Balance check
– Financial Statement history
– Account search
55
CICS transaction CPU time
(Chart) CPU msec per transaction across 10NFM, 11CM, 11RBND, 11NFM, 11Reo and 11 E L
Series: DB2 AS zIIP per TRAN, DB2 AS GCP per TRAN, CL2CPU per TRAN
Annotation: 5% activity moved to zIIP in CM
56
DB2 system address space CPU per CICS transaction
(Chart) DB2 system address space CPU time (msec) per CICS transaction across 10NFM, 11CM, 11RBND, 11NFM, 11Reo and 11 E L
Series: MSTR TCB, MSTR SRB, MSTR zIIP, DBM1 TCB, DBM1 SRB, DBM1 zIIP, IRLM TCB, IRLM SRB
Annotation: activity moved to zIIP in CM
57
Test Batch job profile
DML
• Commits: 4528
• Delete: 37613
• Rows: 482836
• Update: 45099
• Select: 119773
• Insert: 525884
• Fetch: 548947
Buffer pool
• Getpage: 4.8 M
• Sync Read: 133 K
• Dyn Pref: 57 K
CPU
• Class 1: 69 Sec
• Class 2: 67 Sec
With CP-Speed 802 MIPS
Elapsed:
• Class 1: 07:02 min
Insert and
Delete Intensive
58
Batch test CPU time
(Chart) CPU seconds per batch job across 10NFM, 11CM, 11RBND, 11NFM, 11Reo and 11 E L
Series: DB2 AS zIIP per BATCH, DB2 AS GCP per BATCH, CL2CPU per BATCH
Annotations: LRSN spin loop eliminated in V11 E L; no credit to V11 for the V11 Reo improvement
59
DB2 system address space CPU for Batch
(Chart) DB2 system address space CPU seconds per batch job across 10NFM, 11CM, 11RBND, 11NFM, 11Reo and 11 E L
Series: MSTR TCB, MSTR SRB, MSTR zIIP, DBM1 TCB, DBM1 SRB, DBM1 zIIP, IRLM TCB, IRLM SRB
Annotations: activity moved to zIIP in CM; LRSN spin loop eliminated in 11 E L reduces MSTR SRB
60
TPC-H using Static SQLPL
• 10% out-of-box improvement with DB2 11 when rebinding with APREUSE
• 34% improvement in DB2 11 when rebinding to obtain DB2 11 access path
61
(Chart data points: -1.4%, -10%, -34%)
62
Automatic Pseudo Deleted Index Entry Clean-up
• Recap on impact of pseudo deleted index entries
– Index size grows with increasing number of pseudo-deleted index entries
• More getpages and lock requests required
• Increased CPU cost and possibly longer elapsed times for access via index search
– Applications may encounter deadlocks and timeouts during INSERT/UPDATE/DELETE
• Collisions with committed pseudo-deleted index entries
• RID reuse by INSERT following DELETE => deadlock
• Prior to V11, how are they cleaned up
– DB2 removes pseudo-deleted entries during mainline operations
• Insert / delete operations remove pseudo-deleted entries from index pages
• SQL running with isolation level RR removes pseudo-deleted entries
– Pages that only contain pseudo-deleted index entries are called pseudo-empty
• DB2 attempts to clean up pseudo-empty index pages as part of DELETE processing
– REORG INDEX removes pseudo-empty index pages and pseudo-deleted entries that
were not cleaned up by the mainline processing
63
Automatic Pseudo Deleted Index Entry Clean-up …
• Autonomic solution provided in CM and turned on automatically for all indexes 24*7
– Automatic clean-up of pseudo-deleted index entries in index leaf pages
– Automatic clean-up of pseudo-empty index pages
– Designed to have minimal or no disruption to concurrent DB2 work
– Clean-up is done under system tasks, which run as enclave SRBs and are zIIP eligible
– Parent thread (one per DB2 member) loops through RTS to find candidate indexes
– Child clean-up threads only clean up an index if it already is opened for INSERT, UPDATE or
DELETE on the DB2 member
• Avoid creating GBP dependency on indexes
• Potential disruption can be minimized by managing down the number of clean-up
threads or specifying time when indexes are subject to clean-up
– Can control the number of concurrent clean-up threads or disable the function using
zparm INDEX_CLEANUP_THREADS
• 0=Disable, 1-128, 10 is default
– Entries in new Catalog table SYSIBM.SYSINDEXCLEANUP
• Define when / which objects are to be considered in a generic way
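As an illustration only (column names as recalled from the DB2 11 documentation for SYSIBM.SYSINDEXCLEANUP; verify them, and any additional required columns, before use), a row that disables clean-up for all indexes in a hypothetical database DBPROD during the online day, every day of the week:
INSERT INTO SYSIBM.SYSINDEXCLEANUP
(DBNAME, INDEXSPACE, ENABLE_DISABLE, MONTH_WEEK, MONTH, DAY, START_TIME, END_TIME)
VALUES ('DBPROD', NULL, 'D', 'W', NULL, NULL, '07:00:00', '18:00:00');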
Automatic Pseudo Deleted Index Entry Clean-up
• Up to 39% DB2 CPU reduction per transaction in DB2 11 compared to DB2 10
• Up to 93% reduction in Pseudo deleted entries in DB2 11
• Consistent performance and less need of REORG in DB2 11
64
(Chart) WAS Portal Workload 5 Days Performance, Day1 – Day5: CPU time (sec) and number of pseudo-deleted entries
Series: V10 Total CPU time, V11 Total CPU time, V10 sum of REORGPSEUDODELETES, V11 sum of REORGPSEUDODELETES
65
Performance Enhancements
• Qreplication log filtering
– Reduce IFI log read cost by qualifying the objects with DBID/PSID
– Additional benefit if objects are compressed
– Move filtering from Qreplication capture task to DB2 engine
– Potential for very significant reduction in the number of log records replicated
– Requires IBM InfoSphere Data Replication Q Replication or Change Data Capture
10.2.1
• Archive Transparency
– Very useful new feature
– Need to carefully examine the additional cost of ORDER BY sort when accessing the
archive
– When application only fetches a limited number of rows from the result set then
the cost can increase significantly when also accessing the archive
– Will typically be used by customers selectively on a case by case basis
66
Performance Enhancements …
• Optimizer enhancements
– Improved performance for legacy application programs
– Better chance of achieving matching index scan
– No need to rewrite SQL to get most of the improvements
– Still important to choose the right data type and avoid implicit casting
– Still very important to run RUNSTATS
• GROUP BY grouping sets
– Important feature for data analysis: CUBE, ROLLUP
– All processing is performed in a single pass over the table
– But there are some performance differences relative to the old GROUP BY with the
same result set
• SELECT C1, COUNT(*) FROM T1 GROUP BY C1
– No sort performed if the access path uses an index with leading column C1
• SELECT C1, COUNT(*) FROM T1 GROUP BY GROUPING SETS ((C1))
– A sort is always performed
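A hedged illustration with a hypothetical table SALES(REGION, PRODUCT, AMOUNT); one statement returns subtotals by REGION, subtotals by PRODUCT and a grand total in a single pass over the table (needs NFM and a package bound with APPLCOMPAT(V11R1)):
SELECT REGION, PRODUCT, SUM(AMOUNT)
FROM SALES
GROUP BY GROUPING SETS ((REGION), (PRODUCT), ());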
67
Extended LRBA/LRSN
• What you need to know for DB2 11 CM and DB2 10 NFM?
– 6 byte format - LRBA/LRSN before DB2 11 -> x‘LLLLLLLLLLLL’
– 10 byte extended format – LRBA/LRSN has addressing capacity of 1 yottabyte (2**80)
• 10 byte extended format - LRSN with DB2 11 -> x‘00LLLLLLLLLLLL000000’
• 10 byte extended format - LRBA with DB2 11 -> x‘00000000RRRRRRRRRRRR’
– Where do we find LRBA/LRSN?
• DB2 Catalog -> SYSCOPY, SYSxxxxPART, …
• DB2 Directory -> SYSUTILX, SYSLGRNX, …
• BSDS -> pointers, Active & Archive Log values
• DB2 Logs -> active & archive logs
• DB2 Pagesets -> Catalog & Directory and all user pagesets
68
Extended LRBA/LRSN …
• What you need to know for DB2 11 CM and DB2 10 NFM? …
– DB2 11 CM
• DB2 internal coding deals with 10 byte extended format LRBA/LRSN values only
• LRSN in Utility output is shown in 10 byte extended format with precision ‘000000’ except
– QUIESCE utility, which externalizes LRSN in 10 byte extended format with precision ‘nnnnnn’
• RECOVER utility handles 10 byte extended format LRBA/LRSN input
• Column ‘RBA_FORMAT’ in SYSIBM.SYSxxxPART is set to ‘B’ for newly defined objects, or
objects which are reorged or loaded with the replace option (possible values: B, blank, U, E)
– DB2 11 CM / DB2 10 NFM coexistence in data sharing
• Full toleration of 10 byte extended format LRBA/LRSN value as input to the RECOVER Utility
• Sanity checks included for ‘wrongly used 6 byte format LRBA/LRSN’
69
Extended LRBA/LRSN …
• What you need to know for DB2 11 NFM?
– Migration to DB2 11 NFM (via DSNTIJEN)
• Catalog & Directory Table ‘LRBA/LRSN Columns’ are altered to 10 byte extended format
• SYSIBM.SYSLGRNX entries are now stored as 10 byte extended format LRBA/LRSN values
• SYSIBM.SYSCOPY
– Conversion of all LRBA/LRSN values for existing data to 10 byte extended format: with leading
byte ‘00’ and precision ‘000000’ for LRSN values, and right-justified with leading ‘00000000’
for LRBA values
– New data is stored in 10 byte extended format with precision ‘nnnnnn’
• LRBA/LRSN for all Utilities use now 10 byte extended format
• LRBA/LRSN values are still written to DB2 logs in 6 byte format
• LRBA/LRSN values are still written to DB2 pagesets in 6 byte format
70
Extended LRBA/LRSN …
• What you need to know for DB2 11 NFM? …
– BSDS converted to 10 byte extended format LRBA/LRSN in NFM only (DSNJCNVT)
• There is no way back for BSDS!
• Now LRBA/LRSN values are written to DB2 logs of the subject DB2 member now in 10 byte
extended format with precision ‘nnnnnn’
• LRBA/LRSN values are still written to DB2 pagesets in 6 byte format
– Conversion (10 to 6 or 6 to 10 byte) has to be done
– LRSN Spin can still happen
– DSN1LOGP and REPORT RECOVERY output will show 10 byte extended format LRBA/LRSN although
never externalized to pagesets (different output for DSN1PRNT of pagesets)
• BSDS conversion can be done whenever you want after entry to V11 NFM, regardless of pageset
formats
71
Extended LRBA/LRSN …
• What you need to know for DB2 11 NFM? …
– Reorg Catalog and Directory pagesets to ‘extended format’ (in NFM only!)
• Can be done whenever you want to, regardless of BSDS and user pageset formats
• Now LRBA/LRSN values are written to converted pagesets in 10 byte extended format
– LRSN with precision ‘nnnnnn’, if update is done on a DB2 member with 10 byte extended format
BSDS
– LRSN with precision ‘000000’, if update is done on a member with 6 byte format BSDS
• Column ‘RBA_FORMAT’ in SYSIBM.SYSxxxPART is updated to ‘E’
• LRSN Spin could still happen for DB2 member with 6 byte format BSDS
• Can be converted back to 6 byte format (all or at part level)
72
Extended LRBA/LRSN …
• What you need to know for DB2 11 NFM? …
– Reorg User pagesets to ‘extended format’ (in NFM only!)
• Can be done whenever you want to, regardless of BSDS, Catalog & Directory pageset
formats
• Now LRBA/LRSN values are written to converted pagesets in 10 byte extended format
– LRSN with precision ‘nnnnnn‘, if update is done in a member with 10 byte extended format BSDS
– LRSN with precision ‘000000‘, if update is done on a member with 6 byte format BSDS
• Column ‘RBA_FORMAT’ in SYSIBM.SYSxxxPART is set to ‘E‘
• LRSN Spin could still happen for a DB2 member with 6 byte format BSDS
• Can be converted back to 6 byte (all or at part level)
• Is done by REORG, LOAD .. REPLACE or REBUILD with ‘RBALRSN_CONVERSION EXTENDED’
or if zparm OBJECT_CONVERTED=EXTENDED
• ‘RECOVER ... TOCOPY ...’ using a 6 byte Copy can reset format back to ‘basic’
73
Extended LRBA/LRSN …
• Enhancements to improve usability characteristics based on 6 byte/10 byte
format LRBA/LRSN handling
– Prevent DSNJCNVT from converting DB2 10 NFM BSDS to extended format
– Support 10 byte extended format input to RECOVER in DB2 10
– Perform sanity checks to guard against invalid LRSN values i.e., 6 byte LRSN values
with leading byte of zeros, to prevent PIT recoveries using bad RBA/LRSN from failing
(RC=8 in UTILINIT phase instead)
– Sanity check also performed in DB2 10 (coexistence)
– Support for ‘NOBASIC’ value for the OBJECT_CONVERSION zparm to prevent converting
pagesets in extended format back to basic, and defaulting to ‘EXTENDED’ if ‘NOBASIC’ is set
and the catalog column is <> ‘E’
– Add LRSN values to archive log information in REPORT RECOVERY utility output
– A technical white paper being produced explains ‘6/10 byte LRBA/LRSN
handling’
– Several enhancements to DB2 11 books
74
Extended LRBA/LRSN …
• Recommended best practice migration strategy
1. Run pre-migration jobs and steps to clean-up
2. Migration to DB2 11 CM
3. Migration to DB2 11 NFM
4. Convert ALL BSDS of data sharing group within ‘n’ weekends
5. Reorg ALL Directory & Catalog Pagesets to ‘extended LRBA/LRSN format’
6. Set OBJECT_CREATE and UTILITY_CONVERSION zparms to EXTENDED
- New objects will be created in 10 byte extended format
- REORG, LOAD REPLACE and REBUILD will convert user objects to extended format without
need to change utility control statements
7. Reorg all objects to extended LRBA/LRSN format by executing normal reorg jobs or
some additional jobs
• Perform regular checks on progress by selecting rows by RBA_FORMAT in
SYSIBM.SYSxxxxPART (see the query sketch after this list)
8. If all done, set the OBJECT_CONVERSION zparm to NOBASIC
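As referenced in step 7, a simple progress query sketch (SYSIBM.SYSTABLEPART shown; repeat for the other SYSxxxxPART catalog tables as needed):
SELECT RBA_FORMAT, COUNT(*)
FROM SYSIBM.SYSTABLEPART
GROUP BY RBA_FORMAT;
– ‘E’ indicates extended 10 byte format; other values still need conversion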
75
How to convert 10 byte LRSN to Timestamp
• DB2 10 NFM or less – use TIMESTAMP function
LRSN format:
6 byte wherever used -> e.g. ‘CBE2B5955DCF’
Convert by:
SELECT TIMESTAMP(X'CBE2B5955DCF' || X'0000') FROM …
• DB2 11 CM – use TIMESTAMP function
LRSN format:
6 byte in logs, catalog and directory, pages -> e.g. ‘CBE2B5955DCF’
10 byte in all outputs (except DSN1PRNT) -> e.g. ‘00CBE2B5955DCF086C00’
Convert by:
SELECT TIMESTAMP(X'CBE2B5955DCF' || X'0000') FROM …
76
How to convert 10 byte LRSN to Timestamp …
• DB2 11 NFM – use TIMESTAMP function
LRSN format:
6 byte for non-converted data pages (DSN1PRNT) -> e.g. ‘CBE2B5955DCF’
10 byte in Catalog and Directory and in all outputs -> e.g. ‘00CBE2B5955DCF086C00’
Convert by:
SELECT TIMESTAMP(X'CBE2B5955DCF' || X'0000') FROM …
– A 6 byte LRSN can be used by ‘cut and paste’
– A 10 byte LRSN can be used if the first 2 digits are cut and digits 3 to 14 are used, but only if the
first two digits are ‘00’; otherwise this conversion is NOT usable!
SELECT TIMESTAMP(BX'CBE2B5955DCF0000') FROM …
– A 6 byte LRSN can be used by ‘cut and paste’, padded with ‘0000’ at the right
– A 10 byte LRSN can be used if the first 2 digits are cut and digits 3 to 18 are used, but only if the
first two digits are ‘00’; otherwise this conversion is NOT usable!
77
How to convert 10 byte LRSN to Timestamp …
• DB2 11 NFM – use new ‘binary hex’ function
– SELECT TIMESTAMP(bx'00CBE2B5955DCF086C00000000000000') from ...
• A 6 byte LRSN can be used by ‘cut and paste’, with ‘00’ in front and padded with
‘000000000000000000’ at the right
• A 10 byte LRSN can be used by ‘cut and paste’, right padded with ‘000000000000’
– (BX’ can be replaced by (BINARY(X’ or (VARBINARY(X’ …
– Convert 10 byte RBA/LRSN to Timestamp
• Works great, but needs APPLCOMPAT(V11R1)!
78
Other performance recommendations
• Make sure HVCOMMON in IEASYSxx can accommodate log output buffer
• Configure additional 1MB LFAREA (z/OS parameter in IEASYSxx) for maximum
benefit
• LRSN spin avoidance requires both BSDS and objects conversion in NFM
• Monitor log I/O performance due to log record size increase
– 3% to 40% increase in log record size observed following BSDS conversion
• Essential to make sure enough zIIP capacity available before V11 CM migration
– zIIP ‘Help Function’ IIPHONORPRIORITY should be set to YES in case there is a
shortage of zIIP capacity
– Continue to monitor zIIP capacity thereafter
• Bufferpool re-classification change - prefetched pages will again be reclassified
as random after random getpage
– May need to re-evaluate VPSEQT setting for certain bufferpools
• MRU (Most Recently Used) used for pages brought in by utilities
• New FRAMESIZE parameter independent from PGFIX parameter
Customer Value
• For many customers value is driven on how sub-capacity workload licensing
works
– Based on 4-hour rolling average MSU utilisation
– Highest rolling average figure for each month used to calculate software
charges for all MLC products (IBM and non-IBM)
– Provided DB2 forms a significant component of the total MSU usage during
peak period, any MSU savings will translate directly to MLC savings
– Typically this is the online day - mid morning and mid afternoon
– Factor in the impact on overall z/OS software stack cost reduction: z/OS,
CICS, MQ
79
Customer value …
80
81
Performance Summary
• Opportunity for improved performance for legacy application programs
• REBIND of static SQL packages is very important
• Good validation of potential from ESP customers and IBM internal workloads
• Your mileage will vary based on your SQL application workload as certain
features only apply to certain workloads
• Impressive CPU savings observed for some workloads
• Highly optimized static SQL and/or simple SQL may not see much benefit
• More benefit for more complex SQL, i.e., not a single row read by primary key
• Do not sell (or buy) the savings before you have seen them for your workload
82
Other Enhancements
83
Remove package security vulnerabilities
• Problem use case scenario
– Each main routine has its own plan and both names are the same
– All packages are bound into a single collection
– Each plan is bound with PKLIST(col.*)
– If EXECUTE privilege is granted on one plan, this authid/user can run any main
program
• Solution
– New BIND PLAN option PROGAUTH supported by a new table
SYSIBM.DSNPROGAUTH in the Catalog
– To ensure that a main program M can only be executed with plan P
• Insert row into SYSIBM.DSNPROGAUTH with PROGNAME M, PLANNAME P, ENABLED Y
• Bind plan P with PROGAUTH(ENABLE)
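A hedged sketch of the two steps, using the hypothetical program name M and plan name P from the example above (SYSIBM.DSNPROGAUTH may require additional columns; check the table definition shipped with DB2 11):
INSERT INTO SYSIBM.DSNPROGAUTH (PROGNAME, PLANNAME, ENABLED)
VALUES ('M', 'P', 'Y');
BIND PLAN(P) PKLIST(COL.*) PROGAUTH(ENABLE)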
84
Archive transparency
• Create an archive-table and connect the base-table to the archive-table
– Via ALTER base-table ENABLE archive clause
– Archive-table and base-table must have exactly the same columns
– No additional columns are allowed e.g., archive-timestamp
• Set SYSIBMADM.MOVE_TO_ARCHIVE global variable to ‘Y’ or ‘E’
– DB2 automatically moves deleted rows to the archive table
– If set to ‘Y’, update to rows will fail with SQLCODE -20555
– If set to ‘E’ , update will only work for active rows in the base table
– Delete of active rows in the base table will then appear in the archive-table
• If SYSIBMADM.MOVE_TO_ARCHIVE global variable is set to ‘N’
– Delete of active rows in the base table are lost
– So important to check that the setting of the global variable to ‘Y’ or ‘E’ actually
worked as ‘N’ is the default value
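For illustration, a hedged sketch with hypothetical tables POLICY (base) and POLICY_ARCH (identical column list), assuming the user or package is allowed to write the global variable:
CREATE TABLE POLICY_ARCH LIKE POLICY;
ALTER TABLE POLICY ENABLE ARCHIVE USE POLICY_ARCH;
SET SYSIBMADM.MOVE_TO_ARCHIVE = 'Y';
DELETE FROM POLICY WHERE STATUS = 'CLOSED';
– the deleted rows are moved to POLICY_ARCH rather than discarded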
85
Archive transparency …
• Must set SYSIBMADM.GET_ARCHIVE global variable to ‘Y’ for query to search the
rows from the archive-table
– Update only applies to active rows in the base-table
• A subsequent query may return a mix of updated and non-updated rows
• ARCHIVESENSITIVE (YES|NO) option on package BIND
– Only affects read from archive-table
– Deleted rows will only be moved to archive-table if MOVE_TO_ARCHIVE global
variable is set correctly
• REORG DISCARD on base-table
– Generates LOAD statement to load rows into the archive-table
– DISCARD dataset can be used as input
• Dynamic scrollable cursors are not allowed
• Package owner must have the WRITE privilege for the respective global variables
86
Summary
87
Summary
• Share lessons learned, surprises, pitfalls
• Provide hints and tips
• Address some myths
• Provide additional planning information
• Provide usage guidelines and positioning on new enhancements
• Help customers migrate as fast as possible, but safely
DB2 11 Resources
88
• IBM Information Center / Knowledge Center
• DB2 11 Technical Overview Redbook (SG24-8180)
• DB2 11 for z/OS Performance Topics (SG24-8222)
• DB2 11 links: https://www.ibm.com/software/data/db2/zos/family/db211/
– Links to DB2 11 Announcement Letter, webcasts and customer case studies
– Whitepaper: “DB2 11 for z/OS: Unmatched Efficiency for
Big Data and Analytics”
– Whitepaper: “How DB2 11 for z/OS Can Help Reduce
Total Cost of Ownership”
• DB2 11 Migration Planning Workshop
– http://ibm.co/IIJxw8
• Free eBook available for download
– http://ibm.co/160vQgM
• “DB2 11 for SAP Mission Critical Solutions”
– http://scn.sap.com/docs/DOC-50807
Join The World of DB2, Big Data & Analytics on System z
89
90
Using Release(deallocate) and Painful Lessons to be learned on DB2 locking
 
DB2 for z/OS Bufferpool Tuning win by Divide and Conquer or Lose by Multiply ...
DB2 for z/OS Bufferpool Tuning win by Divide and Conquer or Lose by Multiply ...DB2 for z/OS Bufferpool Tuning win by Divide and Conquer or Lose by Multiply ...
DB2 for z/OS Bufferpool Tuning win by Divide and Conquer or Lose by Multiply ...
 
DB2 Accounting Reporting
DB2  Accounting ReportingDB2  Accounting Reporting
DB2 Accounting Reporting
 
DB2 for z/OS Architecture in Nutshell
DB2 for z/OS Architecture in NutshellDB2 for z/OS Architecture in Nutshell
DB2 for z/OS Architecture in Nutshell
 
An Intro to Tuning Your SQL on DB2 for z/OS
An Intro to Tuning Your SQL on DB2 for z/OSAn Intro to Tuning Your SQL on DB2 for z/OS
An Intro to Tuning Your SQL on DB2 for z/OS
 
Modeling Physical Systems with Modern Object Oriented Perl
Modeling Physical Systems with Modern Object Oriented PerlModeling Physical Systems with Modern Object Oriented Perl
Modeling Physical Systems with Modern Object Oriented Perl
 
Libro blanco espesantes essd
Libro blanco espesantes essdLibro blanco espesantes essd
Libro blanco espesantes essd
 
Best Practices For Optimizing DB2 Performance Final
Best Practices For Optimizing DB2 Performance FinalBest Practices For Optimizing DB2 Performance Final
Best Practices For Optimizing DB2 Performance Final
 
The Five R's: There Can be no DB2 Performance Improvement Without Them!
The Five R's: There Can be no DB2 Performance Improvement Without Them!The Five R's: There Can be no DB2 Performance Improvement Without Them!
The Five R's: There Can be no DB2 Performance Improvement Without Them!
 
IBM DB2 Analytics Accelerator Trends & Directions by Namik Hrle
IBM DB2 Analytics Accelerator  Trends & Directions by Namik Hrle IBM DB2 Analytics Accelerator  Trends & Directions by Namik Hrle
IBM DB2 Analytics Accelerator Trends & Directions by Namik Hrle
 
DB2 10 Universal Table Space - 2012-03-18 - no template
DB2 10 Universal Table Space - 2012-03-18 - no templateDB2 10 Universal Table Space - 2012-03-18 - no template
DB2 10 Universal Table Space - 2012-03-18 - no template
 
CICS TS V5 Technical Overview
CICS TS V5 Technical OverviewCICS TS V5 Technical Overview
CICS TS V5 Technical Overview
 
Database storage engines
Database storage enginesDatabase storage engines
Database storage engines
 
DB2 for z/OS - Starter's guide to memory monitoring and control
DB2 for z/OS - Starter's guide to memory monitoring and controlDB2 for z/OS - Starter's guide to memory monitoring and control
DB2 for z/OS - Starter's guide to memory monitoring and control
 
Presentation db2 connections to db2 for z os
Presentation   db2 connections to db2 for z osPresentation   db2 connections to db2 for z os
Presentation db2 connections to db2 for z os
 

Semelhante a DB2 11 for z/OS Migration Planning and Early Customer Experiences

Db2 family and v11.1.4.4
Db2 family and v11.1.4.4Db2 family and v11.1.4.4
Db2 family and v11.1.4.4ModusOptimum
 
Advantages of migrating to db2 v11.1
Advantages of migrating to db2 v11.1Advantages of migrating to db2 v11.1
Advantages of migrating to db2 v11.1Rajesh Pandhare
 
DB2 10 Webcast #1 Overview And Migration Planning
DB2 10 Webcast #1   Overview And Migration PlanningDB2 10 Webcast #1   Overview And Migration Planning
DB2 10 Webcast #1 Overview And Migration PlanningCarol Davis-Mann
 
DB2 10 Webcast #1 - Overview And Migration Planning
DB2 10 Webcast #1 - Overview And Migration PlanningDB2 10 Webcast #1 - Overview And Migration Planning
DB2 10 Webcast #1 - Overview And Migration PlanningLaura Hood
 
IBM Analytics Accelerator Trends & Directions Namk Hrle
IBM Analytics Accelerator  Trends & Directions Namk Hrle IBM Analytics Accelerator  Trends & Directions Namk Hrle
IBM Analytics Accelerator Trends & Directions Namk Hrle Surekha Parekh
 
Db2 10 memory management uk db2 user group june 2013 [read-only]
Db2 10 memory management   uk db2 user group june 2013 [read-only]Db2 10 memory management   uk db2 user group june 2013 [read-only]
Db2 10 memory management uk db2 user group june 2013 [read-only]Laura Hood
 
DbB 10 Webcast #3 The Secrets Of Scalability
DbB 10 Webcast #3   The Secrets Of ScalabilityDbB 10 Webcast #3   The Secrets Of Scalability
DbB 10 Webcast #3 The Secrets Of ScalabilityLaura Hood
 
Db2 10 memory management uk db2 user group june 2013
Db2 10 memory management   uk db2 user group june 2013Db2 10 memory management   uk db2 user group june 2013
Db2 10 memory management uk db2 user group june 2013Carol Davis-Mann
 
DB2 10 Smarter Database - IBM Tech Forum
DB2 10 Smarter Database   - IBM Tech ForumDB2 10 Smarter Database   - IBM Tech Forum
DB2 10 Smarter Database - IBM Tech ForumSurekha Parekh
 
DB210 Smarter Database IBM Tech Forum 2011
DB210 Smarter Database   IBM Tech Forum 2011DB210 Smarter Database   IBM Tech Forum 2011
DB210 Smarter Database IBM Tech Forum 2011Laura Hood
 
Migration DB2 to EDB - Project Experience
 Migration DB2 to EDB - Project Experience Migration DB2 to EDB - Project Experience
Migration DB2 to EDB - Project ExperienceEDB
 
Software im SAP Umfeld_IBM DB2
Software im SAP Umfeld_IBM DB2Software im SAP Umfeld_IBM DB2
Software im SAP Umfeld_IBM DB2IBM Switzerland
 
Db2 10 Webcast #2 Justifying The Upgrade
Db2 10 Webcast #2   Justifying The UpgradeDb2 10 Webcast #2   Justifying The Upgrade
Db2 10 Webcast #2 Justifying The UpgradeCarol Davis-Mann
 
DB2 10 Webcast #2 - Justifying The Upgrade
DB2 10 Webcast #2  - Justifying The UpgradeDB2 10 Webcast #2  - Justifying The Upgrade
DB2 10 Webcast #2 - Justifying The UpgradeLaura Hood
 
DB2 for z/O S Data Sharing
DB2 for z/O S  Data  SharingDB2 for z/O S  Data  Sharing
DB2 for z/O S Data SharingSurekha Parekh
 
DB2 Real-Time Analytics Meeting Wayne, PA 2015 - IDAA & DB2 Tools Update
DB2 Real-Time Analytics Meeting Wayne, PA 2015 - IDAA & DB2 Tools UpdateDB2 Real-Time Analytics Meeting Wayne, PA 2015 - IDAA & DB2 Tools Update
DB2 Real-Time Analytics Meeting Wayne, PA 2015 - IDAA & DB2 Tools UpdateBaha Majid
 
Oracle EBS Upgrade to 12.2.5.1
Oracle EBS Upgrade to 12.2.5.1Oracle EBS Upgrade to 12.2.5.1
Oracle EBS Upgrade to 12.2.5.1Amit Sharma
 
David Baker 2015
David Baker 2015David Baker 2015
David Baker 2015David Baker
 
Db2 V12 incompatibilities_&amp;_improvements_over_V11
Db2 V12 incompatibilities_&amp;_improvements_over_V11Db2 V12 incompatibilities_&amp;_improvements_over_V11
Db2 V12 incompatibilities_&amp;_improvements_over_V11Abhishek Verma
 

Semelhante a DB2 11 for z/OS Migration Planning and Early Customer Experiences (20)

Db2 family and v11.1.4.4
Db2 family and v11.1.4.4Db2 family and v11.1.4.4
Db2 family and v11.1.4.4
 
Advantages of migrating to db2 v11.1
Advantages of migrating to db2 v11.1Advantages of migrating to db2 v11.1
Advantages of migrating to db2 v11.1
 
DB2 10 Webcast #1 Overview And Migration Planning
DB2 10 Webcast #1   Overview And Migration PlanningDB2 10 Webcast #1   Overview And Migration Planning
DB2 10 Webcast #1 Overview And Migration Planning
 
DB2 10 Webcast #1 - Overview And Migration Planning
DB2 10 Webcast #1 - Overview And Migration PlanningDB2 10 Webcast #1 - Overview And Migration Planning
DB2 10 Webcast #1 - Overview And Migration Planning
 
1) planning
1) planning1) planning
1) planning
 
IBM Analytics Accelerator Trends & Directions Namk Hrle
IBM Analytics Accelerator  Trends & Directions Namk Hrle IBM Analytics Accelerator  Trends & Directions Namk Hrle
IBM Analytics Accelerator Trends & Directions Namk Hrle
 
Db2 10 memory management uk db2 user group june 2013 [read-only]
Db2 10 memory management   uk db2 user group june 2013 [read-only]Db2 10 memory management   uk db2 user group june 2013 [read-only]
Db2 10 memory management uk db2 user group june 2013 [read-only]
 
DbB 10 Webcast #3 The Secrets Of Scalability
DbB 10 Webcast #3   The Secrets Of ScalabilityDbB 10 Webcast #3   The Secrets Of Scalability
DbB 10 Webcast #3 The Secrets Of Scalability
 
Db2 10 memory management uk db2 user group june 2013
Db2 10 memory management   uk db2 user group june 2013Db2 10 memory management   uk db2 user group june 2013
Db2 10 memory management uk db2 user group june 2013
 
DB2 10 Smarter Database - IBM Tech Forum
DB2 10 Smarter Database   - IBM Tech ForumDB2 10 Smarter Database   - IBM Tech Forum
DB2 10 Smarter Database - IBM Tech Forum
 
DB210 Smarter Database IBM Tech Forum 2011
DB210 Smarter Database   IBM Tech Forum 2011DB210 Smarter Database   IBM Tech Forum 2011
DB210 Smarter Database IBM Tech Forum 2011
 
Migration DB2 to EDB - Project Experience
 Migration DB2 to EDB - Project Experience Migration DB2 to EDB - Project Experience
Migration DB2 to EDB - Project Experience
 
Software im SAP Umfeld_IBM DB2
Software im SAP Umfeld_IBM DB2Software im SAP Umfeld_IBM DB2
Software im SAP Umfeld_IBM DB2
 
Db2 10 Webcast #2 Justifying The Upgrade
Db2 10 Webcast #2   Justifying The UpgradeDb2 10 Webcast #2   Justifying The Upgrade
Db2 10 Webcast #2 Justifying The Upgrade
 
DB2 10 Webcast #2 - Justifying The Upgrade
DB2 10 Webcast #2  - Justifying The UpgradeDB2 10 Webcast #2  - Justifying The Upgrade
DB2 10 Webcast #2 - Justifying The Upgrade
 
DB2 for z/O S Data Sharing
DB2 for z/O S  Data  SharingDB2 for z/O S  Data  Sharing
DB2 for z/O S Data Sharing
 
DB2 Real-Time Analytics Meeting Wayne, PA 2015 - IDAA & DB2 Tools Update
DB2 Real-Time Analytics Meeting Wayne, PA 2015 - IDAA & DB2 Tools UpdateDB2 Real-Time Analytics Meeting Wayne, PA 2015 - IDAA & DB2 Tools Update
DB2 Real-Time Analytics Meeting Wayne, PA 2015 - IDAA & DB2 Tools Update
 
Oracle EBS Upgrade to 12.2.5.1
Oracle EBS Upgrade to 12.2.5.1Oracle EBS Upgrade to 12.2.5.1
Oracle EBS Upgrade to 12.2.5.1
 
David Baker 2015
David Baker 2015David Baker 2015
David Baker 2015
 
Db2 V12 incompatibilities_&amp;_improvements_over_V11
Db2 V12 incompatibilities_&amp;_improvements_over_V11Db2 V12 incompatibilities_&amp;_improvements_over_V11
Db2 V12 incompatibilities_&amp;_improvements_over_V11
 

Último

Gartner's Data Analytics Maturity Model.pptx
Gartner's Data Analytics Maturity Model.pptxGartner's Data Analytics Maturity Model.pptx
Gartner's Data Analytics Maturity Model.pptxchadhar227
 
Lecture_2_Deep_Learning_Overview-newone1
Lecture_2_Deep_Learning_Overview-newone1Lecture_2_Deep_Learning_Overview-newone1
Lecture_2_Deep_Learning_Overview-newone1ranjankumarbehera14
 
7. Epi of Chronic respiratory diseases.ppt
7. Epi of Chronic respiratory diseases.ppt7. Epi of Chronic respiratory diseases.ppt
7. Epi of Chronic respiratory diseases.pptibrahimabdi22
 
SR-101-01012024-EN.docx Federal Constitution of the Swiss Confederation
SR-101-01012024-EN.docx  Federal Constitution  of the Swiss ConfederationSR-101-01012024-EN.docx  Federal Constitution  of the Swiss Confederation
SR-101-01012024-EN.docx Federal Constitution of the Swiss ConfederationEfruzAsilolu
 
怎样办理伦敦大学城市学院毕业证(CITY毕业证书)成绩单学校原版复制
怎样办理伦敦大学城市学院毕业证(CITY毕业证书)成绩单学校原版复制怎样办理伦敦大学城市学院毕业证(CITY毕业证书)成绩单学校原版复制
怎样办理伦敦大学城市学院毕业证(CITY毕业证书)成绩单学校原版复制vexqp
 
Data Analyst Tasks to do the internship.pdf
Data Analyst Tasks to do the internship.pdfData Analyst Tasks to do the internship.pdf
Data Analyst Tasks to do the internship.pdftheeltifs
 
Predicting HDB Resale Prices - Conducting Linear Regression Analysis With Orange
Predicting HDB Resale Prices - Conducting Linear Regression Analysis With OrangePredicting HDB Resale Prices - Conducting Linear Regression Analysis With Orange
Predicting HDB Resale Prices - Conducting Linear Regression Analysis With OrangeThinkInnovation
 
一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格
一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格
一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格q6pzkpark
 
Top profile Call Girls In Satna [ 7014168258 ] Call Me For Genuine Models We ...
Top profile Call Girls In Satna [ 7014168258 ] Call Me For Genuine Models We ...Top profile Call Girls In Satna [ 7014168258 ] Call Me For Genuine Models We ...
Top profile Call Girls In Satna [ 7014168258 ] Call Me For Genuine Models We ...nirzagarg
 
Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...ZurliaSoop
 
怎样办理圣地亚哥州立大学毕业证(SDSU毕业证书)成绩单学校原版复制
怎样办理圣地亚哥州立大学毕业证(SDSU毕业证书)成绩单学校原版复制怎样办理圣地亚哥州立大学毕业证(SDSU毕业证书)成绩单学校原版复制
怎样办理圣地亚哥州立大学毕业证(SDSU毕业证书)成绩单学校原版复制vexqp
 
Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...
Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...
Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...Valters Lauzums
 
Ranking and Scoring Exercises for Research
Ranking and Scoring Exercises for ResearchRanking and Scoring Exercises for Research
Ranking and Scoring Exercises for ResearchRajesh Mondal
 
5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed
5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed
5CL-ADBA,5cladba, Chinese supplier, safety is guaranteedamy56318795
 
+97470301568>>weed for sale in qatar ,weed for sale in dubai,weed for sale in...
+97470301568>>weed for sale in qatar ,weed for sale in dubai,weed for sale in...+97470301568>>weed for sale in qatar ,weed for sale in dubai,weed for sale in...
+97470301568>>weed for sale in qatar ,weed for sale in dubai,weed for sale in...Health
 
Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...
Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...
Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...nirzagarg
 
Top profile Call Girls In Begusarai [ 7014168258 ] Call Me For Genuine Models...
Top profile Call Girls In Begusarai [ 7014168258 ] Call Me For Genuine Models...Top profile Call Girls In Begusarai [ 7014168258 ] Call Me For Genuine Models...
Top profile Call Girls In Begusarai [ 7014168258 ] Call Me For Genuine Models...nirzagarg
 
Harnessing the Power of GenAI for BI and Reporting.pptx
Harnessing the Power of GenAI for BI and Reporting.pptxHarnessing the Power of GenAI for BI and Reporting.pptx
Harnessing the Power of GenAI for BI and Reporting.pptxParas Gupta
 
Digital Transformation Playbook by Graham Ware
Digital Transformation Playbook by Graham WareDigital Transformation Playbook by Graham Ware
Digital Transformation Playbook by Graham WareGraham Ware
 

Último (20)

Gartner's Data Analytics Maturity Model.pptx
Gartner's Data Analytics Maturity Model.pptxGartner's Data Analytics Maturity Model.pptx
Gartner's Data Analytics Maturity Model.pptx
 
Sequential and reinforcement learning for demand side management by Margaux B...
Sequential and reinforcement learning for demand side management by Margaux B...Sequential and reinforcement learning for demand side management by Margaux B...
Sequential and reinforcement learning for demand side management by Margaux B...
 
Lecture_2_Deep_Learning_Overview-newone1
Lecture_2_Deep_Learning_Overview-newone1Lecture_2_Deep_Learning_Overview-newone1
Lecture_2_Deep_Learning_Overview-newone1
 
7. Epi of Chronic respiratory diseases.ppt
7. Epi of Chronic respiratory diseases.ppt7. Epi of Chronic respiratory diseases.ppt
7. Epi of Chronic respiratory diseases.ppt
 
SR-101-01012024-EN.docx Federal Constitution of the Swiss Confederation
SR-101-01012024-EN.docx  Federal Constitution  of the Swiss ConfederationSR-101-01012024-EN.docx  Federal Constitution  of the Swiss Confederation
SR-101-01012024-EN.docx Federal Constitution of the Swiss Confederation
 
怎样办理伦敦大学城市学院毕业证(CITY毕业证书)成绩单学校原版复制
怎样办理伦敦大学城市学院毕业证(CITY毕业证书)成绩单学校原版复制怎样办理伦敦大学城市学院毕业证(CITY毕业证书)成绩单学校原版复制
怎样办理伦敦大学城市学院毕业证(CITY毕业证书)成绩单学校原版复制
 
Data Analyst Tasks to do the internship.pdf
Data Analyst Tasks to do the internship.pdfData Analyst Tasks to do the internship.pdf
Data Analyst Tasks to do the internship.pdf
 
Predicting HDB Resale Prices - Conducting Linear Regression Analysis With Orange
Predicting HDB Resale Prices - Conducting Linear Regression Analysis With OrangePredicting HDB Resale Prices - Conducting Linear Regression Analysis With Orange
Predicting HDB Resale Prices - Conducting Linear Regression Analysis With Orange
 
一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格
一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格
一比一原版(曼大毕业证书)曼尼托巴大学毕业证成绩单留信学历认证一手价格
 
Top profile Call Girls In Satna [ 7014168258 ] Call Me For Genuine Models We ...
Top profile Call Girls In Satna [ 7014168258 ] Call Me For Genuine Models We ...Top profile Call Girls In Satna [ 7014168258 ] Call Me For Genuine Models We ...
Top profile Call Girls In Satna [ 7014168258 ] Call Me For Genuine Models We ...
 
Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Surabaya ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
 
怎样办理圣地亚哥州立大学毕业证(SDSU毕业证书)成绩单学校原版复制
怎样办理圣地亚哥州立大学毕业证(SDSU毕业证书)成绩单学校原版复制怎样办理圣地亚哥州立大学毕业证(SDSU毕业证书)成绩单学校原版复制
怎样办理圣地亚哥州立大学毕业证(SDSU毕业证书)成绩单学校原版复制
 
Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...
Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...
Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...
 
Ranking and Scoring Exercises for Research
Ranking and Scoring Exercises for ResearchRanking and Scoring Exercises for Research
Ranking and Scoring Exercises for Research
 
5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed
5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed
5CL-ADBA,5cladba, Chinese supplier, safety is guaranteed
 
+97470301568>>weed for sale in qatar ,weed for sale in dubai,weed for sale in...
+97470301568>>weed for sale in qatar ,weed for sale in dubai,weed for sale in...+97470301568>>weed for sale in qatar ,weed for sale in dubai,weed for sale in...
+97470301568>>weed for sale in qatar ,weed for sale in dubai,weed for sale in...
 
Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...
Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...
Top profile Call Girls In Tumkur [ 7014168258 ] Call Me For Genuine Models We...
 
Top profile Call Girls In Begusarai [ 7014168258 ] Call Me For Genuine Models...
Top profile Call Girls In Begusarai [ 7014168258 ] Call Me For Genuine Models...Top profile Call Girls In Begusarai [ 7014168258 ] Call Me For Genuine Models...
Top profile Call Girls In Begusarai [ 7014168258 ] Call Me For Genuine Models...
 
Harnessing the Power of GenAI for BI and Reporting.pptx
Harnessing the Power of GenAI for BI and Reporting.pptxHarnessing the Power of GenAI for BI and Reporting.pptx
Harnessing the Power of GenAI for BI and Reporting.pptx
 
Digital Transformation Playbook by Graham Ware
Digital Transformation Playbook by Graham WareDigital Transformation Playbook by Graham Ware
Digital Transformation Playbook by Graham Ware
 

DB2 11 for z/OS Migration Planning and Early Customer Experiences

  • 9. DB2 11 ESP Client Feedback 9 • Very much improved quality and reliability at this early stage in the release cycle • Good performance and CPU savings DRDA workload up to 20% CPU reduction CICS workload up to 18% CPU reduction Batch workload up to 20% CPU reduction • Greatest hits – BIND, REBIND, DDL, Online REORG break in – Transparent archiving – IFI 306 filtering by object (Qreplication) – Online schema change – Utility improvements particularly Online REORG – Extended LRBA/LRSN – Optimizer and migration improvements – GROUP BY Grouping Sets
  • 10. DB2 11 Early Support Program (ESP) 10 “Overall we are very satisfied and astonished about the system stability of DB2 V11. In V10 we experienced this in another way.” – European Insurance “We have seen very few problems in [Installation, Migration, and Performance]. Overall, it has been a very pleasant experience!!…The quality of the code is clearly much higher than for the ESP for DB2 10…” - European Banking/FSS “Good code stability, no outages, no main failures, only a few PMRs….” – European Banking “We have been involved in several DB2 for z/OS ESP’s. This one will rank as one of, if not the smoothest one yet.” – Large NA retailer
  • 11. DB2 11 Early Support Program (ESP) … 11 “I saw a significant performance improvement in recovery of catalog and directory. (V10 5:53 minutes, V11 2:50 minutes) That rocks! … DB2 11 is the best version I have ever seen.” - European Gov’t “Overall, we have been impressed with the new version of DB2.” – NA Manufacturer “ Higher availability, performance, lower CPU consumption amongst other new features were the benefits perceived by Banco do Brazil with DB2 11 for z/OS. During our testing with DB2 11 we noticed improved performance, along with stability. ” - Paulo Sahadi, IT Executive, Banco do Brasil “We have seen some incredible performance results with DB2 11, a major reduction of CPU time, 3.5% before REBIND and nearly 5% after REBIND. This will significantly bring down our operating costs” – Conrad Wolf, Golden Living
  • 13. 13 Prerequisites – Hardware & Operating System • Processor requirements: – zEC12, z196, z10 processors supporting z/Architecture – Will probably require increased real storage for a workload compared to DB2 10 for z/OS (up to 15%) • Software Requirements: – z/OS V1.13 Base Services (5694-A01) at minimum – DFSMS V1 R13 – DB2 Catalog is SMS managed – Language Environment Base Services – z/OS Version 1 Release 13 Security Server (RACF) – IRLM Version 2 Release 3 (shipped with DB2 11 for z/OS) – z/OS Unicode Services and appropriate conversion definitions are required – IBM InfoSphere Data Replication (IIDR) 10.2.1 – For DB2 Connect – please see the next slides
  • 14. Prerequisites – DB2 Connect • DB2 for z/OS V11 in all modes should operate with existing versions of DB2 Connect in place, even back to DB2 Connect V8 – DB2 for z/OS Development will investigate any connectivity related issues with existing applications using older versions of DB2 Connect and try to provide a fix – If any issues cannot be resolved within the DB2 for z/OS server, DB2 Connect will have to be upgraded to an in-service level to obtain a fix • For continuous availability during the migration process the minimum recommended level before leaving DB2 10 is V9.7 FP6 or V10.1 FP2 – This is the level that provides continuous availability for a given application server as a customer goes from V10 NFM base -> V11 CM -> V11 NFM • The minimum level for full DB2 11 for z/OS exploitation is currently V10.5 FP2 – Required for specific new function: array support for stored procedures, WLB support with global variables, autocommit performance improvements, improved client info – This recommended level could and probably will change and go up over time as we gain more customer experiences, roll through best practices, and provide defect fixes into newer driver levels
  • 15. Prerequisites – DB2 Connect ... • Most DB2 for z/OS engine features in NFM are supported with any version of DB2 Connect • DB2 for z/OS Development are being proactive in recommending that customers move from the client or runtime client packages towards using the data server (ds) driver instead • For "evergreen" and/or new function the general upgrade path is the following: 1. DB2 for z/OS Server 2. DB2 Connect Server (if present – we are encouraging direct connect) 3. Drivers installed on application servers (push from client, runtime client -> ds driver) 4. End user workstations (also push from client, runtime client -> ds driver) • We do have customers that will push out the drivers first - those are generally driven by the need for specific application enhancements e.g., – The most common example is in the .NET arena - wanting the latest tooling and driver support in the MS arena
  • 16. 16 Pre-migration planning • Run DSNTIJPM (DSNTIJPB) pre-migration job • Check for situations needing attention before migration – Take the actions recommended by the report headers • Run DSNTIJPM or DSNTIJPB, to identify them – DSNTIJPM ships with DB2 11 and should be run on DB2 10 to identify pre-migration catalog clean-up requirements • DSNTIJPM may provide DDL or utility statements for the clean-up – DSNTIJPB is the same job and is shipped for DB2 10 to maximize prepare time
  • 17. 17 Important preparation • Old plans and packages before V9 -> REBIND • Views, MQTs, and Table functions with Period Specification -> DROP – Those created in V10 are not supported – Period Specification must be on base table
  • 18. 18 Items deprecated in earlier versions – Now eliminated • Password protection for active log and archive log data sets • DSNH CLIST NEWFUN values of V8 and V9 – Use V10 or V11 • Some DB2 supplied routines – SYSPROC.DSNAEXP –> Use the EXPLAIN Privilege and issue EXPLAIN directly – AMI-based DB2 MQ (DB2MQ) functions –> use the MQI-based functions in schema (see APAR PK37290 for guidance) • DB2MQ1C.*, DB2MQ2C.* • DB2MQ1N.*, DB2MQ2N.* • CHARSET application programming default value (KATAKANA) – use CCSIDs • BIND PACKAGE options ENABLE and DISABLE (REMOTE) REMOTE (location-name,...,<luname>,...) -- specific names cannot be specified • Sysplex Query Parallelism – Single member parallelism is still supported • DSN1CHKR – There are no longer any links in the Catalog or Directory
  • 19. 19 APPLCOMPAT – Application Compatibility • Requirements – De-couple the need for application program changes to deal with incompatible SQL DML and XML changes from the actual DB2 system migration to the new DB2 release which introduced the incompatible SQL DML and XML changes – Provide a mechanism to identify application programs affected by incompatible SQL DML and XML changes – Provide a mechanism to introduce changes at an individual application program (package) level • Enable support so that application program changes can be phased in over much longer time • Enable support for mixed DB2 release co-existence in data sharing • Enable support for up to two back level releases of DB2 (N-2) • Solution – APPLCOMPAT which separates DB2 system migration to the new DB2 release from application program migration to deal with incompatible SQL DML and XML introduced by the new release
  • 20. 20 APPLCOMPAT – Application Compatibility ... • APPLCOMPAT zparm provides default for BIND/REBIND – V10R1 for DB2 10 SQL DML behaviour – V11R1 for DB2 11 SQL DML behaviour – Default is V11R1 for new installs, V10R1 for migration • APPLCOMPAT option on BIND/REBIND to override zparm default • CURRENT APPLICATION COMPATIBILITY special register and DSN_PROFILE_ATTRIBUTES for DDF – For dynamic SQL • Does not address issues with new reserved words or other incompatibilities that could only be resolved by having multiple levels of the DB2 parser • BIF_COMPATIBILITY zparm is independent of APPLCOMPAT • New SQL functionality available in V11 NFM cannot be used until the package is bound with an APPLCOMPAT value of V11R1
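As a rough sketch of how this can be applied (the collection and package names here are illustrative, not from the presentation), the zparm default can be overridden per package at REBIND, and dynamic SQL can set the special register:
    REBIND PACKAGE(COLLA.PGM1) APPLCOMPAT(V10R1) APREUSE(WARN)
    SET CURRENT APPLICATION COMPATIBILITY = 'V11R1';
The REBIND keeps that package on DB2 10 SQL DML behaviour while the system runs DB2 11; the SET statement raises the behaviour for subsequent dynamic SQL on that connection once the package allows it.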
  • 21. 21 APPLCOMPAT – Application Compatibility ... • Migration automatically sets V10R1 prior to NFM … otherwise – DSNT225I -DSN BIND ERROR FOR PACKAGE location.collid.member APPLCOMPAT(V11R1) OPTION IS NOT SUPPORTED – IFCID376 – Summary of V10 function usage – IFCID366 – Detail of V10 function usage, identifies packages – We expect changes necessary to avoid V10R1 usage to happen after reaching NFM • Workaround to distinguish packages which absolutely have to run as V10R1 until they are corrected – Annotate the package using SQL COMMENT ON PACKAGE collid.name.”version” IS ‘V10R1’ • If version is a pre-compiler timestamp then the double quotes are necessary – Stored in the REMARKS column in SYSIBM.SYSPACKAGE table • Can be queried and exploited by housekeeping
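A possible housekeeping sketch based on the workaround above (collection and package names are illustrative):
    COMMENT ON PACKAGE COLLA.PGM1 IS 'V10R1';
    SELECT COLLID, NAME, VERSION, REMARKS
      FROM SYSIBM.SYSPACKAGE
     WHERE REMARKS = 'V10R1';
The query lists the packages still annotated as requiring V10R1 behaviour so they can be tracked until they are corrected and rebound with APPLCOMPAT(V11R1).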
  • 22. 22 APPLCOMPAT vs. BIF_COMPATIBILITY • BIF_COMPATIBILITY=V9|V9_DECIMAL_VARCHAR is still honored in all modes of V11 – The ‘undocumented’ timestamp support is back again with APPLCOMPAT(V11R1) e.g., • EUR date format concatenated to the TIME (and microseconds)
  • 23. 23 Migration Overview DB2 10 -> DB2 11 (flow diagram)
– DB2 10 New Function Mode (NFM) with SPE: DB2 10 catalog, DB2 10 libraries
– DSNTIJTC (CATMAINT UPDATE) -> DB2 11 Conversion Mode (CM): DB2 11 libraries, data sharing coexistence, typically 1-2 months; use APPLCOMPAT(V10R1) here
– DSNTIJEN (CATENFM START) -> DB2 11 Enabling New Function Mode (ENFM): DB2 11 catalog, typically 1 week
– DSNTIJNF (CATENFM COMPLETE) -> DB2 11 New Function Mode (NFM): takes minutes; use APPLCOMPAT(V10R1) or APPLCOMPAT(V11R1) here
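One way to confirm which stage a member or group is in before and after each job is the DISPLAY GROUP command; its DSN7100I output includes the catalog level and the current mode (CM, CM*, ENFM, ENFM*, NFM):
    -DISPLAY GROUP DETAIL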
  • 24. 24 Migration and Fallback Paths
– With DB2 11, you can always drop back to the previous stage
– Cannot fall back to DB2 10 after entry to DB2 11 (ENFM), but can return to DB2 11 (CM*)
– (Diagram) Stages: DB2 10 NFM -> DB2 11 CM -> DB2 11 ENFM -> DB2 11 NFM, with return states DB2 11 CM* and DB2 11 ENFM* ("From here, you can only return to ENFM"; "From here, you can go to NFM or ENFM*")
– Jobs: 1. DSNTIJTC, 2. DSNTIJEN, 3. DSNTIJNF, 4. DSNTIJCS, 5. DSNTIJES
  • 25. 25 Preparing your current DB2 10 NFM for Migration to DB2 11 CM • Apply the Fallback SPE APAR, PM31841 and any prerequisite fixes – Your DB2 10 system MUST be at the proper service level – See Info APARs II14660 • Non-Data Sharing – Current DB2 10 must be started with the SPE applied, or migration to DB2 11 will terminate • Data Sharing – Before migrating a member to DB2 11, all other started DB2 10 members must have the fallback SPE applied – The fallback SPE must be on all active DB2 10 group members for DB2 11 to start Important – Apply SPE to ALL Data Sharing Members Before Starting Migration!
  • 26. 26 Other recommendations • Run Online REORGs against Catalog and Directory objects prior to the ENFM/NFM migration – Check that REORG can break in – Check data consistency of Catalog and Directory – Improve the performance of the ENFM process • CATMAINT and ENFM will not execute if entries found in SYSUTILX – DB2 will no longer blindly re-initialize it
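To check for outstanding utilities (entries in SYSUTILX) before running CATMAINT or the ENFM job, the usual commands apply; the utility-id below is a placeholder:
    -DISPLAY UTILITY(*)
    -TERM UTILITY(utility-id)
Restart or terminate any stopped utilities first, since CATMAINT and ENFM will no longer blindly re-initialize SYSUTILX.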
  • 28. 28 BIND/REBIND/DDL/Online REORG breaking into persistent thread running packages bound with RELEASE(DEALLOCATE) • Persistent threads with RELEASE(DEALLOCATE) which were previously blocking these operations – e.g., IMS Pseudo WFI, CICS Protected ENTRY threads, etc • Types of REORG which invalidate packages were also previously blocked – REORG REBALANCE – Materializing REORG • The 'break-in' behavior is ON by default (zparm PKGREL_COMMIT=YES) • Break-in is performed on a “best efforts” basis • Break-in mechanism can handle idling threads at a transaction boundary (i.e., where commit or abort is the last thing performed)
  • 29. 29 BIND/REBIND/DDL/Online REORG breaking into persistent thread running packages bound with RELEASE(DEALLOCATE) … • Several factors come into play for a successful break-in – Persistent thread must COMMIT – The timing of the COMMIT and the frequency of the COMMITs are both key – Increasing the zparm for IRLM resource timeout (IRLMRWT) helps to keep the BIND/REBIND/DDL/Online REORG operation waiting to increase the chances of a successful break-in • The break-in mechanism does not apply when – Running packages bound KEEPDYNAMIC(YES), or – OPEN cursors defined WITH HOLD at the time of COMMIT, or – If the COMMIT happens inside a stored procedure • RELEASE(COMMIT) would also not break-in for the above conditions
  • 30. 30 BIND/REBIND/DDL/Online REORG break in - How does it work 1. BIND/REBIND/DDL/Online REORG is initiated and waits on a package lock – Will timeout after 3x IRLM timeout limit (IRLMRWT) 2. At 1/2 of the IRLM timeout limit, DB2 will get notified by IRLM that someone is stuck on a package lock – If DB2 has an S-holder, DB2 will post a system task to take further action 3. DB2 system task is awakened and checks to see if a ‘recycle’ of locally attached threads has been done in the last 10 seconds – If not, the break-in operation will proceed – DB2 is trying to avoid a battering of the system via BIND/REBIND/DDL/Online REORG 4. Send broadcast to all DB2 members to perform a recycle of locally attached threads 5. If task proceeds, it will loop through all locally attached threads (not DIST!) and see if they were last committed/aborted in > 1/2 of the IRLM timeout limit – If so, the BIND/REBIND/DDL/Online REORG is likely waiting on them 6. The next test is to see if DB2 can do anything about it? – Each thread must be at a transaction boundary (i.e., commit or abort is the last thing) – If so, DB2 can process the thread
  • 31. 31 BIND/REBIND/DDL/Online REORG break in - How does it work … 7. DB2 will fence the API for the thread, grab the agent structure and drive a ‘dummy COMMIT’ – The commit is transactionally OK since we are at a transaction boundary – DB2 will be the coordinator as this is single-phase commit and get out – On the COMMIT, RDS sees that there is a waiter for a package lock held by this agent and will switch to RELEASE(COMMIT) for this commit cycle – The lock is freed and DB2 is one step closer to the BIND/REBIND/DDL/Online REORG breaking in 8. Repeat for all qualifying threads 9. BIND/REBIND/DDL/Online REORG should break-in provided there are no blockers that had to be excluded e.g., long running read only application process without a commit 10. If the application starts using the thread during the recycle processing, it will be blocked at the API level – DB2 will spin the thread in a timed wait loop until the recycle is done – DB2 will wait a millisecond approximately between polls – DB2 has also taken care to fence end-of-task (cancel application TCB), end-of-memory (force down the home ASID during recycle), associate, dissociate, etc
  • 32. BIND Break-in – Simple customer test (timeline diagram)
– One CICS ENTRY thread (T1) with thread reuse, one transaction, one BIND
– The BIND waits approximately 30 sec before it breaks into an idle thread
– 30 sec is half the transaction timeout interval
  • 33. BIND break-in – additional customer testing (Action / Threads / Result)
– BIND / Batch, no commit / No break-in
– BIND / Batch, frequent commit / Break-in
– BIND / 50 * CICS ENTRY / Break-in
– DDL: Create Index / CICS ENTRY / Break-in
– Drop Index / CICS ENTRY / Break-in
– Alter Table Add Column / CICS ENTRY / Break-in
– Alter Index (NO)CLUSTER / CICS ENTRY / Break-in
– Alter Tablespace to UTS / CICS ENTRY / Break-in
– Alter Partition / CICS ENTRY / Break-in
  • 34. 34 ALTER LIMITKEY enhancement • Behavior is different depending on how the table partitioning is controlled • With table-controlled table partitioning, this is a pending alteration – Dropping of these alters can occur at any time • With index-controlled table partitioning – If alter is done via ALTER INDEX ALTER PARTITION • Partition goes into ‘hard’ reorg pending (REORP)! • Tablespace remains index-controlled • Alter cannot be withdrawn! – If the alter is done by ALTER TABLE ALTER PARTITION • If the partition is not empty – Partition goes into ‘hard’ reorg pending (REORP)! – Tablespace is converted to table-controlled partitioning! – Alter cannot be withdrawn • If the partition is empty – Alter is executed immediately – Tablespace is converted to table-controlled partitioning
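A minimal sketch of the table-controlled case (table, table space, partition number and limit key value are illustrative): the ALTER becomes a pending change and a subsequent REORG of the affected partition range materializes it:
    ALTER TABLE PRODTB.ORDERS ALTER PARTITION 3 ENDING AT ('2013-12-31');
    REORG TABLESPACE PRODDB.ORDERTS PART 3:4 SHRLEVEL CHANGE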
  • 35. 35 ALTER LIMITKEY enhancement … • Two new zparms introduced – PREVENT_ALTERTB_LIMITKEY • ALTER TABLE ALTER PARTITION leads to SQLCODE -876 • ALTER INDEX ALTER PARTITION is still possible – do not use it because of REORP – PREVENT_NEW_IXCTRL_PART • Can no longer create new index-controlled partitioned tablespaces • Materializing REORG can now break-in to a persistent thread running RELEASE(DEALLOCATE) package • REORG REBALANCE – Not possible for partitions with pending ALTER LIMITKEY changes – Will work for the other partitions – Will work for partitions which are ‘hard’ reorg pending (REORP)
  • 36. 36 DROP COLUMN • Works well – Can convert to UTS and concurrently DROP COLUMN – Materializing REORG can be run at the partition level – if all partitions are covered – All packages touching the table will be invalidated • Restrictions – Cannot use DROP COLUMN in classic tablespace type (SQLCODE -650) – Cannot drop a column contained in an index or view (SQLCODE -478) – Cannot add a dropped column before the materializing REORG (SQLCODE -20385) – Cannot create a view with a dropped column (SQLCODE -20385) – Cannot drop the same column a second time before the materializing REORG (SQLCODE -205) – Cannot unload from an image copy taken before the materializing REORG (DSNU1227I) – Cannot recover to a PIT before the materializing REORG (DSNU556I)
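A hedged example of the pending alteration and the REORG that materializes it (object names are illustrative; the table space must be a UTS):
    ALTER TABLE PRODTB.CUSTOMER DROP COLUMN OBSOLETE_FLAG RESTRICT;
    REORG TABLESPACE PRODDB.CUSTTS SHRLEVEL CHANGE
Remember that image copies taken before the materializing REORG can no longer be used for UNLOAD or point-in-time recovery of this object.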
  • 38. 38 REORG Enhancements • SWITCHTIME option avoids the need for multiple jobs to control the start of the drain • Part level COPY when reorganizing a subset of partitions – Tape support added, but no support yet for STACK YES – Changes required to existing jobs • REORG SORTDATA NO RECLUSTER YES|NO – RECLUSTER NO will bypass sort (and speed up conversion to extended format) • Good idea, but only saves time, if the data is actually already in sequenced order! • Do not use on huge tables which are not already clustered as will run for a long time – RECLUSTER NO is enforced for SHRLEVEL CHANGE with SORTDATA NO • Specify SORTDATA to get reclustering – DSNU2904I DATA RECORDS WILL BE UNLOADED VIA unload-method • CLUSTERING INDEX • TABLE SCAN • TABLE SPACE SCAN
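Two sample control statements under assumed object names and timestamps (check the exact SWITCHTIME format against the Utility Guide):
    REORG TABLESPACE PRODDB.ORDERTS SHRLEVEL CHANGE SWITCHTIME 2014-06-01-02.00.00
    REORG TABLESPACE PRODDB.HISTTS SHRLEVEL CHANGE SORTDATA NO RECLUSTER NO
The first defers the start of the drain to the specified time without a separate controlling job; the second bypasses the data sort, which is only appropriate when the data is already well clustered (for example, a conversion-only REORG).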
  • 39. 39 REORG Enhancements … • Automated building of the mapping table with new 10 byte LRBA/LRSN – V11 CM behavior • An existing mapping table in V10 or V11 format will be reused • If mapping table does not exist, mapping table will be automatically temporarily created – V11 NFM behavior • Existing mapping table if in V11 format will be reused • If mapping table exists but in V10 format, a new mapping table will be automatically created in the same database as the original mapping table • If mapping table does not exist, mapping table will be automatically created in database specified by zparm, or in declared database or in DSNDB04 – Recommendations • Predefine and keep mapping tables around for regularly scheduled REORG jobs to avoid SQL DDL contention on the Catalog • Use single specific database as specified by zparm for all mapping tables • Modify schema of existing mapping tables to V11 format as part of migration process to NFM i.e., ALTER TABLE TBMAP ALTER COLUMN LRSN SET DATA TYPE CHAR(10) NOT NULL; • Wait for APAR PI08339 if you want automated building of mapping tables
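A sketch of using a predefined mapping table in a regularly scheduled job, assuming the table DB2ADM.TBMAP (an illustrative name) has already been created or altered to the 10-byte LRBA/LRSN format:
    REORG TABLESPACE PRODDB.ORDERTS SHRLEVEL CHANGE MAPPINGTABLE DB2ADM.TBMAP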
  • 40. 40 REORG Enhancements … • Use of DRAIN_ALLPARTS YES option (not default) has the potential to significantly reduce the ‘outage’ – Avoid deadlocks between drains and claims across NPIs and partitions when reorganizing subset of partitions – Solution is to momentarily drain all partitions being reorganized – More likely to be successful in getting successful DRAIN to make the SWITCH – Big reductions seen in the elapsed time to complete DRAIN and SWITCH – REORGs should run with less problems using this feature • REORG message output - DSNU1138I provides drain begin / end information • PARALLEL (maximum number of subtasks) option to control the number of subtasks • Be aware of changed defaults e.g., NOPAD YES for REORG DISCARD • LOGRANGES NO option should only be used when SYSLGRNX is known to be logically corrupted and/or has to be reset
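For example (object names and partition range are illustrative):
    REORG TABLESPACE PRODDB.ORDERTS PART 10:12 SHRLEVEL CHANGE DRAIN_ALLPARTS YES
With DRAIN_ALLPARTS YES all partitions of the table space are drained momentarily for the SWITCH phase, avoiding the drain/claim deadlocks across NPIs that can occur when only a subset of partitions is reorganized.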
  • 41. 41 REORG Enhancements … • REORG REBALANCE – Now supports SHRLEVEL(CHANGE) – big step forward for 24*7 – Can now deal with partitions that were empty (or did not contain enough data for a compression dictionary to be built during the UNLOAD phase) before the REORG • Will now build a single compression dictionary that will get applied to all target partitions • There is no longer a need for subsequent REORG to gain compression – Can now break-in on persistent threads running RELEASE(DEALLOCATE) packages • Partition pruning for UTS PBG tablespaces – Option to physically remove and contract the number of UTS PBG partitions – Only performed when zparm REORG_DROP_PBG_PARTS=ENABLE – Disabled by default – There is no support for PIT recovery to a point in time prior to SWITCH phase for a pruned tablespace
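A sketch with illustrative names; REBALANCE with SHRLEVEL CHANGE is new in V11:
    REORG TABLESPACE PRODDB.ORDERTS PART 1:20 REBALANCE SHRLEVEL CHANGE
For UTS PBG partition pruning no extra keyword is needed on the REORG statement; physical removal of emptied partitions only happens when zparm REORG_DROP_PBG_PARTS=ENABLE is set.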
  • 42. 42 RUNSTATS and RTS Enhancements • Inline Statistics are rough estimates and should not be compared against a separate RUNSTATS • Now possible to avoid DSNU602I STATISTICS ARE NOT COLLECTED FOR NONPARTITIONED INDEX on REORG PART operation – When SORTNPSI option on REORG job or REORG_PART_SORT_NPSI zparm set to AUTO or YES, and – REORG sorted all of the non-partitioned index keys because the amount of data that was being reorganized relative to the size of objects exceeded internal thresholds • New RESET ACCESSPATH option – Reset missing and/or conflicting access path statistics in the Catalog – Does not affect space statistics in the Catalog or RTS • Avoid DSNU1363I THE STATS PROFILE FOR TABLE table-name NOT FOUND – Will use fixed defaults • No support for USE PROFILE with inline statistics in REORG and LOAD • Can externalize RTS in-memory blocks via the following command: -ACCESS DATABASE (DB) SP(TS) MODE(STATS)
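Sample statements with illustrative object names (verify the RESET ACCESSPATH syntax against the V11 Utility Guide):
    RUNSTATS TABLESPACE PRODDB.ORDERTS RESET ACCESSPATH
    -ACCESS DATABASE(PRODDB) SPACENAM(ORDERTS) MODE(STATS)
The first resets missing or conflicting access path statistics for the objects in the table space; the second externalizes the in-memory real-time statistics blocks to the RTS tables on demand.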
  • 43. 43 RECOVER enhancements • Fast Log Apply (FLA) now implemented for RECOVER INDEX – Previously DB2 would wait until a log record was to be applied before reading the associated index page into the local bufferpool where it would then be cached – Now DB2 will use list prefetch to read all the index pages that are needed to apply log records for, before applying any log record – Potential for significant savings in elapsed time – Should now reconsider decision: run RECOVER INDEX in parallel with RECOVER TABLESPACE [PART] vs. wait for RECOVER TABLESPACE [PARTs] to complete and then run REBUILD INDEX – Enhancement taken back to V9 and V10 via APAR PI07694 • Optimization to point-in-time RECOVER list of objects – Recover objects only when necessary when performing PIT recovery when TOLOGPOINT or TORBA are specified – It does not apply to log only recoveries, RECOVER BACKOUT, and recovers to current – DIAGNOSE TYPE(607) is required to activate this behavior
  • 45. 45 Performance Enhancements - no REBIND needed (CM) • DDF performance improvements – Reduced SRB scheduling on TCP/IP receive using new CommServer capabilities – Improved autocommit OLTP performance • xProcs above the bar • zIIP enablement for all SRB-mode DB2 system agents that are not response time critical • Avoid cross-memory overhead for writing log records • Data decompression performance improvement • INSERT performance – Latch contention reduction – CPU reduction for Insert column processing and log record creation – Data sharing LRSN spin avoidance – Page fix/free avoidance in GBP write
  • 46. 46 Performance Enhancements - no REBIND needed (CM) ... • Sort performance improvements • DPSI performance improvements for merge • Performance improvements with large number of partitions • XML performance improvements • Optimize RELEASE(DEALLOCATE) execution so that it is consistently better performing than RELEASE(COMMIT) • IFI 306 filtering capabilities to improve QReplication capture performance • Utilities performance improvements • Automatic index pseudo delete clean-up • ODBC/JDBC Type 2 performance improvements • Java stored procedures – Multi threaded JVMs, 64-bit JVM – more efficient
  • 47. 47 Performance Enhancements – no REBIND needed (CM) ... • ACCESS DATABASE command performance • DGTT performance improvement – Avoid incremental binds for reduced CPU overhead • P-procs for LIKE predicates against Unicode tables • Improved performance for ROLLBACK TO SAVEPOINT • zEC12 exploitation • Latch contention reduction and other high n-way scalability improvements • Data sharing performance improvements
  • 48. 48 Performance Enhancements requiring REBIND (CM with or without APREUSE) • Most In-memory techniques • Non correlated subquery with mismatched length • Select list do-once • Column processing improvements • RID overflow to workfile handled for Data Manager set functions • Performance improvements for common operators • DECFLOAT data type performance improvements
  • 49. 49 Performance Enhancements requiring REBIND (CM without APREUSE) • Query transformation improvements – less expertise required to write performant SQL • Enhanced duplicate removal • DPSI and page range performance improvements • Optimizer CPU and I/O cost balancing improvements
  • 50. 50 Performance Enhancements - DBA or application effort required (NFM) • Suppress-null indexes • New PCTFREE FOR UPDATE attribute to reduce indirect references • DGTT performance improvements • Global variables • Optimizer externalization of missing/conflicting statistics • Extended optimization - selectivity overrides (filter factor hints) • Open data set limit raised to 200K
  • 51. 51 Optional Enhancements need NFM + DBA effort • DSNTIJCB – Optional – Convert BSDS for extended 10-byte RBAs – -STOP DB2 MODE(QUIESCE) • DSNTIJCV – Optional – Convert Catalog and Directory table and index spaces to extended 10-byte RBA format – Reorgs all Catalog and Directory table spaces SHRLEVEL CHANGE – Can be split up to run reorgs in parallel
  • 52. DB2 Lab Measurement Summary (chart covering Query, Batch, OLTP, and XML workloads)
  • 53. Example of Customer Performance Testing • DB2 10 NFM baseline • DB2 11 CM before REBIND • DB2 11 CM after REBIND • DB2 11 NFM (no need for further REBIND) • DB2 11 NFM after REORG (to migrate object to extended LRSN) • DB2 11 NFM Extended LRSN 53
  • 54. Example of Customer Performance Testing ... • Make sure that the CPU numbers are normalized across those intervals i.e., use CPU milliseconds per commit • Easy to combine statistics and accounting by stacking the various components of CPU resource consumption: – MSTR TCB / (commits + rollbacks) – MSTR SRB / (commits + rollbacks) – MSTR IIP SRB / (commits + rollbacks) – DBM1 TCB / (commits + rollbacks) – DBM1 SRB / (commits + rollbacks) – DBM1 IIP SRB / (commits + rollbacks) – IRLM TCB / (commits + rollbacks) – IRLM SRB / (commits + rollbacks) – Average Class 2 CP CPU * occurrences / (commits + rollbacks) – Average Class 2 SE CPU * occurrences / (commits + rollbacks) 54
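As a purely hypothetical illustration of the stacking: if DBM1 TCB plus SRB CPU is 120 CPU seconds over an interval with 400,000 commits and 2,000 rollbacks, that component contributes 120,000 ms / 402,000 = roughly 0.3 ms per commit, which is then stacked with the other address space and class 2 components computed the same way.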
  • 55. CICS Test Transaction Profile Avg. DML •3 Insert •7 Select •5 Open •103 Fetch Avg. Buffer pool •65 Getpages •13 Sync Read CPU consumption •Class 1: 3.2 msec •Class 2: 2.3 msec With CP-Speed 802 MIPS Fetch Intensive • Transaction types – E-Bank logon – Balance check – Financial Statement history – Account search 55
  • 56. CICS transaction CPU time (chart: CL2 CPU, DB2 address space GCP and zIIP CPU in msec per transaction, measured at 10NFM, 11CM, 11RBND, 11NFM, 11Reo and 11 EL; annotations: 5%, activity moved to zIIP in CM)
  • 57. DB2 system address space CPU per CICS transaction (chart: MSTR, DBM1 and IRLM TCB/SRB/zIIP CPU in msec, measured at 10NFM, 11CM, 11RBND, 11NFM, 11Reo and 11 EL; annotation: activity moved to zIIP in CM)
  • 58. Test Batch job profile DML • Commits: 4528 • Delete: 37613 • Rows: 482836 • Update: 45099 • Select: 119773 • Insert: 525884 • Fetch: 548947 Buffer pool • Getpage: 4.8 M • Sync Read: 133 K • Dyn Pref: 57 K CPU • Class 1: 69 Sec • Class 2: 67 Sec With CP-Speed 802 MIPS Elapsed: • Class 1: 07:02 min Insert and Delete Intensive 58
  • 59. Batch test CPU time (chart: CL2 CPU, DB2 address space GCP and zIIP CPU in seconds per batch job, measured at 10NFM, 11CM, 11RBND, 11NFM, 11Reo and 11 EL; annotations: LRSN spin loop eliminated in V11 EL, no credit to V11 for the 11Reo improvement)
  • 60. DB2 system address space CPU for Batch (chart: MSTR, DBM1 and IRLM TCB/SRB/zIIP CPU in seconds, measured at 10NFM, 11CM, 11RBND, 11NFM, 11Reo and 11 EL; annotations: activity moved to zIIP in CM, LRSN spin loop eliminated in 11 EL reduces MSTR SRB)
  • 61. TPC-H using Static SQLPL • 10% out-of-box improvement with DB2 11 when rebinding with APREUSE • 34% improvement in DB2 11 when rebinding to obtain DB2 11 access path (chart data points: -1.4%, -10%, -34%)
  • 62. 62 Automatic Pseudo Deleted Index Entry Clean-up • Recap on impact of pseudo deleted index entries – Index size grows with increasing number of pseudo-deleted index entries • More getpages and lock requests required • Increased CPU cost and possibly longer elapsed times for access via index search – Applications may encounter deadlocks and timeouts during INSERT/UPDATE/DELETE • Collisions with committed pseudo-deleted index entries • RID reuse by INSERT following DELETE => deadlock • Prior to V11, how are they cleaned up – DB2 removes pseudo-deleted entries during mainline operations • Insert / delete operations remove pseudo-deleted entries from index pages • SQL running with isolation level RR removes pseudo-deleted entries – Pages that only contain pseudo-deleted index entries are called pseudo-empty • DB2 attempts to clean up pseudo-empty index pages as part of DELETE processing – REORG INDEX removes pseudo-empty index pages and pseudo-deleted entries that were not cleaned up by the mainline processing
  • 63. 63 Automatic Pseudo Deleted Index Entry Clean-up … • Autonomic solution provided in CM and turned on automatically for all indexes 24*7 – Automatic clean-up of pseudo-deleted index entries in index leaf pages – Automatic clean-up of pseudo-empty index pages – Designed to have minimal or no disruption to concurrent DB2 work – Clean-up is done under system tasks, which run as enclave SRBs and are zIIP eligible – Parent thread (one per DB2 member) loops through RTS to find candidate indexes – Child clean-up threads only clean up an index if it already is opened for INSERT, UPDATE or DELETE on the DB2 member • Avoid creating GBP dependency on indexes • Potential disruption can be minimized by managing down the number of clean-up threads or specifying time when indexes are subject to clean-up – Can control the number of concurrent clean-up threads or disable the function using zparm INDEX_CLEANUP_THREADS • 0=Disable, 1-128, 10 is default – Entries in new Catalog table SYSIBM.SYSINDEXCLEANUP • Define when / which objects are to be considered in a generic way
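As a simple check (the table is delivered in the V11 catalog; rows are only needed if you want to restrict when or for which objects clean-up runs, and the exact column layout should be taken from the V11 documentation):
    SELECT * FROM SYSIBM.SYSINDEXCLEANUP;
An empty table means clean-up runs with the default behaviour, governed only by the INDEX_CLEANUP_THREADS zparm.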
  • 64. Automatic Pseudo Deleted Index Entry Clean-up • Up to 39% DB2 CPU reduction per transaction in DB2 11 compared to DB2 10 • Up to 93% reduction in pseudo-deleted entries in DB2 11 • Consistent performance and less need for REORG in DB2 11 • [Chart: WAS Portal workload over 5 days (Day1-Day5) plotting CPU time (sec) and number of pseudo-deleted entries; series are V10 total CPU time, V11 total CPU time, V10 sum of REORGPSEUDODELETES and V11 sum of REORGPSEUDODELETES.] 64
  • 65. 65 Performance Enhancements • Q Replication log filtering – Reduce IFI log read cost by qualifying the objects with DBID/PSID – Additional benefit if objects are compressed – Move filtering from the Q Replication capture task to the DB2 engine – Potential for very significant reduction in the number of log records replicated – Requires IBM InfoSphere Data Replication Q Replication or Change Data Capture 10.2.1 • Archive Transparency – Very useful new feature – Need to carefully examine the additional cost of the ORDER BY sort when accessing the archive – When the application only fetches a limited number of rows from the result set, the cost can increase significantly when also accessing the archive – Will typically be used by customers selectively on a case by case basis
  • 66. 66 Performance Enhancements … • Optimizer enhancements – Improved performance for legacy application programs – Better chance of achieving matching index scan – No need to rewrite SQL to get most of the improvements – Still important to choose the right data type and avoid implicit casting – Still very important to run RUNSTATS • GROUP BY grouping sets – Important feature for data analysis: CUBE, ROLLUP – All processing is performed in a single pass over the table – But there are some performance differences relative to the old GROUP BY with the same result set • SELECT C1, COUNT(*) FROM T1 GROUP BY C1 – No sort performed if the access path uses an index with leading column C1 • SELECT C1, COUNT(*) FROM T1 GROUP BY GROUPING SETS ((C1)) – A sort is always performed
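  A small illustration of the grouping-sets syntax discussed above (the table and column names are made up):
    SELECT REGION, PRODUCT, SUM(SALES)
      FROM SALES_T
     GROUP BY GROUPING SETS ((REGION, PRODUCT), (REGION), ());
    -- a single pass over SALES_T produces per-(region, product) totals, per-region totals and a grand total,
    -- the same result set as GROUP BY ROLLUP(REGION, PRODUCT)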
  • 67. 67 Extended LRBA/LRSN • What you need to know for DB2 11 CM and DB2 10 NFM? – 6 byte format LRBA/LRSN before DB2 11, e.g. x'LLLLLLLLLLLL' – 10 byte extended format LRBA/LRSN has an addressing capacity of 1 yottabyte (2**80) • 10 byte extended format LRSN with DB2 11: x'00LLLLLLLLLLLL000000' • 10 byte extended format LRBA with DB2 11: x'00000000RRRRRRRRRRRR' – Where do we find LRBA/LRSN? • DB2 Catalog: SYSCOPY, SYSxxxxPART, … • DB2 Directory: SYSUTILX, SYSLGRNX, … • BSDS: pointers, Active & Archive Log values, … • DB2 Logs: active & archive logs • DB2 Pagesets: Catalog & Directory and all user pagesets
  • 68. 68 Extended LRBA/LRSN … • What you need to know for DB2 11 CM and DB2 10 NFM? … – DB2 11 CM • DB2 internal code deals with 10 byte extended format LRBA/LRSN values only • LRSN in utility output is shown in 10 byte extended format with precision '000000', except for the QUIESCE utility, which externalizes the LRSN in 10 byte extended format with precision 'nnnnnn' • The RECOVER utility handles 10 byte extended format LRBA/LRSN input • Column RBA_FORMAT in SYSIBM.SYSxxxPART is set to 'B' for newly defined objects and for objects that are reorganized or loaded with the REPLACE option (possible values: 'B', blank, 'U', 'E') – DB2 11 CM / DB2 10 NFM coexistence in data sharing • Full toleration of 10 byte extended format LRBA/LRSN values as input to the RECOVER utility • Sanity checks included to catch wrongly used 6 byte format LRBA/LRSN values
  • 69. 69 Extended LRBA/LRSN … • What you need to know for DB2 11 NFM? – Migration to DB2 11 NFM (via DSNTIJEN) • Catalog & Directory table LRBA/LRSN columns are altered to 10 byte extended format • SYSIBM.SYSLGRNX entries are now stored as 10 byte extended format LRBA/LRSN values • SYSIBM.SYSCOPY – All existing LRBA/LRSN values are converted to 10 byte extended format: LRSN values get a leading byte of '00' and precision '000000'; LRBA values are right-justified with leading '00000000' – New data is stored in 10 byte extended format with precision 'nnnnnn' • All utilities now use 10 byte extended format LRBA/LRSN • LRBA/LRSN values are still written to the DB2 logs in 6 byte format • LRBA/LRSN values are still written to DB2 pagesets in 6 byte format
  • 70. 70 Extended LRBA/LRSN … • What you need to know for DB2 11 NFM? … – BSDS converted to 10 byte extended format LRBA/LRSN in NFM only (DSNJCNVT) • There is no way back for the BSDS! • LRBA/LRSN values are now written to the DB2 logs of that DB2 member in 10 byte extended format with precision 'nnnnnn' • LRBA/LRSN values are still written to DB2 pagesets in 6 byte format – Conversion (10 byte to 6 byte, or 6 byte to 10 byte) has to be done – LRSN spin can still happen – DSN1LOGP and REPORT RECOVERY output will show 10 byte extended format LRBA/LRSN although it is never externalized to pagesets (output differs for DSN1PRNT of pagesets) • Can be done whenever you want after entry to V11 NFM, regardless of pageset formats
  • 71. 71 Extended LRBA/LRSN … • What you need to know for DB2 11 NFM? … – Reorg Catalog and Directory pagesets to extended format (in NFM only!) • Can be done whenever you want, regardless of BSDS and user pageset formats • LRBA/LRSN values are now written to converted pagesets in 10 byte extended format – LRSN with precision 'nnnnnn' if the update is done on a DB2 member with a 10 byte extended format BSDS – LRSN with precision '000000' if the update is done on a member with a 6 byte format BSDS • Column RBA_FORMAT in SYSIBM.SYSxxxPART is updated to 'E' • LRSN spin could still happen on a DB2 member with a 6 byte format BSDS • Can be converted back to 6 byte format (all or at part level)
  • 72. 72 Extended LRBA/LRSN … • What you need to know for DB2 11 NFM? … – Reorg user pagesets to extended format (in NFM only!) • Can be done whenever you want, regardless of BSDS, Catalog & Directory pageset formats • LRBA/LRSN values are now written to converted pagesets in 10 byte extended format – LRSN with precision 'nnnnnn' if the update is done on a member with a 10 byte extended format BSDS – LRSN with precision '000000' if the update is done on a member with a 6 byte format BSDS • Column RBA_FORMAT in SYSIBM.SYSxxxPART is set to 'E' • LRSN spin could still happen for a DB2 member with a 6 byte format BSDS • Can be converted back to 6 byte format (all or at part level) • Conversion is done by REORG, LOAD … REPLACE or REBUILD with RBALRSN_CONVERSION EXTENDED, or by default if the zparm UTILITY_OBJECT_CONVERSION is set to EXTENDED (see the sketch below) • RECOVER … TOCOPY … using a 6 byte format copy can reset the format back to basic
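  A hedged example of converting one table space to the extended format via online REORG; the database and table space names are made up and keyword placement should be checked against the REORG TABLESPACE syntax:
    REORG TABLESPACE MYDB.MYTS SHRLEVEL CHANGE RBALRSN_CONVERSION EXTENDED
  With the conversion zparm set to EXTENDED, the same REORG converts the object without the explicit RBALRSN_CONVERSION keyword.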
  • 73. 73 Extended LRBA/LRSN … • Enhancements to improve usability characteristics of 6 byte/10 byte format LRBA/LRSN handling – Prevent DSNJCNVT from converting a DB2 10 NFM BSDS to extended format – Support 10 byte extended format input to RECOVER in DB2 10 – Perform sanity checks to guard against invalid LRSN values, i.e. 6 byte LRSN values with a leading byte of zeros, so that PIT recoveries with a bad RBA/LRSN stop with RC=8 in the UTILINIT phase instead of failing later – Sanity check also performed in DB2 10 (coexistence) – Support for the 'NOBASIC' value of the UTILITY_OBJECT_CONVERSION zparm to prevent pagesets in extended format from being converted back, with 'EXTENDED' as the effective default if 'NOBASIC' is set and the catalog RBA_FORMAT column is not 'E' – Add LRSN values to the archive log information in the REPORT RECOVERY utility output – A technical white paper being produced explains 6 byte/10 byte LRBA/LRSN handling – Several enhancements to the DB2 11 books
  • 74. 74 Extended LRBA/LRSN … • Recommended best practice migration strategy 1. Run pre-migration jobs and clean-up steps 2. Migrate to DB2 11 CM 3. Migrate to DB2 11 NFM 4. Convert ALL BSDSs of the data sharing group within 'n' weekends 5. Reorg ALL Directory & Catalog pagesets to extended LRBA/LRSN format 6. Set the OBJECT_CREATE_FORMAT and UTILITY_OBJECT_CONVERSION zparms to EXTENDED - New objects will be created in 10 byte extended format - REORG, LOAD REPLACE and REBUILD will convert user objects to extended format without any need to change utility control statements 7. Reorg all objects to extended LRBA/LRSN format by executing the normal reorg jobs plus some additional jobs • Check progress regularly by selecting rows where RBA_FORMAT = 'E' in SYSIBM.SYSxxxxPART (see the example query below) 8. When all objects are converted, set the UTILITY_OBJECT_CONVERSION zparm to NOBASIC
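  A minimal sketch of the progress check mentioned in step 7, counting table-space partitions by RBA format; the same idea applies to SYSIBM.SYSINDEXPART:
    SELECT RBA_FORMAT, COUNT(*) AS PARTS
      FROM SYSIBM.SYSTABLEPART
     GROUP BY RBA_FORMAT;
    -- 'E' indicates extended format; other values ('B', 'U', blank) indicate objects not yet converted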
  • 75. 75 How to convert 10 byte LRSN to Timestamp • DB2 10 NFM or earlier – use the TIMESTAMP function – LRSN format: 6 byte wherever used, e.g. 'CBE2B5955DCF' – Convert by: SELECT TIMESTAMP(X'CBE2B5955DCF' || X'0000') FROM … • DB2 11 CM – use the TIMESTAMP function – LRSN format: 6 byte in logs, catalog and directory, and data pages, e.g. 'CBE2B5955DCF'; 10 byte in all outputs (except DSN1PRNT), e.g. '00CBE2B5955DCF086C00' – Convert by: SELECT TIMESTAMP(X'CBE2B5955DCF' || X'0000') FROM …
  • 76. 76 How to convert 10 byte LRSN to Timestamp … • DB2 11 NFM – use the TIMESTAMP function – LRSN format: 6 byte for non-converted data pages (DSN1PRNT), e.g. 'CBE2B5955DCF'; 10 byte in Catalog and Directory and in all outputs, e.g. '00CBE2B5955DCF086C00' – Convert by: SELECT TIMESTAMP(X'CBE2B5955DCF' || X'0000') FROM … • A 6 byte LRSN can be used by cut and paste • A 10 byte LRSN can be used if the first 2 digits are cut and digits 3 to 14 are used, but only if the first two digits are '00'; otherwise this conversion is NOT usable! – Or convert by: SELECT TIMESTAMP(BX'CBE2B5955DCF0000') FROM … • A 6 byte LRSN can be used by cut and paste, padded with '0000' on the right • A 10 byte LRSN can be used if the first 2 digits are cut and digits 3 to 18 are used, but only if the first two digits are '00'; otherwise this conversion is NOT usable!
  • 77. 77 How to convert 10 byte LRSN to Timestamp … • DB2 11 NFM – use the new binary hex constant – SELECT TIMESTAMP(BX'00CBE2B5955DCF086C00000000000000') FROM ... • A 6 byte LRSN can be used by cut and paste with '00' in front and padded with '000000000000000000' on the right • A 10 byte LRSN can be used by cut and paste, right padded with '000000000000' – BX'…' can be replaced by BINARY(X'…') or VARBINARY(X'…') – Converts a 10 byte RBA/LRSN to a timestamp: works great, but needs APPLCOMPAT(V11R1)!
  • 78. 78 Other performance recommendations • Make sure HVCOMMON in IEASYSxx can accommodate the log output buffer • Configure additional 1MB LFAREA (z/OS parameter in IEASYSxx) for maximum benefit • LRSN spin avoidance requires both BSDS and object conversion in NFM • Monitor log I/O performance due to the increase in log record size – 3% to 40% increase in log record size observed following BSDS conversion • Essential to make sure enough zIIP capacity is available before V11 CM migration – The zIIP 'help function' IIPHONORPRIORITY should be set to YES in case there is a shortage of zIIP capacity – Continue to monitor zIIP capacity thereafter • Bufferpool re-classification change - prefetched pages will again be reclassified as random after a random getpage – May need to re-evaluate the VPSEQT setting for certain bufferpools • MRU (Most Recently Used) is used for pages brought in by utilities • New FRAMESIZE parameter is independent of the PGFIX parameter (see the sketch below)
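  A hedged example of the new FRAMESIZE option on the ALTER BUFFERPOOL command, requesting 1 MB frames for a page-fixed pool; the buffer pool name is illustrative and 1 MB frames assume a sufficient LFAREA, as noted above:
    -ALTER BUFFERPOOL(BP10) PGFIX(YES) FRAMESIZE(1M)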
  • 79. Customer Value • For many customers value is driven on how sub-capacity workload licensing works – Based on 4-hour rolling average MSU utilisation – Highest rolling average figure for each month used to calculate software charges for all MLC products (IBM and non-IBM) – Provided DB2 forms a significant component of the total MSU usage during peak period, any MSU savings will translate directly to MLC savings – Typically this is the online day - mid morning and mid afternoon – Factor in the impact on overall z/OS software stack cost reduction: z/OS, CICS, MQ 79 79
  • 81. 81 Performance Summary • Opportunity for improved performance for legacy application programs • REBIND of static SQL packages is very important • Good validation of the potential from ESP customers and IBM internal workloads • Your mileage will vary based on your SQL application workload, as certain features only apply to certain workloads • Impressive CPU savings observed for some workloads • Highly optimized static SQL and/or simple SQL may not see much benefit • More benefit for more complex SQL, i.e. not a single-row read by primary key • Do not sell (or buy) the savings before you have seen them for your workload 81
  • 83. 83 Remove package security vulnerabilities • Problem use case scenario – Each main routine has its own plan and both names are the same – All packages are bound into a single collection – Each plan is bound with PKLIST(col.*) – If the EXECUTE privilege is granted on one plan, that authid/user can run any main program • Solution – New BIND PLAN option PROGAUTH supported by a new Catalog table SYSIBM.DSNPROGAUTH – To ensure that a main program M can only be executed with plan P • Insert a row into SYSIBM.DSNPROGAUTH with PROGNAME M, PLANNAME P, ENABLED Y • Bind plan P with PROGAUTH(ENABLE) (see the sketch below)
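  A minimal sketch of the two steps above for a hypothetical program PGM1 and plan PLAN1; the column list follows the slide, and SYSIBM.DSNPROGAUTH has further columns that are assumed here to take their defaults:
    INSERT INTO SYSIBM.DSNPROGAUTH (PROGNAME, PLANNAME, ENABLED)
      VALUES ('PGM1', 'PLAN1', 'Y');
  then rebind the plan, for example: BIND PLAN(PLAN1) PKLIST(MYCOLL.*) PROGAUTH(ENABLE)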
  • 84. 84 Archive transparency • Create an archive table and connect the base table to the archive table – Via ALTER on the base table with the ENABLE ARCHIVE clause – Archive table and base table must have exactly the same columns – No additional columns are allowed, e.g. an archive timestamp • Set the SYSIBMADM.MOVE_TO_ARCHIVE global variable to 'Y' or 'E' – DB2 automatically moves deleted rows to the archive table – If set to 'Y', updates to rows will fail with SQLCODE -20555 – If set to 'E', updates only work for active rows in the base table – Deletes of active rows in the base table will then appear in the archive table • If the SYSIBMADM.MOVE_TO_ARCHIVE global variable is set to 'N' – Deleted rows from the base table are lost – So it is important to check that setting the global variable to 'Y' or 'E' actually worked, as 'N' is the default value
  • 85. 85 Archive transparency … • Must set the SYSIBMADM.GET_ARCHIVE global variable to 'Y' for a query to also search the rows in the archive table – Updates only apply to active rows in the base table • A subsequent query may therefore return a mix of updated and non-updated rows • ARCHIVESENSITIVE (YES|NO) option on package BIND – Only affects reads from the archive table – Deleted rows will only be moved to the archive table if the MOVE_TO_ARCHIVE global variable is set correctly • REORG DISCARD on the base table – Generates a LOAD statement to load the discarded rows into the archive table – The DISCARD dataset can be used as input • Dynamic scrollable cursors are not allowed • The package owner must have the WRITE privilege on the respective global variables
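  A compact sketch of enabling transparent archiving for a hypothetical POLICY table; the table, column and predicate names are illustrative, and the statements follow the behaviour described on the two slides above:
    CREATE TABLE POLICY_ARCH LIKE POLICY;               -- archive table with exactly the same columns
    ALTER TABLE POLICY ENABLE ARCHIVE USE POLICY_ARCH;  -- connect the base table to the archive table
    SET SYSIBMADM.MOVE_TO_ARCHIVE = 'Y';                -- deleted rows are now moved to POLICY_ARCH
    DELETE FROM POLICY WHERE STATUS = 'CLOSED';         -- rows removed from POLICY land in POLICY_ARCH
    SET SYSIBMADM.GET_ARCHIVE = 'Y';                    -- queries (under an ARCHIVESENSITIVE YES package) also search the archive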
  • 87. 87 Summary • Share lessons learned, surprises, pitfalls • Provide hints and tips • Address some myths • Provide additional planning information • Provide usage guidelines and positioning on new enhancements • Help customers migrate as fast as possible, but safely
  • 88. DB2 11 Resources 88 • IBM Information Center / Knowledge Center • DB2 11 Technical Overview Redbook (SG24-8180) • DB2 11 for z/OS Performance Topics (SG24-8222) • DB2 11 links: https://www.ibm.com/software/data/db2/zos/family/db211/ – Links to DB2 11 Announcement Letter, webcasts and customer case studies – Whitepaper: “DB2 11 for z/OS: Unmatched Efficiency for Big Data and Analytics” – Whitepaper: “How DB2 11 for z/OS Can Help Reduce Total Cost of Ownership” • DB2 11 Migration Planning Workshop – http://ibm.co/IIJxw8 • Free eBook available for download – http://ibm.co/160vQgM • “DB2 11 for SAP Mission Critical Solutions” – http://scn.sap.com/docs/DOC-50807
  • 89. Join The World of DB2, Big Data & Analytics on System z 89
  • 90. 90