The Information Management Specialists
DB2 10.1 Basic Database
Administration Workshop
for Linux, Unix and Windows
– CL2X3GB
Iqbal Goralwalla
The Information Management Specialists
Iqbal Goralwalla (iqbal@triton.co.uk)
– About Me
• IBM Gold Consultant
• IBM Champion for Information Management
• Head of DB2 on Midrange (LUW) at Triton Consulting
• Experience of DB2 LUW since DB2 Common Server (V2)
• IBM Certified Advanced Database Administrator
• Worked at the IBM Toronto Software Lab developing
DB2
 Worked on V5, V6, and V8
 Owner of 2 IBM patents on V8
The Information Management Specialists
Unit 1
The Information Management Specialists
DB2 is DB2 is DB2
The Information Management Specialists
DB2 TIMELINE
DB2 6, 7, 8, 9.1
2009 – DB2 9.7
2009 – DB2 pureScale
2012 – DB2 10.1
2013 – DB2 10.5
The Information Management Specialists
DB2 Editions
DB2 Database Product Editions
Editions span small & medium businesses through enterprise businesses.
DB2 AESE (Advanced Enterprise Server Edition) additionally includes:
• Storage Optimization
• Continuous Data Ingest
• Q-replication
• Federation
• Optim & InfoSphere tools
Database Enterprise Developer Edition
Allows developers to design, build, and prototype applications.
The edition is a product bundle that includes many DB2 features.
The Information Management Specialists
DB2 Key Features and Functionality by Edition
(Editions, from smallest to largest: DB2 Personal, DB2 Express-C, Express (incl. FTL), Workgroup Server, Enterprise Server, Advanced Enterprise Server, Enterprise Developer)
• Time Travel Query – all editions
• Workload management – Enterprise Server, Advanced Enterprise Server, Enterprise Developer
• Tivoli® System Automation – all editions except Personal and Express-C
• Table partitioning – Enterprise Server, Advanced Enterprise Server, Enterprise Developer
• SQL Replication between DB2 LUW and Informix – all editions except Express-C
• Replication tools – all editions
• Query parallelism – Personal, Enterprise Server, Advanced Enterprise Server, Enterprise Developer
• Q Replication with two other DB2 LUW servers – Advanced Enterprise Server, Enterprise Developer
• pureXML® storage – all editions
• Oracle Compatibility – all editions
• Online reorganization – all editions except Personal and Express-C
• Multi-Temperature Storage – Enterprise Server, Advanced Enterprise Server, Enterprise Developer
• Materialized query tables (MQTs) and multidimensional clustering (MDC) tables – Personal, Enterprise Server, Advanced Enterprise Server, Enterprise Developer
• LBAC / RCAC – all editions
• IBM InfoSphere Optim Query Workload Tuner – Advanced Enterprise Server, Enterprise Developer
• IBM InfoSphere Data Architect – Advanced Enterprise Server (10 licenses), Enterprise Developer
• IBM InfoSphere Optim Performance Manager Extended – Advanced Enterprise Server, Enterprise Developer
• IBM InfoSphere® Optim™ Configuration Manager – Advanced Enterprise Server, Enterprise Developer
• IBM Data Studio – all editions
• High availability disaster recovery (HADR) – all editions except Personal and Express-C
• Federation with DB2 LUW and Oracle – Advanced Enterprise Server, Enterprise Developer
• Federation with DB2 LUW and Informix Data Server – all editions
• DB2 pureScale functionality – Enterprise Developer; via the DB2 pureScale Feature on Enterprise Server and Advanced Enterprise Server; Workgroup Server up to 16 cores and 64GB of total cluster size; not in Express, Express-C, or Personal
• Continuous Data Ingest – Advanced Enterprise Server, Enterprise Developer
• Compression: backup – all editions except Personal
• Adaptive Compression and classic row compression – Advanced Enterprise Server, Enterprise Developer; via the DB2 Storage Optimization Feature on Enterprise Server
• Advanced Copy Services – all editions except Personal and Express-C
Licensing – Metrics and Summary
• DB2 Personal – Platforms: Windows, Linux. Memory limit: N/A. Processor limit: N/A. Pricing metric: per install (assumes one user).
• DB2 Express-C – Platforms: Windows, Linux, Solaris (x64). Memory limit: DB2 throttles itself to a maximum of 4 GB. Processor limit: maximum of 2 cores. Pricing metric: free download (unsupported).
• Express (incl. FTL) – Platforms: Windows, Linux, Solaris (x64). Memory limit: maximum of 8 GB. Processor limit: maximum of 4 cores. Pricing metric: Authorized Users (minimum of 5 per server) or Per Server.
• Workgroup – Platforms: Windows, Linux, AIX, Solaris, HP-UX. Memory limit: maximum of 64 GB. Processor limit: maximum of 16 cores and 4 sockets. Pricing metric: Authorized Users (minimum of 5 per socket) or Per Socket.
• Enterprise / Advanced – Platforms: Windows, Linux, AIX, Solaris, HP-UX. Memory limit: unlimited. Processor limit: unlimited. Pricing metric: Authorized Users (minimum of 25 per 100 PVUs) or PVUs; eligible for sub-capacity pricing.
DB2 Installation
• New in DB2 10:
– You can install the IBM® DB2 pureScale Feature while installing DB2
Enterprise Server Edition, DB2 Workgroup Server Edition, and DB2
Advanced Enterprise Server Edition.
– You can now install IBM Data Studio from the DB2 Launchpad.
Installation methods:
• db2setup wizard – Windows and UNIX
• db2_install command – UNIX only; deprecated in DB2 10!
• Response file – Windows and UNIX
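As an illustration, a response-file (silent) install on Linux/UNIX might look like the sketch below; the file name, keywords shown, and paths are examples only, not taken from the course material.
# Hypothetical response file /tmp/db2ese.rsp (keywords and values are illustrative)
#   PROD          = ENTERPRISE_SERVER_EDITION
#   FILE          = /opt/ibm/db2/V10.1
#   LIC_AGREEMENT = ACCEPT
#   INSTALL_TYPE  = TYPICAL
# Run the silent installation as root, logging to /tmp/db2setup.log
./db2setup -r /tmp/db2ese.rsp -l /tmp/db2setup.log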
DB2 Installation – DB2 Users (non-pureScale)
On Linux or UNIX, three users and groups are created for a root installation
On Windows, the following user accounts are required:
– Installation user account
• Used to perform installation, normally a member of the Windows Administrators group
– (Optional) one or more setup user accounts
• DB2 instance user
• DB2 Administration Server (DAS) user
• Instance owner (db2inst1) – the instance owner's home directory is where the DB2 instance will be created
• Fenced user (db2fenc1) – used to run UDFs and stored procedures outside of the address space used by the DB2 database
• DB2 Administration Server (DAS) user (dasusr1) – used to run the DB2 administration server on the system
Administration Server has been deprecated in DB2 9.7!
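A minimal sketch of preparing the Linux users and creating the instance; the group names and paths are illustrative defaults, not mandated by the course:
# Create groups and users for the instance owner and fenced user (names illustrative)
groupadd db2iadm1
groupadd db2fadm1
useradd -g db2iadm1 -m -d /home/db2inst1 db2inst1
useradd -g db2fadm1 -m -d /home/db2fenc1 db2fenc1
# Create the instance as root, naming the fenced user
/opt/ibm/db2/V10.1/instance/db2icrt -u db2fenc1 db2inst1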
DB2 Installation – Directory Structure
Windows
• Program Files\IBM\sqllib – default DB2 install location
 – sqllib\bin – binaries: db2.exe, db2start.exe, db2stop.exe, db2cmd.exe, etc.
• db2 – directory for databases; starts with the instance owning name
 – node000 – partition number
 – SQL0001 – database ID (directory for the SAMPLE database)
 – SQLLOGDIR – default LOG directory
 – SAMPLE – automatic storage directory (for the SAMPLE database)
  T00000000 – SYSCATSPACE table space (always created)
  T00000001 – TEMPSPACE1 table space (always created)
  T00000002 – USERSPACE1 table space (always created)
 – my_dms_ts.dat – DMS table space data file (if not using automatic storage)
 – my_sms_ts – SMS table space directory
DB2 Installation – Directory Structure
Linux / UNIX (Automatic Storage)
• /opt/ibm/db2/V9.7 – default DB2 install location; main DB2 software directories (/bin, /lib, /java, /bnd, /conv, /include)
• /home/db2inst1 – Linux/UNIX instance owner's home directory
• /home/db2inst1/sqllib – DB2 instance directory
 – /bin, /lib, /java, /bnd, /conv – instance software directories linked to the main DB2 software
 – /function – stored procedure directory (external and internal)
 – /db2dump – DB2 diagnostic logs and other logs
 – /security – audit and security information
 – db2profile – initialization profile for the Unix shell
 – db2systm – instance configuration parameters binary file
 – /sqldbdir – system database directory (catalogs are kept here)
• NODE0000 – databases are created under this node
 – /sqldbdir – local database directory
 – /SQL0001 – database ID (directory for the SAMPLE database)
 – /SAMPLE – automatic storage directory (for the SAMPLE database)
  T0000000, T0000001, T0000002 – system, temporary, and user table spaces
The Information Management Specialists
Unit 2
Discontinued Tools in DB2 10
Control Center and related components are replaced by a new set of GUI tools: IBM
Data Studio and IBM InfoSphere Optim tools
– Note: Replication Center is still available and it is now a standalone tool
IBM Data Studio is the new main tool replacing Control Center.
– It provides an IDE for maintaining databases and developing database applications
Optim Performance Manager is a performance analysis and tuning tool for DB2
systems
Discontinued tool → replacement (IBM Data Studio / IBM InfoSphere Optim tools):
• Control Center → Data Studio
• Wizards in Control Center → Data Studio
• Command Editor → Data Studio
• Visual Explain → Data Studio
• User Interface to Spatial Extender → Data Studio
• Health Center → Data Studio / Data Studio Web Console, Optim Performance Manager
• Activity Monitor, Event Analyzer → Optim Performance Manager
• Memory Visualizer → Optim Performance Manager
• Query Patroller Center → Optim Performance Manager
What is IBM Data Studio?
Comprehensive data management tool
– An integrated environment for managing databases and developing database
applications
Replaces Control Center in DB2 10
Built on the popular Eclipse framework
Support for Red Hat Linux, SUSE Linux, Windows
2 packaging options:
– Full client: integrated development environment for database
administration and routine and Java application development
– Administration client: smaller footprint, non-Java routine
development
Optional extra component
– Data Studio Web console: health and availability monitoring
FREE to download!
Data Lifecycle Management
Lifecycle phases: Design → Develop → Administer → Monitor → Tune, spanning data models and applications:
- Data Modeling
- SQL and XQuery editor
- Routines development
- Debugger
- Database Object Management
- Schema Changes
- Administrative Tasks
- Data Access Control
- Visual Explain
- Statistics Advisor
- Health Monitor
- Job Manager
Past and Future
IBM Data Studio 2.2
Optim Development
Studio 2.2
Optim Database
Administrator 2.2
IBM Data Studio 3.1
• Merges the functionality of all three tools
into a single product
• Improved usability for DB administration
• Supports set of discontinued functions
from Control Center
Oct/2011
IBM Data Studio 3.1.1
• Supports DB2 10 specific features
• RCAC
• Multi-temperature storage
• Adaptive compression
• Time travel tables
• and more!
2012
Installation
Install Data Studio full client or administration client:
– Installation Manager wizard
– Silent install using a response file
– Migrating or upgrading existing installation is not supported in version 3.1
– Saved workspace information is unaffected in the installation process
Install Data Studio web console:
– Can be installed by running the installation wizard, installing in console mode, or
installing silently
– Upgrading from earlier versions is supported
• database connections, alert settings, and user authentication settings
stored locally or in the repository database are retained during upgrade
The Information Management Specialists
Unit 3
DB2 Environment – Instances
■ A DB2 instance is a logical database
manager environment that serves as the
access point to the database structures
■ All instances share the same
executable binary files
■ Each instance has
− its own configuration (dbm cfg)
− multiple Engine Dispatchable
Units (EDUs) shared among the
databases in that instance
Command – Description – Example
db2start – Start the default instance – db2start
db2stop – Stop the current instance – db2stop -f
db2icrt – Create an instance – db2icrt -u db2fenc1 db2inst1
db2idrop – Drop an instance – db2idrop -f db2inst1
db2ilist – List all instances – db2ilist
db2iupdt – Update an instance after installation of a fix pack – db2iupdt -u db2fenc1 db2inst1
db2iupgrade – Upgrades an instance to the current release; replaces db2imigr, which is discontinued in DB2 10
Diagram: an instance (e.g. myinst) holds the instance-level profile registry, the dbm cfg file, the system database directory, the node directory, and the DCS directory. Each database in the instance (e.g. MYDB1) has its own buffer pool(s), logs, and table spaces – SYSCATSPACE, TEMPSPACE1, USERSPACE1 plus user table spaces (e.g. MyTablespace1, MyTablespace2) containing tables and indexes.
DB2 Process Model
Single process and multithreaded
model
– System controller: db2sysc (UNIX) or
db2syscs.exe (Windows)
– Threads: Engine Dispatchable Units
(EDU)
DB2 Agents (db2agent)
– Special type of EDU to handle
application requests
– The DB2 engine keeps a pool of agents
available to service requests
– An application is mapped to a
coordinator agent
DB2 has a firewall to protect
databases and the DBM
– Applications run in a different address
space, which prevents application errors
from corrupting DBM files or
internal buffers
The Information Management Specialists
Listing OS threads example
$ ps -fu lpham
UID PID PPID C STIME TTY TIME CMD
lpham 25996 25946 0 12:19 pts/12 00:00:00 -ksh
lpham 26567 26552 0 12:19 pts/12 00:00:00 ksh
lpham 27688 27676 0 12:21 pts/12 00:01:46 db2sysc
lpham 27716 27676 0 12:21 pts/12 00:00:00 db2acd
lpham 27995 27994 0 12:24 pts/13 00:00:00 -ksh
lpham 29321 26567 0 12:30 pts/12 00:00:00 ps -fu lpham
$ps -lLfp 27688 (try ps -m -o THREAD -p 27688 on AIX)
F S UID PID PPID LWP C NLWP PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
5 S lpham 27688 27676 27688 0 21 76 0 - 264903 msgrcv 12:21 pts/12 00:00:01 db2sysc
1 S lpham 27688 27676 27694 0 21 75 0 - 264903 schedu 12:21 pts/12 00:00:00 db2sysc
1 S lpham 27688 27676 27695 0 21 76 0 - 264903 semtim 12:21 pts/12 00:00:00 db2sysc
1 S lpham 27688 27676 27696 0 21 79 0 - 264903 schedu 12:21 pts/12 00:00:00 db2sysc
1 S lpham 27688 27676 27697 0 21 76 0 - 264903 msgrcv 12:21 pts/12 00:00:00 db2sysc
1 S lpham 27688 27676 27714 0 21 76 0 - 264903 schedu 12:21 pts/12 00:00:00 db2sysc
1 S lpham 27688 27676 27827 1 21 75 0 - 264903 semtim 12:21 pts/12 00:00:06 db2sysc
1 S lpham 27688 27676 27943 27 21 77 0 - 264903 schedu 12:22 pts/12 00:01:39 db2sysc
1 S lpham 27688 27676 28150 0 21 75 0 - 264903 schedu 12:25 pts/12 00:00:00 db2sysc
1 S lpham 27688 27676 28153 0 21 76 0 - 264903 schedu 12:25 pts/12 00:00:00 db2sysc
1 S lpham 27688 27676 28156 0 21 75 0 - 264903 schedu 12:25 pts/12 00:00:00 db2sysc
1 S lpham 27688 27676 30290 0 21 76 0 - 264903 schedu 12:36 pts/12 00:00:00 db2sysc
1 S lpham 27688 27676 30291 0 21 75 0 - 264903 schedu 12:36 pts/12 00:00:00 db2sysc
1 S lpham 27688 27676 30292 0 21 76 0 - 264903 semtim 12:36 pts/12 00:00:00 db2sysc
1 S lpham 27688 27676 30293 0 21 76 0 - 264903 schedu 12:36 pts/12 00:00:00 db2sysc
1 S lpham 27688 27676 30295 0 21 77 0 - 264903 semtim 12:36 pts/12 00:00:00 db2sysc
1 S lpham 27688 27676 30296 0 21 77 0 - 264903 semtim 12:36 pts/12 00:00:00 db2sysc
1 S lpham 27688 27676 30297 0 21 77 0 - 264903 semtim 12:36 pts/12 00:00:00 db2sysc
1 S lpham 27688 27676 30298 0 21 76 0 - 264903 msgrcv 12:36 pts/12 00:00:00 db2sysc
1 S lpham 27688 27676 30299 0 21 76 0 - 264903 msgrcv 12:36 pts/12 00:00:00 db2sysc
1 S lpham 27688 27676 30300 0 21 76 0 - 264903 msgrcv 12:36 pts/12 00:00:00 db2sysc
The Information Management Specialists
Listing DB2 threads example
$ db2pd -edus
>>>> List of all EDUs for database partition 0 <<<<
db2sysc PID: 27688
db2wdog PID: 27676
db2acd PID: 27716
EDU ID TID Kernel TID EDU Name
===========================================================================================
60 183282690400 30300 db2pfchr (TESTDB)
59 183278496096 30299 db2pfchr (TESTDB)
58 183291079008 30298 db2pfchr (TESTDB)
57 183295273312 30297 db2pclnr (TESTDB)
56 183286884704 30296 db2pclnr (TESTDB)
55 183299467616 30295 db2pclnr (TESTDB)
54 183307856224 30293 db2dlock (TESTDB)
53 183320439136 30292 db2lfr (TESTDB)
52 183303661920 30291 db2loggw (TESTDB)
51 183316244832 30290 db2loggr (TESTDB)
50 183257524576 28156 db2evmli (DB2DETAILDEADLOCK)
49 183261718880 28153 db2taskd (TESTDB)
46 183274301792 28150 db2wlmd (TESTDB)
26 183312050528 27943 db2stmm (TESTDB)
17 183324633440 27827 db2agent (TESTDB)
16 183328827744 27714 db2resync
15 183333022048 27697 db2ipccm
14 183337216352 27696 db2licc
13 183341410656 27695 db2thcln
12 183345604960 27694 db2alarm
1 183085558112 27688 db2sysc
The Information Management Specialists
DB2 Memory Model
The Information Management Specialists
DB2 Memory Usage
• db2pd -dbptnmem
• select * from
table(admin_get_dbp_mem_usage())
• db2mtrk
 -i (instance)
 -d (database)
 -a (applications)
 -p (agents)
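For example, the commands above might be combined as in this sketch (the -v and -r options for verbose, repeated db2mtrk output are assumptions about the call, not shown on the slide):
# Per-partition memory consumption
db2pd -dbptnmem
# Same information through SQL
db2 "select * from table(admin_get_dbp_mem_usage()) as t"
# Instance and database memory, verbose, three samples 10 seconds apart
db2mtrk -i -d -v -r 10 3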
The Information Management Specialists
DB2 Memory Usage
DB and DBM Configurations
Description Example
View Database Manager Settings db2 get dbm cfg show detail
Change a Database Manager Setting db2 update dbm cfg using health_mon off
Description Example
View Database Settings db2 get db cfg for testdb
db2 connect to testdb
db2 get db cfg show detail
Change a DB Setting db2 update db cfg using logprimary 10
■ Examples of what can be changed using the DB and DBM configurations:
• Connection management – define user authentication type; set communication protocols
• Memory tuning – set sort limits; set hash limits; set utility impact limits; share memory resources among the databases; instance memory
• Monitoring – get database snapshots; check database health and performance; set diagnostic log level
• Instance management – control instance services; enable federation; authorization user groups
The Information Management Specialists
Unit 4
DB2 Storage Model
Diagram: physical disks hold storage groups (e.g. SG_A); storage groups hold table spaces; table spaces hold tables (Table 1, Table 2, Table 3) and are cached through buffer pools (e.g. BP1), all within a database.
■ Database
– Contains a set of objects used to
store, manage, and access data
■ Buffer Pool
– Area of main memory for the
purpose of caching data as it is
read from disk
■ Table Space
– Logical space used to store
data objects such as tables and
indexes
■ Storage Group
– Set of storage paths configured
to represent different classes of
storage in the database system,
where table spaces are stored
■ Physical Disk
– Physical location used to store data
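A minimal sketch that ties the layers together (object names, paths, and sizes are illustrative):
-- Storage group over two storage paths
CREATE STOGROUP sg_data ON '/data/path1', '/data/path2';
-- Buffer pool that will cache pages for the table space
CREATE BUFFERPOOL bp1 SIZE 10000 PAGESIZE 4096;
-- Automatic storage table space placed in the storage group, cached by the buffer pool
CREATE TABLESPACE ts_app USING STOGROUP sg_data BUFFERPOOL bp1;
-- Table stored in the table space
CREATE TABLE app.orders (id INT NOT NULL, amount DECIMAL(10,2)) IN ts_app;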
Table Spaces
Diagram: within a database, table space data is distributed round-robin, in extents, across containers (Container 0, Container 1, Container 2, ...), which can be files, directories, or raw devices.
■ A layer of abstraction between logical
and physical data
■ Allows assignment of data to particular
logical devices or portions thereof
■ All tables, indexes, and other data are
stored in a table space
■ Associated to a specific buffer pool
■ Managed in three different ways: SMS,
DMS and Automatic Storage
■ An Automatic Storage table space is
associated to a Storage Group, that
defines the set of containers
Example: the HUMANRES table space holds the Employee and Department tables; the SCHED table space holds the Project table.
Types of Table Spaces
■ System Catalog Table Space
 – 1 required; must exist!
 – Default: SYSCATSPACE
 – Catalog tables with metadata
■ System Temporary Table Space
 – 1 required
 – Default: TEMPSPACE1
 – System temporary area for operations like joins and sorts
■ User Table Space
 – 1 or more required
 – Default: USERSPACE1
 – Default user table space; can be deleted
 – Stores all user-defined tables
■ User Temporary Table Space
 – Optional (e.g. USERTEMPSPACE); must exist before global temporary tables can be used
 – Stores temporary data from global temporary tables
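For example, a declared global temporary table needs a user temporary table space to exist first; a sketch (names are illustrative):
-- Create a user temporary table space
CREATE USER TEMPORARY TABLESPACE usertempspace1;
-- Declare a global temporary table that stores its rows in it
DECLARE GLOBAL TEMPORARY TABLE session.work_items
  (id INT, note VARCHAR(100))
  ON COMMIT PRESERVE ROWS NOT LOGGED IN usertempspace1;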
Multi-Temperature Data Management
■ Provides the ability to assign priority to data (hot, warm, cool, cold) and dynamically
assign it to different classes of storage
– Data temperature signifies priority of the data defined by business
– Data temperature is inversely proportional to volume
• Small portion of hot data vs. large portion of warm/cold data
■ Data can change temperature
– As data ages
– As business criteria behind temperature changes
Diagram: data volume grows and access frequency drops as data ages – sales data of this month = most frequently accessed (HOT); sales data of this quarter = less frequent (WARM); sales data of previous quarters = rarely accessed (COLD); sales data of past years = historical data (DORMANT). Matching storage to temperature reduces TCO.
Storage Groups
■ Storage Groups allow the flexibility to implement Multi-temperature Data
Management in Automatic Storage table spaces
■ Different Storage Groups can represent different classes of storage
– Hot data assigned to storage groups with fast devices
– Warm or Cold data assigned to slower devices
■ Easy maintenance when data ages and needs to be moved to a different storage
class
Benefits: store data based on priority of accessibility; reduced TCO; easy and flexible maintenance.
Multi-temperature Storage – A Sample Scenario
■ GOAL: Reduce warehouse storage costs while meeting the desired Quality of
Service requirements for access to last 3 quarters of data
■ Step 1: Create two storage groups to reflect the two tiers of storage. Transfer rate, overhead, etc. are then programmatically computed at the storage group level.
CREATE STOGROUP sg_hot ON '/ssd/path1', '/ssd/path2' DATA TAG 1
CREATE STOGROUP sg_warm ON '/hdd/path1', '/hdd/path2' DATA TAG 5
(Data tags represent the business priority of the data and are used by the optimizer)
■ Step 2: Assign table spaces to storage groups
CREATE TABLESPACE q1_2011_tbsp USING STOGROUP sg_warm
CREATE TABLESPACE q2_2011_tbsp USING STOGROUP sg_warm DATA TAG 3
CREATE TABLESPACE q3_2011_tbsp USING STOGROUP sg_hot
Multi-temperature Storage – A Sample Scenario
■ Create a new table space and change storage group for Q3 table space
– Q4 table space will reside on hot storage
– Q3 data will be moved and rebalanced across slower storage
■ Data Tag changed to allow optimizer to consider the changed data priority
CREATE TABLESPACE q4_2011_tbsp USING STOGROUP sg_hot
ALTER TABLESPACE q3_2011_tbsp USING STOGROUP sg_warm DATA TAG 3
ALTER TABLESPACE q2_2011_tbsp DATA TAG 5
• Only the most frequently accessed data resides on high-end expensive storage
and meets the QoS requirements for that data access
• The bulk of the data resides on less expensive storage.
• Provides easy management by DBAs
… A New Quarter Begins
The Information Management Specialists
Unit 5
Allows a single logical table to be broken up into multiple separate
physical storage objects (a.k.a. data partitions)
– Up to 32K data partitions
– Each partition defines a range of values
– A partition will only contain rows that match its range of values
Parallel table scans and index scans
Table Partitioning
Diagram: a non-partitioned Payments table is one large table in a single table space (tbsp1). A partitioned Payments table is split by paydate ranges into data partitions pay_1 (Jan–Mar), pay_2 (Apr–Jun), pay_3 (Jul–Sep), and pay_4 (Oct–Dec) spread across table spaces tbsp1–tbsp3. In both cases applications see a single table.
Benefits of Table Partitioning
Table partitioning benefits:
• Fast data roll-in and roll-out
• Larger table capacity
• Greater index placement flexibility
• Better optimization of storage costs
• Increased query performance through data partition elimination
Partitioning Columns
– Must be base types (No LOBS, LONG VARCHAR)
– Accepts multiple columns and generated columns
– MINVALUE and MAXVALUE can be used to
specify open boundaries
It only accepts values for the defined ranges
– SQL0327N is raised if no range matches the
data being inserted
Table Partitioning - Syntax
12
pay_1
tbsp1
pay_2
tbsp2
pay_3
tbsp3
pay_4
Payments
Jan
Feb
Mar
Apr
May
Jun
Jul
Aug
Sep
Oct
Nov
Dec
Partition 1 Partition 2 Partition 3 Partition 4
Applications see a single table
Short form:
CREATE TABLE payments(id INT, paydate DATE, ...)
IN tbsp1, tbsp2, tbsp3 PARTITION BY RANGE (paydate)
(STARTING '1/1/2009' ENDING '12/31/2009' EVERY 3 MONTHS)
Long form:
CREATE TABLE payments(id INT, paydate DATE, ...)
PARTITION BY RANGE(paydate)
(PARTITION pay1_09 STARTING '1/1/2009' IN tbsp1,
PARTITION pay2_09 STARTING '4/1/2009' IN tbsp2,
PARTITION pay3_09 STARTING '7/1/2009' IN tbsp3,
PARTITION pay4_09 STARTING '10/1/2009' IN tbsp1
ENDING '12/31/2009')
Data Partition Elimination
Ability to determine that only a subset of the data partitions in a table are necessary
to answer a query
DB2 EXPLAIN
– Provides detailed information about which data partitions are used when a query is run
– db2exfmt provides details from the EXPLAIN facility
SELECT * FROM PAYMENTS
WHERE paydate BETWEEN '02/03/2009' AND '05/30/2009'
Better response time! Improving the performance!
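A sketch of checking partition elimination with the explain facility (assumes the explain tables already exist, for example created from sqllib/misc/EXPLAIN.DDL; the output file name is illustrative):
db2 connect to testdb
db2 "explain plan for select * from payments where paydate between '02/03/2009' and '05/30/2009'"
db2exfmt -d testdb -1 -o payments_plan.txt
# The data partition elimination information in payments_plan.txt shows which partitions are scanned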
Use Cases of Temporal Data Management
Track and analyze changes in your business
– Easily compare data from two points in time
– Accuracy in time-based reporting
Effectively perform and trace data corrections
– Easily make data changes in the past, i.e. effective
as of a point in time in the past, and record when the change was made
Auditing and compliance
– Ability to show past data for any point in time
– Ability to show which information got changed in the same
transaction and when, up to pico-second precision
Built into DB2 – automatic and transparent
Three types of temporal tables
Temporal Tables – Types
System-period
temporal tables (STTs)
DB2 automatically
maintains historical
versions of the rows in the
history table
You can query the past
state of your data
Example
Employees who have left
the company
You assign a date range to
each row, indicating the
period when the data is valid
in the real world
Valid periods can be in the
past, present, or future
Example
•Insurance policy valid from
Jan 1 to June 30
•4% interest rate is effective
from Nov 1 to 20
Combination of STT and
ATT
Keep application-based
period information as well
as system-based historical
information
Application-period
temporal tables (ATTs)
Bitemporal Tables
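As a sketch, the TRAVEL system-period temporal table used on the following slides could be created as shown below; the column types and the history table name are assumptions, not taken from the slides:
CREATE TABLE travel (
  trip_name      VARCHAR(30),
  destination    VARCHAR(30),
  departure_date DATE,
  price          DECIMAL(8,2),
  -- system period columns are maintained by DB2
  sys_start TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
  sys_end   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
  trans_id  TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID IMPLICITLY HIDDEN,
  PERIOD SYSTEM_TIME (sys_start, sys_end)
);
CREATE TABLE travel_history LIKE travel;
ALTER TABLE travel ADD VERSIONING USE HISTORY TABLE travel_history;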
Add new trips:
Amazonia, departing on 10/28/2011 & Ski Heavenly Valley, departing on 3/1/2011
Insert Data into a System-Period Temporal Table
INSERT INTO travel
VALUES ('Amazonia','Brazil','10/28/2011',1000.00);
INSERT INTO travel
VALUES ('Ski Heavenly Valley','California','03/01/2011',400.00);
Current Date = January 1, 2011
trip_name destination
departure_
date
price sys_start sys_end
Amazonia Brazil 10/28/2011 1000.00 01/01/2011 12/30/9999
Ski Heavenly
Valley
California 03/01/2011 400.00 01/01/2011 12/30/9999
System validity period
(inclusive, exclusive)
Both SYS_START and SYS_END columns are inserted by DB2, not the application. For
simplicity, they are represented here as DATEs, rather than TIMESTAMPs
TRAVEL
Destination name is not explicit enough. Alter the DESTINATION column to make it longer
Update the destination column for Ski Heavenly Valley to make it clearer:
DB2 automatically inserted row into history table and supplied sys_start and sys_end dates
Alter and Update a System-Period Temporal Table
trip_name destination departure_date price sys_start sys_end
Amazonia Brazil 10/28/2011 1000.00 01/01/2011 12/30/9999
Ski Heavenly
Valley
Lake Tahoe, CA 03/01/2011 400.00 02/15/2011 12/30/9999
Current Date = February 15, 2011
ALTER TABLE travel ALTER COLUMN destination SET DATA TYPE VARCHAR(50);
UPDATE travel SET destination = 'Lake Tahoe, CA'
WHERE trip_name = 'Ski Heavenly Valley';
**History table is automatically modified
trip_name destination departure_date price sys_start sys_end
Ski Heavenly
Valley
California 03/01/2011 400.00 01/01/2011 02/15/2011
New sys_start date
TRAVEL
TRAVEL_HISTORY
We are no longer offering the Ski Heavenly Valley trip – delete it.
DB2 automatically inserted row into history table and supplied sys_start and sys_end dates
Delete from a System-Period Temporal Table
trip_name destination departure_date price sys_start sys_end
Amazonia Brazil 10/28/2011 1000.00 01/01/2011 12/30/9999
Current Date = April 1, 2011
DELETE FROM travel WHERE trip_name = 'Ski Heavenly Valley';
trip_name destination departure_date price sys_start sys_end
Ski Heavenly
Valley
California 03/01/2011 400.00 01/01/2011 02/15/2011
Ski Heavenly
Valley
Lake Tahoe, CA 03/01/2011 400.00 02/15/2011 04/01/2011
System validity period
(inclusive, exclusive)
Ski Heavenly Valley has been removed from base table
TRAVEL
TRAVEL_HISTORY
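With the history table populated, a time travel query can show the past state; a sketch (the date is illustrative):
-- Trips as they looked on March 1, 2011 (DB2 reads the base and history tables as needed)
SELECT trip_name, destination, price
FROM travel FOR SYSTEM_TIME AS OF '03/01/2011'
WHERE trip_name = 'Ski Heavenly Valley';
-- Returns the 'Lake Tahoe, CA' version of the row, even though it was later deleted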
Add new trip: Manu Wilderness, departing on 08/02/2011
Insert Data into an Application-Period Temporal Table
Current Date = May 1, 2011
trip_name destination
departure_
date
price bus_start bus_end
Manu
Wilderness
Peru 08/02/2011 1500.00 05/01/2011 01/01/2012
BUSINESS_TIME period
(inclusive, exclusive)
bus_start and bus_end columns are
inserted by the application, not DB2
INSERT INTO travel
VALUES ('Manu Wilderness', 'Peru',
'08/02/2011',1500.00,'05/01/2011', '01/01/2012');
Application-period time entries
are independent of the current
date
**
Update an Application-Period Temporal Table
Manu Wilderness trip isn’t selling well, so we’ll offer a special price of $1000.00 for
the month of June.
Current Date = May 15, 2011
trip_name destination departure_date price bus_start bus_end
Manu Wilderness Peru 08/02/2011 1500.00 05/01/2011 06/01/2011
Manu Wilderness Peru 08/02/2011 1000.00 06/01/2011 07/01/2011
Manu Wilderness Peru 08/02/2011 1500.00 07/01/2011 01/01/2012
BUSINESS_TIME period
(inclusive, exclusive)
DB2 inserted 2 rows and updated 1 row.
UPDATE travel FOR PORTION OF BUSINESS_TIME FROM '06/01/2011' TO '07/01/2011'
SET price = 1000.00 WHERE trip_name = 'Manu Wilderness';
trip_name destination departure_date price bus_start bus_end
Manu Wilderness Peru 08/02/2011 1500.00 05/01/2011 01/01/2012
Before (Prior to Update)
After (Updated Table)
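A query can likewise ask for the price in effect on a given business date; a sketch:
SELECT price
FROM travel FOR BUSINESS_TIME AS OF '06/15/2011'
WHERE trip_name = 'Manu Wilderness';
-- Returns 1000.00, the special price valid from 06/01/2011 to 07/01/2011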
Row Compression – Classic
Also referred to as static row compression
Uses a table-level compression dictionary (1 dictionary per table) to compress data
by row, across multiple columns
Dictionary is used to map repeated byte patterns to smaller symbols. These
smaller symbols replace long patterns in table rows.
After dictionary is created, data is compressed as it is inserted/updated in the
table.
– DB2 automatically creates the dictionary when the table has enough data for sampling
Name Dept Salary City ST ZIP
Bob smpo 30000 Dallas TX 75063
John smpo 25000 Dallas TX 75063
Bob smpo 30000 Dallas TX 75063 John smpo 25000 Dallas TX 75063 etc.
Bob (01) 30000 (02) John (01) 25000 (02) etc.
Dictionary
(01) smpo
(02) Dallas, TX, 75063
Row Compression
Also known as deep compression
Uses a dictionary-based compression algorithm to replace
recurring strings with shorter symbols within rows
Continuous enhancement since it was introduced in DB2 9.1
Two types available:
– Classic (static) row compression
– Adaptive row compression
• An enhancement to classic row
compression to provide extra storage savings
Included in DB2 Storage Optimization Feature
New in DB2 10: adaptive compression
Compression timeline:
• DB2 9.1 – Row compression*
• DB2 9.5 – Automatic Dictionary Creation (ADC)*
• DB2 9.7 – XML compression*, temporary table compression*, index compression*, LOB inlining
• DB2 10 – Adaptive compression*
Data Warehouse Compression Results
230GB raw size - Most of the data in a single table
Graph – Storage Savings
Increase in savings by Adaptive Compression
– 3x Compression with Static Compression using reorg
– 5.6x Compression with Automatic dictionary creation and Adaptive
Compression
– 7.4x Compression with Adaptive Compression and full reorg
(Chart: compression factor – higher is better)
Real Customer Results with Adaptive Compression
Customer top 5 tables
– DB2 9.7 – compression rates between 3X and 6X
– DB2 10 – compression rates between 4X and 10X
Sum of all tables DB2 9.7 delivered 5X compression
Sum of all tables DB2 10 delivered 7X compression
Row Compression – Enablement & Tools
How to enable row compression?
– Must have the DB2 Storage Optimization Feature
– To enable classic row compression:
CREATE TABLE / ALTER TABLE … COMPRESS YES STATIC
– To enable adaptive row compression (adaptive is the default in DB2 10):
CREATE TABLE / ALTER TABLE … COMPRESS YES
– To disable compression:
CREATE TABLE / ALTER TABLE … COMPRESS NO
Data is compressed after the table dictionary is created.
– INSERT/UPDATE/LOAD/IMPORT can trigger automatic dictionary creation
– A classic REORG with the RESETDICTIONARY option will always generate
a new dictionary and compress all table data
Row Compression - Example Scenarios
1) Compressing data for new table
CREATE TABLE Sales (<columns definition>) COMPRESS YES
Load data… Automatic Dictionary Creation (ADC) will kick off and create compression dictionary. Once
dictionary is built, new data put into the table is compressed:
LOAD FROM file OF DEL REPLACE INTO NewSale
2) Compressing data in existing tables
ALTER TABLE Sales COMPRESS YES
Data is still un-compressed. Explicitly compress data via REORG:
REORG TABLE Sales
3) Recreating the dictionary to optimize compression
(Classic Row Compression) Data has changed a lot so current
dictionary is not so effective anymore. Use REORG to recreate
dictionary and re-compress data:
REORG TABLE Sales RESETDICTIONARY
4) Uncompressing your data
Disable compression:
ALTER TABLE Sales COMPRESS NO
Uncompress data:
REORG TABLE Sales
Adaptive Compression
greatly reduces the need for
REORGs to maintain the
compression ratio.
Row Compression – Enablement & Tools
Estimating storage savings
– ADMIN_GET_TAB_COMPRESS_INFO_V97 is deprecated in DB2 10!
– Instead use: ADMIN_GET_TAB_COMPRESS_INFO and
ADMIN_GET_TAB_DICTIONARY_INFO
SELECT SUBSTR(TABNAME,1,10) tabname, OBJECT_TYPE, ROWCOMPMODE,
PCTPAGESSAVED_CURRENT current, PCTPAGESSAVED_STATIC with_static,
PCTPAGESSAVED_ADAPTIVE with_adaptive
FROM TABLE(SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO('DB2INST1','CUSTOMERS')) AS T;
TABNAME OBJECT_TYPE ROWCOMPMODE CURRENT WITH_STATIC WITH_ADAPTIVE
---------- ------------ ------------ ------- ----------- -------------
CUSTOMERS DATA S 60 68 81
CUSTOMERS XML S 58 62 62
The Information Management Specialists
Unit 6
Moving Data in DB2 UDB for LUW
Utilities
 DB2 provides three utilities for mass data
movement
• EXPORT
• IMPORT
• LOAD
 LOAD executed at the table level
 IMPORT/EXPORT may use views, joins
etc (in certain circumstances)
Moving Data in DB2 UDB for LUW
File Formats
 Determine how data is physically stored
in external files
 Five different file formats supported by
data movement utilities
• ASC (non-delimited ASCII files)
• DEL (delimited ASCII files)
• WSF (Work Sheet Format files)
• IXF (Integrated Exchange Format files)
• CURSOR (V8.1)
Moving Data in DB2 UDB for LUW
Delimited ASCII Files (DEL)
 Used extensively in RDBMS
 Makes use of delimiters
• Row delimiter
• Column delimiter
• Character
100,”Joe”,”Joe Street”
200,”Foo”,”Foo Street”
300,”Moo”,”Moo Street”
Moving Data in DB2 UDB for LUW
Non-Delimited ASCII Files (ASC)
 Fixed-length ASCII files
 Row delimiter
 No column or character delimiters
 All column values are of fixed length
• Variable length character columns are
padded with blanks
100JoeJoe Street
200FooFoo Street
300MooMoo Street
Moving Data in DB2 UDB for LUW
Integrated Exchange Format Files (IXF)
 Consist of unbroken sequence of
variable length records
• Numeric values stored as packed decimal
or binary
• Character values stored as ASCII
 Cannot be edited using a text editor
 IXF files contain structural information
• Can be used to rebuild database objects
Moving Data in DB2 UDB for LUW
Worksheet Format Files (WSF)
 Used to extract or import data by Lotus
1-2-3 and Symphony products
 Not used to move data from one DB2
table to another
 Cannot be edited using a text editor
Moving Data in DB2 UDB for LUW
Data Movement Utilities and File Formats
Format LOAD IMPORT EXPORT
ASC Yes Yes No
DEL Yes Yes Yes
WSF No Yes Yes
IXF Yes Yes Yes
Moving Data in DB2 UDB for LUW
Export
 Used to extract data from tables and write
into an external file
 Data can be extracted in different file
formats
• IXF
• DEL
• WSF
 Files can then be used by the DB2 Load or
Import utilities or other external products
Moving Data in DB2 UDB for LUW
Export
 EXPORT uses SQL syntax to select data
from the database
 SQL can be very versatile and may
• reference views and aliases
• include joins
• filter rows using where clause
• use columnar functions
• use group by and order by clauses
Moving Data in DB2 UDB for LUW
Export – minimum requirements
1. SELECT statement – select * from f1team
2. Path and file name – f1team.del
3. File type (IXF, DEL, or WSF) – of del
Putting it together: export to f1team.del of del select * from f1team
Moving Data in DB2 UDB for LUW
Export – example
F1TEAM
TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY
1 Ferrari 1 Maranello Italy
2 McLaren 2 Woking Britain
3 Williams 3 Didcot Britain
export to f1team.del
of del
select * from f1team
1,”Ferrari”,1,”Maranello”,”Italy”
2,”McLaren”,2,”Woking”,”Britain”
3,”Williams”,3,”Didcot”,”Britain”
f1team.del
Moving Data in DB2 UDB for LUW
Export – optional requirements
 Message file name to capture all error and
warning messages
 New column names when exporting to IXF
or WSF file formats
 File type modifier for additional formatting of
DEL and WSF files
 File names and paths for exporting LOB
columns
Moving Data in DB2 UDB for LUW
Export
 Must have SYSADM, or DBADM, or
CONTROL or SELECT on table(s)
 Default date format for DEL and WSF files is
yyyymmdd. Can be changed to ISO
representation yyyy-mm-dd by specifying
DATEISO
 Default character delimiter for DEL format is the double
quote ("). To override, use CHARDEL
 Use tools like Visual Explain to evaluate
performance of Select statement
Moving Data in DB2 UDB for LUW
Export – Derived Columns
 2 ways to force column rename for IXF
and WSF files:
1. Use the AS clause in SELECT
EXPORT … SELECT GROSS_PAY – TAXES
AS NET_PAY … FROM …
2. Use METHOD N option
EXPORT … METHOD N (‘NET_PAY’,…)
SELECT GROSS_PAY – TAXES, …
FROM …
Moving Data in DB2 UDB for LUW
Export – Large Objects
 Can include 2GB of LOB data in the target
file
 Store each LOB value in its own file
EXPORT TO mydata.del of DEL LOBS TO
E:datalobs1, E:datalobs2 LOBFILE mypics …
MODIFIED BY LOBSINFILE SELECT * FROM
mydata
E:datalobs1
mypics.001
E:datalobs1
mypics.002
E:datalobs2
mypics.323
Moving Data in DB2 UDB for LUW
Import
 Used to move data from an external file into
a table or a view
 Data can be imported from various file
formats
• IXF
• DEL
• ASC
• WSF
Moving Data in DB2 UDB for LUW
 The IMPORT utility uses the SQL
processor to bulk load data
 Faster than application programs for
large insert volumes
 Triggers are fired and constraints
validated
Import
Moving Data in DB2 UDB for LUW
Import – minimum requirements
1. Import type – insert into
2. Path and file name – f1team.del
3. File type (IXF, DEL, ASC, or WSF) – of del
4. Name or alias of table or view where data is to be imported – f1team
Putting it together: import from f1team.del of del insert into f1team
Moving Data in DB2 UDB for LUW
Import – optional requirements
 Message file name to capture all error and
warning messages
 Number of rows to insert before committing
changes to table
 Number of records to skip from file before
beginning import
 Names of table or view columns into which
data will be inserted
Moving Data in DB2 UDB for LUW
Import – Insert Mode (1)
F1TEAM
TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY
1 Ferrari 1 Maranello Italy
import from f1team.del
of del
insert into f1team
2,”McLaren”,2,”Woking”,”Britain”
3,”Williams”,3,”Didcot”,”Britain”
f1team.del
F1TEAM
TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY
1 Ferrari 1 Maranello Italy
2 McLaren 2 Woking Britain
3 Williams 3 Didcot Britain
Moving Data in DB2 UDB for LUW
Import – Insert Mode (2)
F1TEAM
TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY
import from f1team.del
of del
insert into f1team
(hq_city,country,team_id,name,principal)
”Maranello”,”Italy”, 1,”Ferrari”,1
”Woking”,”Britain”,2,”McLaren”,2
”Didcot”,”Britain”, 3,”Williams”,3
f1team.del
F1TEAM
TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY
1 Ferrari 1 Maranello Italy
2 McLaren 2 Woking Britain
3 Williams 3 Didcot Britain
Moving Data in DB2 UDB for LUW
Import – Insert_Update Mode
F1TEAM
TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY
1 Ferrari 1 Maranello Italy
import from f1team.del
of del
insert_update into f1team
1,”Ferrari”,1,”Rome”,”Italy”
2,”McLaren”,2,”Woking”,”Britain”
3,”Williams”,3,”Didcot”,”Britain”
f1team.del
F1TEAM
TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY
1 Ferrari 1 Rome Italy
2 McLaren 2 Woking Britain
3 Williams 3 Didcot Britain
Moving Data in DB2 UDB for LUW
Import – Replace Mode
F1TEAM
TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY
1 Ferrari 1 Maranello Italy
import from f1team.del
of del
replace into f1team
2,”McLaren”,2,”Woking”,”Britain”
3,”Williams”,3,”Didcot”,”Britain”
f1team.del
F1TEAM
TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY
2 McLaren 2 Woking Britain
3 Williams 3 Didcot Britain
Note: Replace mode is not
valid if primary key of F1TEAM
is referenced by a foreign key
in another table
Moving Data in DB2 UDB for LUW
Import – Replace_Create Mode (1)
F1TEAM
TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY
1 Ferrari 1 Maranello Italy
import from f1team.del
of ixf
replace_create into
f1team
2,”McLaren”,2,”Woking”,”Britain”
3,”Williams”,3,”Didcot”,”Britain”
f1team.del
F1TEAM
TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY
2 McLaren 2 Woking Britain
3 Williams 3 Didcot Britain
Note: Replace_Create mode is not
valid if primary key of F1TEAM
is referenced by a foreign key
in another table
Note: Only valid for IXF format
Moving Data in DB2 UDB for LUW
Import – Replace_Create Mode (2)
import from f1team.del
of ixf
replace_create into
f1team
2,”McLaren”,2,”Woking”,”Britain”
3,”Williams”,3,”Didcot”,”Britain”
f1team.del
F1TEAM
TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY
2 McLaren 2 Woking Britain
3 Williams 3 Didcot Britain
Note: Replace_Create mode is not
valid if primary key of F1TEAM
is referenced by a foreign key
in another table
Note: Only valid for IXF format
Moving Data in DB2 UDB for LUW
Import – Create Mode
import from f1team.del
of ixf
create into
f1team
2,”McLaren”,2,”Woking”,”Britain”
3,”Williams”,3,”Didcot”,”Britain”
f1team.del
F1TEAM
TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY
2 McLaren 2 Woking Britain
3 Williams 3 Didcot Britain
Note: Only valid for IXF format
Moving Data in DB2 UDB for LUW
Importing into a specific tablespace
 A target tablespace can be specified
using the CREATE option
IMPORT FROM tabddl.ixf OF IXF
CREATE INTO newtab
IN mytbsp
INDEX IN myindextbsp
LONG IN mylongtbsp
 All three tablespaces must be DMS if
INDEX or LONG options are used
Moving Data in DB2 UDB for LUW
Import – Usage Considerations
 Commit frequency can be tuned
IMPORT … COMMITCOUNT 100 …
 A failed import can be restarted
IMPORT … RESTARTCOUNT 200 …
 Large objects can be imported into a table
from lob files created by the Export utility
IMPORT FROM mydata.del of DEL
LOBS FROM E:datalobs1, E:datalobs2
MODIFIED BY LOBSINFILE … INTO mydata …
Moving Data in DB2 UDB for LUW
Import – Method L
 Used to import data from ASC files
 Start and end position of each column need
to be specified
F1TEAM
TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY
Char(3) Varchar(20) Char(3) Varchar(20) Varchar(20)
import from f1team.asc
of asc
method L (1 3, 4 23, 24 26, 27 46, 47 66)
insert into f1team
Moving Data in DB2 UDB for LUW
Import – Method P
 Column numbers used to select columns
from data file
 File type should be DEL or IXF
import from f1team.del
of del
method P (1,2,5)
insert into f1team
1,”Ferrari”,1,”Maranello”,”Italy”
2,”McLaren”,2,”Woking”,”Britain”
3,”Williams”,3,”Didcot”,”Britain”
f1team.del F1TEAM
TEAM_ID NAME COUNTRY
Moving Data in DB2 UDB for LUW
Creating an identical table with
Export and Import
 Export zero rows from the existing table
into an IXF file
EXPORT TO tabddl.ixf OF IXF
SELECT *
FROM tab
WHERE 1 < 0;
IMPORT FROM tabddl.ixf OF IXF
REPLACE_CREATE INTO newtab;
 Import the IXF file into a new table with
the REPLACE_CREATE option
Moving Data in DB2 UDB for LUW
Load
 Bypasses SQL processing to improve
performance
 Pre-formats data pages and populates the
table one extent at a time
 Does not fire triggers, invoke constraints or
check referential integrity
 Utility can collect statistics and take a
backup during LOAD processing
 Requires SYSADM or DBADM or LOAD
authorities
Moving Data in DB2 UDB for LUW
Load – minimum requirements
2. Path and file name
3. File type (IXF, DEL,
ASC, or CURSOR)
1. Load type
of del
insert into
f1team.del
4. Name of table where
data is to be loaded
f1team
load from
Moving Data in DB2 UDB for LUW
Load – usage considerations
 Inserting new data
LOAD FROM mydata.ixf OF IXF …
INSERT INTO mytable …
 Replacing data
LOAD FROM mydata.ixf OF IXF …
REPLACE INTO mytable …
 Terminating a Load operation
LOAD FROM mydata.ixf OF IXF …
TERMINATE INTO mytable …
Moving Data in DB2 UDB for LUW
Load – usage considerations
 Generating consistency points
LOAD FROM … SAVECOUNT 200 …
 Restarting a failed Load
LOAD FROM mydata.ixf OF IXF …
RESTART INTO mytable …
 Forcing Load to fail on warning
LOAD FROM … WARNINGCOUNT 1 …
 Specifying a file for rejected rows (only valid
for DEL and ASC file types)
LOAD FROM … OF DEL …
MODIFIED BY DUMPFILE=C:mydump.del
Moving Data in DB2 UDB for LUW
LOAD from CURSOR
 You can now LOAD from a SELECT
• New file type – CURSOR
• Supports arbitrary SELECT statements – single tables,
joins, nicknames, etc.
• CLP: Need to declare cursor, and cursor name provided
as the input file name to LOAD
 DECLARE mycursor CURSOR FOR select *
from t1
 LOAD FROM mycursor OF CURSOR
INSERT INTO t2 ALLOW READ ACCESS
Moving Data in DB2 UDB for LUW
LOAD from CURSOR – Example
 Table t2 in database DB2
 DECLARE mycursor CURSOR database DB2
user user1 using pwd1 FOR select * from t2
 LOAD FROM mycursor OF CURSOR
INSERT INTO t1 ALLOW READ ACCESS
The Information Management Specialists
Unit 7
Archival Logging
■ Enable with LOGARCHMETH1 database configuration parameter
■ History of log files is maintained, in order to allow roll forward recovery
and online backup
■ Logs can be optionally archived to an archive location when no longer
active to avoid exhaustion of log directory
Active log directory and archive log directory:
• ACTIVE – contains information for non-committed transactions. When all preallocated log files are filled, more log files are allocated and used.
• ONLINE ARCHIVE – contains information for committed transactions; stored in the ACTIVE log subdirectory.
• Filled log files may be moved to a different storage location (the archive log directory).
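A sketch of switching a database from circular to archival logging (the archive path is illustrative; the database then goes into backup pending state, so a full backup is required before it can be used):
db2 update db cfg for mydb using LOGARCHMETH1 DISK:/db2/archive
# Take the required full backup before the database can be used again
db2 backup database mydb to /home/db2inst1/backups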
Logging Configuration Parameters
■ LOGPRIMARY
– Controls the number of primary log files that are allowed in the active log directory.
■ LOGSECOND
– Controls the number of secondary log files that are allowed in the active log
directory.
■ LOGBUFSZ (Log Buffer Size)
– Amount of memory to use as a buffer for log records before writing these records to
disk
– Log records are written to disk when a commit is issued or log buffer is full or
internal database request (every 1 second)
■ LOGFILSIZ (Log File Size)
– Size of each configured log file in 4K pages
■ LOGPATH and NEWLOGPATH
– LOGPATH is the default active log directory
– Changed to a user defined location using NEWLOGPATH.
■ FAILARCHPATH (Failover log archive path)
– Specifies a third target to archive log files if the primary and secondary archival
paths fail
Infinite Logging
■ Infinite logging provides infinite active log space
–Enabled by setting LOGSECOND to -1
■ Secondary log files are allocated until the unit of work commits or
storage is exhausted
■ Archived logs can hinder performance for rollback and crash
recovery
■ Database must be configured to use archival logging
■ Up to 256 log files (primary + secondary)
■ Control parameters
–NUM_LOG_SPAN – number of log files an active transaction can
span
–MAX_LOG – Percentage of active primary log file space that a
single transaction could consume
Database Backup
■ Copy of a database or table space
–User data
–DB2 catalogs
–All control files, e.g. buffer pool files,
table space file, database configuration
file
■ Backup modes:
–Offline Backup
• Does not allow other applications or processes to access
the database
• Only option when using circular logging
–Online Backup
• Allows other applications or processes to access the
database
• Available to users during backup
• Can backup to disk, tape, TSM and other storage vendors
Database Backup – Syntax
db2 backup database <db_name> <online> to <dest_path>
Online backup example
db2 backup database mydb online to /home/db2inst1/backups
Offline backup example
db2 backup database mydb to /home/db2inst1/backups
Table space Backup
■ Enables user to backup a subset of database
■ Multiple table spaces can be specified
■ Database must be using archival logging
■ Table space backup can run in both online and offline backup
■ Table space can be restored from either a database backup or
table space backup of the given table space
■ Use the keyword TABLESPACE to specify table spaces
db2 backup database mydb1 TABLESPACE (TBSP1) ONLINE to
/home/db2inst1/backup
DB2 Administration for LUW – Part 2
Backup of Tablespaces – Usage
Considerations
 Backup of tablespaces should be done
together if they contain:
• Tables which have data, indexes, and LOBs
split across DMS tablespaces
• Tables related by referential constraints
• Summary and underlying table in different
tablespaces
• Tables related by triggers
Incremental Backups
■ Incremental (a.k.a. cumulative) - Backup of all database data that has changed since the
most recent, successful, full backup operation
■ Incremental Delta - Backup of all database data that has changed since the last
successful backup (full, incremental, or delta) operation.
■ Need to have TRACKMOD database configuration parameter ON
■ Supports both database and table space backups.
■ Suitable for large databases, considerable savings by only backing up incremental
changes.
Diagram: over a week (Sunday to Sunday), full backups are taken on Sundays; delta backups capture each day's changes since the previous backup, while cumulative (incremental) backups capture all changes since the last full backup.
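A sketch of a weekly cycle like the one in the diagram (TRACKMOD must be ON and a full backup taken before incremental backups are possible; paths are illustrative):
db2 update db cfg for mydb using TRACKMOD ON
# Sunday: full online backup
db2 backup database mydb online to /db2/backups
# Weekdays, delta: changes since the last backup of any type
db2 backup database mydb online incremental delta to /db2/backups
# Alternatively, cumulative incremental: changes since the last full backup
db2 backup database mydb online incremental to /db2/backups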
Database Backup – Compression
■ DB2 backups can now be automatically compressed
– Significantly reduce backup storage costs
■ Performance characteristics
– CPU costs typically increased (due to compression computation)
– Media I/O time typically decreased (due to decreased image size)
– Overall backup/restore performance can increase or decrease; depending
on whether CPU or media I/O is a bottleneck
Example:
db2 backup database DS2 to /home/db2inst1/backups compress
DB2 Administration for LUW – Part 2
Backup – enhancements – V8.2
 Logs in backup images
• Logs can now be included in the online
backup
• Supports all types of online backups such as
database, table space, incremental, and
compressed
• All logs that are needed to restore the backup
and roll forward to the time corresponding to
the end of the backup are placed in the
backup image
Automatic Database Backup
■ Simplifies database backup management tasks for the DBA
by always ensuring that a recent full backup of the database
is performed as needed
■ To configure automatic backup
–Graphical user interface tools
• Configure Automatic Maintenance
wizard
–Command line interface
• auto_db_backup
• auto_maint
–Stored procedure
• AUTOMAINT_SET_POLICY system stored procedure
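From the command line, enabling automatic backup could look like this sketch (the parent auto_maint switch must also be ON):
db2 update db cfg for mydb using AUTO_MAINT ON AUTO_DB_BACKUP ON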
Optimizing Backup Performance
■ DB2 automatically configures these parameters for performance
– Parallelism
• Number of table spaces backed up in parallel
– num_buffers
• Number of buffers used
• Use at least twice as many buffers as backup targets (or
sessions) to ensure that the backup target devices do not have to
wait for data.
– Buffer
• Backup buffer size
■ Allocate more memory to backup utility by increasing utility heap size
(UTIL_HEAP_SZ) configuration parameter.
■ Backup subset of data where possible:
– Table space backups
– Incremental backups
■ Use multiple target devices
DB2CKBKP – Check Backup
■ This utility can be used to test the integrity of a backup image
– determine whether the image can be restored.
– display the meta-data stored in the backup header.
$ db2ckbkp -h SAMPLE.0.moba.NODE0000.CATN0000.20041008013428.001
=====================
MEDIA HEADER REACHED:
=====================
Server Database Name -- SAMPLE
Server Database Alias -- SAMPLE
Client Database Alias -- SAMPLE
Timestamp -- 20041008013428
Database Partition Number -- 0
Instance -- moba
Sequence Number -- 1
Release ID -- A00
Database Seed -- 92DBF20F
DB Comment's Codepage (Volume)-- 0
DB Comment (Volume) --
DB Comment's Codepage (System)-- 0
DB Comment (System) --
Authentication Value -- 255
Backup Mode -- 1
Includes Logs -- 1
Compression -- 0
... (output truncated) ...
This backup is an online
backup with INCLUDE LOGS
option
0: Not included in the log file
1: contains log file
Backup is not
compressed
0: not compressed
1: compressed
Database Recovery
■ Recovery is the rebuilding of a database or
table space after a problem such as media
or storage failure, power interruption, or
application failure.
Types of Recovery
–Crash or restart recovery
• Protects the database from being left inconsistent (power
failure)
–Version recovery
• Restores a snapshot of the database
–Roll forward recovery
• Extends version recovery by using full database and table
space backup in conjunction with the database log files
■ Crash recovery and version recovery are enabled in DB2 by default
DB2 Restore Utility
■ Restore utility is the complement of backup utility
■ Restores database or table space from a previously taken
backup
■ TAKEN AT - Specify the time stamp of the database backup
image. Backup image timestamp is displayed after
successful completion of a backup
■ Without prompting – Overrides any warnings.
Example:
SAMPLE.0.DB2INST.NODE0000.CATN0000.20080718131210.001
RESTORE DATABASE dbalias FROM <db_path> TAKEN AT 20080718131210
Table space Restore Operation
■ Restored table space is in Roll Forward Pending state and can be either
rolled forward to End of Logs or a Point In Time.
– In case of Point in Time roll forward, table space must be rolled forward to
at least the minimum Point in Time
■ Minimum recovery time can be checked using
– db2 list tablespaces show detail
■ User table space must be in line with the catalog table space
– e.g. if the catalog indicates table T1 exists in table space TSP1, table T1 must
exist in the TSP1 table space, otherwise the database becomes inconsistent
■ Every time there is a DDL change, the minimum recovery time for the table
space is revised to reflect the last DDL change.
■ Recommended to take a table space backup after a table space has been
restored to a point in time.
■ Transactions that came after the point in time are lost, therefore take a
table space backup as new point of reference for future recoveries.
Incremental Restore
■ Restore a database with incremental backup images
■ AUTOMATIC (recommended) - All required backup images will be applied
automatically by restore utility
■ MANUAL – User applies the required backups manually
– db2ckrst can provide the sequence for applying backups
■ ABORT - aborts an in-progress manual cumulative restore
■ RESTORE DATABASE sample INCREMENTAL AUTOMATIC FROM /db2backup/dir1;
■ ROLLFORWARD DATABASE sample TO END OF LOGS AND COMPLETE;
DB2 Administration for LUW – Part 2
Restore Example 1
 Basic restore requires path and time
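The command itself is not shown on this slide; based on Example 2 below, a basic restore needs only the backup location and timestamp, e.g.:
RESTORE DATABASE FIDB
FROM 'C:UBackupsF1DB'
TAKEN AT 20020726152238;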
DB2 Administration for LUW – Part 2
Restore Example 2
RESTORE DATABASE FIDB
FROM ‘C:UBackupsF1DB’
TAKEN AT 20020726152238
REPLACE EXISTING;
DB2 Administration for LUW – Part 2
Restore Example 3
RESTORE DATABASE FIDB
FROM ‘C:UBackupsF1DB’
TAKEN AT 20020726152238
REPLACE EXISTING
WITHOUT ROLLING FORWARD;
Note: The WITHOUT ROLLING FORWARD
option cannot be specified if the restore is
from an online backup image or from a
tablespace-level backup
DB2 Administration for LUW – Part 2
Restore Example 4
RESTORE DATABASE FIDB
TABLESPACE (userspace1) ONLINE
FROM 'C:\UBackups\F1DB'
TAKEN AT 20020726152238
REPLACE EXISTING;
Note: ONLINE option can only be used for
tablespace or history file restores
DB2 Administration for LUW – Part 2
Restore Example 5
RESTORE DATABASE FIDB
HISTORY FILE ONLINE
FROM 'C:\UBackups\F1DB'
TAKEN AT 20020726152238
REPLACE EXISTING;
Note: ONLINE option can only be used for
tablespace or history file restores
DB2 Administration for LUW – Part 2
Restore Example 6
 How would you restore the database if there was a
crash after the backup taken on Thursday in each
case?
DB2 Administration for LUW – Part 2
Redirected Restore
 Restore fails if the containers recorded in the
backup image are not available on the target system
 May want to restore on new system which
may not have necessary containers defined
 Redirected Restore allows adding,
changing, or removing of tablespace
containers during a restore
 It is better to take a backup of the tablespace
immediately after new containers are added
to it
DB2 Administration for LUW – Part 2
Redirected Restore Example
RESTORE DATABASE FIDB
FROM 'C:\UBackups\F1DB'
TAKEN AT 20020726152238
INTO NEWDB
REDIRECT
WITHOUT ROLLING FORWARD;
DB2 Administration for LUW – Part 2
Redirected Restore – defining new
containers
 Since containers cannot be shared between
databases, the RESTORE command will return a
SQL1277N error stating that “storage must be
defined” for the new containers
 Use LIST TABLESPACES to check the table
space state (storage must be defined)
 Define storage for containers using the SET
TABLESPACE CONTAINERS command
 Complete the redirected restore using RESTORE
DATABASE MYDB CONTINUE
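Put together, a redirected restore is a three-step sequence; the table space IDs, container paths and sizes below are illustrative, and the SET TABLESPACE CONTAINERS commands must be issued from the same CLP session that started the redirected restore:
RESTORE DATABASE FIDB FROM 'C:\UBackups\F1DB' TAKEN AT 20020726152238 INTO NEWDB REDIRECT;
-- For each table space reported in "storage must be defined" state:
SET TABLESPACE CONTAINERS FOR 2 USING (PATH 'D:\newdb\ts2');
SET TABLESPACE CONTAINERS FOR 3 USING (FILE 'D:\newdb\ts3_cont0' 25600);
-- Complete the restore once all containers are defined:
RESTORE DATABASE FIDB CONTINUE;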
DB2 Administration for LUW – Part 2
Restore Enhancements – Automatic
Storage
 It is now possible to choose the location of the
database path during a restore
 It is also possible to redefine the storage paths
associated with a database
• RESTORE DATABASE TEST1
• RESTORE DATABASE TEST2 TO X:
• RESTORE DATABASE TEST3 DBPATH ON D:
• RESTORE DATABASE TEST3 ON /path1, /path2,
/path3
• RESTORE DATABASE TEST4 ON E:\newpath1,
F:\newpath2 DBPATH ON D:
DB2 Administration for LUW – Part 2
Roll Forward Example 1
ROLLFORWARD DATABASE FIDB
TO END OF LOGS
OVERFLOW LOG PATH (C:\LOGS);
DB2 Administration for LUW – Part 2
Roll Forward Example 2
ROLLFORWARD DATABASE FIDB
TO 2002-07-26-15.22.38.000000
AND STOP;
DB2 Administration for LUW – Part 2
Roll Forward Example 3
ROLLFORWARD DATABASE FIDB
TO END OF LOGS AND COMPLETE
TABLESPACE (USERSPACE1) ONLINE;
DB2 Administration for LUW – Part 2
Roll Forward Query Status
 Roll forward status
• Working
• Pending
• In progress
• No roll forward pending
 Next log file to be read
 Log files processed
 Last committed transaction
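These fields are reported by the QUERY STATUS option of the ROLLFORWARD command, for example:
ROLLFORWARD DATABASE FIDB QUERY STATUS USING LOCAL TIME;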
DB2 Administration for LUW – Part 2
HADR – Scope
 Takes place at the database level
DB2 Administration for LUW – Part 2
(Diagram: clients hold active connections to the Active server; HADR ships
log pages from the Active server to the Standby; Client Reroute redirects
clients to the alternate server.)
db2 update alternate server for
database mydb using hostname
sbhost port sbport
Hostname sbhost and port sbport
automatically stored on client
HADR – Overview
db2 TAKEOVER HADR ON
DATABASE mydb
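A minimal sketch of the HADR database configuration behind these commands; host names, service ports, instance name and sync mode are illustrative, and the same parameters are set on the standby with the local and remote values swapped:
db2 update db cfg for mydb using HADR_LOCAL_HOST primhost HADR_LOCAL_SVC 55001 HADR_REMOTE_HOST sbhost HADR_REMOTE_SVC 55002 HADR_REMOTE_INST db2inst1 HADR_SYNCMODE NEARSYNC
On the standby server (started first): db2 start hadr on database mydb as standby
On the primary server: db2 start hadr on database mydb as primary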
The Information Management Specialists
Unit 9
3 © 2012 IBM Corporation
Information Management Technology Ecosystem
Concurrency
Concurrency is the sharing of resources by multiple
interactive users or application programs at the same time
– Provides increased application throughput
– Increased responsiveness across the system
– Better resource utilization within the system
Need to be able to control the degree of concurrency:
–With proper amount of data stability
–Without loss of performance
Having multiple interactive users can lead to:
–Lost Update
–Uncommitted Read
–Non-repeatable Read
–Phantom Read
4 © 2012 IBM Corporation
Information Management Technology Ecosystem
Terminology in Concurrent Applications
Transaction
–Sequence of one or more SQL operations, grouped together
as a single unit
–Also known as a unit of work
Committed Data
–Using the COMMIT statement commits any changes made
during the transaction to the database
Uncommitted Data
–Changes during the transaction before the COMMIT
statement is executed
5 © 2012 IBM Corporation
Information Management Technology Ecosystem
Concurrency Issues
Lost Update
–Occurs when two transactions read and then attempt to
update the same data; the second update overwrites the
first, so the first update is lost
1) Two applications, A and B, both read the same row and
calculate new values for one of the columns based on the
data that these applications read
2) A updates the row
3) Then B also updates the row
4) A's update lost
6 © 2012 IBM Corporation
Information Management Technology Ecosystem
Concurrency Issues
Uncommitted Read
–Occurs when uncommitted data is read during a transaction
–Also known as a Dirty Read
1) Application A updates a value
2) Application B reads that value before it is committed
3) A backs out of that update
4) Calculations performed by B are based on the uncommitted
data
7 © 2012 IBM Corporation
Information Management Technology Ecosystem
Concurrency Issues
Non-repeatable Read
–Occurs when a transaction reads the same row of data twice
and returns different data values with each read
1) Application A reads a row before processing other
requests
2) Application B modifies or deletes the row and commits the
change
3) A attempts to read the original row again
4) A sees the modified row or discovers that the original
row has been deleted
8 © 2012 IBM Corporation
Information Management Technology Ecosystem
Concurrency Issues
Phantom Read
–Occurs when a search based on some criterion returns
additional rows after consecutive searches during a
transaction
1) Application A executes a query that reads a set of rows
based on some search criterion
2) Application B inserts new data that would satisfy
application A's query
3) Application A executes its query again, within the same
unit of work, and some additional phantom values are
returned
9 © 2012 IBM Corporation
Information Management Technology Ecosystem
Concurrency Control
Isolation Levels
–determine how data is locked or isolated from other
concurrently executing processes while the data is being
accessed
–are in effect while the transaction is in progress
There are four levels of isolation in DB2:
–Repeatable read (RR)
–Read stability (RS)
–Currently Committed (CC)
• Cursor stability (CS), default prior to DB2 9.7
–Uncommitted read (UR)
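The isolation level can be chosen per session through the CURRENT ISOLATION special register, or overridden per statement with an isolation clause; a small sketch (the employee query is the one used on the following slides):
SET CURRENT ISOLATION = RS;
SELECT * FROM employee WHERE id > 4 WITH UR;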
10 © 2012 IBM Corporation
Information Management Technology Ecosystem
Locking in DB2
Isolation levels are enforced by locks
– Locks limit or prevent data access by concurrent users or applications
– Before reading or writing data, transactions need to acquire locks on the data
Locking Attributes
– objects which can be explicitly locked are databases, tables and table
spaces
– objects which can be implicitly locked are rows, index keys, and tables
– implicit locks are acquired by DB2 according to isolation level and
processing situations
– object being locked represents granularity of lock
– the length of time a lock is held is called the lock duration and is affected by the
isolation level
Database Configuration Parameters
– LOCKLIST: amount of memory allocated to the lock list
– MAXLOCKS: percentage of the lock list held by an application that must be
filled before the database manager performs lock escalation
– Both can be automatically managed by DB2's Self-Tuning Memory
Manager.
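Both parameters can be handed over to the Self-Tuning Memory Manager, and current lock holders and waiters can be inspected with db2pd; the database name is illustrative:
UPDATE DB CFG FOR sample USING LOCKLIST AUTOMATIC MAXLOCKS AUTOMATIC;
db2pd -db sample -locks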
© 2012 IBM Corporation
Information Management Technology Ecosystem
Types of Locks
DB2 for LUW
– Locks are acquired for all operations to control how other applications
access the same resource.
Factors that affect locking:
– The type of processing that the application performs
– The data access method
– The values of various configuration parameters
Examples of Types of Locks in DB2
– Share (S)
• Owner and concurrent transactions are limited to read-only
– Update (U)
• Owner can read/write, but concurrent transactions are limited to read-
only operations
– Exclusive (X)
• Owner can read/write. Concurrent transactions cannot read/write. UR
application can still read the data.
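Table-level locks can also be requested explicitly with the LOCK TABLE statement; a sketch using a hypothetical employee table:
LOCK TABLE employee IN SHARE MODE;
-- concurrent transactions may still read the table
LOCK TABLE employee IN EXCLUSIVE MODE;
-- concurrent transactions can neither read nor write until commit (UR readers excepted)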
12 © 2012 IBM Corporation
Information Management Technology Ecosystem
Deadlock
Deadlock Detector
–It monitors information about agents that are waiting on locks to
discover deadlock cycles
–Randomly selects one of the transactions involved to roll back and
terminate
• An SQL error code is sent to the chosen transaction
• Every lock it had acquired is released
–deadlock detector awakens at a frequency controlled by dlchktime,
a database configuration parameter
–Set the value of the diaglevel dbm configuration parameter to 4, for
more logging on deadlocks
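A sketch of the two parameters mentioned above; 10000 milliseconds is the default deadlock check interval, and diaglevel 4 is the most verbose setting:
UPDATE DB CFG FOR sample USING DLCHKTIME 10000;
UPDATE DBM CFG USING DIAGLEVEL 4;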
© 2012 IBM Corporation
Information Management Technology Ecosystem
Isolation Level – Repeatable Read
Highest level of isolation
– No dirty reads, non-repeatable reads or phantom reads
Locks the entire table or view being scanned for a query
– Provides minimum concurrency
When to use Repeatable Read:
– Changes to the result set are unacceptable
– Data stability is more important than performance
SELECT *
FROM employee
WHERE id > 4
Employee table
ID  LASTNAME    MANAGER  DEPT
1   Smith       Y        A01
2   Martinez    N        A01
3   Chen        Y        E05
4   Rousseau    N        B15
5   Kumar       N        A10
6   Ivanov      N        B15
7   Tanaka      Y        B15
8   Assaf       N        C70
9   Schneider   Y        C70
10  Rosenberg   N        E09
© 2012 IBM Corporation
Information Management Technology Ecosystem
Isolation Level – Read Stability
Similar to Repeatable Read but not as strict
– No dirty reads or non-repeatable reads
– Phantom reads can occur
Locks only the retrieved or modified rows in a table or view
When to use Read Stability:
– Application needs to operate in a concurrent environment
– Qualifying rows must remain stable for the duration of a transaction
– The application does not require the same result set to be returned if the same
query is issued more than once during a unit of work
SELECT *
FROM employee
WHERE id > 4
Employee table (ID, LASTNAME, MANAGER, DEPT; same data as shown above)
© 2012 IBM Corporation
Information Management Technology Ecosystem
Isolation Level – Cursor Stability
Default isolation level
– No dirty reads
– Non-repeatable reads and phantom reads can occur
Locks only the row currently referenced by the cursor
When to use Cursor Stability:
– Want maximum concurrency while seeing only committed data
SELECT *
FROM employee
WHERE id > 4
Employee table (ID, LASTNAME, MANAGER, DEPT; same data as shown above)
16 © 2012 IBM Corporation
Information Management Technology Ecosystem
Isolation Level – Uncommitted Read
■ Lowest level of isolation
– Dirty reads, non-repeatable reads and phantom reads can occur
■ Locks only rows being modified in a transaction involving DROP or ALTER
TABLE
– Provides maximum concurrency
■ When to use Uncommitted Read:
– Querying read-only tables
– Using only SELECT statements
– Retrieving uncommitted data is acceptable
SELECT *
FROM employee
WHERE id > 4
Employee table (ID, LASTNAME, MANAGER, DEPT; same data as shown above)
17 © 2012 IBM Corporation
Information Management Technology Ecosystem
DB2 Isolation Levels
Application Type          High data stability required    High data stability NOT required
Read-write transactions   Read Stability (RS)             Cursor Stability (CS)
Read-only transactions    Repeatable Read (RR) or
                          Read Stability (RS)             Uncommitted Read (UR)

Isolation Level          Dirty Read   Non-repeatable Read   Phantom Read
Repeatable Read (RR)     -            -                     -
Read Stability (RS)      -            -                     Possible
Cursor Stability (CS)    -            Possible              Possible
Uncommitted Read (UR)    Possible     Possible              Possible
18 © 2012 IBM Corporation
Information Management Technology Ecosystem
Isolation Level – Currently Committed
Currently Committed is a variation on Cursor Stability
–Avoids timeouts and deadlocks
–Log based:
• No management overhead
Cursor Stability:
Situation              Result
Reader blocks Reader   No
Reader blocks Writer   Maybe
Writer blocks Reader   Yes
Writer blocks Writer   Yes

Currently Committed:
Situation              Result
Reader blocks Reader   No
Reader blocks Writer   No
Writer blocks Reader   No
Writer blocks Writer   Yes
18 © 2010 IBM Corporation
Information Management
Transaction A Transaction B
update T1 set col1 = ? where col2
= 2
update T2 set col1 = ? where col2 = ?
select * from T2 where col2 >= ?
select * from T1 where col5 = ? and
col2 = ?
DEADLOCK!!
Waiting because it would be
reading uncommitted data
Waiting because it would be
reading uncommitted data
Example – Cursor Stability Semantics
19 © 2010 IBM Corporation
Information Management
No deadlocks, no timeouts in this scenario!
Example – Currently Committed Semantics
Transaction A Transaction B
update T1 set col1 = ? where col2
= 2
update T2 set col1 = ? where col2 = ?
select * from T2 where col2 >= ?
select * from T1 where col5 = ? and
col2 = ?
commit
commit
No locking
Reads last committed version
of the data
No locking
Reads last committed version
of the data
19 © 2012 IBM Corporation
Information Management Technology Ecosystem
Up to DB2 9.5
–Cursor Stability is the default isolation level
In DB2 10
–Currently Committed is the default for NEW databases
–Currently Committed is disabled for upgraded databases, i.e.,
Cursor Stability semantics are used instead
Applications that depend on the old behavior (writers blocking
readers) will need to update their logic or disable the Currently
Committed semantics
Isolation Level – Currently Committed
Available
since
DB2 9.7
© 2012 IBM Corporation
Information Management Technology Ecosystem
Currently Committed – How to use it?
cur_commit – database configuration parameter
– ON: default for new databases – CC semantics in place
– DISABLED: default value for existing databases prior to DB2
9.7 – old CS semantics in place
PRECOMPILE / BIND
– ConcurrentAccessResolution: Specifies the concurrent access
resolution to use for statements in the package.
• USE CURRENTLY COMMITTED
• WAIT FOR OUTCOME
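A sketch of both mechanisms; the database and package names are illustrative:
UPDATE DB CFG FOR sample USING CUR_COMMIT ON;
-- Currently Committed semantics (default for databases created in DB2 9.7 or later)
UPDATE DB CFG FOR sample USING CUR_COMMIT DISABLED;
-- classic Cursor Stability semantics
BIND myapp.bnd CONCURRENTACCESSRESOLUTION USE CURRENTLY COMMITTED;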
The Information Management Specialists
DB2 References
• Getting to know the CLP
 http://www.ibm.com/developerworks/data/library/
techarticle/dm-0503melnyk/
• Data Studio – V3.1.1
 www.ibm.com/developerworks/downloads/im/data/
The Information Management Specialists
DB2 References
• Best Practices
 www.ibm.com/developerworks/data/bestpractices/
• DB2 Certification
 www.ibm.com/certify
 http://www.ibm.com/developerworks/views/data/libraryview.jsp?sort_order=1&sort_by=Title&series_title_by=db2+10.1+fundamentals+certification+exam+610+prep
 http://www.channeldb2.com/video/db2-tech-talk-part-one-certification-prep-for-db2-10-
 http://www.channeldb2.com/video/db2-tech-talk-part-two-certification-prep-for-db2-10-for-linux-un
The Information Management Specialists
Redirected Restore – Generate Script
• db2 restore db test from /home/backups taken at
20121122090733 redirect generate script
red_restore.sql
• Modify red_restore.sql. You can modify:
 Restore options
 Automatic storage paths
 Container layout and paths
• Run the modified redirected restore script. For
example: db2 –tvf red_restore.sql
© 2010 IBM Corporation
Information Management
Example / Comments

REORG TABLE purchaseOrders ALLOW READ ACCESS ON DATA PARTITION Apr2010;
-- Reorganize a single partition (Apr2010) while allowing read access to it;
-- all remaining partitions remain available for read/write.

REORG TABLE purchaseOrders ALLOW NO ACCESS ON DATA PARTITION Mar2010;
REORG TABLE purchaseOrders ALLOW NO ACCESS ON DATA PARTITION Apr2010;
-- Reorganize two partitions concurrently; no access is allowed to either partition;
-- all remaining partitions remain available for read/write.

REORG INDEXES ALL FOR TABLE purchaseOrders ALLOW WRITE ACCESS ON DATA PARTITION Apr2010;
-- Reorganize all local indexes for the Apr2010 data partition.
Partition-level REORG with no global indexes

DBA Basics guide

  • 1. The Information Management Specialists DB2 10.1 Basic Database Administration Workshop for Linux, Unix and Windows – CL2X3GB Iqbal Goralwalla
  • 2. The Information Management Specialists Iqbal Goralwalla (iqbal@triton.co.uk) – About Me • IBM Gold Consultant • IBM Champion for Information Management • Head of DB2 on Midrange (LUW) at Triton Consulting • Experience of DB2 LUW since DB2 Common Server (V2) • IBM Certified Advanced Database Administrator • Worked at the IBM Toronto Software Lab developing DB2  Worked on V5, V6, and V8  Owner of 2 IBM patents on V8
  • 3. The Information Management Specialists Unit 1
  • 4. The Information Management Specialists DB2 is DB2 is DB2
  • 5. The Information Management Specialists DB2 TIMELINE DB2 9.7 2009 DB2 10.1 2012 PureScale 2009 DB2 6, 7, 8, 9.1 DB2 10.5 2013
  • 6. The Information Management Specialists DB2 Editions
  • 7. © 2012 IBM Corporation Information Management Technology Ecosystem 4 DB2 Database Product Editions • Storage Optimization • Continuous Data Ingest • Q-replication • Federation • Optim & InfoSphere tools DB2 AESESmall & Medium Businesses Enterprise Businesses Database Enterprise Developer Edition Allows developers to design, build, and prototype applications. The edition is a product bundle that includes many DB2 features.
  • 9. © 2012 IBM Corporation Information Management Technology Ecosystem 7 DB2 Key Features and Functionality by Edition YesYesYesYesYesYesYesTime Travel Query YesYesYesNoNoNoNoWorkload management YesYesYesYesYesNoNoTivoli® System Automation YesYesYesNoNoNoNoTable partitioning YesYesYesYesYesNoYesSQL Replication between DB2 LUW and Informix YesYesYesYesYesYesYesReplication tools YesYesYesNoNoNoYesQuery parallelism YesYesNoNoNoNoNoQ Replication with two other DB2 LUW servers YesYesYesYesYesYesYespureXML® storage YesYesYesYesYesYesYesOracle Compatibility YesYesYesYesYesNoNoOnline reorganization YesYesYesNoNoNoNoMulti-Temperature Storage YesYesYesNoNoNoYes Materialized query tables (MQTs) Multidimensional clustering (MDC) tables YesYesYesYesYesYesYesLBAC / RCAC YesYesNoNoNoNoNoIBM InfoSphere Optim Query Workload Tuner YesYes (10 licenses)NoNoNoNoNoIBM InfoSphere Data Architect YesYesNoNoNoNoNoIBM InfoSphere Optim Performance Manager Extended YesYesNoNoNoNoNoBM InfoSphere® Optim™ Configuration Manager YesYesYesYesYesYesYesIBM Data Studio YesYesYesYesYesNoNoHigh availability disaster recovery (HADR) YesYesNoNoNoNoNoFederation with DB2 LUW and Oracle YesYesYesYesYesYesYesFederation with DB2 LUW and Informix Data Server Yes DB2 pureScale Feature DB2 pureScale Feature Up to 16 cores and 64GB of total cluster size NoNoNoDB2 pureScale functionality YesYesNoNoNoNoNoContinuous Data Ingest YesYesYesYesYesYesNoCompression: backup YesYes DB2 Storage Optimization Feature NoNoNoNoAdaptive Compression and classic row compression YesYesYesYesYesNoNoAdvanced Copy Services Enterprise Developer Advanced Enterprise Server Enterprise ServerWorkgroup Server Express (incl. FTL) DB2 Express-C DB2 Personal Functionality
  • 10. © 2012 IBM Corporation Information Management Technology Ecosystem 10 Licensing – Metrics and Summary Windows, Linux, AIX, Solaris, HP- UX Windows, Linux, AIX, Solaris, HP- UX Windows, Linux, Solaris (x64) Windows, Linux, Solaris (x64) Windows, LinuxPlatforms supported UnlimitedDB2 throttles itself to use a maximum of 64GB DB2 throttles itself to use a maximum of 8 GB DB2 throttles itself to use maximum of 4 GB N/AMemory limit UnlimitedDB2 throttles itself to use maximum of 16 cores and 4 sockets DB2 throttles itself to use maximum of 4 cores DB2 throttles itself to use maximum of 2 cores N/AProcessor limit Authorized Users (minimum of 25 per 100 PVUs) or PVUs Eligible for Sub- capacity pricing Authorized Users (minimum of 5 per socket) or Per Socket Authorized Users (minimum of 5 per server) or Per Server Free Download (Unsupported) Per install (Assumes one user) Pricing metric Enterprise / Advanced WorkgroupExpressExpress-CPersonal
  • 11. © 2012 IBM Corporation Information Management Technology Ecosystem 11 DB2 Installation • New in DB2 10: – You can install the IBM® DB2 pureScale Feature while installing DB2 Enterprise Server Edition, DB2 Workgroup Server Edition, and DB2 Advanced Enterprise Server Edition. – You can now install IBM Data Studio from the DB2 Launchpad. Installation Windows UNIX db2setup Wizard db2_install command Response file Installation Methods Deprecated in DB2 10!
  • 12. © 2012 IBM Corporation Information Management Technology Ecosystem 12 DB2 Installation – DB2 Users (non-pureScale) On Linux or UNIX, three users and groups are created for a root installation On Windows, the following user accounts are required: – Installation user account • Used to perform installation, normally a member of the Windows Administrators group – (Optional) one or more setup user accounts • DB2 instance user • DB2 Administration Server (DAS) user Instance Owner The instance owner home directory is where the DB2 instance will be created db2inst1 Fenced User Used to run UDF's and stored procedures outside of the address space used by the DB2 database db2fenc1 DB2 Administration Server User The user ID is used to run the DB2 administration server on the system dasusr1 Administration Server has been deprecated in DB2 9.7!
  • 13. © 2012 IBM Corporation Information Management Technology Ecosystem 13 DB2 Installation – Directory Structure Windows Binaries: db2.exe, db2start.exe, db2stop.exe, db2cmd.exe, etc. Directory for databases, starts with instance owning name Partition number Database ID (directory for SAMPLE database) Default LOG directory Automatic Storage directory (for SAMPLE database) SYSCATSPACE table space (always created) TEMPSPACE1 table space (always created) USERSPACE1 table space (always created) Default DB2 install location DMS table space data file (if not using automatic storage) db2 program files node000 IBM SAMPLE T00000000 T00000001 T00000002 sqL0001 SQLLOGDIR my_dms_ts.dat my_sms_ts bin sqllib
  • 14. © 2012 IBM Corporation Information Management Technology Ecosystem 14 DB2 Installation – Directory Structure Linux / UNIX (Automatic Storage) Main DB2 software directories Linux/UNIX instance owner’s home directory DB2 Instance directory Stored Procedure Directory – External and Internal Automatic Storage directory (for SAMPLE database) Default DB2 install location Instance software directories linked to main DB2 software DB2 diagnostic logs and other logs Audit and Security information Initialization profile for Unix shell Instance configuration parameters binary file System Database directory – Catalogs are kept here Local Database directory Databases are created under this node Database ID (directory for SAMPLE database) T0000000, T0000001, T0000002 – System, Temporary, User table spaces / /home/db2inst1 /sqllib /bin /opt/ibm/db2/V9.7 /lib /java /bnd /conv /include /function /db2dump /security db2profile db2systm /sqldbdir /sqldbdir /SQL0001 /SAMPLE /NODE0000 /bin /lib /java /bnd /conv
  • 15. The Information Management Specialists Unit 2
  • 16. © 2012 IBM Corporation3 Information Management Technology Ecosystem Discontinued Tools in DB2 10 Control Center and related components are replaced by a new set of GUI tools: IBM Data Studio and IBM InfoSphere Optim tools – Note: Replication Center is still available and it is now a standalone tool IBM Data Studio is the new main tool replacing Control Center. – It provides an IDE for maintaining databases and developing database applications Optim Performance Manager is a performance analysis and tuning tool for DB2 systems Data StudioUser Interface to Spatial Extender Data StudioVisual Explain Optim Performance ManagerActivity Monitor, Event Analyzer Optim Performance ManagerQuery Patroller Center Optim Performance ManagerMemory Visualizer Data Studio / Data Studio Web Console Optim Perfomance Manager Health Center Data StudioWizards in Control Center Data StudioControl Center Data StudioCommand Editor IBM InfoSphere Optim ToolsDiscontinued Tools
  • 17. © 2012 IBM Corporation4 Information Management Technology Ecosystem What is IBM Data Studio? Comprehensive data management tool – An integrated environment for managing databases and developing database applications Replaces Control Center in DB2 10 Built on the popular Eclipse framework Support for Red Hat Linux, SUSE Linux, Windows 2 packaging options: – Full client: integrated development environment for database administration and routine and Java application development – Administration client: smaller foot-print, non-Java routine development Optional extra component – Data Studio Web console: health and availability monitoring FREE to download!
  • 18. © 2012 IBM Corporation5 Information Management Technology Ecosystem Data Lifecycle Management Develop Design Administer Monitor Tune Data Models Applications - Data Modeling - SQL and XQuery editor - Routines development - Debugger - Database Object Management - Schema Changes - Administrative Tasks - Data Access Control - Visual Explain - Statistics Advisor - Health Monitor - Job Manager
  • 19. © 2012 IBM Corporation6 Information Management Technology Ecosystem Past and Future IBM Data Studio 2.2 Optim Development Studio 2.2 Optim Database Administrator 2.2 IBM Data Studio 3.1 • Merges the functionality of all three tools into a single product • Improved usability for DB administration • Supports set of discontinued functions from Control Center Oct/2011 IBM Data Studio 3.1.1 • Supports DB2 10 specific features • RCAC • Multi-temperature storage • Adaptive compression • Time travel tables • and more! 2012 NEW NEW
  • 20. © 2012 IBM Corporation7 Information Management Technology Ecosystem Installation Install Data Studio full client or administration client: – Installation Manager wizard – Silent install using a response file – Migrating or upgrading existing installation is not supported in version 3.1 – Saved workspace information is unaffected in the installation process Install Data Studio web console: – Can be installed running the installation wizard, installing in console mode, or installing silently – Upgrading from earlier versions is supported • database connections, alert settings, and user authentication settings stored locally or in the repository database are retained during upgrade
  • 21. The Information Management Specialists Unit 3
  • 22. © 2012 IBM Corporation Information Management Technology Ecosystem 16 DB2 Environment – Instances ■ A DB2 instance is a logical database manager that serves as the access point to the databases structures ■ All instances share the same executable binary files ■ Each instance has − its own configuration (dbm cfg) − multiple Engine Dispatchable Units (EDUs) shared among the databases in that instance Upgrades an instance to the current release. It replaces “db2imgr”, discontinued in DB2 10 db2iupgrade Command Description Example db2start Start the default instance db2start db2stop Stop the current instance db2stop -f db2icrt Create an instance db2icrt –u db2fenc1 db2inst1 db2idrop Drop an instance db2idrop –f db2inst1 db2ilist List all instances db2ilist db2iupdt Update an instance after installation of a fix pack db2iupdt –u db2fenc1 db2inst1 Instance myinst Instance level profile registry dmg cfg files System db directory Node directory DCS directory Database MYDB1 bufferpool(s) logs db logs Syscatspace Tablespace1 Userspace1 MyTablespace1 TableX TableY MyTablespace2 TableZ IndexZ Database MYDB1 bufferpool(s) logs db logs Syscatspace Tablespace1 Userspace1 MyTablespace1 Table1 Table2 MyTablespace2 Table3 Index3
  • 23. © 2012 IBM Corporation Information Management Technology Ecosystem 18 DB2 Process Model Single process and multithreaded model – System controller: db2sysc (UNIX) or db2syscs.exe (Windows) – Threads: Engine Dispatchable Units (EDU) DB2 Agents (db2agent) – Special type of EDU to handle application requests – The DB2 engine keeps a pool of agents available to service requests – An application is mapped to a coordinator agent DB2 has firewall to protect databases and DBM – Application runs on different address space to prevent application errors leading to corruption of DBM files or internal buffer
  • 24. The Information Management Specialists DB2 Process Model
  • 25. The Information Management Specialists DB2 Process Model
  • 26. The Information Management Specialists Listing OS threads example $ ps -fu lpham UID PID PPID C STIME TTY TIME CMD lpham 25996 25946 0 12:19 pts/12 00:00:00 -ksh lpham 26567 26552 0 12:19 pts/12 00:00:00 ksh lpham 27688 27676 0 12:21 pts/12 00:01:46 db2sysc lpham 27716 27676 0 12:21 pts/12 00:00:00 db2acd lpham 27995 27994 0 12:24 pts/13 00:00:00 -ksh lpham 29321 26567 0 12:30 pts/12 00:00:00 ps -fu lpham $ps -lLfp 27688 (try ps -m -o THREAD -p 27688 on AIX) F S UID PID PPID LWP C NLWP PRI NI ADDR SZ WCHAN STIME TTY TIME CMD 5 S lpham 27688 27676 27688 0 21 76 0 - 264903 msgrcv 12:21 pts/12 00:00:01 db2sysc 1 S lpham 27688 27676 27694 0 21 75 0 - 264903 schedu 12:21 pts/12 00:00:00 db2sysc 1 S lpham 27688 27676 27695 0 21 76 0 - 264903 semtim 12:21 pts/12 00:00:00 db2sysc 1 S lpham 27688 27676 27696 0 21 79 0 - 264903 schedu 12:21 pts/12 00:00:00 db2sysc 1 S lpham 27688 27676 27697 0 21 76 0 - 264903 msgrcv 12:21 pts/12 00:00:00 db2sysc 1 S lpham 27688 27676 27714 0 21 76 0 - 264903 schedu 12:21 pts/12 00:00:00 db2sysc 1 S lpham 27688 27676 27827 1 21 75 0 - 264903 semtim 12:21 pts/12 00:00:06 db2sysc 1 S lpham 27688 27676 27943 27 21 77 0 - 264903 schedu 12:22 pts/12 00:01:39 db2sysc 1 S lpham 27688 27676 28150 0 21 75 0 - 264903 schedu 12:25 pts/12 00:00:00 db2sysc 1 S lpham 27688 27676 28153 0 21 76 0 - 264903 schedu 12:25 pts/12 00:00:00 db2sysc 1 S lpham 27688 27676 28156 0 21 75 0 - 264903 schedu 12:25 pts/12 00:00:00 db2sysc 1 S lpham 27688 27676 30290 0 21 76 0 - 264903 schedu 12:36 pts/12 00:00:00 db2sysc 1 S lpham 27688 27676 30291 0 21 75 0 - 264903 schedu 12:36 pts/12 00:00:00 db2sysc 1 S lpham 27688 27676 30292 0 21 76 0 - 264903 semtim 12:36 pts/12 00:00:00 db2sysc 1 S lpham 27688 27676 30293 0 21 76 0 - 264903 schedu 12:36 pts/12 00:00:00 db2sysc 1 S lpham 27688 27676 30295 0 21 77 0 - 264903 semtim 12:36 pts/12 00:00:00 db2sysc 1 S lpham 27688 27676 30296 0 21 77 0 - 264903 semtim 12:36 pts/12 00:00:00 db2sysc 1 S lpham 27688 27676 30297 0 21 77 0 - 264903 semtim 12:36 pts/12 00:00:00 db2sysc 1 S lpham 27688 27676 30298 0 21 76 0 - 264903 msgrcv 12:36 pts/12 00:00:00 db2sysc 1 S lpham 27688 27676 30299 0 21 76 0 - 264903 msgrcv 12:36 pts/12 00:00:00 db2sysc 1 S lpham 27688 27676 30300 0 21 76 0 - 264903 msgrcv 12:36 pts/12 00:00:00 db2sysc
  • 27. The Information Management Specialists Listing DB2 threads example $ db2pd -edus >>>> List of all EDUs for database partition 0 <<<< db2sysc PID: 27688 db2wdog PID: 27676 db2acd PID: 27716 EDU ID TID Kernel TID EDU Name =========================================================================================== 60 183282690400 30300 db2pfchr (TESTDB) 59 183278496096 30299 db2pfchr (TESTDB) 58 183291079008 30298 db2pfchr (TESTDB) 57 183295273312 30297 db2pclnr (TESTDB) 56 183286884704 30296 db2pclnr (TESTDB) 55 183299467616 30295 db2pclnr (TESTDB) 54 183307856224 30293 db2dlock (TESTDB) 53 183320439136 30292 db2lfr (TESTDB) 52 183303661920 30291 db2loggw (TESTDB) 51 183316244832 30290 db2loggr (TESTDB) 50 183257524576 28156 db2evmli (DB2DETAILDEADLOCK) 49 183261718880 28153 db2taskd (TESTDB) 46 183274301792 28150 db2wlmd (TESTDB) 26 183312050528 27943 db2stmm (TESTDB) 17 183324633440 27827 db2agent (TESTDB) 16 183328827744 27714 db2resync 15 183333022048 27697 db2ipccm 14 183337216352 27696 db2licc 13 183341410656 27695 db2thcln 12 183345604960 27694 db2alarm 1 183085558112 27688 db2sysc
  • 28. The Information Management Specialists DB2 Memory Model
  • 29. The Information Management Specialists DB2 Memory Usage • db2pd -dbptnmem • select * from table(admin_get_dbp_mem_usage()) • db2mtrk  -i (instance)  -d (database)  -a (applications)  -p (agents)
  • 30. The Information Management Specialists DB2 Memory Usage
  • 31. © 2012 IBM Corporation Information Management Technology Ecosystem 17 DB and DBM Configurations Description Example View Database Manager Settings db2 get dbm cfg show detail Change a Database Manager Setting db2 update dbm cfg using health_mon off Description Example View Database Settings db2 get db cfg for testdb db2 connect to testdb db2 get db cfg show detail Change a DB Setting db2 update db cfg using logprimary 10 Connection Management Memory Tuning Monitoring Define user authentication type Set communication protocols Instance Management Set sort limits Set hash limits Set utility impact limits Share memory resources among the databases Instance memory Get database snapshots Check database health and performance Control instance services Enable federation Set diagnostic log level Authorization user groups ■ Examples of what can be changed using DB and DBM configuration
  • 32. The Information Management Specialists Unit 4
  • 33. © 2012 IBM Corporation Information Management Technology Ecosystem 3 DB2 Storage Model Buffer Pools Storage Groups Physical Disks SG_A Table 1 Table 2 Table 3 New Table Spaces BP1 Database■ Database – Contains a set of objects used to store, manage, and access data ■ Buffer Pool – Area of main memory for the purpose of caching data as it is read from disk ■ Table Space – Logical space used to store data objects such as tables and indexes ■ Storage Group – Set of storage paths configured to represent different classes of storage in the database system, where table spaces are stored ■ Physical Disk – Physical location used to store data
  • 34. © 2012 IBM Corporation Information Management Technology Ecosystem 5 Table Spaces Container 2 (Files, directories, raw devices) Round-robin data distribution Container 0 Container 1 extents Database Container 2 Container 3Container 0 Container 1 ■ A layer of abstraction between logical and physical data ■ Allows assignment of data to particular logical devices or portions thereof ■ All tables, indexes, and other data are stored in a table space ■ Associated to a specific buffer pool ■ Managed in three different ways: SMS, DMS and Automatic Storage ■ An Automatic Storage table space is associated to a Storage Group, that defines the set of containers HUMANRES tbsp Employee table Department table SCHED tbsp Project table
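As a quick, hedged illustration of the buffer pool / table space relationship described above (the object names bp32k and ts_data are illustrative, not from the course database):
-- a 32K buffer pool, sized in pages
CREATE BUFFERPOOL bp32k PAGESIZE 32K SIZE 1000;
-- an automatic storage table space cached through that buffer pool
CREATE TABLESPACE ts_data PAGESIZE 32K MANAGED BY AUTOMATIC STORAGE BUFFERPOOL bp32k;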
  • 35. © 2012 IBM Corporation Information Management Technology Ecosystem 6 Types of Table Spaces ■ System Catalog Table Space – 1 required – Default: SYSCATSPACE – Catalog tables with metadata – Must exist! ■ System Temporary Table Space – 1 required – Default: TEMPSPACE1 – System temporary area for operations like joins and sorts ■ User Table Space – 1 or more required – Default: USERSPACE1 – Default user table space – Can be deleted – Stores all user-defined tables ■ User Temporary Table Space – None is created by default; must be created before it can be used – Stores temporary data from global temporary tables
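Because no user temporary table space exists until one is created, here is a minimal sketch of creating one and using it for a declared global temporary table (all names are illustrative):
-- user temporary table space for declared/created global temporary tables
CREATE USER TEMPORARY TABLESPACE usertempspace MANAGED BY AUTOMATIC STORAGE;
-- a session-scoped temporary table stored in that table space
DECLARE GLOBAL TEMPORARY TABLE work_items (id INT, note VARCHAR(100))
  ON COMMIT PRESERVE ROWS NOT LOGGED IN usertempspace;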
  • 36. © 2012 IBM Corporation Information Management Technology Ecosystem 21 Multi-Temperature Data Management ■ Provides the ability to assign priority to data (hot, warm, cool, cold) and dynamically assign it to different classes of storage – Data temperature signifies priority of the data defined by business – Data temperature is inversely proportional to volume • Small portion of hot data vs. large portion of warm/cold data ■ Data can change temperature – As data ages – As business criteria behind temperature changes Data Volume Age Data Volume Sales data of this month = most frequent Sales data of this quarter = less frequent Sales data of previous quarters = rarely accessed Sales data of past years = historical data Age HOT WARM COLD DORMANT Usage Reduces TCO
  • 37. © 2012 IBM Corporation Information Management Technology Ecosystem 22 Storage Groups ■ Storage Groups allow the flexibility to implement Multi-temperature Data Management in Automatic Storage table spaces ■ Different Storage Groups can represent different classes of storage – Hot data assigned to storage groups with fast devices – Warm or Cold data assigned to slower devices ■ Easy maintenance when data ages and needs to be moved to a different storage class Store data based on priority of accessibility Reduced TCO Easy and flexible maintenance
  • 38. © 2012 IBM Corporation Information Management Technology Ecosystem 26 Multi-temperature Storage – A Sample Scenario ■ GOAL: Reduce warehouse storage costs while meeting the desired Quality of Service requirements for access to the last 3 quarters of data ■ Step 1: Create two storage groups to reflect the 2 tiers of storage This results in transfer rate, overhead, etc. being programmatically computed at the storage group level. ■ Step 2: Assign table spaces to storage groups CREATE STOGROUP sg_hot ON '/ssd/path1', '/ssd/path2' DATA TAG 1 CREATE STOGROUP sg_warm ON '/hdd/path1', '/hdd/path2' DATA TAG 5 Data tags represent the business priority of the data and are used by the optimizer CREATE TABLESPACE q1_2011_tbsp USING STOGROUP sg_warm CREATE TABLESPACE q2_2011_tbsp USING STOGROUP sg_warm DATA TAG 3 CREATE TABLESPACE q3_2011_tbsp USING STOGROUP sg_hot
  • 39. © 2012 IBM Corporation Information Management Technology Ecosystem 27 Multi-temperature Storage – A Sample Scenario ■ Create a new table space and change storage group for Q3 table space – Q4 table space will reside on hot storage – Q3 data will be moved and rebalanced across slower storage ■ Data Tag changed to allow optimizer to consider the changed data priority CREATE TABLESPACE q4_2011_tbsp USING STOGROUP sg_hot ALTER TABLESPACE q3_2011_tbsp USING STOGROUP sg_warm DATA TAG 3 ALTER TABLESPACE q2_2011_tbsp DATA TAG 5 • Only the most frequently accessed data resides on high-end expensive storage and meets the QoS requirements for that data access • The bulk of the data resides on less expensive storage. • Provides easy management by DBA’s … A New Quarter Begins
  • 40. The Information Management Specialists Unit 5
  • 41. © 2012 IBM Corporation Information Management Technology Ecosystem Allows a single logical table to be broken up into multiple separate physical storage objects (a.k.a. data partitions) – Up to 32K data partitions – Each partition defines a range of values – A partition will only contain rows that match its range of values Parallel table scans and index scans Table Partitioning 10 Partitioned table pay_1 tbsp1 pay_2 tbsp2 pay_3 tbsp3 pay_4 Payments Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec Partition 1 Partition 2 Partition 3 Partition 4 Applications see a single table Payments Large Table Applications see a single table Non-partitioned table tbsp1 Payments
  • 42. © 2012 IBM Corporation Information Management Technology Ecosystem Benefits of Table Partitioning 11 Fast data roll-in / roll-out Larger table capacity Greater index placement flexibility Better optimization of storage costs Increased query performance through data partition elimination
  • 43. © 2012 IBM Corporation Information Management Technology Ecosystem Partitioning Columns – Must be base types (No LOBS, LONG VARCHAR) – Accepts multiple columns and generated columns – MINVALUE and MAXVALUE can be used to specify open boundaries The table only accepts values that fall within the defined ranges – SQL0327N is raised if no range matches the data being inserted Table Partitioning - Syntax 12 pay_1 tbsp1 pay_2 tbsp2 pay_3 tbsp3 pay_4 Payments Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec Partition 1 Partition 2 Partition 3 Partition 4 Applications see a single table CREATE TABLE payments(id INT, paydate DATE, ...) IN tbsp1, tbsp2, tbsp3 PARTITION BY RANGE (paydate) (STARTING '1/1/2009' ENDING '12/31/2009' EVERY 3 MONTHS) Short Form Long Form CREATE TABLE payments(id INT, paydate DATE, …) PARTITION BY RANGE(paydate) (PARTITION pay1_09 STARTING '1/1/2009' IN tbsp1, PARTITION pay2_09 STARTING '4/1/2009' IN tbsp2, PARTITION pay3_09 STARTING '7/1/2009' IN tbsp3, PARTITION pay4_09 STARTING '10/1/2009' IN tbsp1 ENDING '12/31/2009')
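Fast roll-in and roll-out, listed earlier as a benefit, are done with ALTER TABLE ... ATTACH/DETACH PARTITION. A hedged sketch against the payments table above (the staged and archive table names are illustrative):
-- roll in a new quarter from a staged table, then validate it
ALTER TABLE payments ATTACH PARTITION pay1_10 STARTING '1/1/2010' ENDING '3/31/2010' FROM payments_q1_2010;
SET INTEGRITY FOR payments IMMEDIATE CHECKED;
-- roll out an old quarter into an archive table
ALTER TABLE payments DETACH PARTITION pay1_09 INTO payments_q1_2009_arch;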
  • 44. © 2012 IBM Corporation Information Management Technology Ecosystem Data Partition Elimination Ability to determine that only a subset of the data partitions in a table are necessary to answer a query DB2 EXPLAIN – Provides detailed information about which data partitions are used when a query is run – db2exfmt formats the details captured by EXPLAIN 13 SELECT * FROM PAYMENTS WHERE paydate BETWEEN '02/03/2009' AND '30/05/2009' Better response time and improved performance!
  • 45. © 2012 IBM Corporation3 Information Management Technology Ecosystem Use Cases of Temporal Data Management Track and analyze changes in your business – Easily compare data from two points in time – Accuracy in time-based reporting Effectively perform and trace data corrections – Easily make data changes in the past, i.e. effective as of a point in time in the past, and record when the change was made Auditing and compliance – Ability to show past data for any point in time – Ability to show which information was changed in the same transaction and when, with up to picosecond precision
  • 46. © 2012 IBM Corporation5 Information Management Technology Ecosystem Built into DB2 – automatic and transparent Three types of temporal tables 5 Temporal Tables – Types System-period temporal tables (STTs) DB2 automatically maintains historical versions of the rows in the history table You can query the past state of your data Example Employees who have left the company You assign a date range to each row, indicating the period when the data is valid in the real world Valid periods can be in the past, present, or future Example •Insurance policy valid from Jan 1 to June 30 •4% interest rate is effective from Nov 1 to 20 Combination of STT and ATT Keep application-based period information as well as system-based historical information Application-period temporal tables (ATTs) Bitemporal Tables
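The TRAVEL table used in the system-period slides that follow is never defined on a slide, so here is a minimal sketch of how such a table is typically created (column lengths and the history table name are illustrative; DESTINATION starts short so the later ALTER that lengthens it makes sense):
CREATE TABLE travel (
  trip_name      VARCHAR(50) NOT NULL,
  destination    VARCHAR(20),
  departure_date DATE,
  price          DECIMAL(8,2),
  sys_start      TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
  sys_end        TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
  trans_id       TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
  PERIOD SYSTEM_TIME (sys_start, sys_end)
);
-- history table with the same structure, then switch versioning on
CREATE TABLE travel_history LIKE travel;
ALTER TABLE travel ADD VERSIONING USE HISTORY TABLE travel_history;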
  • 47. © 2012 IBM Corporation8 Information Management Technology Ecosystem Add new trips: Amazonia, departing on 10/28/2011 & Ski Heavenly Valley, departing on 3/1/2011 8 Insert Data into a System-Period Temporal Table INSERT INTO travel VALUES ('Amazonia','Brazil','10/28/2011',1000.00); INSERT INTO travel VALUES ('Ski Heavenly Valley','California','03/01/2011',400.00); Current Date = January 1, 2011 trip_name destination departure_ date price sys_start sys_end Amazonia Brazil 10/28/2011 1000.00 01/01/2011 12/30/9999 Ski Heavenly Valley California 03/01/2011 400.00 01/01/2011 12/30/9999 System validity period (inclusive, exclusive) Both SYS_START and SYS_END columns are inserted by DB2, not the application. For simplicity, they are represented here as DATEs, rather than TIMESTAMPs TRAVEL
  • 48. © 2012 IBM Corporation9 Information Management Technology Ecosystem 9 Destination name is not explicit enough. Alter the DESTINATION column to make it longer Update the destination column for Ski Heavenly Valley to make it clearer: DB2 automatically inserted row into history table and supplied sys_start and sys_end dates Alter and Update a System-Period Temporal Table trip_name destination departure_date price sys_start sys_end Amazonia Brazil 10/28/2011 1000.00 01/01/2011 12/30/9999 Ski Heavenly Valley Lake Tahoe, CA 03/01/2011 400.00 02/15/2011 12/30/9999 Current Date = February 15, 2011 ALTER TABLE travel ALTER COLUMN destination SET DATA TYPE VARCHAR(50); UPDATE travel SET destination = 'Lake Tahoe, CA' WHERE trip_name = 'Ski Heavenly Valley'; **History table is automatically modified trip_name destination departure_date price sys_start sys_end Ski Heavenly Valley California 03/01/2011 400.00 01/01/2011 02/15/2011 New sys_start date TRAVEL TRAVEL_HISTORY
  • 49. © 2012 IBM Corporation10 Information Management Technology Ecosystem We are no longer offering the Ski Heavenly Valley trip – delete it. DB2 automatically inserted row into history table and supplied sys_start and sys_end dates 10 Delete from a System-Period Temporal Table trip_name destination departure_date price sys_start sys_end Amazonia Brazil 10/28/2011 1000.00 01/01/2011 12/30/9999 Current Date = April 1, 2011 DELETE FROM travel WHERE trip_name = 'Ski Heavenly Valley'; trip_name destination departure_date price sys_start sys_end Ski Heavenly Valley California 03/01/2011 400.00 01/01/2011 02/15/2011 Ski Heavenly Valley Lake Tahoe, CA 03/01/2011 400.00 02/15/2011 04/01/2011 System validity period (inclusive, exclusive) Ski Heavenly Valley has been removed from base table TRAVEL TRAVEL_HISTORY
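The value of keeping the history is being able to query past states of the data. A hedged sketch of time travel queries against the same table (the dates are taken from the scenario above):
-- what did this trip look like on February 1, 2011?
SELECT * FROM travel FOR SYSTEM_TIME AS OF '02/01/2011' WHERE trip_name = 'Ski Heavenly Valley';
-- all versions of the row recorded between January 1 and April 1, 2011
SELECT * FROM travel FOR SYSTEM_TIME FROM '01/01/2011' TO '04/01/2011' WHERE trip_name = 'Ski Heavenly Valley';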
  • 50. © 2012 IBM Corporation15 Information Management Technology Ecosystem Add new trip: Manu Wilderness, departing on 08/02/2011 15 Insert Data into an Application-Period Temporal Table Current Date = May 1, 2011 trip_name destination departure_ date price bus_start bus_end Manu Wilderness Peru 08/02/2011 1500.00 05/01/2011 01/01/2012 BUSINESS_TIME period (inclusive, exclusive) bus_start and bus_end columns are inserted by the application, not DB2 INSERT INTO travel VALUES ('Manu Wilderness', 'Peru', '08/02/2011',1500.00,'05/01/2011', '01/01/2012'); Application-period time entries are independent of the current date
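The insert above presumes an application-period temporal table; its DDL is not shown on the slides, so here is a minimal sketch of a typical definition (column lengths are illustrative):
CREATE TABLE travel (
  trip_name      VARCHAR(50) NOT NULL,
  destination    VARCHAR(50),
  departure_date DATE,
  price          DECIMAL(8,2),
  bus_start      DATE NOT NULL,
  bus_end        DATE NOT NULL,
  PERIOD BUSINESS_TIME (bus_start, bus_end),
  PRIMARY KEY (trip_name, BUSINESS_TIME WITHOUT OVERLAPS)
);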
  • 51. © 2012 IBM Corporation16 Information Management Technology Ecosystem 16 Update an Application-Period Temporal Table Manu Wilderness trip isn’t selling well, so we’ll offer a special price of $1000.00 for the month of June. Current Date = May 15, 2011 trip_name destination departure_date price bus_start bus_end Manu Wilderness Peru 08/02/2011 1500.00 05/01/2011 06/01/2011 Manu Wilderness Peru 08/02/2011 1000.00 06/01/2011 07/01/2011 Manu Wilderness Peru 08/02/2011 1500.00 07/01/2011 01/01/2012 BUSINESS_TIME period (inclusive, exclusive) DB2 inserted 2 rows and updated 1 row. UPDATE travel FOR PORTION OF BUSINESS_TIME FROM '06/01/2011' TO '07/01/2011' SET price = 1000.00 WHERE trip_name = 'Manu Wilderness'; trip_name destination departure_date price bus_start bus_end Manu Wilderness Peru 08/02/2011 1500.00 05/01/2011 01/01/2012 Before (Prior to Update) After (Updated Table)
  • 52. © 2012 IBM Corporation Information Management Technology Ecosystem 8 Row Compression – Classic Also referred to as static row compression Uses a table-level compression dictionary (1 dictionary per table) to compress data by row, across multiple columns Dictionary is used to map repeated byte patterns to smaller symbols. These smaller symbols replace long patterns in table rows. After the dictionary is created, data is compressed as it is inserted/updated in the table. – DB2 automatically creates the dictionary when the table has enough data for sampling Name Dept Salary City ST ZIP Bob smpo 30000 Dallas TX 75063 John smpo 25000 Dallas TX 75063 Bob smpo 30000 Dallas TX 75063 John smpo 25000 Dallas TX 75063 etc. Bob (01) 30000 (02) John (01) 25000 (02) etc. Dictionary (01) smpo (02) Dallas, TX, 75063
  • 53. © 2012 IBM Corporation Information Management Technology Ecosystem 6 Row Compression Also known as deep compression Uses a dictionary-based compression algorithm to replace recurring strings with shorter symbols within rows Continuous enhancement since it was introduced in DB2 9.1 Two types available: – Classic (static) row compression – Adaptive row compression • An enhancement to classic row compression to provide extra storage savings Included in DB2 Storage Optimization Feature New in DB2 10 DB2 9.1 DB2 9.5 DB2 9.7 DB2 10 - Row Compression* - Automatic Dictionary Creation (ADC)* - XML compression* - Temporary table compression* - Index compression* - LOB inlining - Adaptive compression*
  • 54. © 2012 IBM Corporation Information Management Technology Ecosystem 12 Data Warehouse Compression Results 230GB raw size - Most of the data in a single table Graph – Storage Savings Increase in savings by Adaptive Compression – 3x Compression with Static Compression using reorg – 5.6x Compression with Automatic dictionary creation and Adaptive Compression – 7.4x Compression with Adaptive Compression and full reorg Compressionfactor (higherisbetter)
  • 55. © 2012 IBM Corporation Information Management Technology Ecosystem 13 Real Customer Results with Adaptive Compression Customer top 5 tables – DB2 9.7 – compression rates between 3X and 6X – DB2 10 – compression rates between 4X and 10X Sum of all tables DB2 9.7 delivered 5X compression Sum of all tables DB2 10 delivered 7X compression
  • 56. © 2012 IBM Corporation Information Management Technology Ecosystem 14 How to enable row compression? – Must have DB2 Storage Optimization Feature – To enable classic row compression – To enable adaptive row compression – To disable compression Data is compressed after the table dictionary is created. – INSERT/UPDATE/LOAD/IMPORT can trigger the automatic dictionary creation – Classic REORG with RESETDICTIONARY option will always generate a new dictionary and compress all table data Row Compression – Enablement & Tools CREATE TABLE / ALTER TABLE … COMPRESS YES STATIC CREATE TABLE / ALTER TABLE … COMPRESS YES Adaptive is the default in DB2 10 CREATE TABLE / ALTER TABLE … COMPRESS NO
  • 57. © 2012 IBM Corporation Information Management Technology Ecosystem 15 Row Compression - Example Scenarios 1) Compressing data for new table CREATE TABLE Sales (<columns definition>) COMPRESS YES Load data… Automatic Dictionary Creation (ADC) will kick in and create the compression dictionary. Once the dictionary is built, new data put into the table is compressed: LOAD FROM file OF DEL REPLACE INTO Sales 2) Compressing data in existing tables ALTER TABLE Sales COMPRESS YES Data is still uncompressed. Explicitly compress data via REORG: REORG TABLE Sales 3) Recreating the dictionary to optimize compression (Classic Row Compression) Data has changed a lot so the current dictionary is not so effective anymore. Use REORG to recreate the dictionary and re-compress data: REORG TABLE Sales RESETDICTIONARY 4) Uncompressing your data Disable compression: ALTER TABLE Sales COMPRESS NO Uncompress data: REORG TABLE Sales Adaptive Compression greatly reduces the need for REORGs to maintain the compression ratio.
  • 58. © 2012 IBM Corporation Information Management Technology Ecosystem 16 Row Compression – Enablement & Tools Estimating storage savings – ADMIN_GET_TAB_COMPRESS_INFO_V97 (deprecated in DB2 10!) – Instead use: ADMIN_GET_TAB_COMPRESS_INFO and ADMIN_GET_TAB_DICTIONARY_INFO SELECT SUBSTR(TABNAME,1,10) tabname, OBJECT_TYPE, ROWCOMPMODE, PCTPAGESSAVED_CURRENT current, PCTPAGESSAVED_STATIC with_static, PCTPAGESSAVED_ADAPTIVE with_adaptive FROM TABLE(SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO('DB2INST1','CUSTOMERS')) AS T; TABNAME OBJECT_TYPE ROWCOMPMODE CURRENT WITH_STATIC WITH_ADAPTIVE ---------- ------------ ------------ ------- ----------- ------------- CUSTOMERS DATA S 60 68 81 CUSTOMERS XML S 58 62 62
  • 59. The Information Management Specialists Unit 6
  • 60. Moving Data in DB2 UDB for LUW Utilities  DB2 provides three utilities for mass data movement • EXPORT • IMPORT • LOAD  LOAD executed at the table level  IMPORT/EXPORT may use views, joins etc (in certain circumstances)
  • 61. Moving Data in DB2 UDB for LUW File Formats  Determine how data is physically stored in external files  Five different file formats supported by data movement utilities • ASC (non-delimited ASCII files) • DEL (delimited ASCII files) • WSF (Work Sheet Format files) • IXF (Integrated Exchange Format files) • CURSOR (V8.1)
  • 62. Moving Data in DB2 UDB for LUW Delimited ASCII Files (DEL)  Used extensively in RDBMS  Makes use of delimiters • Row delimiter • Column delimiter • Character 100,”Joe”,”Joe Street” 200,”Foo”,”Foo Street” 300,”Moo”,”Moo Street”
  • 63. Moving Data in DB2 UDB for LUW Non-Delimited ASCII Files (ASC)  Fixed-length ASCII files  Row delimiter  No column or character delimiters  All column values are of fixed length • Variable length character columns are padded with blanks 100JoeJoe Street 200FooFoo Street 300MooMoo Street
  • 64. Moving Data in DB2 UDB for LUW Integrated Exchange Format Files (IXF)  Consist of unbroken sequence of variable length records • Numeric values stored as packed decimal or binary • Character values stored as ASCII  Cannot be edited using a text editor  IXF files contain structural information • Can be used to rebuild database objects
  • 65. Moving Data in DB2 UDB for LUW Worksheet Format Files (WSF)  Used to extract or import data by Lotus 1-2-3 and Symphony products  Not used to move data from one DB2 table to another  Cannot be edited using a text editor
  • 66. Moving Data in DB2 UDB for LUW Data Movement Utilities and File Formats Format LOAD IMPORT EXPORT ASC Yes Yes No DEL Yes Yes Yes WSF No Yes Yes IXF Yes Yes Yes
  • 67. Moving Data in DB2 UDB for LUW Export  Used to extract data from tables and write into an external file  Data can be extracted in different file formats • IXF • DEL • WSF  Files can then be used by the DB2 Load or Import utilities or other external products
  • 68. Moving Data in DB2 UDB for LUW Export  EXPORT uses SQL syntax to select data from the database  SQL can be very versatile and may • reference views and aliases • include joins • filter rows using where clause • use columnar functions • use group by and order by clauses
  • 69. Moving Data in DB2 UDB for LUW Export – minimum requirements 1. SELECT statement 2. Path and file name 3. File type (IXF, DEL, or WSF) export to f1team.del of del select * from f1team
  • 70. Moving Data in DB2 UDB for LUW Export – example F1TEAM TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY 1 Ferrari 1 Maranello Italy 2 McLaren 2 Woking Britain 3 Williams 3 Didcot Britain export to f1team.del of del select * from f1team 1,”Ferrari”,1,”Maranello”,”Italy” 2,”McLaren”,2,”Woking”,”Britain” 3,”Williams”,3,”Didcot”,”Britain” f1team.del
  • 71. Moving Data in DB2 UDB for LUW Export – optional requirements  Message file name to capture all error and warning messages  New column names when exporting to IXF or WSF file formats  File type modifier for additional formatting of DEL and WSF files  File names and paths for exporting LOB columns
  • 72. Moving Data in DB2 UDB for LUW Export  Must have SYSADM or DBADM authority, or CONTROL or SELECT privilege on the table(s)  Default date format for DEL and WSF files is yyyymmdd. Can be changed to the ISO representation yyyy-mm-dd by specifying the DATESISO file type modifier  Default character string delimiter for DEL format is the double quotation mark ("). To override, use the CHARDEL modifier  Use tools like Visual Explain to evaluate the performance of the SELECT statement
  • 73. Moving Data in DB2 UDB for LUW Export – Derived Columns  2 ways to force column rename for IXF and WSF files: 1. Use the AS clause in SELECT EXPORT … SELECT GROSS_PAY – TAXES AS NET_PAY … FROM … 2. Use METHOD N option EXPORT … METHOD N (‘NET_PAY’,…) SELECT GROSS_PAY – TAXES, … FROM …
  • 74. Moving Data in DB2 UDB for LUW Export – Large Objects  Can include up to 2GB of LOB data in the target file  Store each LOB value in its own file EXPORT TO mydata.del of DEL LOBS TO E:\data\lobs1, E:\data\lobs2 LOBFILE mypics … MODIFIED BY LOBSINFILE SELECT * FROM mydata Resulting LOB files: E:\data\lobs1\mypics.001 E:\data\lobs1\mypics.002 E:\data\lobs2\mypics.323
  • 75. Moving Data in DB2 UDB for LUW Import  Used to move data from an external file into a table or a view  Data can be imported from various file formats • IXF • DEL • ASC • WSF
  • 76. Moving Data in DB2 UDB for LUW  The IMPORT utility uses the SQL processor to bulk load data  Faster than application programs for large insert volumes  Triggers are fired and constraints validated Import
  • 77. Moving Data in DB2 UDB for LUW Import – minimum requirements 1. Import type 2. Path and file name 3. File type (IXF, DEL, ASC, or WSF) 4. Name or alias of table or view where data is to be imported import from f1team.del of del insert into f1team
  • 78. Moving Data in DB2 UDB for LUW Import – optional requirements  Message file name to capture all error and warning messages  Number of rows to insert before committing changes to the table  Number of records to skip from the file before beginning the import  Names of table or view columns into which data will be inserted
  • 79. Moving Data in DB2 UDB for LUW Import – Insert Mode (1) F1TEAM TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY 1 Ferrari 1 Maranello Italy import from f1team.del of del insert into f1team 2,”McLaren”,2,”Woking”,”Britain” 3,”Williams”,3,”Didcot”,”Britain” f1team.del F1TEAM TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY 1 Ferrari 1 Maranello Italy 2 McLaren 2 Woking Britain 3 Williams 3 Didcot Britain
  • 80. Moving Data in DB2 UDB for LUW Import – Insert Mode (2) F1TEAM TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY import from f1team.del of del insert into f1team (hq_city,country,team_id,name,principal) ”Maranello”,”Italy”, 1,”Ferrari”,1 ”Woking”,”Britain”,2,”McLaren”,2 ”Didcot”,”Britain”, 3,”Williams”,3 f1team.del F1TEAM TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY 1 Ferrari 1 Maranello Italy 2 McLaren 2 Woking Britain 3 Williams 3 Didcot Britain
  • 81. Moving Data in DB2 UDB for LUW Import – Insert_Update Mode F1TEAM TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY 1 Ferrari 1 Maranello Italy import from f1team.del of del insert_update into f1team 1,”Ferrari”,1,”Rome”,”Italy” 2,”McLaren”,2,”Woking”,”Britain” 3,”Williams”,3,”Didcot”,”Britain” f1team.del F1TEAM TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY 1 Ferrari 1 Rome Italy 2 McLaren 2 Woking Britain 3 Williams 3 Didcot Britain
  • 82. Moving Data in DB2 UDB for LUW Import – Replace Mode F1TEAM TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY 1 Ferrari 1 Maranello Italy import from f1team.del of del replace into f1team 2,”McLaren”,2,”Woking”,”Britain” 3,”Williams”,3,”Didcot”,”Britain” f1team.del F1TEAM TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY 2 McLaren 2 Woking Britain 3 Williams 3 Didcot Britain Note: Replace mode is not valid if primary key of F1TEAM is referenced by a foreign key in another table
  • 83. Moving Data in DB2 UDB for LUW Import – Replace_Create Mode (1) F1TEAM TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY 1 Ferrari 1 Maranello Italy import from f1team.ixf of ixf replace_create into f1team 2,”McLaren”,2,”Woking”,”Britain” 3,”Williams”,3,”Didcot”,”Britain” f1team.ixf F1TEAM TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY 2 McLaren 2 Woking Britain 3 Williams 3 Didcot Britain Note: Replace_Create mode is not valid if primary key of F1TEAM is referenced by a foreign key in another table Note: Only valid for IXF format
  • 84. Moving Data in DB2 UDB for LUW Import – Replace_Create Mode (2) import from f1team.ixf of ixf replace_create into f1team 2,”McLaren”,2,”Woking”,”Britain” 3,”Williams”,3,”Didcot”,”Britain” f1team.ixf F1TEAM TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY 2 McLaren 2 Woking Britain 3 Williams 3 Didcot Britain Note: Replace_Create mode is not valid if primary key of F1TEAM is referenced by a foreign key in another table Note: Only valid for IXF format
  • 85. Moving Data in DB2 UDB for LUW Import – Create Mode import from f1team.ixf of ixf create into f1team 2,”McLaren”,2,”Woking”,”Britain” 3,”Williams”,3,”Didcot”,”Britain” f1team.ixf F1TEAM TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY 2 McLaren 2 Woking Britain 3 Williams 3 Didcot Britain Note: Only valid for IXF format
  • 86. Moving Data in DB2 UDB for LUW Importing into a specific tablespace  A target tablespace can be specified using the CREATE option IMPORT FROM tabddl.ixf OF IXF CREATE INTO newtab IN mytbsp INDEX IN myindextbsp LONG IN mylongtbsp  All three tablespaces must be DMS if INDEX or LONG options are used
  • 87. Moving Data in DB2 UDB for LUW Import – Usage Considerations  Commit frequency can be tuned IMPORT … COMMITCOUNT 100 …  A failed import can be restarted IMPORT … RESTARTCOUNT 200 …  Large objects can be imported into a table from lob files created by the Export utility IMPORT FROM mydata.del of DEL LOBS FROM E:datalobs1, E:datalobs2 MODIFIED BY LOBSINFILE … INTO mydata …
  • 88. Moving Data in DB2 UDB for LUW Import – Method L  Used to import data from ASC files  Start and end position of each column need to be specified F1TEAM TEAM_ID NAME PRINCIPAL HQ_CITY COUNTRY Char(3) Varchar(20) Char(3) Varchar(20) Varchar(20) import from f1team.asc of asc method L (1 3, 4 23, 24 26, 27 46, 47 66) insert into f1team
  • 89. Moving Data in DB2 UDB for LUW Import – Method P  Column numbers used to select columns from data file  File type should be DEL or IXF import from f1team.del of del method P (1,2,5) insert into f1team 1,”Ferrari”,1,”Maranello”,”Italy” 2,”McLaren”,2,”Woking”,”Britain” 3,”Williams”,3,”Didcot”,”Britain” f1team.del F1TEAM TEAM_ID NAME COUNTRY
  • 90. Moving Data in DB2 UDB for LUW Creating an identical table with Export and Import  Export zero rows from the existing table into an IXF file EXPORT TO tabddl.ixf OF IXF SELECT * FROM tab WHERE 1 < 0; IMPORT FROM tabddl.ixf OF IXF REPLACE_CREATE INTO newtab;  Import the IXF file into a new table with the REPLACE_CREATE option
  • 91. Moving Data in DB2 UDB for LUW Load  Bypasses SQL processing to improve performance  Pre-formats data pages and populates the table one extent at a time  Does not fire triggers, invoke constraints or check referential integrity  Utility can collect statistics and take a backup during LOAD processing  Requires SYSADM or DBADM or LOAD authorities
  • 92. Moving Data in DB2 UDB for LUW Load – minimum requirements 1. Load type 2. Path and file name 3. File type (IXF, DEL, ASC, or CURSOR) 4. Name of table where data is to be loaded load from f1team.del of del insert into f1team
  • 93. Moving Data in DB2 UDB for LUW Load – usage considerations  Inserting new data LOAD FROM mydata.ixf OF IXF … INSERT INTO mytable …  Replacing data LOAD FROM mydata.ixf OF IXF … REPLACE INTO mytable …  Terminating a Load operation LOAD FROM mydata.ixf OF IXF … TERMINATE INTO mytable …
  • 94. Moving Data in DB2 UDB for LUW Load – usage considerations  Generating consistency points LOAD FROM … SAVECOUNT 200 …  Restarting a failed Load LOAD FROM mydata.ixf OF IXF … RESTART INTO mytable …  Forcing Load to fail on warning LOAD FROM … WARNINGCOUNT 1 …  Specifying a file for rejected rows (only valid for DEL and ASC file types) LOAD FROM … OF DEL … MODIFIED BY DUMPFILE=C:\mydump.del
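Because LOAD does not fire triggers or check constraints (see the Load overview above), a loaded table that has referential or check constraints is left in Set Integrity Pending state. A hedged sketch of completing such a load (file and table names are illustrative):
LOAD FROM mydata.ixf OF IXF INSERT INTO mytable
SET INTEGRITY FOR mytable IMMEDIATE CHECKED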
  • 95. Moving Data in DB2 UDB for LUW LOAD from CURSOR  You can now LOAD from a SELECT • New file type – CURSOR • Supports arbitrary SELECT statements – single tables, joins, nicknames, etc. • CLP: Need to declare cursor, and cursor name provided as the input file name to LOAD  DECLARE mycursor CURSOR FOR select * from t1  LOAD FROM mycursor OF CURSOR INSERT INTO t2 ALLOW READ ACCESS
  • 96. Moving Data in DB2 UDB for LUW LOAD from CURSOR – Example  Table t2 in database DB2  DECLARE mycursor CURSOR database DB2 user user1 using pwd1 FOR select * from t2  LOAD FROM mycursor OF CURSOR INSERT INTO t1 ALLOW READ ACCESS
  • 97. The Information Management Specialists Unit 7
  • 98.
  • 99. 7 © 2010 IBM Corporation Information Management Archival Logging ■ Enable with the LOGARCHMETH1 database configuration parameter ■ History of log files is maintained, in order to allow roll forward recovery and online backup ■ Logs can be optionally archived to an archive location when no longer active to avoid exhaustion of the log directory Archive Log Directory Active Log Directory ACTIVE – Contains information for non-committed transactions. When all preallocated log files are filled, more log files are allocated and used. Filled log files may be moved to a different storage location ONLINE ARCHIVE – Contains information for committed transactions that are no longer needed for crash recovery; these files may still reside in the active log directory or be moved to the archive location.
  • 100. 24 © 2010 IBM Corporation Information Management Logging Configuration Parameters ■ LOGPRIMARY – Controls the number of primary log files that are allowed in the active log directory. ■ LOGSECOND – Controls the number of secondary log files that are allowed in the active log directory. ■ LOGBUFSZ (Log Buffer Size) – Amount of memory to use as a buffer for log records before writing these records to disk – Log records are written to disk when a commit is issued or log buffer is full or internal database request (every 1 second) ■ LOGFILSIZ (Log File Size) – Size of each configured log file in 4K pages ■ LOGPATH and NEWLOGPATH – LOGPATH is the default active log directory – Changed to a user defined location using NEWLOGPATH. ■ FAILARCHPATH (Failover log archive path) – Specifies a third target to archive log files if the primary and secondary archival paths fail
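A hedged sketch of how these logging parameters are typically set (the database name testdb, the paths, and the values are all illustrative):
db2 update db cfg for testdb using LOGPRIMARY 10 LOGSECOND 20 LOGFILSIZ 10240
db2 update db cfg for testdb using NEWLOGPATH /db2logs/testdb
db2 update db cfg for testdb using LOGARCHMETH1 DISK:/db2archive/testdb
db2 update db cfg for testdb using FAILARCHPATH /db2failarch/testdb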
  • 101. 8 © 2010 IBM Corporation Information Management Infinite Logging ■ Infinite logging provides infinite active log space –Enabled by setting LOGSECOND to -1 ■ Secondary log files are allocated until the unit of work commits or storage is exhausted ■ Having to retrieve archived logs can hinder rollback and crash recovery performance ■ Database must be configured to use archival logging ■ Up to 256 log files (primary + secondary) ■ Control parameters –NUM_LOG_SPAN – number of log files an active transaction can span –MAX_LOG – Percentage of active primary log file space that a single transaction could consume
  • 102. 9 © 2010 IBM Corporation Information Management Database Backup ■ Copy of a database or table space –User data –DB2 catalogs –All control files, e.g. buffer pool files, table space file, database configuration file ■ Backup modes: –Offline Backup • Does not allow other applications or processes to access the database • Only option when using circular logging –Online Backup • Allows other applications or processes to access the database • Available to users during backup • Can backup to disk, tape, TSM and other storage vendors
  • 103. 10 © 2010 IBM Corporation Information Management Database Backup – Syntax db2 backup database <db_name> <online> to <dest_path> Online backup example db2 backup database mydb online to /home/db2inst1/backups Offline backup example db2 backup database mydb to /home/db2inst1/backups
  • 104. 13 © 2010 IBM Corporation Information Management Table space Backup ■ Enables user to backup a subset of database ■ Multiple table spaces can be specified ■ Database must be using archival logging ■ Table space backup can run in both online and offline backup ■ Table space can be restored from either a database backup or table space backup of the given table space ■ Use the keyword TABLESPACE to specify table spaces db2 backup database mydb1 TABLESPACE (TBSP1) ONLINE to /home/db2inst1/backup
  • 105. DB2 Administration for LUW – Part 2 Backup of Tablespaces – Usage Considerations  Backup of tablespaces should be done together if they contain: • Tables which have data, indexes, and LOBs split across DMS tablespaces • Tables related by referential constraints • Summary tables and their underlying tables in different tablespaces • Tables related by triggers
  • 106. 14 © 2010 IBM Corporation Information Management Incremental Backups ■ Incremental (a.k.a. cumulative) - Backup of all database data that has changed since the most recent, successful, full backup operation ■ Incremental Delta - Backup of all database data that has changed since the last successful backup (full, incremental, or delta) operation. ■ Need to have the TRACKMOD database configuration parameter set to ON ■ Supports both database and table space backups. ■ Suitable for large databases; considerable savings by only backing up incremental changes. (Diagram: full backups each Sunday, with cumulative or delta backups taken Monday through Saturday.)
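A hedged sketch of the weekly pattern in the diagram (database name mydb and the path are illustrative); TRACKMOD must already be ON for the full backup that the incremental images are based on:
Enable change tracking: db2 update db cfg for mydb using TRACKMOD ON
Full backup (e.g. Sunday): db2 backup database mydb to /backups
Cumulative incremental (changes since the last full backup): db2 backup database mydb incremental to /backups
Delta (changes since the last backup of any type): db2 backup database mydb incremental delta to /backups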
  • 107. 15 © 2010 IBM Corporation Information Management Database Backup – Compression ■ DB2 backups can now be automatically compressed – Significantly reduce backup storage costs ■ Performance characteristics – CPU costs typically increased (due to compression computation) – Media I/O time typically decreased (due to decreased image size) – Overall backup/restore performance can increase or decrease; depending on whether CPU or media I/O is a bottleneck Example: db2 backup database DS2 to /home/db2inst1/backups compress
  • 108. DB2 Administration for LUW – Part 2 Backup – enhancements – V8.2  Logs in backup images • Logs can now be included in the online backup • Supports all types of online backups such as database, table space, incremental, and compressed • All logs that are needed to restore the backup and roll forward to the time corresponding to the end of the backup are placed in the backup image
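A hedged sketch of using this (names and paths are illustrative): take the online backup with its logs, restore it, extract the logs, and roll forward to the end of the backup:
db2 backup database mydb online to /backups include logs
db2 restore database mydb from /backups logtarget /restored_logs
db2 rollforward database mydb to end of backup and stop overflow log path (/restored_logs)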
  • 109. 16 © 2010 IBM Corporation Information Management Automatic Database Backup ■ Simplifies database backup management tasks for the DBA by always ensuring that a recent full backup of the database is performed as needed ■ To configure automatic backup –Graphical user interface tools • Configure Automatic Maintenance wizard –Command line interface • auto_db_backup • auto_maint –Stored procedure • AUTOMAINT_SET_POLICY system stored procedure
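A hedged sketch of enabling it from the command line (the database name mydb is illustrative; the backup policy itself can then be refined with the AUTOMAINT_SET_POLICY procedure or the wizard):
db2 update db cfg for mydb using AUTO_MAINT ON AUTO_DB_BACKUP ON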
  • 110. 17 © 2010 IBM Corporation Information Management Optimizing Backup Performance ■ DB2 automatically configures these parameters for performance – Parallelism • Number of table spaces backed up in parallel – num_buffers • Number of buffers used • Use at least twice as many buffers as backup targets (or sessions) to ensure that the backup target devices do not have to wait for data. – Buffer • Backup buffer size ■ Allocate more memory to backup utility by increasing utility heap size (UTIL_HEAP_SZ) configuration parameter. ■ Backup subset of data where possible: – Table space backups – Incremental backups ■ Use multiple target devices
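DB2 normally chooses these values itself, but they can also be set explicitly; a hedged sketch (database name, paths, and values are illustrative):
db2 update db cfg for mydb using UTIL_HEAP_SZ 50000
db2 backup database mydb to /backups1, /backups2 with 8 buffers buffer 4096 parallelism 4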
  • 111. © 2012 IBM Corporation Information Management Technology System 21 DB2CKBKP – Check Backup ■ This utility can be used to test the integrity of a backup image – determine whether the image can be restored. – display the meta-data stored in the backup header. $ db2ckbkp -h SAMPLE.0.moba.NODE0000.CATN0000.20041008013428.001 ===================== MEDIA HEADER REACHED: ===================== Server Database Name -- SAMPLE Server Database Alias -- SAMPLE Client Database Alias -- SAMPLE Timestamp -- 20041008013428 Database Partition Number -- 0 Instance -- moba Sequence Number -- 1 Release ID -- A00 Database Seed -- 92DBF20F DB Comment's Codepage (Volume)-- 0 DB Comment (Volume) -- DB Comment's Codepage (System)-- 0 DB Comment (System) -- Authentication Value -- 255 Backup Mode -- 1 Includes Logs -- 1 Compression -- 0 ... (remaining output omitted) ... This backup is an online backup taken with the INCLUDE LOGS option (Includes Logs: 0 = logs not included, 1 = logs included). The backup is not compressed (Compression: 0 = not compressed, 1 = compressed).
  • 112. 18 © 2010 IBM Corporation Information Management Database Recovery ■ Recovery is the rebuilding of a database or table space after a problem such as media or storage failure, power interruption, or application failure. Types of Recovery –Crash or restart recovery • Protects the database from being left inconsistent (power failure) –Version recovery • Restores a snapshot of the database –Roll forward recovery • Extends version recovery by using full database and table space backup in conjunction with the database log files ■ Crash recovery and version recovery are enabled in DB2 by default
  • 113. 19 © 2010 IBM Corporation Information Management DB2 Restore Utility ■ Restore utility is the complement of backup utility ■ Restores database or table space from a previously taken backup ■ TAKEN AT - Specify the time stamp of the database backup image. Backup image timestamp is displayed after successful completion of a backup ■ Without prompting – Overrides any warnings. Example: SAMPLE.0.DB2INST.NODE0000.CATN0000.20080718131210.001 RESTORE DATABASE dbalias FROM <db_path> TAKEN AT 20080718131210
  • 114. 20 © 2010 IBM Corporation Information Management Table space Restore Operation ■ Restored table space is in Roll Forward Pending state and can be either rolled forward to End of Logs or a Point In Time. – In case of Point in Time roll forward, the table space must be rolled forward to at least the minimum Point in Time ■ Minimum recovery time can be checked using – db2 list tablespaces show detail ■ User table space must be in line with the catalog table space – e.g. if the catalog indicates table T1 exists in table space TSP1, table T1 must exist in the TSP1 table space, otherwise the database becomes inconsistent ■ Every time there is a DDL change, the minimum recovery time for the table space is revised to indicate the last DDL change. ■ Recommended to take a table space backup after a table space has been restored to a point in time. ■ Transactions that came after the point in time are lost, therefore take a table space backup as a new point of reference for future recoveries.
  • 115. 21 © 2010 IBM Corporation Information Management Incremental Restore ■ Restore a database with incremental backup images ■ AUTOMATIC (recommended) - All required backup images will be applied automatically by the restore utility ■ MANUAL – User applies the required backups manually – db2ckrst can provide the sequence for applying backups ■ ABORT - aborts an in-progress manual cumulative restore ■ RESTORE DATABASE sample INCREMENTAL AUTOMATIC FROM /db2backup/dir1; ■ ROLLFORWARD DATABASE sample TO END OF LOGS AND COMPLETE;
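For a MANUAL incremental restore, db2ckrst suggests the order in which the backup images must be applied; a hedged sketch (the database name and timestamp are illustrative):
db2ckrst -d sample -t 20121122090733 -r database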
  • 116. DB2 Administration for LUW – Part 2 Restore Example 1  Basic restore requires path and time
  • 117. DB2 Administration for LUW – Part 2 Restore Example 2 RESTORE DATABASE F1DB FROM 'C:\UBackups\F1DB' TAKEN AT 20020726152238 REPLACE EXISTING;
  • 118. DB2 Administration for LUW – Part 2 Restore Example 3 RESTORE DATABASE F1DB FROM 'C:\UBackups\F1DB' TAKEN AT 20020726152238 REPLACE EXISTING WITHOUT ROLLING FORWARD; Note: The WITHOUT ROLLING FORWARD option can NOT be specified if the restore is taking place from an online database backup or from a tablespace-level backup
  • 119. DB2 Administration for LUW – Part 2 Restore Example 4 RESTORE DATABASE F1DB TABLESPACE (userspace1) ONLINE FROM 'C:\UBackups\F1DB' TAKEN AT 20020726152238 REPLACE EXISTING; Note: ONLINE option can only be used for tablespace or history file restores
  • 120. DB2 Administration for LUW – Part 2 Restore Example 5 RESTORE DATABASE F1DB HISTORY FILE ONLINE FROM 'C:\UBackups\F1DB' TAKEN AT 20020726152238 REPLACE EXISTING; Note: ONLINE option can only be used for tablespace or history file restores
  • 121. DB2 Administration for LUW – Part 2 Restore Example 6  How would you restore the database if there was a crash after the backup taken on Thursday in each case?
  • 122. DB2 Administration for LUW – Part 2 Redirected Restore  Restore fails if current containers missing from backup  May want to restore on new system which may not have necessary containers defined  Redirected Restore allows adding, changing, or removing of tablespace containers during a restore  Better to take backup of tablespace immediately after new containers are added to the tablespace
  • 123. DB2 Administration for LUW – Part 2 Redirected Restore Example RESTORE DATABASE F1DB FROM 'C:\UBackups\F1DB' TAKEN AT 20020726152238 INTO NEWDB REDIRECT WITHOUT ROLLING FORWARD;
  • 124. DB2 Administration for LUW – Part 2 Redirected Restore – defining new containers  Since containers cannot be shared between databases, the RESTORE command returns message SQL1277W, indicating that storage must be defined for the new containers  Use LIST TABLESPACES to check the state of the containers  Define storage for the containers using the SET TABLESPACE CONTAINERS command  Complete the redirected restore using RESTORE DATABASE MYDB CONTINUE
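A hedged sketch of that middle step (the table space ID, path, and size are illustrative; use LIST TABLESPACES or the generated script to find the real IDs):
SMS container: db2 "set tablespace containers for 2 using (path '/newdata/ts2')"
DMS file container (size in pages): db2 "set tablespace containers for 2 using (file '/newdata/ts2c0' 25600)"
Then complete the restore: db2 restore database mydb continue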
  • 125. DB2 Administration for LUW – Part 2 Restore Enhancements – Automatic Storage  It is now possible to choose the location of the database path during a restore  It is also possible to redefine storage paths associated with a database  Excellent! • RESTORE DATABASE TEST1 • RESTORE DATABASE TEST2 TO X: • RESTORE DATABASE TEST3 DBPATH ON D: • RESTORE DATABASE TEST3 ON /path1, /path2, /path3 • RESTORE DATABASE TEST4 ON E:\newpath1, F:\newpath2 DBPATH ON D:
  • 126. DB2 Administration for LUW – Part 2 Roll Forward Example 1 ROLLFORWARD DATABASE F1DB TO END OF LOGS OVERFLOW LOG PATH (C:\LOGS);
  • 127. DB2 Administration for LUW – Part 2 Roll Forward Example 2 ROLLFORWARD DATABASE F1DB TO 2002-07-26-15.22.38.000000 AND STOP;
  • 128. DB2 Administration for LUW – Part 2 Roll Forward Example 3 ROLLFORWARD DATABASE F1DB TO END OF LOGS AND COMPLETE TABLESPACE (USERSPACE1) ONLINE;
  • 129. DB2 Administration for LUW – Part 2 Roll Forward Query Status  Roll forward status • Working • Pending • In progress • No roll forward pending  Next log file to be read  Log files processed  Last committed transaction
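This status information comes from querying the roll forward utility itself; a hedged sketch (database name as in the examples above):
db2 rollforward database f1db query status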
  • 130. DB2 Administration for LUW – Part 2 HADR – Scope  Takes place at the database level
  • 131. DB2 Administration for LUW – Part 2 HADR – Overview (Diagram: clients hold an active connection to the Active database; HADR ships log pages from the Active database to the Standby.) Automatic client reroute: db2 update alternate server for database mydb using hostname sbhost port sbport – hostname sbhost and port sbport are automatically stored on the client. Takeover on the standby: db2 TAKEOVER HADR ON DATABASE mydb
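Setting up the pair behind this picture takes a handful of database configuration parameters plus START HADR. A hedged sketch, shown for the primary side (the standby uses the mirrored values; host names, service ports, instance name, and database name are illustrative):
db2 update db cfg for mydb using HADR_LOCAL_HOST prhost HADR_LOCAL_SVC 55001
db2 update db cfg for mydb using HADR_REMOTE_HOST sbhost HADR_REMOTE_SVC 55002 HADR_REMOTE_INST db2inst1
db2 update db cfg for mydb using HADR_SYNCMODE NEARSYNC
On the standby (after restoring a backup of mydb): db2 start hadr on database mydb as standby
On the primary: db2 start hadr on database mydb as primary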
  • 132. The Information Management Specialists Unit 9
  • 133. 3 © 2012 IBM Corporation Information Management Technology Ecosystem Concurrency Concurrency is the sharing of resources by multiple interactive users or application programs at the same time – Provides increased application throughput – Increased responsiveness across the system – Better resource utilization within the system Need to be able to control the degree of concurrency: –With proper amount of data stability –Without loss of performance Having multiple interactive users can lead to: –Lost Update –Uncommitted Read –Non-repeatable Read –Phantom Read
  • 134. 4 © 2012 IBM Corporation Information Management Technology Ecosystem Terminology in Concurrent Applications Transaction –Sequence of one or more SQL operations, grouped together as a single unit –Also known as a unit of work Committed Data –Using the COMMIT statement commits any changes made during the transaction to the database Uncommitted Data –Changes during the transaction before the COMMIT statement is executed
  • 135. 5 © 2012 IBM Corporation Information Management Technology Ecosystem Concurrency Issues Lost Update –Occurs when two transactions read and then attempt to update the same data, the second update will overwrite the first update before it is committed 1) Two applications, A and B, both read the same row and calculate new values for one of the columns based on the data that these applications read 2) A updates the row 3) Then B also updates the row 4) A's update lost
  • 136. 6 © 2012 IBM Corporation Information Management Technology Ecosystem Concurrency Issues Uncommitted Read –Occurs when uncommitted data is read during a transaction –Also known as a Dirty Read 1) Application A updates a value 2) Application B reads that value before it is committed 3) A backs out of that update 4) Calculations performed by B are based on the uncommitted data
  • 137. 7 © 2012 IBM Corporation Information Management Technology Ecosystem Concurrency Issues Non-repeatable Read –Occurs when a transaction reads the same row of data twice and returns different data values with each read 1) Application A reads a row before processing other requests 2) Application B modifies or deletes the row and commits the change 3) A attempts to read the original row again 4) A sees the modified row or discovers that the original row has been deleted
  • 138. 8 © 2012 IBM Corporation Information Management Technology Ecosystem Concurrency Issues Phantom Read –Occurs when a search based on some criterion returns additional rows after consecutive searches during a transaction 1) Application A executes a query that reads a set of rows based on some search criterion 2) Application B inserts new data that would satisfy application A's query 3) Application A executes its query again, within the same unit of work, and some additional phantom values are returned
  • 139. 9 © 2012 IBM Corporation Information Management Technology Ecosystem Concurrency Control Isolation Levels –determine how data is locked or isolated from other concurrently executing processes while the data is being accessed –are in effect while the transaction is in progress There are four levels of isolation in DB2: –Repeatable read (RR) –Read stability (RS) –Cursor stability (CS), the default • Currently Committed (CC), a variation of CS introduced in DB2 9.7 and the default behavior for new databases –Uncommitted read (UR)
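A hedged sketch of the usual ways an application or statement picks one of these levels (the table and package names are illustrative):
Session level, via the special register: db2 "SET CURRENT ISOLATION = UR"
Statement level, via the isolation clause: SELECT * FROM employee WITH RR
Bind time, for a package: db2 bind myapp.bnd isolation RS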
  • 140. 10 © 2012 IBM Corporation Information Management Technology Ecosystem Locking in DB2 Isolation levels are enforced by locks – Locks limit or prevent data access by concurrent users or applications – Before reading or writing data, transactions need to acquire a lock on the data Locking Attributes – objects which can be explicitly locked are databases, tables and table spaces – objects which can be implicitly locked are rows, index keys, and tables – implicit locks are acquired by DB2 according to isolation level and processing situations – the object being locked represents the granularity of the lock – the length of time a lock is held is called lock duration and is affected by the isolation level Database Configuration Parameters – LOCKLIST: amount of memory allocated to the lock list – MAXLOCKS: percentage of the lock list held by an application that must be filled before the database manager performs lock escalation – Both can be automatically managed by DB2's Self-Tuning Memory Manager.
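A hedged sketch of the related commands (the database name, table name, and values are illustrative):
Let STMM manage both parameters: db2 update db cfg for testdb using LOCKLIST AUTOMATIC MAXLOCKS AUTOMATIC
Or size them manually: db2 update db cfg for testdb using LOCKLIST 8192 MAXLOCKS 10
Explicit table-level lock from SQL: LOCK TABLE employee IN EXCLUSIVE MODE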
  • 141. © 2012 IBM Corporation Information Management Technology Ecosystem Types of Locks DB2 for LUW – Locks are acquired for all operations to control how other applications access the same resource. Factors that affect locking: – The type of processing that the application performs – The data access method – The values of various configuration parameters Examples of Types of Locks in DB2 – Share (S) • Owner and concurrent transactions are limited to read-only – Update (U) • Owner can read/write, but concurrent transactions are limited to read- only operations – Exclusive (X) • Owner can read/write. Concurrent transactions cannot read/write. UR application can still read the data.
  • 142. 12 © 2012 IBM Corporation Information Management Technology Ecosystem Deadlock Deadlock Detector –It monitors information about agents that are waiting on locks to discover deadlock cycles –Randomly selects one of the transactions involved to roll back and terminate • An SQL error code is sent to the chosen transaction • Every lock it had acquired is released –deadlock detector awakens at a frequency controlled by dlchktime, a database configuration parameter –Set the value of the diaglevel dbm configuration parameter to 4, for more logging on deadlocks
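A hedged sketch of the related settings (the values are illustrative; DLCHKTIME is in milliseconds, LOCKTIMEOUT in seconds):
db2 update db cfg for testdb using DLCHKTIME 10000
db2 update db cfg for testdb using LOCKTIMEOUT 30
db2 update dbm cfg using DIAGLEVEL 4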
  • 143. © 2012 IBM Corporation Information Management Technology Ecosystem Isolation Level – Repeatable Read Highest level of isolation – No dirty reads, non-repeatable reads or phantom reads Locks the entire table or view being scanned for a query – Provides minimum concurrency When to use Repeatable Read: – Changes to the result set are unacceptable – Data stability is more important than performance SELECT * FROM employee WHERE id > 4 E09NRosenberg10 C70YSchneider9 C70NAssaf8 B15YTanaka7 B15NIvanov6 A10NKumar5 B15NRousseau4 E05YChen3 A01NMartinez2 A01YSmith1 DEPTMANAGERLASTNAMEID Employee table
  • 144. © 2012 IBM Corporation Information Management Technology Ecosystem Isolation Level – Read Stability Similar to Repeatable Read but not as strict – No dirty reads or non-repeatable reads – Phantom reads can occur Locks only the retrieved or modified rows in a table or view When to use Read Stability: – Application needs to operate in a concurrent environment – Qualifying rows must remain stable for the duration of a transaction – If the same query is issued more than once during a unit of work, the same result set should not be required SELECT * FROM employee WHERE id > 4 E09NRosenberg10 C70YSchneider9 C70NAssaf8 B15YTanaka7 B15NIvanov6 A10NKumar5 B15NRousseau4 E05YChen3 A01NMartinez2 A01YSmith1 DEPTMANAGERLASTNAMEID Employee table
  • 145. © 2012 IBM Corporation Information Management Technology Ecosystem Isolation Level – Cursor Stability Default isolation level – No dirty reads – Non-repeatable reads and phantom reads can occur Locks only the row currently referenced by the cursor When to use Cursor Stability: – Want maximum concurrency while seeing only committed data SELECT * FROM employee WHERE id > 4 E09NRosenberg10 C70YSchneider9 C70NAssaf8 B15YTanaka7 B15NIvanov6 A10NKumar5 B15NRousseau4 E05YChen3 A01NMartinez2 A01YSmith1 DEPTMANAGERLASTNAMEID Employee table
  • 146. 16 © 2012 IBM Corporation Information Management Technology Ecosystem Isolation Level – Uncommitted Read ■ Lowest level of isolation – Dirty reads, non-repeatable reads and phantom reads can occur ■ Locks only rows being modified in a transaction involving DROP or ALTER TABLE – Provides maximum concurrency ■ When to use Uncommitted Read: – Querying read-only tables – Using only SELECT statements – Retrieving uncommitted data is acceptable SELECT * FROM employee WHERE id > 4 E09NRosenberg10 C70YSchneider9 C70NAssaf8 B15YTanaka7 B15NIvanov6 A10NKumar5 B15NRousseau4 E05YChen3 A01NMartinez2 A01YSmith1 DEPTMANAGERLASTNAMEID Employee table
  • 147. 17 © 2012 IBM Corporation Information Management Technology Ecosystem DB2 Isolation Levels Application Type High data stability required High data stability NOT required Read-write transactions Read Stability (RS) Cursor Stability (CS) Read-only transactions Repeatable Read (RR) or Read Stability (RS) Uncommitted Read (UR) Isolation Level Dirty Read Non-repeatable Read Phantom Read Repeatable Read (RR) - - - Read Stability (RS) - - Possible Cursor Stability (CS) - Possible Possible Uncommitted read (UR) Possible Possible Possible
  • 148. 18 © 2012 IBM Corporation Information Management Technology Ecosystem Isolation Level – Currently Committed Currently Committed is a variation on Cursor Stability –Avoids timeouts and deadlocks –Log based: • No management overhead Cursor Stability: Reader blocks Reader – No; Reader blocks Writer – Maybe; Writer blocks Reader – Yes; Writer blocks Writer – Yes. Currently Committed: Reader blocks Reader – No; Reader blocks Writer – No; Writer blocks Reader – No; Writer blocks Writer – Yes.
  • 149. 18 © 2010 IBM Corporation Information Management Transaction A Transaction B update T1 set col1 = ? where col2 = 2 update T2 set col1 = ? where col2 = ? select * from T2 where col2 >= ? select * from T1 where col5 = ? and col2 = ? DEADLOCK!! Waiting because is reading uncommitted data Waiting because is reading uncommitted data Example – Cursor Stability Semantics
  • 150. 19 © 2010 IBM Corporation Information Management No deadlocks, no timeouts in this scenario! Example – Currently Committed Semantics Transaction A Transaction B update T1 set col1 = ? where col2 = 2 update T2 set col1 = ? where col2 = ? select * from T2 where col2 >= ? select * from T1 where col5 = ? and col2 = ? commit commit No locking Reads last committed version of the data No locking Reads last committed version of the data
  • 151. 19 © 2012 IBM Corporation Information Management Technology Ecosystem Up to DB2 9.5 –Cursor Stability is the default isolation level In DB2 10 –Currently Committed is the default for NEW databases –Currently Committed is disabled for upgraded databases, i.e., Cursor Stability semantics are used instead Applications that depend on the old behavior (writers blocking readers) will need to update their logic or disable the Currently Committed semantics Isolation Level – Currently Committed Available since DB2 9.7
  • 152. © 2012 IBM Corporation Information Management Technology Ecosystem Currently Committed – How to use it? cur_commit – database configuration parameter – ON: default for new databases – CC semantics in place – DISABLED: default value for existing databases prior to DB2 9.7 – old CS semantics in place PRECOMPILE / BIND – ConcurrentAccessResolution: Specifies the concurrent access resolution to use for statements in the package. • USE CURRENTLY COMMITTED • WAIT FOR OUTCOME
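A hedged sketch of both knobs mentioned above (the database and bind file names are illustrative):
db2 update db cfg for mydb using CUR_COMMIT ON
db2 bind myapp.bnd concurrentaccessresolution use currently committed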
  • 153. The Information Management Specialists DB2 References • Getting to know the CLP  http://www.ibm.com/developerworks/data/library/techarticle/dm-0503melnyk/ • Data Studio – V3.1.1  www.ibm.com/developerworks/downloads/im/data/
  • 154. The Information Management Specialists DB2 References • Best Practices  www.ibm.com/developerworks/data/bestpractices/ • DB2 Certification  www.ibm.com/certify  http://www.ibm.com/developerworks/views/data/libraryview.jsp?sort_order=1&sort_by=Title&series_title_by=db2+10.1+fundamentals+certification+exam+610+prep  http://www.channeldb2.com/video/db2-tech-talk-part-one-certification-prep-for-db2-10-  http://www.channeldb2.com/video/db2-tech-talk-part-two-certification-prep-for-db2-10-for-linux-un
  • 155. The Information Management Specialists Redirected Restore – Generate Script • db2 restore db test from /home/backups taken at 20121122090733 redirect generate script red_restore.sql • Modify red_restore.sql. You can modify:  Restore options  Automatic storage paths  Container layout and paths • Run the modified redirected restore script. For example: db2 -tvf red_restore.sql
  • 156. © 2010 IBM Corporation Information Management Example Comments REORG TABLE purchaseOrders ALLOW READ ACCESS ON DATA PARTITION Apr2010 Reorganize a single partition (Apr2010) while allowing read access to it; all remaining partitions available for read/write. REORG TABLE purchaseOrders ALLOW NO ACCESS ON DATA PARTITION Mar2010; REORG TABLE purchaseOrders ALLOW NO ACCESS ON DATA PARTITION Apr2010; Reorganize two partitions concurrently; no access is allowed to either partition; all remaining partitions available for read/write. REORG INDEXES ALL FOR TABLE purchaseOrders ALLOW WRITE ACCESS ON DATA PARTITION Apr2010; Reorganize all local indexes for the Apr2010 data partition. Partition-level REORG with no global indexes