T
- otechmag.com - Fall 2013 -
1
OTech Magazine
Fall 2013
A brand new magazine
The art of content security
Pick the right integration infrastructure component
PL/SQL Function Statistics
Database 12c for Developers
Successfully combining UX and ADF
WebLogic on the Oracle Database Appliance
And more...
This adventure started about half a year ago as just a crazy idea: ‘What about releasing a magazine?’ Loads of people declared me an idiot (a typical reaction to most of my ideas, so I didn’t really listen to them), but almost as many thought it was simply cool. Because of both groups of people, this magazine is here.
And what an adventure it has been so far. In the beginning it was just about offering something else: I wanted to create something other than the usual Oracle Magazine (Oracle telling how good Oracle is) or the usual blog (consultants telling how good they are).
In the small world around the technology that binds us – Oracle – there are quite a lot of great personalities. Their knowledge is extensive and I wanted to offer them a platform that really has something to offer. A magazine, I imagined.
People working with Oracle software want good, independent information. But independent and well-written information is pretty hard to come by, at least in the world that’s called Oracle.
When I started working on this magazine I had no idea if it would work – as a matter of fact I still don’t,
you are the judge of that – but the enthusiastic response of a lot of highly regarded professionals in the
Oracle scene made me work hard on this first issue.
This magazine is – for now – just fun and games. It started as a hobby. If this first edition catches on, there will be a second; if the second catches on, a third.
Without content there is no magazine. Therefore I would like to express my deepest gratitude to the authors of this issue. Troy Allen, Lucas Jellema, Billy Cripe, Sander Haaksma, Marcel Maas, Simon Haslam, Peter Paul van de Beek, Michael Rosenblum and Lonneke Dikmans: thank you so much for joining me on this adventure!
Cheers!
Douwe Pieter van den Bos
September 2013
A brand new magazine
Editorial
The Picture Desk
Contents
The art of content security - Page 7
Database 12c for Developers - Page 12
Blending BI and Social Competitive Intelligence for deep insight into your business - Page 20
The Book Club - Page 23
Successfully combining UX with ADF - Page 25
WebLogic on the Oracle Database Appliance Virtual Platform - Page 29
Pick the right integration infrastructure component - Page 33
Oracle PL/SQL Function Statistics - Page 37
Stop generating your UI, Start designing IT! - Page 41
The Picture Desk
WebCenter Content
Over the years, I’ve had opportunities to work with many organizations,
ranging from very small to some of the most recognizable brands in the
world, and each one of them had the same requirements and questions:
“I need to lock down our IP (intellectual property).” “We can’t have people
digging through our files as they please.” “I only want my department to
see our stuff, except for those others who need to see it.” “How can I
restrict access to our files?” “Do I need a separate security model for each
department?”
In most cases, businesses understand that they need to secure their information in some fashion, but have no idea where to begin at an enterprise level. Security tends to be left in the hands of department managers, which often leads to silos of information repositories and duplication of effort, and content, across the entire company. Additionally, organizations find themselves with an overkill of content security (making it too difficult to work with their repositories) or virtually no security at all (leaving the organization at risk for data loss and corruption).
Before someone can pick up paints, a brush, and a canvas to re-create a
Picasso, they need to have a good idea of what they want to create and
have an understanding of the tools they will use and how to mix and blend
colors to get the desired results. Creating a content security strategy
abides by the same requirements: know what needs to be accomplished
and understand the tool and how security elements blend to get the
desired results.
Every content management application on the market provides some
level of security and a defined set of elements to control user access and
permissions to content. This article focuses on the Oracle WebCenter
Content application (with the principles applying also to WebCenter
Records Management), but the overall strategies outlined here can be
applied to other repository tools.
Learning to use the Brushes
Oracle WebCenter Content (WCC) utilizes Security Groups, Roles, Accounts and Access Control Lists (ACLs) to control contribution (the ability to add new or edit existing content), consumption (the ability to search for and utilize or view content), and management (the control of back-end processes of content, including designing and managing workflows).
Security Groups act like storage containers within WCC. Content must be assigned to a Security Group, but it can only be assigned to one at a time. WCC utilizes Roles like a set of keys to grant users permissions to the storage containers, or Security Groups. Roles provide users with specific permissions (Read, Write, Delete, and Administrate) to groups of content. Users can be assigned to more than one Role, and Roles can grant permissions to more than one Security Group.
Many legacy WCC customers still only utilize Security Groups and Roles to secure their content, and have faced situations where the number of Security Groups and Roles that must be created becomes unmanageable, or where they simply cannot get to the level of granularity that is required. (As a side note, Oracle recommended no more than 50 Security Groups prior to the WCC 11g release. From an operational standpoint, this is still a good best practice to keep in mind.)
In order to meet the demands of more complex security requirements, Accounts were introduced to provide granular control of content. If we visualize a Security Group as a filing cabinet, then Accounts would be the folders held within it. Sometimes you have content that isn’t in a file folder but lies loose in the filing cabinet drawer; likewise, a piece of content can be submitted to a Security Group without having an Account applied to it. In physical filing cabinets, file folders can often contain more file folders, providing a hierarchy of storage: cabinet drawer, file folder, file folder, and then content.
WCC Accounts can be difficult to grasp at first, but make perfect sense once the proverbial “light bulb” turns on. Account structures are identical across Security Groups. To put it another way, file folders are organized the exact same way across all cabinet drawers. Another point to remember is that Accounts are hierarchical in nature.
Another way to think about Accounts is to visualize a set of stairs that you are walking down. The Account structure or “stairs” has a top level of “Employee”, as an example, with the next step down being “Marketing”. We can continue to add more steps down, or sub-accounts, such as Employee/Marketing/Creative/ArtDept. Any user set at the top of the “stairs”, or inserted into the top Account and given Read access, will have Read access all the way down the stairs or Account structure. The user would have Read access to Employee and all the accounts down to, and including, Employee/Marketing/Creative/ArtDept.
Continuing with the previous Account example: Employee, Employee/Marketing, Employee/Marketing/Creative, and Employee/Marketing/Creative/ArtDept would exist in both the Public and Secure Security Groups. In order to see content in Employee/Marketing under the Public Security Group, Bob would at least have to be assigned the Public_Consumer Role AND either Read to the Account Employee or Read to the Account Employee/Marketing. Mary would likewise need the Secure_Consumer Role for the Secure Security Group AND at least Read to Employee, Employee/Marketing, or Employee/Marketing/Creative in order to see content in the Secure Security Group stored under the Account Employee/Marketing/Creative.
WCC evaluates the Role and Account assignments of each user to determine what the actual combined permission set is for any given content item. When evaluating a user’s Roles, permissions between Roles that grant different access rights to the same Security Group will result in the user receiving the greatest permission between the two Roles.
The lost art of content security
Troy Allen, TekStream Solutions
WCC performs a similar operation when evaluating the permissions granted between Account assignments for a user. This becomes a bit more complex given that Accounts are hierarchical. For example, a user given Read and Write access for the Account Employee AND given Read access for the Account Employee/Marketing will actually have Read and Write access to Employee/Marketing. Since Employee is a higher-level Account than Employee/Marketing, and the access rights granted on it are also greater than those for Employee/Marketing, Read Write access prevails.
If we changed the permissions so that the user only has Read access to the Account Employee and Read Write access to the Account Employee/Marketing, then the user still retains the greater permission for the Account Employee/Marketing. However, the user’s access rights to the Account Employee do not change.
If you recall the correlation of Accounts to stairs, then you also need to visualize that you can only go one direction on them: down. An administrator can grant a user entry to an Account structure anywhere within its levels, but the permissions granted are ONLY valid for that entry point and below. A user assigned to Employee/Marketing with Read access would NOT be able to see content that was assigned to the higher-level Account Employee.
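The “entry point and below” rule amounts to a path-prefix check. The sketch below is a hypothetical illustration of that rule – the function and the slash-separated account names are mine, not a WCC API:

```python
def covers(granted_account: str, content_account: str) -> bool:
    """An Account grant applies to its entry point and everything below it,
    never to the levels above it (the 'stairs go down only' rule)."""
    return (content_account == granted_account
            or content_account.startswith(granted_account + "/"))

# A user granted Read at the top of the structure sees the whole tree...
print(covers("Employee", "Employee/Marketing/Creative/ArtDept"))  # True
# ...but a grant at Employee/Marketing does not reach the higher level.
print(covers("Employee/Marketing", "Employee"))                   # False
```

Note that the prefix comparison includes the path separator, so a grant on Employee/Mark does not accidentally cover Employee/Marketing.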
As mentioned earlier, Accounts are designed to provide a level of granularity to content security controls by providing a deeper level of control than Roles and Security Groups alone. Because every content item has to be assigned to one, and only one, Security Group, users must have at least one Role that grants access to that Security Group in order to see content. A user may have an Account assignment with Read, or greater, permissions, but may still not have access to the content if they cannot enter the Security Group it has been assigned to.
WCC evaluates the rights a user has been assigned to a Security Group through the user’s Role assignments. WCC then evaluates the permissions assigned through the user’s Account assignments. This evaluation produces the combined permission that the user ultimately has on the content item. If the user’s Role grants Read, Write, Delete, and Administrate and his or her Account assignment is Read and Write, then WCC intersects the two, resulting in Read Write access in this particular case.
WCC Access Control Lists (ACLs) provide another layer of controls for securing content. However, ACLs are counterintuitive: they do not work the way most people expect them to.
ACLs are assigned to individual users, to groups of users (utilizing WCC’s Aliases), or to defined WCC Roles. Any combination of these may be applied to control users across multiple levels.
Most users would expect that when they assign an ACL to content or a folder, the permission assignment would “overrule” the security normally granted to other users for that piece of content. ACLs actually work in the same fashion that WCC Roles and Security Groups work with WCC Accounts: the actual permission granted to a user for a piece of content or a folder when ACLs are applied is an intersection of the overall permissions evaluated from Roles, Accounts, and ACLs.
For example, Jill has Read Write access to all content assigned to the WCC Security Group Public through her WCC Role of Public_Contributor. Another user checks in a piece of content, assigns it to the Public Security Group, and then directly assigns “R” permission to Jill. The intersection between her WCC Role of Read Write and the applied ACL of Read is Read for this particular content item.
A more complex situation, including Accounts, Roles, and ACLs, means that WCC has to evaluate all the credentials and determine the final security for the user on a specified content item or folder. A user with Read Write to a Security Group, Read Write Delete to an Account, and an ACL assignment of Read Write Delete Admin to a piece of content with the same Security Group and Account will receive Read Write permission to the document.
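The evaluation just described – the greatest grant wins within a credential type, and the results are intersected across types – can be sketched with plain set arithmetic. This is a hypothetical illustration of the rule as stated in this article, not WCC code; permissions are modeled as sets of the letters R, W, D and A:

```python
def best(grants):
    """Within one credential type (Roles, Accounts, or ACLs),
    the greatest permission wins: take the union of all grants."""
    return set().union(*(set(g) for g in grants))

def effective(role_grants, account_grants, acl_grants=None):
    """Across credential types, WCC intersects the combined grants."""
    result = best(role_grants) & best(account_grants)
    if acl_grants is not None:
        result &= best(acl_grants)
    return result

# Role grants RWDA, Account grants RW -> the intersection is RW.
print(sorted(effective(["RWDA"], ["RW"])))           # ['R', 'W']
# Role RW, Account RWD, ACL RWDA -> the user still ends up with RW.
print(sorted(effective(["RW"], ["RWD"], ["RWDA"])))  # ['R', 'W']
# No Role into the Security Group -> nothing to intersect: no access.
print(effective([], ["RW"], ["RW"]))                 # set()
```

The last call mirrors the null-intersection case described later for collaborative security: without a Role into the Security Group, Account and ACL grants have nothing to intersect with.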
Painting the Pictures
An artist wanting to paint a picture of the ocean can do it in an infinite number of ways and styles. An administrator wanting to apply security to their enterprise content can likewise do it in many different ways, based on several different approaches. The toughest part, for the artist and the administrator alike, is to determine exactly what they want to paint or what they want to secure, and how.
In terms of working with the “enterprise”, it can be difficult to determine what is needed, since so many people and groups can be impacted. In my experience, the majority of organizations have two types of content: Company Public and Departmental Restricted. I also see a rising need for collaborative security (usually driven by organizations that are familiar with departmental collaboration and want to apply group-level security across an enterprise). A final model that will be discussed is the Exception model, which provides a structured approach to departmental security and managed exceptions to permit non-departmental access or restrictions within a department.
Company Public and Departmental Restricted (CPDR)
In a CPDR model, companies see the majority of their content as company public domain, but want controls over who can edit it. This model also addresses the need to have content that is restricted to departmental use.
“What happens if an employee performs a search and finds a document that was presumed to be secure, or receives a link to such a document and is able to open it and view the document?” In most organizations, and for the majority of content, the answer is “as long as they cannot change it, it is okay.” It seems that many companies view the targeting of content as the same as content security (for this article, we will only focus on security). Occasionally, the answer is, “that would be very bad, that is a classified document or intended for a certain management level.” The second answer is usually given by legal, financial, human resources, or other controlled departments. The CPDR model allows for both answers by maintaining a minimum set of Security Groups and Roles and relying on two types of Account structures.
CPDR utilizes two primary Account structures: one for providing Read access to all public content while controlling who can edit it, and another for restricting both the consumption and contribution of content to a department structure. It also utilizes two primary Security Groups, for Public content and for Controlled or Restricted content, with Roles for each to control Read access, Read Write access, and Read Write Delete Admin access. By doing so, organizations can submit content for general consumption to the Security Group Public and assign an account under the Employee tree to provide global consumption but controlled contribution. Automatically assigning all employees the Public_Consumer Role and Read to the root of the Employee account structure makes this possible. Content that is specific to a particular department, and should only be accessed by that department, can be assigned either to the Public or Restricted Security Group and then to the appropriate level within a Department Account structure. Utilizing the Restricted Security Group allows departments to have Departmental Public content as well as “hidden or restricted” departmental content that only a select few within each department can view.
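As a minimal sketch of the CPDR read decision – assuming slash-separated account paths and illustrative group and account names of my own, not an actual WCC configuration:

```python
def can_read(user_role_groups, user_accounts, content_group, content_account):
    """Hypothetical CPDR read check: the user needs a Role into the
    content's Security Group AND an Account grant at the content's
    account or somewhere above it in the tree."""
    role_ok = content_group in user_role_groups
    acct_ok = any(content_account == acct
                  or content_account.startswith(acct + "/")
                  for acct in user_accounts)
    return role_ok and acct_ok

# Every employee: Public_Consumer (Read on the Public group) plus
# Read at the root of the Employee account tree.
roles, accounts = {"Public"}, ["Employee"]

# Company-public content is readable company-wide...
print(can_read(roles, accounts, "Public", "Employee/Marketing"))  # True
# ...but departmental content in the Restricted group stays hidden.
print(can_read(roles, accounts, "Restricted", "Dept/Finance"))    # False
```

This only models Read access; contribution would be controlled by granting Write at deeper points of the Employee tree, as the paragraph above describes.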
The following outlines the elements of a CPDR modeled security implementation:
This model lends itself to large organizations taking an enterprise-wide view of their content. This model is highlighted by how it:
• Minimizes the overall number of Roles and Accounts (which relate to LDAP Groups) that must be managed
• Minimizes the overall number of Role and Account assignments typical users must be assigned
• Provides the ability to designate enterprise content or department-specific content
Collaborative Security
Some content management applications are based on the concept that groups of users collaborate on content and need to add participants in a more fluid manner. While this strategy works well on a small scale, it rarely lends itself to supporting “published” or finalized content that needs to be shared across the enterprise. In most cases, organizations will have multiple instances or installs of these types of systems to support each department.
To make Collaborative Security work at the enterprise level, it is recommended that organizations start with the CPDR model as a foundation. By adding WCC ACLs (Access Control Lists) into the mix, companies can specify a security model to support the ad hoc assignment of permissions to users, groups of users, or even WCC Roles. As described earlier in this article, ACLs act as an additional filter on the security that has already been assigned to users. Based on that, additional Security Groups and Accounts may be required.
In most instances, adding a “Collaboration” Security Group with a single Collaboration Role that grants Read, Write, Delete, and Admin will suffice, without having to add any additional Accounts (unless the company wants to have strict limits on participation). All users should have the default Role of Collaboration assigned to them.
When users create a folder for their collaboration efforts, they need to select the Collaboration Security Group and then assign a specific set of users with defined ACL rights, groups of users with defined ACL rights, and/or WCC Roles with ACL rights. The same holds true for content items as well. The net result can be illustrated by the following example:
“All users of the Company-x repository have been assigned the LDAP Group Collaboration (which maps to the WCC Role of Collaboration), granting them Read, Write, Delete, and Admin (RWDA) privileges to all content and folders assigned to the Collaboration WCC Security Group. Bob, a project manager, needs to create a collaboration folder in the repository and wants to assign specific rights to users. He creates the folder and assigns it the WCC Security Group of Collaboration. Bob then assigns himself as a direct user with RWDA ACL permissions. (It is best practice for the creator of folders and content in the Collaboration model to assign themselves the RWDA permissions so that someone else cannot inadvertently override their access.) Bob also directly assigns Mary RW ACL permissions, Beth RWD permissions, and the Finance Group (controlled by WCC Aliases) R permission.
“Mary is new to Company-x and has not been assigned to the LDAP Group Collaboration yet. Because WCC evaluates the intersection of Roles to Roles, Roles to Accounts, Roles to ACLs, Accounts to ACLs, and Roles to Accounts to ACLs, Mary will not be able to access the folder or any of the content Bob created. Her permission intersection is null (since she has no rights to the Security Group Collaboration, there is nothing to intersect with her ACL permissions of RW). Beth, on the other hand, does have the LDAP Group Collaboration, and her intersection of permissions gives her RWD to the folder and its content. Assuming that the users assigned to the Finance Alias Group have the LDAP Group of Collaboration, their intersection of permissions is Read for the folder and its content.”
The following outlines the elements of a Collaboration modeled security implementation:
This model lends itself to large organizations taking an enterprise-wide view of their content while supporting collaboration groups for content. This model is highlighted by how it:
• Minimizes the overall number of Roles and Accounts (which relate to LDAP Groups) that must be managed
• Minimizes the overall number of Role and Account assignments typical users must be assigned
• Provides the ability to designate enterprise content or department-specific content
• Provides flexibility for users to assign permissions to content and folders without having to extend or add Roles and Accounts for each new project or collaboration group
Exception Model
Some organizations have always managed their content by departments or divisions, with very little public sharing of content. In many ways, this is like the CPDR model minus the Account tree that provides global access with controlled contribution. In addition, many organizations take a very controlled and structured approach to assigning security privileges and do not permit the ad hoc assignments that ACLs bring to the table.
Managing exceptions comes from having folder structures in which a division or department expects its own people to see the structures and content, does not let other departments or divisions have access, but also hides folders within a department or division from some of that department or division’s own people. The model also assumes that the hierarchical nature of Accounts is being utilized for access to the folders.
A company might have a division that contains a folder for all management documents. The Account for that folder may be assigned to DIVXMGT (Division X management account). It is expected that all managers will have access to the management folder and all of its subfolders and content EXCEPT for the Employee Files folder, which only certain people should be able to access. If ACLs were used, then a manager within Division X could assign specific users to that folder. The downside to doing that is that anyone on the ACL list with at least Read and Write permission could grant other users access to that folder and its content. The company in this example realizes that ACLs could cause serious issues with ad hoc granting of security, so they have decided to use the Exception model instead. To protect the Employee Files folder and all of its content, they have assigned it to an exception account, EDIVXMGTEFILES (E to denote an exception Account tree, DIVX for the division, MGT for the management level access, and EFILES to secure employee files). If the division had assigned it to the normal DIVXMGT account of the parent folder, then anyone who could access the Management folder could see the Employee Files folder. By assigning the exception account, users must have specific rights to the account in order to see it at all.
Utilizing Exception Accounts will increase the number of Accounts that need to be managed and added to LDAP as groups, but it does provide a highly restricted security model with tight controls for granting permission to users.
The following outlines the elements of an Exception modeled security implementation:
This model lends itself to large organizations taking a strict departmental/division view of their content while supporting the ability to manage access exceptions. This model is highlighted by how it:
• Provides the ability to designate department-specific content
• Provides flexibility to assign permissions for content and folders to specific LDAP groups to support exception access controls
Striking the Balance
Designing a security model to meet the needs of any organization is a balancing act between providing the right amount of access and permissions and limiting the amount of administration required to support it. While there are many different approaches, the three listed in this article are the most common models and seem to fit a wide variety of my clients’ requirements. My preference is to use either the CPDR model or the Collaboration model when possible. That said, there are times when the Exception model is the right approach.
The following matrix provides some guidelines on when to use which model:
Special Note for the Reader
In all the examples of Accounts in this article, I have shown a full, or almost full, account name such as Employee/Marketing/Creative to illustrate the types of accounts and structures utilized by each model. This is not practical in a true implementation due to limitations of the dAccount field size in WCC. In most cases, an abbreviation method or numbering sequence is used to represent the accounts. For example, Employee would be the Account 01; the next level, Marketing, would be 01 (and another entry at that level, like Finance, would be 02); Creative would be 01; and ArtDept would be 01. The full Account value for Employee/Marketing/Creative/ArtDept would be 01010101, and the Account value for Employee/Finance/Receivables/Management would be 01020101. It is common practice to provide a display name that makes sense to users, and a storage value and LDAP group name based on numbering or abbreviations, to stay under the field size limits.
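The numbering convention can be sketched as a per-parent sibling lookup. The sibling lists below are illustrative, filled in only with the names used in this article:

```python
# Children of each account node, in sibling order -- illustrative only.
CHILDREN = {
    None:          ["Employee"],
    "Employee":    ["Marketing", "Finance"],
    "Marketing":   ["Creative"],
    "Finance":     ["Receivables"],
    "Creative":    ["ArtDept"],
    "Receivables": ["Management"],
}

def encode(path: str) -> str:
    """Turn a display path such as Employee/Marketing/Creative/ArtDept
    into a compact dAccount value (two digits per level, 1-based)."""
    parent, value = None, ""
    for name in path.split("/"):
        value += f"{CHILDREN[parent].index(name) + 1:02d}"
        parent = name
    return value

print(encode("Employee/Marketing/Creative/ArtDept"))      # 01010101
print(encode("Employee/Finance/Receivables/Management"))  # 01020101
```

Note that siblings are numbered per parent, which is why Receivables is 01 under Finance even though Marketing already holds 01 under Employee.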
There are many different ways that organizations can model security, and each company will have its own specific requirements. For many, it is a daunting task and it may be difficult to determine where to start. Many firms have years of experience in designing, developing, and deploying security models; companies engaging in modeling security for their content needs should seek a firm with specific experience in the content management application, to ensure a security model that fits the needs of the enterprise. Leveraging external resources brings years of experience and best practices to your initiatives.
Troy Allen is Director of WebCenter Solutions and Training at the Atlanta, USA based TekStream Solutions.
The Picture Desk
Database
The initial release of Oracle Database 12c has been available since late June 2013. It is a long-anticipated release – it took almost four years since the previous major database version – characterized first and foremost by the multitenant architecture with pluggable databases.
This major architectural change – the biggest one since Oracle V6 was the first to support parallelism – is impressive and potentially has great impact from the administration point of view. For database developers, however, this mechanism is entirely transparent. The question for developers now becomes: what is the big news in this 12c release – what is in it for me?
This article introduces a number of features that are new or enhanced in Oracle Database 12c and that are – or could and should be – of relevance to application development: features that make things possible that were formerly either impossible or very hard to do efficiently, features that make life easier for developers, and features that currently may appear like a solution looking for a problem.
After reading this article, as a developer you should have a good notion of what makes 12c of interest to you and what functionality you probably should take a closer look at in order to benefit from 12c when it arrives in your environment.
To very succinctly list some highlights: SQL Pagination, Limit and Top-N Query; SQL Pattern Matching; in-line PL/SQL Functions; Flashback improvements; revised Default definition; Data Masking; security enhancements and miscellaneous details.
SQL Translation
The use case: an application sends SQL to the Oracle Database that is less than optimal. Before 12c, we were able to use Plan Management to force the optimizer to apply an execution plan of our own design. This allowed interference in a non-functional way – typically to improve performance. The new 12c SQL Translation framework brings similar functionality at a more functional level. We can create profiles that instruct the database to replace specific SQL statements received from an application with alternative SQL statements. These alternative statements can make use of the same bind parameters that are used in the original statement. The alternative statement is expected – obviously – to return a result set with the same structure as the statement it replaces.
This mechanism allows us to make an application run on SQL that is optimized for our database – for example, using optimized Oracle SQL for an application that ships only generic SQL, or using queries that contain additional join or filter conditions that make sense in our specific environment. Especially when 3rd party COTS applications are used, or when frameworks in .NET, SOA, Java and other middleware applications generate the SQL for accessing the database, the SQL Translation framework is an option to ensure that only desirable SQL is executed.
A simple example of using the SQL Translation framework:
BEGIN
  DBMS_SQL_TRANSLATOR.REGISTER_SQL_TRANSLATION(
    profile_name    => 'ORDERS_APP_PROFILE',
    sql_text        => 'select count(*) from orders',
    translated_text => 'select count(*) from orders_south'
  );
END;
The result of this statement is that a mapping is registered in the named profile ORDERS_APP_PROFILE that specifies that when the query ‘select count(*) from orders’ is submitted, the database will in fact execute the statement ‘select count(*) from orders_south’. A profile can have many such mappings associated with it. A profile is typically created for each application for which SQL statements need to be translated.
Before you can create such a profile, the schema needs to have been
granted the create sql translation profile privilege. In the session in which
we want a profile to be applied, we need to explicitly alter the session and
set the sql_translation_profile. Finally, the 10601 system event must be
set.
See for example this blog article for more details on SQL Translation: https://blogs.oracle.com/dominicgiles/entry/sql_translator_profiles_in_oracle. This article is also very useful: http://kerryosborne.oracle-guy.com/2013/07/sql-translation-framework/.
SQL Pattern Matching
Analytical functions were introduced in Oracle SQL in the 8i release and extended in 9i and, to a small extent, in 10g and 11g (LISTAGG). These functions added to SQL the ability to determine the outcome of a result row using other result rows, for example using LAG and LEAD to explicitly reference other rows in the result set. This ability provided tremendous opportunities to calculate aggregates, compare rows, spot fixed row patterns and more in an elegant, efficient manner.
The 12c release adds SQL Pattern Matching functionality to complement the analytical functionality. It too is used to analyze across multiple rows in the result set, and specifically to spot occurrences of patterns between these rows. However, pattern matching goes beyond analytical functions in its ability to find ‘dynamic’ and ‘fuzzy’ patterns instead of only predefined, fixed patterns.
A simple example of this apparently subtle distinction
would be, using the following table with ‘color events’:
Fixed Pattern: find all occurrences of three subsequent records with the
payload values ‘red’, ‘yellow’ and ‘blue’.
Variable Pattern: find all occurrences of subsequent records, starting
with one or more
‘red’ records, followed by one or more ‘yellow’ records, followed by one or
more ‘blue’ records.
Both patterns result in the famous color combination ‘red, yellow and
blue’. However, the variable pattern is much more flexible. Using analyti-
cal functions, the fixed pattern is easily queried for. The variable pattern
however is much harder – if doable at all. The new SQL Pattern Matching
functionality is perfectly equipped to tackle this kind of challenge.
The SQL for this particular task would be like this:
SELECT *
FROM   events
MATCH_RECOGNIZE
( ORDER BY seq
  MEASURES RED.seq        AS redseq
  ,        MATCH_NUMBER() AS match_num
  ALL ROWS PER MATCH
  PATTERN (RED+ YELLOW+ BLUE+)
  DEFINE
    RED    AS RED.payload    = 'red',
    YELLOW AS YELLOW.payload = 'yellow',
    BLUE   AS BLUE.payload   = 'blue'
) MR
ORDER BY MR.redseq
,        MR.seq;
The core of this statement is the PATTERN that is to be found. This pattern
is in fact a regular expression that refers to occurrences labeled RED, YELLOW
and BLUE. These occurrences are defined as a record with a payload
value of ‘red’, ‘yellow’ and ‘blue’ respectively. The conditions used to
define occurrences can be a lot more complex than these ones; they can for
example include references to other rows in the candidate pattern, using the
keywords PREV and NEXT (similar in function to LAG and LEAD).
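To illustrate, a DEFINE clause referencing previous rows might look like this. The TICKER table and the V-shape pattern are illustrative (borrowed from the classic stock ticker example), not part of this article's data set:

```sql
-- find 'V' shapes: a strictly falling run of prices followed by a strictly rising run
SELECT *
FROM   ticker
MATCH_RECOGNIZE
( PARTITION BY symbol
  ORDER BY tstamp
  MEASURES MATCH_NUMBER() AS match_num
  ALL ROWS PER MATCH
  PATTERN (STRT DOWN+ UP+)
  DEFINE
    -- PREV() compares the current candidate row with the previous row
    DOWN AS DOWN.price < PREV(DOWN.price),
    UP   AS UP.price   > PREV(UP.price)
) MR;
```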
A somewhat more involved example uses a table of observations:
In the collection of observations, we try to find the
longest sequence of the same observations – the
longest stretch of A or B values. However, we have de-
cided to allow for a single interruption. So AAABAAAA
would count as a sequence with length 8, despite
the interruption with a single B value. The sequence
AAABBAAAA however is not a single sequence – it con-
sists of three sequences: AAA,BB and AAAA.
The SQL statement for this challenge uses SQL Pattern Matching and can
be written like this:
SELECT substr(section_category,1,1) cat
,      section_start
,      seq
FROM   observations
MATCH_RECOGNIZE
( ORDER BY seq
  MEASURES SAME_CATEGORY.category   AS section_category
  ,        FIRST(SAME_CATEGORY.seq) AS section_start
  ,        seq                      AS seq
  ,        COUNT(*)                 AS rows_in_section
  ONE ROW PER MATCH
  AFTER MATCH SKIP TO NEXT ROW -- a next row in the current match
                               -- may be the start of a next string
  PATTERN (SAME_CATEGORY+ DIFFERENT_CATEGORY{0,1} SAME_CATEGORY* )
  DEFINE
    SAME_CATEGORY AS SAME_CATEGORY.category =
                     FIRST(SAME_CATEGORY.category)
  , DIFFERENT_CATEGORY AS DIFFERENT_CATEGORY.category !=
                           SAME_CATEGORY.category
) MR
order
by rows_in_section desc
Note: the MATCH_RECOGNIZE syntax is virtually the same as the syntax
used in CQL or Continuous Query Language. CQL is used in Oracle Event
Processor (fka Complex Event Processor) to process a continuous stream
of events to identify trends and patterns, find outliers and spot missing
events.
This blog article gives an example of using the SQL Pattern Match to find
the most valuable player in a football match: http://technology.amis.
nl/2013/07/24/oracle-database-12c-find-most-valuable-player-using-
match_recognize-in-sql/ . A more general introduction to Pattern Match-
ing in Oracle Database 12c is given in this article: http://technology.amis.
nl/2013/06/27/oracle-database-12c-pattern-matching-through-match_
recognize-in-sql/ .
In-line PL/SQL Functions
In Oracle Database 9i, the select statement was changed quite dramatically:
the WITH clause, through which inline views can be defined, was introduced,
meaning that a select statement could start with WITH.
Inline views proved a very powerful instrument for SQL developers – making
the creation of complex SQL queries much easier. In 12c, another big
step is taken with the SQL statement through the introduction of the inline
PL/SQL function or procedure.
An example:
WITH
  procedure increment( operand in out number
                     , incsize in number)
  is
  begin
    operand := operand + incsize;
  end;
  FUNCTION inc(value number) RETURN number IS
    l_value number(10) := value;
  BEGIN
    increment(l_value, 100);
    RETURN l_value;
  end;
SELECT inc(sal)
from   emp
Here we see a simple select statement (select inc(sal) from emp). The
interesting bit is that PL/SQL function INC is defined inside this very
SQL statement. The DBA will never be bothered with a DDL script for the
creation of this function INC; in fact, that function is available only during
the execution of the SQL statement and does not require any administration
effort. Another important aspect of inline PL/SQL functions: these
functions do not suffer from the regular SQL <> PL/SQL context switch that
adds so much overhead to interaction between SQL and PL/SQL. Inline
PL/SQL functions are compiled ‘in the SQL way’ and therefore do not
require the context switch. Note that by adding the PRAGMA UDF switch to
any stand-alone PL/SQL program unit, we can also have it compiled the
SQL way, meaning that it can be invoked from SQL without context switch
overhead. When such a program unit is invoked from regular PL/SQL units,
those calls will suffer from a context switch.
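A minimal sketch of the PRAGMA UDF variant described above (the function name INC_UDF is illustrative, not from the article):

```sql
-- stand-alone function compiled 'the SQL way'
CREATE OR REPLACE FUNCTION inc_udf(p_value NUMBER) RETURN NUMBER
IS
  PRAGMA UDF;  -- optimize for invocation from SQL, avoiding the context switch
BEGIN
  RETURN p_value + 100;
END;
/
-- invoked from SQL without context switch overhead:
SELECT inc_udf(sal) FROM emp;
```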
Inline PL/SQL functions and procedures can invoke each other and them-
selves (recursively). Dynamic PL/SQL can be used – EXECUTE IMMEDIATE.
The following statement is legal – if not particularly good programming:
WITH
  FUNCTION EMP_ENRICHER(p_operand varchar2) RETURN varchar2 IS
    l_sql_stmt varchar2(500);
    l_job      varchar2(500);
  BEGIN
    l_sql_stmt := 'SELECT job FROM emp WHERE ename = :param';
    EXECUTE IMMEDIATE l_sql_stmt INTO l_job USING p_operand;
    RETURN ' has job '||l_job;
  END;
SELECT ename || EMP_ENRICHER(ename)
from   emp
Some details on Inline PL/SQL Functions are described in this blog article:
http://technology.amis.nl/2013/06/25/oracle-database-12c-in-line-plsql-
functions-in-sql-queries/.
Flashback for application developers
A major new feature in Oracle Database
9i was the introduction of the notion of
flashback. Based on the UNDO data that
has been leveraged in the Oracle Data-
base since time immemorial to produce
multi version read concurrency and long
running query read consistency, flash-
back was both spectacular and quite
straightforward. The past of our data as it
existed in some previous point in time is
still available in the database, ready to be
unleashed. And unleashed it was, through
Flashback Table and Database – for fine
grained point in time recovery – as well
as through Flashback Query and Flashback Versions (10g) in simple SQL
queries.
Before the 11g release of the database the usability of flashback was
somewhat limited for application developers because there really was
not much guarantee as to exactly how much history would be available
for a particular data set. Would we be able to go back in time for a week,
a month or hardly two hours? It depended on that single big pile of UNDO
data where all transactions dumped their undo stuff.
The Flashback Data Archive was introduced in the 11g release – touted
as the Total Recall option. It made flashback part of database design: per
table can be specified if and how much history should be retained. This
makes all the difference: if the availability of history is assured, we can
start to base application functionality on that fact.
A couple of snags still existed with the 11g situation:
• 11g Flashback Data Archive requires the Database Enterprise Edition (EE)
with the Advanced Compression database option
• In 11g, the history of the data is kept but not the meta-history of the
transactions, so the flashback data archive does not tell you who made a
change
• In 11g, the start of time in your flashback data archive is the moment
at which the table is associated with the archive; therefore: your history
starts ‘today’
Now for the good news on Flashback in Oracle Database 12c – good news
that comes in three parts
1. As of 12c – Flashback will capture the session context of transactions.
To set the user context level (determining how much user context is to
be saved), use the DBMS_FLASHBACK_ARCHIVE.SET_CONTEXT_LEVEL
procedure. To access the context information, use the DBMS_FLASHBACK_
ARCHIVE.GET_SYS_CONTEXT function. (The DBMS_FLASHBACK_ARCHIVE
package is described in Oracle Database PL/SQL Packages and Types
Reference.)
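A sketch of how these two procedures might be used; the ‘ALL’ context level and the XID-based lookup in a flashback versions query are assumptions based on the package documentation, not code from this article:

```sql
-- capture full session context for flashback archived transactions
BEGIN
  DBMS_FLASHBACK_ARCHIVE.SET_CONTEXT_LEVEL(level => 'ALL');
END;
/
-- later: retrieve the session user recorded for each archived transaction
SELECT DBMS_FLASHBACK_ARCHIVE.GET_SYS_CONTEXT(
         versions_xid, 'USERENV', 'SESSION_USER') AS who
,      emp.*
FROM   emp
       VERSIONS BETWEEN TIMESTAMP systimestamp - INTERVAL '1' DAY
                    AND systimestamp;
```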
2. As of 12c – you can construct and manipulate the contents of the
Flashback Data Archive. In other words: you can create your own history.
This means that a flashback query can travel back in time to way
beyond the moment you turned on the FDA. In fact, it can go to before the
introduction of the Flashback feature in the Oracle Database and even
before the launch of the Oracle RDBMS product. It is in your hands! History
is imported and exported using DBMS_FLASHBACK_ARCHIVE procedures:
create a temporary history table, load it with the desired history data
(using a variety of methods, including Data Pump), and then import that
table into the designated history table. Support is also included for importing
user-generated history: if you have been maintaining history using some
other mechanism, such as triggers, you can import that history into the
Flashback Data Archive.
3. As of 12c, Flashback Data Archive is available in every edition of the
database (XE, SE, SE One, EE).
All of the above means that any application developer developing an
application that will run against an Oracle Database 12c instance can benefit
from flashback in queries. Fine grained flashback based on flashback data
archives defined per table can be counted on. These archives can be populated
with custom history data – for example taken from existing, custom
journaling tables. Finally, flashback can be configured to keep track of
the session context at the time of each transaction, to capture for example
the client identifier of the real end user on whose behalf the transaction is
executed.
The syntax for flashback queries and flashback versions queries is the
same in 12c as in earlier releases.
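Setting up a per-table flashback data archive might look like this; the archive name, tablespace and retention period are illustrative:

```sql
-- create an archive with a 5 year retention, in an existing tablespace
CREATE FLASHBACK ARCHIVE emp_history_fda
  TABLESPACE users
  RETENTION 5 YEAR;

-- associate the table with the archive; history is retained from here on
ALTER TABLE emp FLASHBACK ARCHIVE emp_history_fda;

-- the familiar flashback query syntax is unchanged in 12c
SELECT * FROM emp AS OF TIMESTAMP systimestamp - INTERVAL '1' HOUR;
```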
SQL Temporal Validity aka Effective Data Modeling
The SQL 2011 standard – which Oracle helps create and uphold – introduced
a fairly new concept called ‘temporal database’, associated with
terms such as Valid Time and Effective Date. This concept is explained in
some detail in Wikipedia: http://en.wikipedia.org/wiki/SQL:2011#Temporal_support.
The short story is that a substantial number of records in our
databases are somehow associated with time periods. Such records have
a certain start date or time and a certain end timestamp. Between these
two points in time the record is valid or effective, and outside that period
it is not. Examples are price, discount, membership, allocation, subscription,
employment, life in general. In a temporal database, or one that supports
temporal validity, the database itself is aware of the effective date:
it knows when records are valid from a business perspective. This knowledge
can be translated into more efficient execution plans and enforcement
of constraints related to the time based validity of the data.
Flashback queries based on transaction time return records as they existed
in the database at the requested timestamp, regardless of what their
logical status was at that time. Flashback queries based on valid date look
at the valid time period for each record and use that to determine whether
the record ‘logically existed’ at the requested timestamp.
Take a look at this example: table EMP has been extended with a
FIREDATE column and an effective time period based on HIREDATE and
FIREDATE:
CREATE TABLE EMP
( employee_number NUMBER
, salary NUMBER
, department_id NUMBER
, name VARCHAR2(30)
, hiredate TIMESTAMP
, firedate TIMESTAMP
, PERIOD FOR employment (hiredate, firedate)
);
We can now execute the following flashback query, based on the effective
date as indicated by the employment period, to find all employees that
were active at June 1st 2013:
SELECT *
FROM   emp AS OF PERIOD FOR employment
       TO_TIMESTAMP('01-JUN-2013 12.00.01 PM')
Just like we can go back in time in a session for transaction based flash-
back using the dbms_flashback package, we can do the same thing for
effective time based flashback:
EXECUTE DBMS_FLASHBACK_ARCHIVE.enable_at_valid_time
( 'ASOF'
, TO_TIMESTAMP('29-JUL-13 12.00.01 PM')
);
Any query executed in that session after this statement has been executed
will only return data that is either not associated with a valid time period
or that is valid on the 29th of July 2013.
A similar statement ensures that we will always see the records that are
currently valid:
EXECUTE DBMS_FLASHBACK_ARCHIVE.enable_at_valid_time('CURRENT');
And the default of course is that we will always see all records, regardless
of whether they are valid or not:
EXECUTE DBMS_FLASHBACK_ARCHIVE.enable_at_valid_time('ALL');
These first steps in the 12.1 release of the Oracle Database on the road
towards full temporal database support are likely to be followed by a lot
of additional functionality in upcoming releases. The SQL 2011 standard
defines a number of facilities in SQL and around database design that are
likely to make their way into the Oracle Database at some point in the not
too distant future.
These could include:
These could include:
• Valid time aware DML - Update and deletion of application time rows
with automatic time period splitting
• Temporal primary keys incorporating application time periods with op-
tional non-overlapping constraints via the WITHOUT OVERLAPS clause
• Temporal referential constraints that take into account the valid-time
during which the rows exist: Child needs to have a valid Master at any time
during its own validity
• Application time tables are queried using regular query syntax or using
new temporal predicates for time periods including CONTAINS, OVERLAPS,
EQUALS, PRECEDES, SUCCEEDS, IMMEDIATELY PRECEDES, and
IMMEDIATELY SUCCEEDS
• Temporal Aggregation - group or order by valid-time
• Normalization - coalescing rows which are in adjacent or overlapping
time periods
• Temporal joins – joins between tables with valid-time semantics based
on ‘simultaneous validity’
• Use the Valid Time information for Information Lifecycle Management
(ILM) to assess records to move
The support for valid time modeling is potentially far reaching. If valid
period related data is common in your database, it might be a good idea
to study the theory and reference cases and keep a close watch on what
Oracle’s next moves are going to be.
Inspecting the PL/SQL Call Stack
In Oracle Database 10g, the package dbms_utility offered two procedures
(DBMS_UTILITY.FORMAT_ERROR_BACKTRACE and
DBMS_UTILITY.FORMAT_CALL_STACK) that helped inspect the call stack
during PL/SQL execution. This provides insight into the program units that
have been invoked to get to the current execution (or exception) point.
The output of these units is formatted for human consumption and is not
very useful for automated processing.
In the 12c release, PL/SQL developers get a new facility that makes call
stack information available in a more structured fashion that can be used
programmatically: the new PL/SQL package UTL_CALL_STACK provides an
API for inspecting the PL/SQL call stack.
The following helper procedure demonstrates how utl_call_stack can be
accessed to get information about the current call stack:
procedure tell_on_call_stack
is
  l_prg_uqn UTL_CALL_STACK.UNIT_QUALIFIED_NAME;
begin
  dbms_output.put_line( '==== TELL ON CALLSTACK ==== '
                        || UTL_CALL_STACK.DYNAMIC_DEPTH );
  for i in 1..UTL_CALL_STACK.DYNAMIC_DEPTH loop
    l_prg_uqn := UTL_CALL_STACK.SUBPROGRAM(i);
    dbms_output.put_line( l_prg_uqn(1)
      || ' line ' || UTL_CALL_STACK.UNIT_LINE(i)
      || ' '
      || UTL_CALL_STACK.CONCATENATE_SUBPROGRAM( UTL_CALL_STACK.SUBPROGRAM(i) )
    );
  end loop;
end tell_on_call_stack;
When this helper procedure is used from a simple PL/SQL fragment that
performs a number of nested calls:
create or replace package body callstack_demo
as
  function b( p1 in number, p2 in number) return number is
    l number := 1;
  begin
    tell_on_call_stack;
    return l;
  end b;

  procedure a ( p1 in number, p2 out number) is
  begin
    tell_on_call_stack;
    for i in 1..p1 loop p2 := b(i, p1); end loop;
  end a;

  function c( p_a in number) return number is
    l number;
  begin
    tell_on_call_stack;
    a(p_a, l);
    return l;
  end c;
end callstack_demo;
The resulting output gives insight into how the anonymous PL/SQL call to
package CALLSTACK_DEMO was processed. The initial call from the anonymous
block got to line 50 in function C. One level deeper, from line 51 in C, a
call had been made to procedure A. One level deeper still, a call from line
40 in A had been made to B.
Package UTL_CALL_STACK contains several other units that help with the
call stack inspection. See for example this article for some examples:
http://technology.amis.nl/2013/06/26/oracle-database-12c-plsql-pack-
age-utl_call_stack-for-programmatically-inspecting-the-plsql-call-stack/.
Default Column value
Specifying a default value for a column has been possible in the Oracle
Database for a very long time now. It’s fairly simple: you specify in the
column definition which value the database should apply automatically
to the column in a newly inserted record if the insert statement does not
reference that particular column. When an application provides a NULL
for the column, the default value is not applied; only when the column is
missing completely from the insert statement will the default kick in.
One typical example of a default value is the assignment of a primary key
value based on a database sequence. However, that particular use case
was never supported by the Oracle Database because a default value
could only be either a constant, a reference to a pseudo-function such as
systimestamp or an application context.
In 12c, things have changed for the column default. We can now specify
that a default value should be applied also when the insert statement
provides NULL for a column. A column default can now also be based on
a sequence – obviating the use of a before row insert trigger to retrieve
the value from the sequence and assign it to the column. A column can
even be created as an Identity column – meaning that the column is the
primary key with its value automatically maintained (using an implicitly
maintained system sequence). Finally – especially of interest to administrators
– a column can be added to a table with a metadata-only default
value; this means that the default value for the column is not explicitly set
for every record but is instead retrieved from the metadata definition,
which means huge savings in time and storage.
The syntax for creating a default that is applied when a NULL is inserted:
alter table emp
modify ( sal number(10,2)
DEFAULT ON NULL 1000
)
And the syntax for basing the default value on a sequence:
alter table emp
modify ( empno number(5) NOT NULL
DEFAULT ON NULL EMPNO_SEQ.NEXTVAL
)
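The identity column mentioned above can be sketched like this (the table name is illustrative):

```sql
-- the column value is maintained through an implicitly created system sequence
CREATE TABLE orders_demo
( id          NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY
, description VARCHAR2(100)
);
```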
Data Masking aka Data Redaction
Ideally, testing of applications can be done using production-like data.
However, generating such data is usually not a realistic option. Using the
real production data in a testing environment seems the better alternative.
However, this data set may contain sensitive data – financial, medical,
personal – that for various reasons should not be visible outside the
production environment. The Data Redaction feature in Oracle Database 12c
supports policies that can be defined on individual tables. These policies
specify how data in the table should be ‘redacted’ before being returned –
in order to ensure that unauthorized users cannot view the sensitive data.
Redaction is selective and on-the-fly – not interfering with the data as it is
stored but only with the way the data is returned.
The next figure illustrates the data redaction process: a normal SQL
statement is submitted and executed. When the results are prepared, the
data redaction policies are applied and the actual results are transformed
through conversion, randomizing and masking. This approach is very similar
to the way Virtual Private Database (fine grained access policies) can
be used to mask data – records or column values.
Redaction can be conditional, based on different factors that are tracked
by the database or passed to the database by applications such as user
identifiers, application identifiers, or client IP addresses. Redaction can
apply to specific columns only – and act in specific ways on the values
returned for those columns.
Here is an example of a redaction policy:
BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema       => 'scott',
    object_name         => 'emp',
    column_name         => 'hiredate',
    policy_name         => 'partially mask hiredate',
    expression          => 'SYS_CONTEXT(''USERENV'',''SESSION_USER'') != ''GOD''',
    function_type       => DBMS_REDACT.PARTIAL,
    function_parameters => 'm1d31YHMS'
  );
END;
This policy ensures that values from column HIREDATE are redacted for
any user except GOD. The Month and Day parts of values for this column
are all set to 1 and 31 respectively; the other date components – Year,
Hour, Minute and Second – are all untouched. The query results from a
query against table EMP will be masked.
Data redaction seems most useful for ensuring that an export from the
production database does not contain sensitive, unredacted data. A
second use could be to ensure that application administrators can do
their job in a production environment, working with all required records
without being able to see the actual values of sensitive columns.
More details on Data Redaction can for example be found in this White Paper:
http://www.oracle.com/technetwork/database/options/advanced-security/
advanced-security-wp-12c-1896139.pdf. A straightforward example
is in this blog article: http://blog.contractoracle.com/2013/06/ora-
cle-12c-new-features-data-redaction.html.
SQL Pagination, Limit
and Top-N Query
A common mistake made by inexperienced Oracle SQL developers is the
misinterpretation of what [filtering on] ROWNUM will do. The assumption
that a query filtering on ROWNUM and ordering by salary will return the
top three earning employees is easily made – even though ROWNUM is
assigned before the ORDER BY is applied. Oracle does not have – at least
not before 12c – a simple SQL syntax to return the first few records from
an ordered row selection. You need to resort to inline views.
Perhaps this is not a big deal to you. Anyway, Oracle decided to provide
a simple syntax in 12c SQL to return the first X records from a query, after
the filtering and sorting has been completed. In our case, this statement
would be used for the Top-3 earning employees:
select *
from emp
order
by sal desc
FETCH FIRST 3 ROWS ONLY;
Slightly more interesting I think is the simple support for row pagination
that is introduced in this fashion. Many applications and services require
the ability to query for records and then show the first set [or page] of
maybe 20 records and then allow the next batch [or page] of 20 records to
be returned. The new SQL syntax for retrieving a subset of records out of a
larger collection looks like this:
select *
from emp
order
by sal desc
OFFSET 20 FETCH NEXT 20 ROWS ONLY;
Here we specify to select all records from emp, sort them by salary in
descending order and then return the 21st through 40th record (if that
many are available). The syntax also supports fetching a certain percentage
of records rather than a specific number. It does not have special support
for ‘bottom-n’ queries. Note: checking the explain plan output for queries
with the pagination or top-n syntax is interesting: the SQL that gets
executed uses familiar analytical functions such as ROW_NUMBER() to
return the correct records – no new kernel functionality was added for this
feature.
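Based on that observation, the OFFSET/FETCH example above is roughly equivalent to this pre-12c formulation (a sketch, not the literal rewritten plan):

```sql
SELECT *
FROM ( SELECT e.*
       ,      ROW_NUMBER() OVER (ORDER BY sal DESC) rn
       FROM   emp e )
WHERE  rn BETWEEN 21 AND 40
ORDER  BY rn;
```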
Security
Many improvements were introduced in this 12c release in the area of se-
curity. Some are primarily of interest to the administrator and others are
quite relevant to application developers.
Capture privilege usage
One of these security related new features is ‘capture privilege usage’
– a facility through which you can inspect the privileges that are actually
required by users to run applications. This feature is introduced to
strengthen the security of the database by enforcing the principle of least
privilege: it tells you which privileges are used in a certain period of time
by a certain user. When you compare these privileges with the privileges
that have actually been granted to the user, there may be some privileges
that have been granted but are not actually required and should probably
be revoked. Also see http://bijoos.com/oraclenotes/oraclenotes/2013/92
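Privilege capture is driven by the DBMS_PRIVILEGE_CAPTURE package. A minimal sketch — the capture name is illustrative, and the result views should be checked against the documentation:

```sql
BEGIN
  -- define a database-wide capture; G_DATABASE records all privilege use
  DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
    name => 'demo_capture',
    type => DBMS_PRIVILEGE_CAPTURE.G_DATABASE);
  DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE('demo_capture');
END;
/
-- after running the application for a representative period:
BEGIN
  DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE('demo_capture');
  DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT('demo_capture');
END;
/
-- the used privileges can then be inspected in views such as DBA_USED_PRIVS
```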
Invoker Rights View
In addition to the Invoker Rights package, which has been around for a long
time already, there now finally also is an invoker rights view – although its
specific syntax uses the term bequeath current_user:
create or replace view view_name
( col1, col2,….)
BEQUEATH CURRENT_USER
as
select …
from table1 join table2 …
This statement specifies that privileged users can reuse the view’s SQL
definition, but the SQL is only applied to database objects owned by
or granted explicitly to the user that invokes the view. Before 12c, anyone
who had the select privilege on a view could query data from that view,
leveraging the select privileges of the view’s owner on all objects referenced
from the view.
Inherit Privileges
In the same area of invoker rights definitions, the database before 12c
contained something of a loophole: when a user invokes an invoker rights
package, anything done by the package is done using the authorizations
of the invoking user – that is after all the whole idea. However, this means
that the code in the package can do things based on the invoking user’s
privileges and channel results to the user who owns the invoker rights
package.
In one example of this, the owner of the invoker rights program unit has added
code to the procedure that leverages the invoking user’s select privilege
on a special table to retrieve data that it then writes to its own TAB_TABLE,
on which it has granted public access. In previous releases, the invoking
user had no control over who could leverage his or her access
privileges when he or she runs an invoker’s rights procedure.
Starting with 12c, invoker’s rights procedure calls can only run with the
privileges of the invoker if the procedure’s owner has the INHERIT PRIVILEGES
privilege on the invoker, or if the procedure’s owner has the INHERIT
ANY PRIVILEGES privilege. This gives invoking users control over who
has access to their privileges when they run invoker’s rights procedures or
query BEQUEATH CURRENT_USER views. Any user can grant or revoke the
INHERIT PRIVILEGES privilege on themselves to the user whose invoker’s
rights procedures they want to run.
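The grant itself might look like this (the user names are hypothetical):

```sql
-- HR_USER allows invoker's rights code owned by SCOTT to run with HR_USER's privileges
GRANT INHERIT PRIVILEGES ON USER hr_user TO scott;
-- and can take that trust away again:
REVOKE INHERIT PRIVILEGES ON USER hr_user FROM scott;
```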
SYS_SESSION_ROLES
A new built-in namespace, SYS_SESSION_ROLES, allows you to determine
if a specified role is enabled in the current session. For example, the
following query determines if the HRM_ADMIN role is enabled for the
current user:
SELECT SYS_CONTEXT('SYS_SESSION_ROLES', 'HRM_ADMIN')
FROM DUAL;
This query returns either ‘TRUE’ or ‘FALSE’.
Attach Roles to Program Units
In 12c, you can attach database roles to program units: functions, procedures,
packages, and types. The role then becomes enabled during
execution of the program unit (but not during compilation of the program
unit). This feature enables you to temporarily escalate privileges in PL/SQL
code without granting the role directly to the user. The benefit of this
feature is that it increases security for applications and helps to enforce
the principle of least privilege.
The syntax is quite straightforward:
GRANT hrm_admin TO procedure scott.process_salaries
If the execute privilege on procedure process_salaries is granted to some
user JOHN_D, then during a call to process_salaries by JOHN_D, an
inspection using SYS_SESSION_ROLES into whether the role
HRM_ADMIN is enabled would return that the role is indeed enabled
– even though that role has not been granted to JOHN_D.
This blog article by Tom Kyte shows more de-
tails on this facility: http://tkyte.blogspot.
nl/2013/07/12c-code-based-access-control-cbac-
part.html.
White List on Program Units
In 12c, we can indicate through a white list which program units are allowed
to access a certain package or procedure. If a white list is specified,
only a program unit on the list for a certain object can access the object.
In the next figure, this has been illustrated. A, B and C as well as P, q, r and s
are all PL/SQL program units in the same database schema. Units q and
s have been associated with a white list. Unit s can only be invoked by
object P and unit q is accessible only from P and r. This means for example
that A, even though it is in the same schema as unit s, cannot invoke
unit s. If it tried to do so, it would run into a PLS-00904: insufficient
privilege to access object s error.
This white list mechanism can be used for example to restrict access to
certain units in a very fine grained way. In the example above, it is almost
like a ‘blue module’ is created in the schema, of which object P is
the public interface and which contains private objects q and s that are for
module-internal use only.
The syntax for adding a white list to a PL/SQL program unit consists of the
keywords accessible by followed by a list of one or more program units.
create package s accessible by (p)
is
procedure …;
end s;
Note that the actual accessibility is checked at run time, not compile time.
This means that you will be able to compile packages that reference program
units on whose white lists they do not appear – but those calls will
fail at run time.
This blog article by Tom Kyte explains PL/SQL white lists very clearly:
http://tkyte.blogspot.nl/2013/07/12c-whitelists.html.
Conclusion
The essence of the 12c release of the Oracle Database does not lie in appli-
cation development facilities. Having said that, there is of course an inter-
esting next step in the evolution of what database developers can do with
the database. SQL and PL/SQL have evolved, allowing for more elegant,
better performing and easier to write programs. Some facilities – for ex-
ample SQL Temporal Validity and Flashback – are potentially far reaching
and may lead to different designs of data models and applications.
The compilation in this article is obviously quite incomplete. I have
mentioned some of the most striking – in my eyes – new and improved
features. Some glaring omissions are in the next list – which is of course
equally incomplete:
• Lateral Inline Views
• (Cross and Outer) Apply for joining with Collections
• VARCHAR2(32k)
• XQuery improvements and other XMLDB extensions
• Java in the Database: Java 6 or 7
• Export View as Table
• DICOM support for WebCenter Content
• New package dbms_monitor for fine grained trace collection
• DBMS_UTILITY.EXPAND_SQL_TEXT for full query discovery
Browsing through the Oracle Documentation on Oracle Database 12c -
http://www.oracle.com/pls/db121/homepage - and browsing the internet
for search terms such as ‘oracle database 12c new feature sql pl/sql’ are
a pretty obvious way of getting more inspiration on what the next gener-
ation of Oracle’s flagship product has to offer. Hopefully this article has
contributed to that exploration as well.
Lucas Jellema is CTO of the Dutch based
company AMIS.
Business intelligence tells you what happened at work. Good business
intelligence tells you what is happening now. Competitive intelli-
gence tells you what your competitors and the market did. Good
competitive intelligence tells you where they’re headed. Both BI
and CI crunch big data to deliver answers to the questions, “what
happened?” and “how did that happen?” Only a social and deep web
competitive intelligence framework can answer the most important
question, “so what?”
In order to be actionable, intelligence must answer the “so what?” ques-
tion. The answer to “so what?” describes the impact of the information.
It describes assumed and presupposed context. It fills in the rest of the
statement that starts out, “we care about this because…”
Social competitive intelligence is a new discipline. It is emerging now and
will continue to grow over the next decade. Some solutions exist already.
But they and their marketing cousins – social media management soft-
ware - are still largely focused on listening to and tracking social mentions
and sentiment activity. While these are important, the solutions today
are overly simplistic. They can list changes to competitor websites, report
on where competitor PPC (pay per click) ads are run and measure generic
brand sentiment. However, rather than exerting a contextualizing force on
the already massive volumes of available business and social information,
they add to the data tsunami. When your cup is already running over, it
makes little sense to put it under a faster faucet.
The better solution is to put into place a framework that gathers, filters,
synthesizes and analyzes social competitive intelligence and deep-web
analytics (i.e. the web beyond what Google indexes). Then use that framework
as a lens through which to view your existing BI and CI data. Then you will
be able to answer the all-important "so what?" question.
Here are several real-world examples.
The Call Center Cost Hole
BI reporting shows increasing costs and increasing churn in your call
center; overall a bad trend. If that were the only information you had, the
"so what?" answer would be to kick up call center recruiting to fill the gaps
and set some stricter MBOs for call center managers on employee retention.
However, with a social competitive intelligence framework in place, it
is revealed that social media, blogs and discussion boards are full of
blistering criticism of your call center escalation processes. The withering
criticism is poisoning the work environment and making a tough job even
more unpleasant. Viewed through this lens, the correct answer to "so
what?" is not to step up recruiting. Rather, it is to fix the poisoned call
center environment and re-engineer the escalation processes while empowering
call center employees.
Not only does this save substantial time and money, it actually boosts net
productivity by empowering knowledgeable employees and eliminating
training and the ramp up to full productivity required for each new hire.
The Outside Expert
You are ready to launch a new product into an overseas market. But there
are a host of regulatory issues to navigate. While you have plenty of “in-
dependent” research and case studies validating your approach, you still
want an expert in your technology and the foreign market to help guide
you through the approval process. The regulators don’t look kindly on
experts who are among your paid staff due to potential conflict of interest.
You want an external expert but you want to avoid someone who regularly
works for your competitors or who has expressed harsh opinions of your
company or product in the past.
Traditional competitive intelligence will not provide expertise location
like this. Traditional BI only tells you that your new market has a lot of po-
tential. If that were the only information you had, the “so what?” answer
would be to get some internal recommendations and do a Google search
and hope the person is available and credible. But hope makes for a poor
strategy, especially with something as big as a new foreign market launch.
With a social competitive intelligence framework in place, you are able
to perform a social network analysis to first locate the influencers on the
topic area, measure their credibility and influence relative to one anoth-
er, and finally screen them for competitor interaction and engagement.
This approach yields not only a deeper, more highly qualified “short list”
of available experts, it also reveals a large and rich set of topic influenc-
ers who your team can target for engagement and awareness of your
new product. Ultimately, this delivers not only the help navigating new
regulatory processes in new markets, it also identifies a new set of up and
coming influencers who will help your product remain successful after the
initial splash.
The Competitor Customer List
Your internal BI tells you that sales are plateauing despite the fact that you
have a better product with more features and a better history of quality.
Your competitive intelligence tells you that competitors are facing similar
slow-growth periods. It looks like the market is reaching saturation and
new opportunities are small. If that were all the information you had,
the answer to the “so what?” question would be to switch over your sales
strategy from a hunting to a farming operation. Marketing would shift to
promoting small incremental improvements and the grind of upgrades/
maintenance/renewal would become the core of your revenue model.
However, with a social competitive intelligence framework in place you
would reveal a gold mine of new accounts that you can hunt while dra-
matically boosting your competitive advantage. The framework would
reveal your competitor's customer lists. First, realize that all customers
– yours and your competitors' – are interested first in solving a business
problem and only secondarily in staying with a particular vendor or service
provider. Staying with a particular provider tends to be more a matter
of convenience and trust than inherent and continued ability to deliver
value. This means there is opportunity to knock out your competitor or
at least to come alongside them and establish a beach head; but only if
you know who they are and how to approach them. This is what a social
competitive intelligence framework delivers.
That they are your competitor’s customer means that at one time in the
past, they got a better deal or had a better recommendation or were
simply aware of your competitor at the time they needed a solution. In
the B2B world, there are few things that lock in customers. Sure, they exist;
big computing platform and enterprise application decisions tend to have
at least a 7-year life cycle. Similarly, being a Mac, Windows or Linux shop
tends to be about corporate culture. But as the recent Samsung mobile vs
Apple iPhone campaigns demonstrate, even the most loyal customers can
switch to a completely different platform if the reason to switch is
compelling.

Social Media
Blending BI and Social Competitive Intelligence for deep insight into your business
Billy Cripe
BloomThink
A social competitive intelligence framework makes developing a target
list of your competitor’s customers easy. First, perform a social network
analysis of your competitor. See who is commenting, following, liking and
(re)tweeting about your competitor. Then filter that list by companies and
contacts you’d like to target. Perform this analysis again around the time
of your competitor’s big events like conferences and trade shows. The
cadence of social activity spikes during those times. Additionally, your
competitor will trot out their favorite case studies and customer testimo-
nials during that time to add credibility to their pitch. What they’re doing
for you is validating the customer need, interest and ability to pay. You
just need to get them to switch or try out your product too. Finally, mine
your competitor’s website for their customer information. Companies
routinely post logos and ROI or case studies online. Even if competitor
brag sheets use unnamed customers, there will generally be enough infor-
mation to make a very educated guess and narrow it down to only one or
two possible companies (your potential customers!) in the area.
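The filtering steps above can be sketched in a few lines of code. Everything here (the sample data, field names and the watchlist) is a hypothetical illustration of the approach, not part of any particular product:

```python
from datetime import date

# Hypothetical sample data: who engaged with the competitor's accounts, and when.
engagements = [
    {"handle": "@buyer1", "company": "Acme Corp",   "day": date(2013, 9, 24)},
    {"handle": "@fan42",  "company": "Globex",      "day": date(2013, 9, 25)},
    {"handle": "@random", "company": "Unknown Ltd", "day": date(2013, 7, 1)},
]

# Companies we would like to win, and the competitor's trade-show window.
watchlist = {"Acme Corp", "Globex"}
event_window = (date(2013, 9, 23), date(2013, 9, 27))

def target_list(engagements, watchlist, window):
    """Keep only engagers from watchlist companies active during the event."""
    start, end = window
    return [e for e in engagements
            if e["company"] in watchlist and start <= e["day"] <= end]

targets = target_list(engagements, watchlist, event_window)
```

Run around a big competitor event, when the cadence of social activity spikes, this kind of filter yields the short list of accounts worth approaching.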
My company, BloomThink, recently performed a social competitive intelli-
gence engagement designed to create a competitor customer list. During
one trade show, the target competitor was demonstrating an unbranded
intranet system. However, the layout, color scheme and look/feel of their
demo perfectly matched an educational YouTube video posted at about
the same time by a large local health care organization. The health care
company was added to the “competitor customer target list”. Only a so-
cial competitive intelligence framework and strategy could have revealed
the connection that was publicly available but buried in a mountain of
previously unrelated social data.
Conclusion
As the old saying goes, "text without context becomes pretext". No matter
how good your BI data is on its own, without the contextualizing force of a
social competitive intelligence framework it becomes justification for gut
feelings, political game-playing and flights of fancy. That is no way to run
a business.
Enterprises and especially CIOs, CMOs and Sales EVPs need to implement
a social competitive intelligence framework that understands how to do
the following:
1. Collect & Gather deep web and social information
2. Filter & Categorize information to keep what matters and cull what
doesn’t
3. Analyze & Synthesize that information with existing BI & CI data
4. Report & Act so that actionable intelligence can deliver meaningful
business impact
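As a minimal sketch of how those four capabilities chain together (all function names and the stubbed sources here are hypothetical placeholders, not a real product API):

```python
# Hypothetical four-stage pipeline: collect -> filter -> analyze -> report.
# Each stage is a plain function so it can be swapped for a real implementation.

def collect(sources):
    """Gather raw mentions from deep-web and social sources (stubbed here)."""
    return [mention for source in sources for mention in source()]

def keep_relevant(mentions, keywords):
    """Filter & categorize: keep what matters, cull what doesn't."""
    return [m for m in mentions if any(k in m.lower() for k in keywords)]

def analyze(mentions):
    """Synthesize with existing BI/CI data; here just a simple count."""
    return {"mention_count": len(mentions)}

def report(summary):
    """Turn analysis into an actionable statement."""
    return f"so what? {summary['mention_count']} relevant mentions this period"

# Stubbed sources standing in for real crawlers and social APIs.
sources = [lambda: ["Escalation process is terrible", "Nice weather today"],
           lambda: ["Call center escalation took 3 weeks"]]

summary = report(analyze(keep_relevant(collect(sources), ["escalation"])))
```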
Billy Cripe is the founder of the Minneapolis, USA-based company BloomThink.
Oracle Enterprise Manager Cloud Control 12c: Managing Data Center Chaos
Porus Homi Havewala
Packt Publishing
ISBN: 978-1-84968-478-1
Published: December 2012
Rating:
Being in charge of the Oracle Enterprise Manager line of business for the
ASEAN region, Mr. Havewala has certainly been close to the action when it
comes to the subject of this book.
Released in December 2012, the Oracle Enterprise Manager Cloud
Control 12c: Managing Data Center Chaos book by Packt Publishing is one
of the first titles on the subject of version 12c of the administration tool of
choice for an Oracle environment.
The book is well written and, thanks to its style, a pretty easy read despite
the in-depth topics it covers. This is something we rarely see in technical
books, and both the author and the publisher can take this as a compliment.
Oracle Enterprise Manager Cloud Control 12c: Managing Data Center
Chaos offers the reader some critical insights into how EM is supposed to
help you handle the everyday chaos in your data center(s). With chapter
titles like "Ease the Chaos with Performance Management" and "Ease the
Chaos with Automated Provisioning" it's all about managing some sort of
awful situation in your data centers.
And it works. It actually offers a lot of insight into what you can do with
the help of Enterprise Manager. It shows us how to look further than
just the basic elements of EM that we have been using for a while now. It
certainly helps in finding those functions of EM that were introduced in
12c.
The downside is that it only shows us how things are supposed to work. The
book barely goes beyond what might or should be possible. There are no
real-life examples, and all the examples that are worked out properly are
taken from the demo grounds at Oracle. But hey, that's what you get when a
business development manager writes a book.
All in all it is a pretty nice read. Especially for those of us who are still
wondering where to position Oracle Enterprise Manager and want some
insight into how all this is implemented. It does offer the reader some good
and qualified information on the topic at hand: how do I create a more
manageable environment in my data center?
Oracle WebLogic Server 12c Advanced Administration Cookbook
Dalton Iwazaki
Packt Publishing
ISBN: 978-1-84968-684-6
Published: June 2013
Rating:
A cookbook. What's up with that? The idea seems handy: short articles on
how to do a specific job. And most of the time it works. It works fine. But
what about when it covers a subject that has already been described at length?
Well, then it might be just a tiny bit too much.
Last June Packt Publishing released a title on Oracle WebLogic Server 12c
in their popular cookbook series. The book contains some 60+ 'recipes'
that teach readers how to install, configure and run Oracle WebLogic
Server 12c.
The chapters, or 'recipes' as Packt tends to call them, about installation
and running truly remind us of the installation and configuration chapters
in the official Oracle documentation. Does that mean that these
recipes add nothing to the knowledge of the reader? Of course not: they are
actually necessary in a book that tries to be complete, and they show us
that the Oracle documentation is correct on some points.
Some of the articles cover configuring for high availability, troubleshooting
and stability & performance. And this is where the value of the book
kicks in. Because for a main product in the Oracle stack, WebLogic isn't
always the easiest system to understand. If a book shows us where to
look for stability and when trouble is under way, it pays off immediately.
So, does this book add something to the overall knowledge of the profes-
sional who works with Oracle WebLogic Server 12c? Definitely, even if it’s
just as a convenient reference.
Oracle Enterprise Manager 12c Administration Cookbook
Dhananjay Papde, Tushar Nath & Vipul Patel
Packt Publishing
ISBN: 978-1-84968-740-9
Published: March 2013
Rating:
Oracle Enterprise Manager is certainly gaining momentum, and the times
when it was just a toolset to manage single database instances are definitely
in the past. That also means that the product is gaining a larger fan base,
as it probably should.
In this book a total of three authors wrote some 50 recipes on managing
the Oracle stack using the latest version of Enterprise Manager. What you
really notice right away is that it is not only about managing the Oracle
database, but there’s also a bit about managing middleware as well.
The last chapter of the book is completely reserved for a description of
the iPhone / iPad app for using Oracle Enterprise Manager. This is all exciting
stuff, but probably not the most interesting for in-depth administrators
of an extensive Oracle stack.
What's really missing in this book is the entire 'Cloud Management' part of
the latest edition of Enterprise Manager. Oracle's promise that the toolset
is the perfect companion for the private or public cloud, and for the data
centers managing those, is not reflected anywhere in this book.
Because cloud management is the main focus of Oracle Enterprise Manager
12c it is really a shame that it is not part of this book. Had the authors
shared the focus of the other Packt title on Oracle Enterprise
Manager, it would really have added to the overall reading experience.
The Book Club
It could just be the key to easy adoption
This article is written based on our experience of working on ADF proj-
ects together with a User Experience Designer (UX designer) and his
value in this team. In this article we will explain what UX is and how it
is used in a big project to get more value out of our software. We will
also explain how we have used UX and ADF in an Agile environment.
In this article we are going to talk mainly about the front-end of soft-
ware. When reading this article you should have a basic understand-
ing of the scrum process however you need no knowledge about User
Experience design.
What is UX?
In this part of our article we will explain what UX is and what the role
of the UX designer in a software project is. A software project can be a
web, mobile or a desktop application.
In almost any IT project a business analyst is responsible for understanding
and translating the business needs into clear software specifications.
The business analyst is the eyes and ears of the business;
they make clear what solution should be built. So we have IT and
business involved in our project together. But who takes care of the
end users? Is there anybody who cares about them? This is where the
UX designer comes in.
UX is an acronym for “user experience”. It is almost always followed by the
word “design.” By the nature of the term, people who perform the work
become “UX designers.” But these designers aren’t designing things in the
same sense as a visual or interface designer. UX is the intangible design of
a strategy that brings us to a solution. So what does this actually mean?
This solution can be divided into several layers. Each of these layers
describes a more detailed part of the system. A UX designer creates the total
user experience by designing and thinking about each of the layers and
validating the results with the end users of the system. In the following
example the techniques we used for our solution are described. For this
example we have used the "Elements of User Experience" developed by
a renowned User Experience professional called Jesse James Garrett. Our
"solution" is an ADF web application built on a BPM middle layer to create
a case management system with workflow.
The Elements of User Experience
Surface
This is the visual style end users will see when they use the application.
We have used the standard corporate design rules called "de Rijkshuisstijl"
offered by the Dutch government for designing web applications.
Skeleton
Describes the interactive components needed on the pages like buttons,
list boxes etc.
We have used component descriptions in wireframes to visualize the in-
teractive components for the system. A component is a functional piece of
software used on a screen for example: a search box on the screen.
Structure
Describes the pages needed in the application and their navigation flows
For each step in the process we have used flow diagrams and standard
page layouts to structure the application screens.
Scope
Describes the scope of the project that needs to be built.
• We have used a process design for scoping the screens which were needed.
• Product backlog with user stories to scope and prioritize the needed
functionality.
Strategy
Describes the underlying application strategy to align with business
and user needs.
• For strategy we have used workshops to determine standards to be
used throughout the whole application.
• We have talked to the future users to understand their needs.
• Talked with the business to determine their business goals.
So we have gone over some background on what a UX designer is
and what kind of work he does. But why do you need a UX designer in
your ADF project?
• There is somebody who cares about your users and wants to make
them happy. Happy employees are more productive and less ill.
• A UX designer has the skills to test your application for usability issues
early in the build process.
• A UX designer keeps asking critical questions about functionality like
“is this really needed for our users”. This can lead to less functionality to
build.
• In a scrum project the UX designer helps to get user stories ready for in-
clusion in a sprint. Visualizing the software will save the developers time.
• A UX designer improves user acceptance by including the end users
early in the design process and using their feedback to improve the product.
• A UX designer is an objective hub between the business, development
and the end user. He tries to balance these to get as usable and economical
a product as possible.
But bear in mind that when you use a user-centered design process in your
project you need to continually invest time into improving your application,
not only into adding more functionality. So give the UX designer room
to organize user sessions and to work this feedback back into the
application.
Oracle ADF
Successfully combining UX with ADF
Marcel Maas, AMIS Services
Sander Haaksma, UX Company
A real world case
For the project in which I worked with a UX designer the goal was to create
a case management system using Oracle BPM and an optimized worklist
application. Since a great deal of productivity could be gained by improv-
ing the screens and work processes that are being used by the end users
we flew in a UX designer. His assignment was to think up a really usable
interface which would be smart and supportive to the user. We wanted to
use contextually aware widgets to provide extra info to the end user at ev-
ery step of a case, as well as define our own navigation for the application.
We quickly realized we needed to create a new worklist application from
scratch instead of using and customizing the BPM workspace. Therefore
two ADF developers were hired, including me. For the realization of the BPM
processes two specialists were hired, as well as a tester, a process analyst
and a project manager. By the time we arrived it had already been decided
to use Scrum as the way of managing the project, which suited us well. At
that time we knew little of the requirements of the system and had a limited
bag of money at our disposal. So we settled for 2-week sprints and went to work
on the first iteration. We made sure the UX designer and the BPM guys
were always a sprint ahead of the ADF developers in terms of function-
ality to make sure they could rely on tasks created by the BPM team and
designs by the UX designer. The ADF team then only would have to focus
on the technology. During the sprint the UX designer would have sessions
with end users to define the UI which would then be validated by the
ADF team and eventually end up in one of the next sprints. At the time of
writing we are still going strong and are almost ready for the first release.
In the next few paragraphs we will dive into various parts of the process
more deeply.
Converting wishes to screens
The starting point of a project is always the user’s wishes which in our
case are a backlog of user stories. These stories provide a way of describ-
ing functionality in the perspective of the end user. The UX designer takes
these stories as a starting point for determining the general structure and
flow of the application. The stories themself provide no hint as to how
screens should look but they describe functionality. It is the “What”, not
the “How”. The designer takes a step back and analyses the stories to find
overlap and get a general feel for what the user wants. This is done by
talking to the user him or herself. The general flow and structure of the
application are hereby determined.
From here on out the designer takes a number of stories from the back-
log and uses these to create a sketch of the screen and its components.
These are functional components such as buttons, panels, images etc.
These sketches are then discussed with the end user to validate them and
modify them if needed until all stakeholders are satisfied with the result.
The trick is not to drive a sketch to perfection but specify just enough for a
developer to start building what the user wants. This makes it possible to
stay flexible when new insights arise.
Now the screens are ready to be implemented for real in a sprint. In the
next paragraph we will dig into this a bit more.
After the screens have been created the designer hosts a usability lab to
validate the usability of the screen and its components. A Usability lab is
a session where users are asked to complete tasks with the new software.
During these sessions stakeholders observe the behavior of the test users
and together decide which issues are important. The usability issues will
be logged as new user stories on the backlog. The usability issues found
during the lab will be prioritized and added to the backlog and find their
way back into a sprint to improve functionality. These iterations greatly
improve the usability of the product. Involvement of the business is abso-
lutely necessary for this process to be a success.
Example screen and components wire-framed in the tool Axure RP.
Successfully combining ADF with UX design
In the previous paragraph the process of designing screens from user sto-
ries was explained. However nothing was said about the implementation
in ADF. In this part we will describe how to leverage the power of ADF and
combine it with effective UX design.
The availability of a UX designer for creating screens saves a lot of time
for a developer because he no longer needs to think about and design the
screens himself. However, when one releases a UX designer on a product
and he gets to work as a lone wolf, usually the most beautiful and intuitive
design is created. This design still needs to be implemented in a
specific technology, which has its own pros and cons. This means it can
take a lot of time to create specific components when the technology itself
probably supports them in a different way.
ADF is no different. A UX design for ADF is only a help when there is
communication between the UX designer and the developer. The UX designer
can explain what the user and business want and the developer can explain
how the solution can best be implemented using ADF, leveraging its
strengths. In this case the designer will come to the developer and, using
the sketch, explains what is needed. The developer can then provide the
ADF components that need to be used to make sure most of the functionality
can be achieved by using standard components and patterns.
This makes sure there is a balance between the user's needs, the cost of
development and the technical feasibility. Luckily we don't need to think
up everything. A lot of ways of interacting with users through ADF have
already been thought out and tested by Oracle. They have bundled these
ADF UX patterns and published them on the web. They can be found here:
http://www.oracle.com/technetwork/topics/ux/applications/gps-1601227.
html. Some of these design patterns have already been implemented in
ADF components and others you can implement yourself. This website
can be a great help, though it is not strictly necessary.
The next step is to validate the design made by the UX designer against
ADF's capabilities. We must try to design screens that are easily built using
ADF standard components, because this saves us from developing
components that look great but take twice as much time to create.
For example: one goal of the UX designer was to only show input fields on
the screen when they are actually needed. If they weren't, they would not
be shown. Now this could easily be realized with ADF. But when we add
validations to these input components and the validations fire while the
component is hidden, strange things will happen. So we have modified the
designs to always show required input fields as well as fields with other
validations. Sometimes we could see and avoid such problems beforehand
and for some others we learned the hard way. As you can see, your
own experience can really make a difference here, so think hard about the
possible difficulties that could arise when implementing a screen design
in ADF. Finding issues at this point and thinking up an alternative design
which works just as well for the UX designer as for ADF can save you a
lot of time later on.
So 80% of the functionality is realized in 20% of the time. Also iteration is
key here. The solution gets created, tested by users and improved if neces-
sary. By using standard components and patterns there is time saved on
development, which then can be used for improvements after receiving
feedback.
To sum it all up
In this article you have read about UX design in combination with Oracle
ADF. The question is whether it pays to hire or request a UX designer on
your next project. The answer can be short: yes. I think that when
interfacing with humans is involved, that alone is reason enough to hire
a UX designer. However it is not enough to hire a designer and let him go
about his business. The designer must talk not only with the business end
of your project but with the development team as well to make sure the
solution envisioned by the designer is feasible with the technology chosen
to implement the design. In the case of ADF this is the main point. When a
UX designer is paired with ADF developers and they mix their knowledge
the greatest potential is unlocked. Try and create a UX design which
uses design patterns that are already supported in ADF. In this case
you get the best usability and design possible which is then realized
in a minimal amount of time. Another key point here is iteration. It
does not matter whether you are doing an agile project or any other.
Validate your work with end users and improve your designs from
there on. Agile projects are best suited for this but you can implement
it in other projects as well. Because you have included end
users from very early on, they are more willing to adopt the product
and there is less of a learning curve. When done right you get a happy
customer and happy end users, because the application is easy to use
and was made in less time and for less money.
Marcel Maas is Senior Oracle Developer at the Dutch company AMIS Services.
Sander Haaksma is UX Designer at the Dutch company UX Company.
Ten years ago it was common to hear the adage that every Oracle
product has a database in it somewhere but even today, with Oracle’s
portfolio of thousands of products, there’s probably still some truth
in it! You would expect this to be the case for the Oracle Database Ap-
pliance (ODA) of course, but what is new is that it can now run virtual
machines, and in particular those for WebLogic Server and Oracle
Traffic Director (or OTD, Oracle’s software load balancer with iPlanet
heritage and acquired via Sun).
Recently I was fortunate enough to work with a favorite mid-sized customer
considering the ODA as part of a hardware refresh, and got hands-on
experience of the latest ODA X3-2 model during a Proof of Concept
(POC). I have been sharing those experiences on my blog but here, in
OTech magazine, I'm combining them for the first time along with some
additional analysis.
Hardware
First, let us step back for a moment. For those of you who have not come
across it before the ODA (apparently pronounced to rhyme with “Yoda”)
is the smallest of Oracle’s Engineered Systems. Whilst not boasting exotic
components, such as InfiniBand fabric, it is comparable in spirit to the
other engineered systems like Exalogic and Exadata – you are buying not
just hardware but also the software installation design and the mainte-
nance approach.
Regarding the hardware itself,
ODA consists of two 1U “pizza
box” X3-2 servers connected to
one or two small storage arrays
(but sold as a package) includ-
ing software for provisioning
and applying updates.
Each of the servers has two
Intel E5-2690 2.9GHz proces-
sors, giving 16 cores per server,
and 256GB RAM. The storage
array(s) are directly attached to
the servers via SAS2 and have
24 2.5” bays, populated with
twenty 900GB 10k RPM spinning
disks and four 200GB SSDs. For
networking each server has four
on-board 10GbE ports (copper
as you might expect) and a PCI card with dual
10GbE ports.
I have been specifying this sort of 2-socket server, especially for
middleware, for a number of years now and still consider them to be the
sweet spot in x86-64 sizing. Intel have recently announced a new version
of this processor family that now has up to 12 cores and which, following
a process shrink from 32nm to 22nm, promises even more performance.
With virtualization I expect most, if not all, of my (mid-sized) customers
could run their middleware production estates on a handful of, or maybe
even just two, servers like these.
So that's the hardware. We could debate in particular whether the storage is sufficiently well-specified for Oracle enterprise deployments, but I've come to realize that, for the ODA's target customers, the performance is most probably "adequate" (as Rolls-Royce apparently used to say, in an understated manner, whenever asked by the motoring press!).
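To put those storage numbers in perspective, a quick back-of-the-envelope calculation helps. The disk counts and sizes are from the specification above; the redundancy arithmetic is my own illustrative assumption based on standard ASM mirroring (normal redundancy keeps two copies of each extent, high redundancy keeps three):

```python
# Back-of-the-envelope usable capacity for the ODA X3-2 shared storage.
# Raw figures come from the hardware spec; the mirroring overheads are
# the standard ASM normal (2x) and high (3x) redundancy levels.

HDD_COUNT, HDD_GB = 20, 900   # 10k RPM spinning disks
SSD_COUNT, SSD_GB = 4, 200    # SSDs

raw_hdd_gb = HDD_COUNT * HDD_GB   # 18000 GB raw spinning disk
raw_ssd_gb = SSD_COUNT * SSD_GB   # 800 GB raw SSD

usable_normal_gb = raw_hdd_gb / 2  # ~9 TB under normal redundancy
usable_high_gb = raw_hdd_gb / 3    # ~6 TB under high redundancy

print(f"Raw HDD: {raw_hdd_gb} GB")
print(f"Usable (normal redundancy): {usable_normal_gb:.0f} GB")
print(f"Usable (high redundancy): {usable_high_gb:.0f} GB")
```

In other words, 18TB of raw disk shrinks to something like 6-9TB usable once mirroring is taken into account, which is worth remembering when comparing the ODA against a SAN-backed alternative.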
Virtualized Platform
Originally the ODA ran Oracle Linux in a purely physical manner, but since earlier in 2013 it has had the ability to run virtual machines instead. Two of these VMs, called "ODA Base" (one per server), run the database (typically RAC); what you do with the remainder of the resources is up to you. In this mode the ODA runs Oracle VM on each server as the hypervisor and has a command line interface (CLI) – not OVMM – called OAK CLI (i.e. oakcli) to manage these two hypervisors and their associated VM repositories. The SAS storage controllers are passed through directly to the ODA Base VM (also known as oakDom1), so are essentially connected to the I/O the same way as a physical host would be. This only leaves the mirrored (boot) disks in each server to provide space for the virtual machines, though it is supported to connect to NFS storage.

The rear of the ODA X3-2 is a mass of cables. Note this was the POC system and so several cables weren't connected – in production for this customer there would be another 9 redundant power and data cables to fit into the same space.
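Day-to-day management of guest VMs on the virtualized ODA is done entirely through oakcli on each node. The commands below are a sketch of the typical guest lifecycle; the template and VM names are invented for illustration, and exact option names varied slightly between ODA software releases, so check the release's oakcli reference before relying on them:

```shell
# Import a downloaded OVM template into the oakcli-managed VM repository
# (template name and file path are placeholders).
oakcli import vmtemplate ol6_base -files /OVS/ol6_base.tgz

# Clone a new guest VM from that template, then size its resources.
oakcli clone vm wls_node1 -vmtemplate ol6_base
oakcli configure vm wls_node1 -vcpu 4 -memory 8192

# Start the guest and check the state of all VMs on this node.
oakcli start vm wls_node1
oakcli show vm
```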
The installation and configuration of the ODA Virtualized Platform is straightforward: you re-image with the virtualized system, copy over the ODA Base template and run a utility called the ODA Appliance Manager… at which point you will have a system ready to run databases or other virtual machines.
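As a rough sketch, that deployment flow looks something like this from the command line. Treat it as illustrative rather than definitive: the package file name is a placeholder, and the precise steps and options depended on the ODA software release in use:

```shell
# 1. Re-image both nodes with the Virtualized Platform ISO (via ILOM),
#    then give the first node an initial network configuration.
oakcli configure firstnet

# 2. Copy over and unpack the ODA Base template
#    (file name is a placeholder).
oakcli unpack -package /tmp/oda_base_template.tgz

# 3. Deploy ODA Base (oakDom1) on both nodes, sized as required.
oakcli deploy oda_base

# 4. From within ODA Base, launch the Appliance Manager to configure
#    the cluster and create the initial database.
oakcli deploy
```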
Oracle Database Appliance
WebLogic on the Oracle Database Appliance
Virtualized Platform
Simon Haslam
Veriton Limited
O tech magazine   fall 2013

O tech magazine fall 2013

  • 1. T - otechmag.com - Fall 2013 - 1 Fall 2013 A brand new magazine The art of content security Pick the right integration infrastructure component PL/SQL Function Statistics Database 12c for Developers Succesfully combining UX and ADF echMagazine T WebLogic on the Oracle Database Appliance And more...
  • 2. T - otechmag.com - Fall 2013 - 2 This adventure started about half a year ago. It started as just a crazy idea. ‘What about releasing a magazine’. Loads of people declared me an idiot (a general type of remark about most of my ideas, so I didn’t really listen to those). But almost just a large amount of people thought it was just simply cool. And because of both groups of people this magazine is here. And what an adventure it has been so far. In the beginning it was just about offering something else. I wanted to create something other that the usual Oracle Magazine (Oracle telling how good Oracle is) or the usual blog (consultants telling how good they are). In the small world around the technology that bounds us – Oracle – there are quite a lot of very great personalities. The knowledge is extensive and I wanted to offer them a platform that really has some- thing to offer. A magazine, I imagined. People working with Oracle software want good information. Independent information. But indepen- dent and well-written information is pretty hard to come by. At least in the world that’s called Oracle. When I started working on this magazine I had no idea if it would work – as a matter of fact I still don’t, you are the judge of that – but the enthusiastic response of a lot of highly regarded professionals in the Oracle scene made me work hard on this first issue. This magazine is – for now – just fun and games. It started just as a hobby. If this first edition will catch on there will be a second. If the second catches on a third. Without content there is no magazine. Therefore I would like to express my deepest gratitude to the authors of this magazine. Troy Allen, Lucas Jellema, Billy Cripe, Sander Haaksma, Marcel Maas, Simon Haslam, Peter Paul van de Beek, Michael Rosenblum and Lonneke Dikmans: thank you so much for participating with me on this adventure! Cheers! Douwe Pieter van den Bos September 2013 A brand new magazine Editorial
  • 3. T - otechmag.com - Fall 2013 - 3 The Picture Desk
  • 4. T - otechmag.com - Fall 2013 - 4 Contents Blending BI and Social Competitive Intelli- gence for deep insight into your business Page 20 Database 12c for Developers Page 12 The art of content security Page 7 The Book Club Page 23 Successfully combining UX with ADF Page 25
  • 5. T - otechmag.com - Fall 2013 - 5 Contents Pick the right integration infrastructure compo- nent Page 33 Oracle PL/SQL Function Statistics Page 37 Stop generating your UI, Start designing IT! Page 41 WebLogic on the Ora- cle Database Appliance Virtual Platform Page 29
  • 6. T - otechmag.com - Fall 2013 - 6 The Picture Desk
  • 7. T - otechmag.com - Fall 2013 - 7 WebCenter Content Over the years, I’ve had opportunities to work with many organizations, ranging from very small to some of the most recognizable brands in the world, and each one of them had the same requirements and questions: “I need to lock down our IP (intellectual property).” “We can’t have people digging through our files as they please.” “I only want my department to see our stuff, except for those others who need to see it.” “How can I restrict access to our files?” “Do I need a separate security model for each department?” In most cases, businesses under- stand that they need to secure their information in some fashion, but have no idea where to begin at an enterprise level. Security tends to be left in the hands of department managers, which will often lead to silos of information repositories and duplication of effort, and content, across the entire company. Additionally, or- ganizations find themselves with an over-kill of content security (leading to making it too difficult to work with their repositories) or virtually no security at all (leaving the organiza- tion at risk for data loss and corruption). Before someone can pick up paints, a brush, and a canvas to re-create a Picasso, they need to have a good idea of what they want to create and have an understanding of the tools they will use and how to mix and blend colors to get the desired results. Creating a content security strategy abides by the same requirements: know what needs to be accomplished and understand the tool and how security elements blend to get the desired results. Every content management application on the market provides some level of security and a defined set of elements to control user access and permissions to content. This article focuses on the Oracle WebCenter Content application (with the principles applying also to WebCenter Records Management), but the overall strategies outlined here can be applied to other repository tools. 
Learning to use the Brushes Oracle WebCenter Content (WCC) utilizes Security Groups, Roles, Accounts and Access Control Levels (ACLs) to control contribution (the ability to add new or edit existing content), consump- tion (the ability to search for and utilize or view content), and management (the control of back-end processes of content including designing and managing workflows) of content. Security Groups act like storage contain- ers within WCC. Content must be assigned to a Security Group, but it can only be assigned to one at a time. WCC utilizes Roles like a set of keys to grant users with permissions to the storage containers, or Security Groups. Roles provide users with specific permissions (Read, Write, Delete, and Ad- ministrate) to groups of content. Users can be assigned to more than one, and Roles can grant permissions to more than one Security Group. Many legacy WCC customers still only utilize Security Groups and Roles to secure their content and have faced a situation where the number of Secu- rity Groups and Roles that have to be created to manage their implementation become unmanageable or they simply cannot get to the level of granularity that is required. (As a side note, Oracle recommended no more than 50 Security Groups prior to the WCC 11g release. From an operational standpoint, this is still a good best-practice to keep in mind). In order to meet the demands of more complex security requirements, Accounts had been introduced to provide granular control ofcontent. If we visualize Security Groups as a filing cabinet, then accounts would be the folders that are held within it. Sometimes you have content that isn’t in a File folder, but is in the filing cabinet drawer; hence, a piece of content is being submitted to a Security Group without having an account applied to it. In physical filing cabinets, File folders can often contain more File folders providing a hierarchy of storage – Cabinet drawer, File folder, File folder, and then content. 
WCC Accounts can be difficult to grasp at first, but make perfect sense once the proverbial “light bulb” turns on. Account structures are identical across Security Groups. To put it another way, File folders are organized the exact same way across all cabinet drawers. Another point to remem- ber is that Accounts are hierarchical in nature. Another way to think about Accounts is to visualize a set of stairs that you are walking down. The Account structure or “stairs” has a top level of “Employee”, as an example, with the next step down being “Marketing”. We can continue to add more steps down, or sub-accounts, such as Employee/Marketing/Creative/ArtDept. Any user set at the top of the “Stairs”, or in- serted into the top Account and given Read access, will have Read access all the way down the stairs or Account structure. The user would have Read access to Employee and all the accounts down to, and including, Employee/Market- ing/Creative/ArtDept. Continuing on the previous Account example, Employee, Employee/ Marketing, Employee/Marketing/Cre- ative, and Employee/Marketing/Cre- ative/ArtDept would exist in both the Public and Secure Security Groups. In order to see content in Employee/ Marketing under the Public Security Group, Bob would have to at least be assigned to Public_Consumer Role AND either Read to the Account Employee, or Read to the Account Employee/Marketing. Mary would likewise need Secure_Consumer Role to the Secure Security Group AND at least Read to Employee, Employee/Marketing, or Employee/Marketing/ Creative in order to see content in the Secure Security Group stored under the Account Employee/Marketing/Creative. WCC evaluates the Role and Account assignments of each user to deter- mine what the actual combined permission set is for any given content item. 
When evaluating a user’s Roles, permissions between Roles that grant different access rights to the same Security Group will result in the user receiving the greatest permission between the two Roles. The lost art of content security Troy Allen TekStream Solutions
  • 8. T - otechmag.com - Fall 2013 - 8 WCC performs a similar operation when evaluating the permissions granted between Account assignments for a user. This becomes a bit more complex given that Accounts are hi- erarchical. For example, a user given Read and Write access for the Account Employee AND given Read access for the Account Employee/ Marketing, will actually have Read and Write access to Employee/Marketing. Since Employ- ee is a higher level Account than Employee/ Marketing, and the Access rights granted are also greater than those for Employee/Marketing, Read Write access prevails. If we changed permissions so that the user only has Read access to Account Employee and Read Write access to Account Employee/Marketing, then the user still retains the greater per- mission for the Account Employee/ Marketing. However, the user’s access rights to the Account Employee do not change. If you recall the correlation of Accounts to being like stairs, then you also need to visualize that you can only go one direction on them….down. An administrator can assign the user entry to an Account structure anywhere within its levels, but the permissions granted are ONLY valid for that entry point and below. A user being assigned to Employee/Marketing with Read access would NOT be able to see content that was assigned to the higher level Account Employee. As mentioned earlier on, Accounts are designed to provide a level of granularity to content security controls by providing a deeper level of control to Roles and Security Groups. Due to the fact that every content item has to be assigned to one, and only one, Security Group, users must have at least one Role that grants access to that Security Group in order to see content. A user may have an Account assignment with Read, or greater permissions, but may still not be able to have access to the content if they cannot enter into the Security Group it has been assigned to. 
WCC evaluates the rights a user has been assigned to a Security Group through the user’s Role assignment. WCC then evaluates the permissions assigned to the user’s Account assignment. This evaluation produces the combined permission that the user ultimately has on the content item. If the user’s Role grants Read, Write, Delete, and Administrate and his or her Account assignment is Read and Write, then WCC intersects between the two resulting in Read Write access for this particular case. WCC Access Control Lists (ACLs) provide another layer of controls for se- curing content. However, ACLs are counter intuitive in how they work and how most people expect them to work. ACLs are assigned to individual users, to groups of users (utilizing WCC’s Aliases), or to defined WCC Roles. Any combination of these may be applied to control users across multiple levels. Most users would expect that when they assign an ACL to content or a folder, that the permission assignment would “overrule” the security normally granted to other users for that piece of content. ACLs actually work in the same fashion that WCC Roles and Security Groups work with WCC Accounts. The actual permission granted to a user for a piece of content or a folder when ACLs are applied is an intersection of the overall permissions evaluated from Roles, Accounts, and ACLs. For example, Jill has Read Write access to all content assigned to the WCC Security Group Public through her WCC Role of Public_Contributor. Anoth- er user checks-in a piece of content, assigns it to the Public Security Group, and then direct- ly assigns “R” permission to Jill. The intersection between her WCC Role of Read Write and the applied ACL of Read is Read for this particular content item. A more complex version, including Accounts, Roles, and ACL means that WCC has to evaluate all the cre- dentials and determine the final security for the user on a specified content item or folder. 
A user with Read Write to a Security Group, Read Write Delete to an account, and an ACL assignment of Read Write Delete Admin to a piece of content with the same Security Group and Account will receive Read Write permission to the document. Painting the Pictures An artist wanting to paint a picture of the ocean can do it in an infinite number of ways and styles. An administrator wanting to apply security to their enterprise content can do it in many different ways based on several different approaches. The toughest part, for the artist and the administra- tor, is to determine exactly what they want to paint or what they want to secure and how. In terms of working with the “enterprise”, it can be difficult to determine what is needed since so many people and groups can be impacted. In my experience, I’ve seen a majority of organizations who have two types of content, Company Public and Departmental Restricted. I also see a rising need for collaborative security (usually driven by organizations that are familiar with departmental collaboration and want to apply a group level security across an enterprise). A final model that will be discussed is the Exception model which provides a structured approach to departmental security and managed exceptions to permit non-departmental access or restrictions within a department. Company Public and Departmental Restricted (CPDR) In a CPDR model, companies see the majority of their content as being company public domain, but should have controls for who can edit it. This model also addresses the need to have content that is restricted to departmental use. 
“What happens if an employee performs a search and finds a document that was presumed to be secure or receives a link to a document that was presumed to be secure and is able to open the link and view the document?” In most organizations, and for the majority of content, the answer is “as long as they cannot change it, it is okay.” It seems that many companies view the targeting of content as the same as content security (for this article, we will only focus on security). Occasionally, the answer is, “that would be very bad, that is a classified document or intended for a certain management level.” The second answer is usually given for legal, financial, human resources, or other controlled departments. CPDR model allows for both answers by maintaining a minimum set of Security Groups and Roles and relying on two types of Account structures. CPDR utilizes two primary Account structures, one for providing Read access to all public content while controlling who can edit it, and another for restricting both the consumption and contribution of content to a de- partment structure. It also utilizes two primary Security Groups for Public content and Controlled or Restricted content with Roles for each to con- trol Read access, Read Write access, and Read Write Delete Admin access. By doing so, organizations can submit content for general consumption to the Security Group Public and assign an account under the Employee tree to provide global consumption but controlled contribution. Automatically assigning all employees the Public_Consumer Role and Read to the root of the Employee account structure makes this possible. Content that is specific to a particular department, and should only be accessed by that department, can be assigned either to the Public or Restricted Security Groups and then to the appropriate level within a Department Account structure. 
Utilizing the Restricted Security Group allows departments to have Departmental Public content as well as “hidden or restricted” departmental content that only a select few within each department can view.
The following outlines the elements of a CPDR modeled security implementation: This model lends itself to large organizations taking an enterprise-wide view of their content. This model is highlighted by how it:
• Minimizes the overall number of Roles and Accounts (which relate to LDAP Groups) that must be managed
• Minimizes the overall number of Role and Account assignments typical users must be given
• Provides the ability to designate enterprise content or department-specific content

Collaborative Security

Some content management applications are based on the concept that groups of users collaborate on content and need to add participants in a more fluid manner. While this strategy works well on a small scale, it rarely lends itself to supporting "published" or finalized content that needs to be shared across the enterprise. In most cases, organizations will have multiple instances or installs of these types of systems to support each department. To make Collaborative Security work at the enterprise level, it is recommended that organizations start with the CPDR model as a foundation. By adding WCC ACLs (Access Control Lists) into the mix, companies can specify a security model to support the ad hoc assignment of permissions to users, groups of users, or even WCC Roles. As described earlier in this article, ACLs act as an additional filter to the security that has already been assigned to users. Based on that, additional Security Groups and Accounts may be required. In most instances, adding a "Collaboration" Security Group will suffice, with a single role of Collaboration that grants Read, Write, Delete, and Admin, without having to add any additional Accounts (unless the company wants to have strict limits on participation). All users should have the default Role of Collaboration assigned to them.
When a user creates a Folder for their collaboration efforts, they need to select the Collaboration Security Group and then assign a specific set of users with defined ACL rights, groups of users with defined ACL rights, and/or WCC Roles with ACL rights. The same holds true for content items as well. The net result can be illustrated by the following example: "All users of the Company-x repository have been assigned the LDAP Group Collaboration (which maps to the WCC Role of Collaboration), granting them Read, Write, Delete, and Admin (RWDA) privileges to all content and folders assigned to the Collaboration WCC Security Group. Bob, a project manager, needs to create a collaboration folder in the repository and wants to assign specific rights to users. He creates the folder and assigns it the WCC Security Group of Collaboration. Bob then assigns himself as a direct user with RWDA ACL permissions. (It is best practice for the creator of folders and content in the Collaboration model to assign themselves the RWDA permissions so that someone else cannot inadvertently override their access.) Bob also directly assigns Mary RW ACL permissions, Beth RWD permissions, and the Finance Group (controlled by WCC Aliases) R permission. "Mary is new to Company-x and has not been assigned to the LDAP Group Collaboration yet. Because WCC evaluates the intersection of Roles to Roles, Roles to Accounts, Roles to ACLs, Accounts to ACLs, and Roles to Accounts to ACLs, Mary will not be able to access the folder or any of the content Bob created. Her permission intersection is null (since she has no rights to the Security Group Collaboration, there is nothing to intersect with her ACL permissions of RW). Beth, on the other hand, does have the LDAP Group Collaboration, and her intersection of permissions gives her RWD to the folder and its content.
Assuming that the users assigned to the Finance Alias Group have the LDAP Group of Collaboration, their intersection of permissions is Read for the folder and its content." The following outlines the elements of a Collaboration modeled security implementation: This model lends itself to large organizations taking an enterprise-wide view of their content while supporting collaboration groups for content. This model is highlighted by how it:
• Minimizes the overall number of Roles and Accounts (which relate to LDAP Groups) that must be managed
• Minimizes the overall number of Role and Account assignments typical users must be given
• Provides the ability to designate enterprise content or department-specific content
• Provides the flexibility for users to assign permissions to content and folders without having to extend or add Roles and Accounts for each new project or collaboration group

Exception Model

Some organizations have always managed their content by departments or divisions, with very little public sharing of content. In many ways, this model is like the CPDR model minus the Account tree that provides global access with controlled contribution. In addition, many organizations take a very controlled and structured approach to assigning security privileges and do not permit the ad hoc assignments that ACLs bring to the table. Managing exceptions comes from having folder structures in which a division or department expects its own people to see the structures and content, does not let other departments or divisions have access, but also hides folders within a department or division from some of that department or division's own people. The model also assumes that the hierarchical nature of accounts is being utilized for access to the folders. A company might have a division that contains a folder for all management documents. The Account for that folder may be assigned to DIVXMGT (division x management account).
It is expected that all managers will have access to the management folder and all of its subfolders and content EXCEPT for the Employee Files folder, which only certain people should be able to access. If ACLs were used, then a manager within Division X could assign specific users to that folder. The downside to doing that is that anyone on the ACL list with at least Read and Write permission could grant other users access to that folder and its content. The company in this example realizes that ACLs could cause serious issues with ad hoc granting of security, so they have decided to use the Exception model instead. To protect the Employee Files folder and all of its content, they have assigned it to an exception account EDIVXMGTEFILES (E to denote an exception Account tree, DIVX for the division, MGT for the management level access, and EFILES to secure employee files). If the division had assigned it to the normal DIVXMGT account of the parent folder, then anyone who could access the Management folder could see the Employee Files folder. By assigning the exception account, users must have specific rights to the account in order to see it at all. Utilizing Exception Accounts will increase the number of Accounts that need to be managed and added to LDAP as groups, but it does provide a highly restricted security model with tight controls for granting permission to users.
The following outlines the elements of an Exception modeled security implementation: This model lends itself to large organizations taking a strict departmental/divisional view of their content while supporting the ability to manage access exceptions. This model is highlighted by how it:
• Provides the ability to designate department-specific content
• Provides the flexibility to assign permissions for content and folders to specific LDAP groups to support exception access controls

Striking the Balance

Designing a security model to meet the needs of any organization is a balancing act between providing the right amount of access and permissions and limiting the amount of administration required to support it. While there are many different approaches, the three listed in this article are the most common models and seem to fit a wide variety of the requirements of my clients. My preference is to use either the CPDR model or the Collaboration model when possible. That said, there are times when the Exception model is the right approach. The following matrix provides some guidelines on when to use which model:

Special Note for the Reader

In all the examples of Accounts in this article, I have shown a full, or almost full, account name such as Employee/Marketing/Creative to illustrate the types of accounts and structures utilized by each model. This is not practical in a true implementation due to limitations of the dAccount field size in WCC. In most cases, an abbreviation method or numbering sequence is used to represent the accounts. For example, Employee would be the Account 01, the next level of Marketing would be 01 (and another entry at that level, like Finance, would be 02), Creative would be 01, and ArtDept would be 01. The full Account value for Employee/Marketing/Creative/ArtDept would be 01010101 and the Account value for Employee/Finance/Receivables/Management would be 01020101.
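A hypothetical sketch of how such a numeric dAccount value could be derived from a display path; the two-digit codes and the mapping itself are made up for illustration and are not part of any WCC API:

```sql
DECLARE
  TYPE code_map IS TABLE OF VARCHAR2(2) INDEX BY VARCHAR2(30);
  l_codes code_map;
  l_path  VARCHAR2(200) := 'Employee/Marketing/Creative/ArtDept';
  l_value VARCHAR2(30);
BEGIN
  -- Two-digit code per node; siblings at the same level get 01, 02, ...
  l_codes('Employee')  := '01';
  l_codes('Marketing') := '01';
  l_codes('Finance')   := '02';
  l_codes('Creative')  := '01';
  l_codes('ArtDept')   := '01';
  -- Walk the path segments and concatenate the level codes
  FOR seg IN (SELECT REGEXP_SUBSTR(l_path, '[^/]+', 1, LEVEL) AS name
              FROM   dual
              CONNECT BY REGEXP_SUBSTR(l_path, '[^/]+', 1, LEVEL) IS NOT NULL)
  LOOP
    l_value := l_value || l_codes(seg.name);
  END LOOP;
  DBMS_OUTPUT.PUT_LINE(l_value); -- 01010101
END;
/
```

The stored value stays short and stable even when display names change, which is exactly why the abbreviation approach is used against the dAccount field size limit.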
It is common practice to provide a display name that makes sense to users, and a storage value and LDAP group name based on numbering or abbreviations, to stay under the field size limits. There are many different ways that organizations can model security, and each company will have its own specific requirements. For many, it is a daunting task and it may be difficult to determine where to start. There are many firms that have years of experience in designing, developing, and deploying security models; companies engaging in modeling security for their content needs should seek a firm with specific experience in their content management application and in ensuring a security model that will fit the needs of the enterprise. Leveraging external resources brings years of experience and best practices to your initiatives.

Troy Allen is Director WebCenter Solutions and Training at the Atlanta, USA based TekStream Solutions.
The Picture Desk
Database

The initial release of Oracle Database 12c has been available since late June 2013. A long anticipated release – it took almost four years since the previous major database version – that is characterized first and foremost by the multitenant architecture with pluggable databases. This major architectural change – the biggest one since Oracle V6 was the first to support parallelism – is impressive and potentially has great impact from the administration point of view. For database developers, however, this mechanism is entirely transparent. The question for developers now becomes: what is the big news in this 12c release – what is in it for me? This article will introduce a number of features that are introduced or enhanced in Oracle Database 12c and that are – or could or should be – of relevance for application development. Features that make things possible that were formerly either impossible or very hard to do efficiently, features that make life easier for developers, and features that currently may appear like a solution looking for a problem. After reading this article, as a developer you should have a good notion of what makes 12c of interest to you and what functionality you should probably take a closer look at in order to benefit from 12c when it arrives in your environment. To very succinctly list some highlights: SQL Pagination, Limit and Top-N Query; SQL Pattern Matching; In-line PL/SQL Functions; Flashback improvements; revised Default definition; Data Masking; Security enhancements and miscellaneous details.

SQL Translation

The use case: an application sends SQL to the Oracle Database that is less than optimal. Before 12c, we were able to use Plan Management to force the optimizer to apply an execution plan of our own design. This allowed interference in a non-functional way – typically to improve performance.
This new 12c SQL Translation framework brings similar functionality at a more functional level. We can create policies that instruct the database to replace specific SQL statements received from an application with alternative SQL statements. These alternative statements can make use of the same bind parameters that are used in the original statement. The alternative statement is expected – obviously – to return a result set with the same structure as the statement it replaces. This mechanism allows us to make an application run on SQL that is optimized for our database – for example using optimized Oracle SQL for an application that runs only generic SQL, or using queries that contain additional join or filter conditions that make sense in our specific environment. Especially when 3rd party COTS applications are used, or when frameworks are applied in .NET, SOA, Java and other middleware applications that generate SQL for accessing the database, the SQL Translation framework is an option to ensure that only desirable SQL is executed. A simple example of using the SQL Translation framework:

BEGIN
  DBMS_SQL_TRANSLATOR.REGISTER_SQL_TRANSLATION(
    profile_name    => 'ORDERS_APP_PROFILE',
    sql_text        => 'select count(*) from orders',
    translated_text => 'select count(*) from orders_south'
  );
END;

The result of this statement is that a mapping is registered in the named ORDERS_APP_PROFILE that specifies that when the query 'select count(*) from orders' is submitted, the database will in fact execute the statement 'select count(*) from orders_south'. This profile can have many such mappings associated with it. A profile is typically created for each application for which SQL statements need to be translated. Before you can create such a profile, the schema needs to have been granted the create sql translation profile privilege. In the session in which we want a profile to be applied, we need to explicitly alter the session and set the sql_translation_profile.
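Per the blog posts referenced in this article, activating the profile in a session might look like this (profile name taken from the example above):

```sql
ALTER SESSION SET SQL_TRANSLATION_PROFILE = ORDERS_APP_PROFILE;
-- The 10601 event makes the translation kick in for regular SQL as well:
ALTER SESSION SET EVENTS = '10601 trace name context forever, level 32';
```

In a real deployment these statements would typically be issued by a logon trigger rather than by the application itself.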
Finally, the 10601 system event must be set. See for example this blog article for more details on SQL Translation: https://blogs.oracle.com/dominicgiles/entry/sql_translator_profiles_in_oracle. This article is also very useful: http://kerryosborne.oracle-guy.com/2013/07/sql-translation-framework/.

Overview of Oracle Database 12c Application Development facilities – Lucas Jellema, AMIS Services

SQL Pattern Matching

Analytical functions were introduced in Oracle SQL in the 8i release and extended in 9i and to a small extent in 10g and 11g (LISTAGG). These functions added the ability in SQL to determine the outcome of a result row using other result rows, for example using LAG and LEAD to explicitly reference other rows in the result set. This ability provided tremendous opportunities to calculate aggregates, compare rows, spot fixed row patterns and more in an elegant, efficient manner. The 12c release adds SQL Pattern Matching functionality to complement the analytical functionality. It too analyzes multiple rows in the result set in relation to each other, and specifically spots occurrences of patterns across these rows. However, pattern matching goes beyond analytical functions in its ability to find 'dynamic' and 'fuzzy' patterns instead of only predefined, fixed patterns. A simple example of this apparently subtle distinction, using a table with 'color events': Fixed Pattern: find all occurrences of three subsequent records with the payload values 'red', 'yellow' and 'blue'. Variable Pattern: find all occurrences of subsequent records, starting with one or more
'red' records, followed by one or more 'yellow' records, followed by one or more 'blue' records. Both patterns result in the famous color combination 'red, yellow and blue'. However, the variable pattern is much more flexible. Using analytical functions, the fixed pattern is easily queried for. The variable pattern however is much harder – if doable at all. The new SQL Pattern Matching functionality is perfectly equipped to tackle this kind of challenge. The SQL for this particular task would be like this:

SELECT *
FROM   events
MATCH_RECOGNIZE (
  ORDER BY seq
  MEASURES RED.seq AS redseq
         , MATCH_NUMBER() AS match_num
  ALL ROWS PER MATCH
  PATTERN (RED+ YELLOW+ BLUE+)
  DEFINE RED    AS RED.payload    = 'red',
         YELLOW AS YELLOW.payload = 'yellow',
         BLUE   AS BLUE.payload   = 'blue'
) MR
ORDER BY MR.redseq, MR.seq;

The core of this statement is the PATTERN that is to be found. This pattern is in fact a regular expression that refers to occurrences labeled RED, YELLOW and BLUE. These occurrences are defined as a record with a payload value of 'red', 'yellow' and 'blue' respectively. The conditions used to define occurrences can be a lot more complex than these ones; they can for example include references to other rows in the candidate pattern, using the keywords PREV and NEXT (similar in function to LAG and LEAD). A somewhat more involved example uses a table of observations. In the collection of observations, we try to find the longest sequence of the same observations – the longest stretch of A or B values. However, we have decided to allow for a single interruption. So AAABAAAA would count as a sequence with length 8, despite the interruption with a single B value. The sequence AAABBAAAA however is not a single sequence – it consists of three sequences: AAA, BB and AAAA.
The SQL statement for this challenge uses SQL Pattern Matching and can be written like this:

SELECT substr(section_category,1,1) cat
,      section_start
,      seq
FROM   observations
MATCH_RECOGNIZE (
  ORDER BY seq
  MEASURES SAME_CATEGORY.category   AS section_category
         , FIRST(SAME_CATEGORY.seq) AS section_start
         , seq                      AS seq
         , COUNT(*)                 AS rows_in_section
  ONE ROW PER MATCH
  AFTER MATCH SKIP TO NEXT ROW -- a next row in the current match may be
                               -- the start of a next string
  PATTERN (SAME_CATEGORY+ DIFFERENT_CATEGORY{0,1} SAME_CATEGORY*)
  DEFINE SAME_CATEGORY      AS SAME_CATEGORY.category = FIRST(SAME_CATEGORY.category)
       , DIFFERENT_CATEGORY AS DIFFERENT_CATEGORY.category != SAME_CATEGORY.category
) MR
ORDER BY rows_in_section DESC

Note: the MATCH_RECOGNIZE syntax is virtually the same as the syntax used in CQL or Continuous Query Language. CQL is used in Oracle Event Processor (fka Complex Event Processor) to process a continuous stream of events to identify trends and patterns, find outliers and spot missing events. This blog article gives an example of using SQL Pattern Matching to find the most valuable player in a football match: http://technology.amis.nl/2013/07/24/oracle-database-12c-find-most-valuable-player-using-match_recognize-in-sql/. A more general introduction to Pattern Matching in Oracle Database 12c is given in this article: http://technology.amis.nl/2013/06/27/oracle-database-12c-pattern-matching-through-match_recognize-in-sql/.

In-line PL/SQL Functions

In Oracle Database 9i, the select statement was changed quite dramatically: the WITH clause, through which inline views can be defined, was introduced, meaning that a select statement could start with WITH. Inline views proved a very powerful instrument for SQL developers – making the creation of complex SQL queries much easier. In 12c, another big step is taken with the SQL statement through the introduction of the inline PL/SQL function or procedure.
An example:

WITH
  PROCEDURE increment
  ( operand in out number
  , incsize in number
  ) is
  begin
    operand := operand + incsize;
  end;
  FUNCTION inc(value number) RETURN number IS
    l_value number(10) := value;
  BEGIN
    increment(l_value, 100);
    RETURN l_value;
  end;
SELECT inc(sal) from emp

Here we see a simple select statement (select inc(sal) from emp). The interesting bit is that the PL/SQL function INC is defined inside this very SQL statement. The DBA will never be bothered with a DDL script for the creation of this function INC; in fact, that function is available only during the execution of the SQL statement and does not require any administration effort. Another important aspect of inline PL/SQL functions: these functions do not suffer from the regular SQL <> PL/SQL context switch that adds so much overhead to interaction between SQL and PL/SQL. Inline PL/SQL functions are compiled 'in the SQL way' and therefore do not require the context switch. Note that by adding the PRAGMA UDF switch to any stand-alone PL/SQL Program Unit, we can also make it compile the SQL way, meaning that it can be invoked from SQL without context switch overhead. When such a program unit is invoked from regular PL/SQL units, those calls will suffer from a context switch. Inline PL/SQL functions and procedures can invoke each other and themselves (recursively). Dynamic PL/SQL can be used – EXECUTE IMMEDIATE. The following statement is legal – if not particularly good programming:

WITH
  FUNCTION EMP_ENRICHER(p_operand varchar2) RETURN varchar2 IS
    l_sql_stmt varchar2(500);
    l_job      varchar2(500);
  BEGIN
    l_sql_stmt := 'SELECT job FROM emp WHERE ename = :param';
    EXECUTE IMMEDIATE l_sql_stmt INTO l_job USING p_operand;
    RETURN ' has job '||l_job;
  END;
SELECT ename || EMP_ENRICHER(ename) from emp
Some details on Inline PL/SQL Functions are described in this blog article: http://technology.amis.nl/2013/06/25/oracle-database-12c-in-line-plsql-functions-in-sql-queries/.

Flashback for application developers

A major new feature in Oracle Database 9i was the introduction of the notion of flashback. Based on the UNDO data that has been leveraged in the Oracle Database since time immemorial to produce multi-version read concurrency and long-running query read consistency, flashback was both spectacular and quite straightforward. The past of our data, as it existed at some previous point in time, is still available in the database, ready to be unleashed. And unleashed it was, through Flashback Table and Flashback Database – for fine-grained point in time recovery – as well as through Flashback Query and Flashback Versions (10g) in simple SQL queries. Before the 11g release of the database, the usability of flashback was somewhat limited for application developers because there really was not much guarantee as to exactly how much history would be available for a particular data set. Would we be able to go back in time for a week, a month or hardly two hours? It depended on that single big pile of UNDO data where all transactions dumped their undo stuff. The Flashback Data Archive was introduced in the 11g release – touted as the Total Recall option. It made flashback part of database design: per table it can be specified if and how much history should be retained. This makes all the difference: if the availability of history is assured, we can start to base application functionality on that fact.
A couple of snags still existed with the 11g situation:
• 11g Flashback Data Archive requires the Database Enterprise Edition (EE) with the Advanced Compression database option
• In 11g, the history of the data is kept but not the meta-history of the transactions, so the flashback data archive does not tell you who made a change
• In 11g, the start of time in your flashback data archive is the moment at which the table is associated with the archive; therefore: your history starts 'today'

Now for the good news on Flashback in Oracle Database 12c – good news that comes in three parts.

1. As of 12c, Flashback will capture the session context of transactions. To set the user context level (determining how much user context is to be saved), use the DBMS_FLASHBACK_ARCHIVE.SET_CONTEXT_LEVEL procedure. To access the context information, use the DBMS_FLASHBACK_ARCHIVE.GET_SYS_CONTEXT function. (The DBMS_FLASHBACK_ARCHIVE package is described in Oracle Database PL/SQL Packages and Types Reference.)

2. As of 12c, you can construct and manipulate the contents of the Flashback Data Archive. In other words: you can create your own history. Which means that a flashback query can travel back in time to way beyond the moment you turned on the FDA. In fact, it can go to before the introduction of the Flashback feature in the Oracle Database and even before the launch of the Oracle RDBMS product. It is in your hands! History is imported and exported using DBMS_FLASHBACK_ARCHIVE procedures that create a temporary history table; after loading that table with the desired history data, it is imported into the designated history table. The temporary history table can be loaded using a variety of methods, including Data Pump. Support is also included for importing user-generated history: if you have been maintaining history using some other mechanism, such as triggers, you can import that history into the Flashback Data Archive.

3.
As of 12c, Flashback Data Archive is available in every edition of the database (XE, SE, SE One, EE).

All of the above means that any application developer developing an application that will run against an Oracle Database 12c instance can benefit from flashback in queries. Fine-grained flashback based on flashback data archives defined per table can be counted on. These archives can be populated with custom history data – for example taken from existing, custom journaling tables. Finally, flashback can be configured to keep track of the session context at the time of each transaction, to capture for example the client identifier of the real end user on whose behalf the transaction is executed. The syntax for flashback queries and flashback versions queries is the same in 12c as in earlier releases.

SQL Temporal Validity aka Effective Date Modeling

The SQL:2011 standard – which Oracle helps create and uphold – introduced a fairly new concept called 'temporal database', associated with terms such as Valid Time and Effective Date. This concept is explained in some detail on Wikipedia: http://en.wikipedia.org/wiki/SQL:2011#Temporal_support. The short story is that a substantial number of records in our databases are somehow associated with time periods. Such records have a certain start date or time and a certain end timestamp. Between these two points in time, the record is valid or effective, and outside that period it is not. Examples are price, discount, membership, allocation, subscription, employment, life in general. In a temporal database, or one that supports temporal validity, the database itself is aware of the effective date: it knows when records are valid from a business perspective.
This knowledge can be translated into more efficient execution plans, enforcement of constraints related to the time-based validity of the data, and business-validity-related flashback queries. Flashback queries based on transaction time return records as they existed in the database at the requested timestamp, regardless of what their logical status was at that time. Flashback queries based on valid date look at the valid time period for each record and use that to determine whether the record 'logically existed' at the requested timestamp. Take a look at this example: table EMP has been extended with a FIREDATE column and an effective time period based on HIREDATE and FIREDATE.

CREATE TABLE EMP
( employee_number NUMBER
, salary          NUMBER
, department_id   NUMBER
, name            VARCHAR2(30)
, hiredate        TIMESTAMP
, firedate        TIMESTAMP
, PERIOD FOR employment (hiredate, firedate)
);

We can now execute the following flashback query, based on the effective date as indicated by the employment period, to find all employees that were active on June 1st, 2013:

SELECT *
FROM   EMP AS OF PERIOD FOR employment TO_TIMESTAMP('01-JUN-2013 12.00.01 PM')

Just like we can go back in time in a session for transaction-based flashback using the dbms_flashback package, we can do the same thing for effective-time-based flashback:

EXECUTE DBMS_FLASHBACK_ARCHIVE.enable_at_valid_time
( 'ASOF'
, TO_TIMESTAMP('29-JUL-13 12.00.01 PM')
);

Any query executed in that session after this statement has been executed will only return data that is either not associated with a valid time period or that is valid on the 29th of July 2013.
A similar statement ensures that we will always see the records that are currently valid:

EXECUTE DBMS_FLASHBACK_ARCHIVE.enable_at_valid_time('CURRENT');

And the default of course is that we will always see all records, regardless of whether they are valid or not:

EXECUTE DBMS_FLASHBACK_ARCHIVE.enable_at_valid_time('ALL');

These first steps in the 12.1 release of the Oracle Database on the road towards full temporal database support are likely to be followed by a lot of additional functionality in upcoming releases. The SQL:2011 standard defines a number of facilities in SQL and around database design that are likely to make their way into the Oracle Database at some point in the not too distant future. These could include:
• Valid time aware DML – update and deletion of application time rows with automatic time period splitting
• Temporal primary keys incorporating application time periods with optional non-overlapping constraints via the WITHOUT OVERLAPS clause
• Temporal referential constraints that take into account the valid time
  • 15. T - otechmag.com - Fall 2013 - 15 during which the rows exist: Child needs to have a valid Master at any time during its own validity • Application time tables are queried using regular query syntax or using new temporal predicates for time periods including CONTAINS, OVER- LAPS, EQUALS, PRECEDES, SUCCEEDS, IMMEDIATELY PRECEDES, and IMMEDIATELY SUCCEEDS • Temporal Aggregation - group or order by valid-time • Normalization - coalescing rows which are in adjacent or overlapping time periods • Temporal joins – joins between tables with valid-time semantics based on ‘simultaneous validity’ • Use the Valid Time information for Information Lifecycle Management (ILM) to assess records to move The support for valid time modeling is potentially far reaching. If valid period related data is common in your database, it might be a good idea to study the theory and reference cases and keep a close watch on what Oracle’s next moves are going to be. Inspecting the PL/SQL Call Stack In Oracle Database 10g, the package dbms_utility was made available with two procedures (DBMS_UTILITY.FORMAT_ERROR_BACKTRACE and DBMS_UTILITY.format_call_stack) that helped inspect the call stack during PL/SQL execution. This provides insight into the program units that have been invoked to get to the current execution (or exception) point. The output of these units is formatted for human consumption and is not very useful for automated processing. In this 12c release, PL/SQL developers get a new facility that makes call stack information available in a more structured fashion that can be used programmatically. The new PL/SQL Package UTL_CALL_STACK provides API for inspecting the PL/SQL Callstack. 
The following helper procedure demonstrates how utl_call_stack can be accessed to get information about the current call stack:

procedure tell_on_call_stack is
  l_prg_uqn UTL_CALL_STACK.UNIT_QUALIFIED_NAME;
begin
  dbms_output.put_line('==== TELL ON CALLSTACK ==== '
                       || UTL_CALL_STACK.DYNAMIC_DEPTH);
  for i in 1..UTL_CALL_STACK.DYNAMIC_DEPTH loop
    l_prg_uqn := UTL_CALL_STACK.SUBPROGRAM(i);
    dbms_output.put_line( l_prg_uqn(1)
      || ' line ' || UTL_CALL_STACK.UNIT_LINE(i)
      || ' ' || UTL_CALL_STACK.CONCATENATE_SUBPROGRAM(UTL_CALL_STACK.SUBPROGRAM(i)) );
  end loop;
end tell_on_call_stack;

When this helper procedure is used from a simple PL/SQL fragment that performs a number of nested calls:

create or replace package body callstack_demo
as
  function b( p1 in number, p2 in number) return number is
    l number := 1;
  begin
    tell_on_call_stack;
    return l;
  end b;

  procedure a( p1 in number, p2 out number) is
  begin
    tell_on_call_stack;
    for i in 1..p1 loop
      p2 := b(i, p1);
    end loop;
  end a;

  function c( p_a in number) return number is
    l number;
  begin
    tell_on_call_stack;
    a(p_a, l);
    return l;
  end c;
end callstack_demo;

The output is as follows: This output gives insight into how the anonymous PL/SQL call to package CALLSTACK_DEMO was processed. The initial call from the anonymous block got to line 50 in procedure C. One level deeper, from line 51 in C, a call had been made to procedure A. One level deeper still, a call from line 40 in A had been made to B. Package UTL_CALL_STACK contains several other units that help with call stack inspection. See for example this article for some examples: http://technology.amis.nl/2013/06/26/oracle-database-12c-plsql-package-utl_call_stack-for-programmatically-inspecting-the-plsql-call-stack/.

Default Column value

Specifying a default value for a column has been possible in the Oracle Database for a very long time now.
It is fairly simple: you specify in the column definition which value the database should apply automatically to the column in a newly inserted record if the insert statement does not reference that particular column. When an application provides a NULL for the column, the default value is not applied; only when the column is missing completely from the insert statement will the default kick in. One typical example of a default value is the assignment of a primary key value based on a database sequence. However, that particular use case was never supported by the Oracle Database, because a default value could only be a constant, a reference to a pseudo-function such as systimestamp, or an application context.

In 12c, things have changed for the column default. We can now specify that a default value should be applied also when the insert statement provides NULL for a column. A column default can now also be based on a sequence – obviating the use of a before row insert trigger to retrieve the value from the sequence and assign it to the column. A column can even be created as an identity column – with its value automatically maintained (using an implicitly maintained system sequence). Finally – especially of interest to administrators – a column can be added to a table with a metadata-only default value; this means that the default value for the column is not explicitly set for every record, but is retrieved instead from the metadata definition, which means a huge saving in time and storage.

The syntax for creating a default that is applied when a NULL is inserted:

alter table emp modify
( sal number(10,2) DEFAULT ON NULL 1000 )

And the syntax for basing the default value on a sequence:

alter table emp modify
( empno number(5) DEFAULT ON NULL EMPNO_SEQ.NEXTVAL NOT NULL )
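The identity column variant mentioned above could, for instance, look like this (the table and column names are invented for illustration):

```sql
-- Hypothetical table: the value of id is maintained automatically
-- by an implicitly created system sequence.
create table otech_articles
( id    number generated always as identity
, title varchar2(200) not null
);

-- The insert does not mention id at all; the identity mechanism supplies it.
insert into otech_articles (title) values ('A brand new magazine');
```

With GENERATED ALWAYS, an attempt to provide an explicit value for id is rejected; GENERATED BY DEFAULT [ON NULL] offers a more permissive alternative.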
Data Masking aka Data Redaction

Ideally, testing of applications can be done using production-like data. However, generating such data is usually not a realistic option. Using the real production data in a testing environment seems the better alternative. However, this data set may contain sensitive data – financial, medical, personal – that for various reasons should not be visible outside the production environment. The Data Redaction feature in Oracle Database 12c supports policies that can be defined on individual tables. These policies specify how data in the table should be ‘redacted’ before being returned – in order to ensure that unauthorized users cannot view the sensitive data. Redaction is selective and on-the-fly – not interfering with the data as it is stored but only with the way the data is returned.

The next figure illustrates the data redaction process: a normal SQL statement is submitted and executed. When the results are prepared, the data redaction policies are applied and the actual results are transformed through conversion, randomizing and masking. This approach is very similar to the way Virtual Private Database (fine grained access policies) can be used to mask data (records or column values).

Redaction can be conditional, based on different factors that are tracked by the database or passed to the database by applications, such as user identifiers, application identifiers, or client IP addresses. Redaction can apply to specific columns only – and act in specific ways on the values returned for those columns.
Here is an example of a redaction policy:

BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema       => 'scott',
    object_name         => 'emp',
    column_name         => 'hiredate',
    policy_name         => 'partially mask hiredate',
    expression          => 'SYS_CONTEXT(''USERENV'',''SESSION_USER'') != ''GOD''',
    function_type       => DBMS_REDACT.PARTIAL,
    function_parameters => 'm1d31YHMS'
  );
END;

This policy ensures that values from column HIREDATE are redacted for any user except GOD. The Month and Day parts of values for this column are all set to 1 and 31 respectively; the other date components – Year, Hour, Minute and Second – are all untouched. The query results from a query against table EMP will be masked.

Data redaction seems most useful for ensuring that an export from the production database does not contain sensitive, unredacted data. A second use could be to ensure that application administrators can do their job in a production environment, working with all required records without being able to see the actual values of sensitive columns. More details on Data Redaction can be found for example in this white paper: http://www.oracle.com/technetwork/database/options/advanced-security/advanced-security-wp-12c-1896139.pdf. A straightforward example is in this blog article: http://blog.contractoracle.com/2013/06/oracle-12c-new-features-data-redaction.html.

SQL Pagination, Limit and Top-N Query

A common mistake made by inexperienced Oracle SQL developers is the misinterpretation of what [filtering on] ROWNUM will do. It is easily assumed that filtering on ROWNUM in a query ordered by salary will return the top three earning employees – it will not, because ROWNUM is assigned before the sort takes place. Oracle does not have – at least not before 12c – a simple SQL syntax to return the first few records from an ordered row selection. You need to resort to inline views that perform the sort first and apply the ROWNUM filter in an outer query. Perhaps this is not a big deal to you.
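For reference, the classic pre-12c patterns alluded to above could look as follows (using the familiar EMP demo table):

```sql
-- Naive (incorrect) attempt: ROWNUM is assigned before ORDER BY is applied,
-- so this returns three arbitrary rows which are then sorted.
select * from emp where rownum <= 3 order by sal desc;

-- Correct pre-12c approach: sort in an inline view first,
-- then filter on ROWNUM in the outer query.
select *
from ( select * from emp order by sal desc )
where rownum <= 3;
```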
Anyway, Oracle decided to provide a simple syntax in 12c SQL to return the first X records from a query, after the filtering and sorting has been completed. In our case, this statement would be used for the Top-3 earning employees:

select * from emp
order by sal desc
FETCH FIRST 3 ROWS ONLY;

Slightly more interesting, I think, is the simple support for row pagination that is introduced in this fashion. Many applications and services require the ability to query for records and then show the first set [or page] of maybe 20 records and then allow the next batch [or page] of 20 records to be returned. The new SQL syntax for retrieving a subset of records out of a larger collection looks like this:

select * from emp
order by sal desc
OFFSET 20
FETCH NEXT 20 ROWS ONLY;

Here we specify to select all records from emp, sort them by salary in descending order and then return the 21st through 40th record (if that many are available). The syntax also supports fetching a certain percentage of records rather than a specific number. It does not have special support for ‘bottom-n’ queries. Note: checking the explain plan output for queries with the pagination or top-n syntax is interesting: the SQL that gets executed uses familiar analytical functions such as ROW_NUMBER() to return the correct records – no new kernel functionality was added for this feature.
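The percentage variant mentioned above would, presumably, be written like this:

```sql
-- Return (roughly) the top 10 percent of earners; adding WITH TIES would
-- also include rows that tie with the last fetched row on the sort key.
select * from emp
order by sal desc
FETCH FIRST 10 PERCENT ROWS ONLY;
```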
Security

Many improvements were introduced in this 12c release in the area of security. Some are primarily of interest to the administrator and others are quite relevant to application developers.

Capture privilege usage

One of these security related new features is ‘capture privilege usage’ – a facility through which you can inspect the privileges that are actually required by users to run applications. This feature is introduced to strengthen the security of the database by enforcing the principle of least privilege: it tells you which privileges are used in a certain period of time by a certain user. When you compare these privileges with the privileges that have actually been granted to the user, there may be some privileges that have been granted but are not actually required and should probably be revoked. Also see http://bijoos.com/oraclenotes/oraclenotes/2013/92

Invoker Rights View

In addition to the invoker rights package, which has been around for a long time already, there now finally is an invoker rights view – although its specific syntax uses the term bequeath current_user:

create or replace view view_name ( col1, col2, …)
BEQUEATH CURRENT_USER
as
select … from table1 join table2 …

This statement specifies that privileged users can reuse the view’s SQL definition but only have the SQL applied to database objects owned by or granted explicitly to the user that invokes the view. Before 12c, anyone who had the select privilege on the view could query data from that view, leveraging the select privileges of the view’s owner on all objects referenced from the view.

Inherit Privileges

In the same area of invoker rights definitions, the database before 12c contained something of a loophole: when a user invokes an invoker rights package, anything done by the package is done using the authorizations of the invoking user – that is after all the whole idea.
However, this means that the code in the package can do things based on the invoking user’s privileges and channel results to the user who owns the invoker rights package. In one such example, the owner of the invoker rights program unit has added code to the procedure that leverages the invoking user’s select privilege on a special table to retrieve data that it then writes to its own TAB_TABLE on which it has granted public access. In previous releases, the invoking user had no control over who could leverage his or her access privileges when he or she ran an invoker’s rights procedure.

Starting with 12c, invoker’s rights procedure calls can only run with the privileges of the invoker if the procedure’s owner has the INHERIT PRIVILEGES privilege on the invoker or if the procedure’s owner has the INHERIT ANY PRIVILEGES privilege. This gives invoking users control over who has access to their privileges when they run invoker’s rights procedures or query BEQUEATH CURRENT_USER views. Any user can grant or revoke the INHERIT PRIVILEGES privilege on themselves to the user whose invoker’s rights procedures they want to run.

SYS_SESSION_ROLES

A new built-in namespace, SYS_SESSION_ROLES, allows you to determine if a specified role is enabled in the current session. For example, the following query determines if the HRM_ADMIN role is enabled for the current user:

SELECT SYS_CONTEXT('SYS_SESSION_ROLES', 'HRM_ADMIN')
FROM   DUAL;

This query returns either ‘TRUE’ or ‘FALSE’.

Attach Roles to Program Units

In 12c, you can attach database roles to program units: functions, procedures, packages, and types. The role then becomes enabled during execution of the program unit (but not during compilation of the program unit). This feature enables you to temporarily escalate privileges in the PL/SQL code without granting the role directly to the user.
The benefit of this feature is that it increases security for applications and helps to enforce the principle of least privilege. The syntax is quite straightforward:

GRANT hrm_admin TO PROCEDURE scott.process_salaries

If the execute privilege on procedure process_salaries is granted to some user JOHN_D, then during a call to process_salaries by JOHN_D an inspection of SYS_SESSION_ROLES would report that the role HRM_ADMIN is indeed enabled – even though that role has not been granted to JOHN_D. This blog article by Tom Kyte shows more details on this facility: http://tkyte.blogspot.nl/2013/07/12c-code-based-access-control-cbac-part.html.

White List on Program Units

In 12c, we can indicate through a white list which program units are allowed to access a certain package or procedure. If a white list is specified, only a program unit on the list for a certain object can access the object. In the next figure, this has been illustrated. A, B, C as well as P, q, r and s are all PL/SQL program units in the same database schema. Units q and s have been associated with a white list. Unit s can only be invoked by object P, and unit q is accessible only from P and r. This means for example that A, even though it is in the same schema as unit s, cannot invoke unit s. If it would try to do so, it would run into a PLS-00904: insufficient privilege to access object s error.

This white list mechanism can be used for example to restrict access to certain units in a very fine grained way. In the example above, it is almost like a ‘blue module’ is created in the schema, of which object P is the public interface and which contains private objects q and s that are for module-internal use only. The syntax for adding a white list to a PL/SQL program unit consists of the keywords accessible by followed by a list of one or more program units.
create package s
  accessible by (p)
is
  procedure …;
end s;

Note that the actual accessibility is checked at run time, not compile time. This means that you will be able to compile packages that reference program units with white lists in which they do not appear – but those references will fail when accessed at run time.

This blog article by Tom Kyte explains PL/SQL white lists very clearly: http://tkyte.blogspot.nl/2013/07/12c-whitelists.html.
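Putting the pieces together, a small self-contained demonstration of the white list mechanism could look like this (the unit names s_helper and p_caller are invented for illustration):

```sql
-- s_helper may only be referenced by p_caller.
create or replace procedure s_helper
  accessible by (p_caller)
is
begin
  dbms_output.put_line('invoked via the white list');
end s_helper;
/
-- p_caller is on the white list, so this reference is allowed.
create or replace procedure p_caller
is
begin
  s_helper;
end p_caller;
/
-- Any other unit that references s_helper fails with
-- PLS-00904: insufficient privilege to access object S_HELPER.
```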
Conclusion

The essence of the 12c release of the Oracle Database does not lie in application development facilities. Having said that, there is of course an interesting next step in the evolution of what database developers can do with the database. SQL and PL/SQL have evolved, allowing for more elegant, better performing and easier to write programs. Some facilities – for example SQL Temporal Validity and Flashback – are potentially far reaching and may lead to different designs of data models and applications.

The compilation in this article is obviously quite incomplete. I have mentioned some of the most striking – in my eyes – new and improved features. Some glaring omissions are in the next list – which is of course equally incomplete:

• Lateral Inline Views
• (Cross and Outer) Apply for joining with Collections
• VARCHAR2(32k)
• XQuery improvements and other XML DB extensions
• Java in the Database: Java 6 or 7
• Export View as Table
• DICOM support for WebCenter Content
• New package dbms_monitor for fine grained trace collection
• DBMS_UTILITY.EXPAND_SQL_TEXT for full query discovery

Browsing through the Oracle documentation on Oracle Database 12c – http://www.oracle.com/pls/db121/homepage – and searching the internet for terms such as ‘oracle database 12c new feature sql pl/sql’ are a pretty obvious way of getting more inspiration on what the next generation of Oracle’s flagship product has to offer. Hopefully this article has contributed to that exploration as well.

Lucas Jellema is CTO of the Dutch based company AMIS.
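To give a flavour of one of the omissions in the list above: DBMS_UTILITY.EXPAND_SQL_TEXT takes a SQL statement and returns the fully expanded text, with all views resolved into their underlying base tables. A minimal sketch (emp_dept_view is a hypothetical view):

```sql
declare
  l_expanded clob;
begin
  dbms_utility.expand_sql_text
  ( input_sql_text  => 'select * from emp_dept_view'
  , output_sql_text => l_expanded
  );
  -- prints the query rewritten against the base tables behind the view
  dbms_output.put_line(l_expanded);
end;
/
```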
The Picture Desk
Business intelligence tells you what happened at work. Good business intelligence tells you what is happening now. Competitive intelligence tells you what your competitors and the market did. Good competitive intelligence tells you where they’re headed. Both BI and CI crunch big data to deliver answers to the questions “what happened?” and “how did that happen?” Only a social and deep web competitive intelligence framework can answer the most important question: “so what?”

In order to be actionable, intelligence must answer the “so what?” question. The answer to “so what?” describes the impact of the information. It describes assumed and presupposed context. It fills in the rest of the statement that starts out, “we care about this because…”

Social competitive intelligence is a new discipline. It is emerging now and will continue to grow over the next decade. Some solutions exist already. But they and their marketing cousins – social media management software – are still largely focused on listening to and tracking social mentions and sentiment activity. While these are important, the solutions today are overly simplistic. They can list changes to competitor websites, report on where competitor PPC (pay per click) ads are run and measure generic brand sentiment. However, rather than exerting a contextualizing force on the already massive volumes of available business and social information, they add to the data tsunami. When your cup is already running over, it makes little sense to put it under a faster faucet.

The better solution is to put into place a framework that gathers, filters, synthesizes and analyzes social competitive intelligence and deep-web analytics (i.e. the web beyond Google indexes). Then use that framework as a lens through which to view your existing BI and CI data. Then you will be able to answer the all-important “so what?” question. Here are several real-world examples.
The Call Center Cost Hole

BI reporting shows increasing costs and increasing churn in your call center; overall a bad trend. If that were the only information you had, the “so what?” answer would be to kick up call center recruiting to fill the gaps and set some stricter MBOs for call center managers on employee retention. However, with a social competitive intelligence framework in place, it is revealed that social media, blogs and discussion boards are full of blistering criticism of your call center escalation processes. The withering criticism is poisoning the work environment and making a tough job even more unpleasant. Viewed through this lens, the correct answer to “so what?” is not to step up recruiting. Rather, it is to fix the poisoned call center environment and re-engineer the escalation processes while empowering call center employees. Not only does this save substantial time and money, it actually boosts net productivity by empowering knowledgeable employees and eliminating the training and ramp-up to full productivity required for each new hire.

The Outside Expert

You are ready to launch a new product into an overseas market. But there are a host of regulatory issues to navigate. While you have plenty of “independent” research and case studies validating your approach, you still want an expert in your technology and the foreign market to help guide you through the approval process. The regulators don’t look kindly on experts who are among your paid staff, due to potential conflict of interest. You want an external expert, but you want to avoid someone who regularly works for your competitors or who has expressed harsh opinions of your company or product in the past. Traditional competitive intelligence will not provide expertise location like this. Traditional BI only tells you that your new market has a lot of potential.
If that were the only information you had, the “so what?” answer would be to get some internal recommendations, do a Google search and hope the person is available and credible. But hope makes for a poor strategy, especially with something as big as a new foreign market launch. With a social competitive intelligence framework in place, you are able to perform a social network analysis to first locate the influencers in the topic area, measure their credibility and influence relative to one another, and finally screen them for competitor interaction and engagement. This approach yields not only a deeper, more highly qualified “short list” of available experts, it also reveals a large and rich set of topic influencers who your team can target for engagement and awareness of your new product. Ultimately, this delivers not only the help navigating new regulatory processes in new markets, it also identifies a new set of up-and-coming influencers who will help your product remain successful after the initial splash.

The Competitor Customer List

Your internal BI tells you that sales are plateauing despite the fact that you have a better product with more features and a better history of quality. Your competitive intelligence tells you that competitors are facing similar slow-growth periods. It looks like the market is reaching saturation and new opportunities are small. If that were all the information you had, the answer to the “so what?” question would be to switch your sales strategy from a hunting to a farming operation. Marketing would shift to promoting small incremental improvements and the grind of upgrades/maintenance/renewal would become the core of your revenue model. However, with a social competitive intelligence framework in place you would reveal a gold mine of new accounts that you can hunt while dramatically boosting your competitive advantage. The framework would reveal your competitors’ customer lists.
First, realize that all customers – yours and your competitors’ – are interested first in solving a business problem and only secondarily in staying with a particular vendor or service provider. Staying with a particular provider tends to be more a matter of convenience and trust than inherent and continued ability to deliver value. This means there is opportunity to knock out your competitor or at least to come alongside them and establish a beachhead; but only if you know who they are and how to approach them. This is what a social competitive intelligence framework delivers. That they are your competitor’s customer means that at one time in the past, they got a better deal or had a better recommendation or were simply aware of your competitor at the time they needed a solution. In the B2B world, there are few things that lock in customers. Sure, they exist;

Social Media
Blending BI and Social Competitive Intelligence for deep insight into your business
Billy Cripe, BloomThink
big computing platform and enterprise application decisions tend to have at least a 7 year life cycle. Similarly, being a Mac, Windows or Linux shop tends to be about corporate culture. But as the recent Samsung mobile vs Apple iPhone campaigns demonstrate, even the most loyal customers can switch to a completely different platform if the reason to switch is compelling.

A social competitive intelligence framework makes developing a target list of your competitor’s customers easy. First, perform a social network analysis of your competitor. See who is commenting, following, liking and (re)tweeting about your competitor. Then filter that list by companies and contacts you’d like to target. Perform this analysis again around the time of your competitor’s big events like conferences and trade shows. The cadence of social activity spikes during those times. Additionally, your competitor will trot out their favorite case studies and customer testimonials during that time to add credibility to their pitch. What they’re doing for you is validating the customer need, interest and ability to pay. You just need to get them to switch or try out your product too. Finally, mine your competitor’s website for their customer information. Companies routinely post logos and ROI or case studies online. Even if competitor brag sheets use unnamed customers, there will generally be enough information to make a very educated guess and narrow it down to only one or two possible companies (your potential customers!) in the area.

My company, BloomThink, recently performed a social competitive intelligence engagement designed to create a competitor customer list. During one trade show, the target competitor was demonstrating an unbranded intranet system. However, the layout, color scheme and look/feel of their demo perfectly matched an educational YouTube video posted at about the same time by a large local health care organization.
The health care company was added to the “competitor customer target list”. Only a social competitive intelligence framework and strategy could have revealed the connection, which was publicly available but buried in a mountain of previously unrelated social data.

Conclusion

As the old saying goes, “text without context becomes pretext”. No matter how good your BI data is on its own, without the contextualizing force of a social competitive intelligence framework it becomes justification for gut feelings, political game-playing and flights of fancy. That is no way to run a business. Enterprises, and especially CIOs, CMOs and Sales EVPs, need to implement a social competitive intelligence framework that understands how to do the following:

1. Collect & Gather deep web and social information
2. Filter & Categorize information to keep what matters and cull what doesn’t
3. Analyze & Synthesize that information with existing BI & CI data
4. Report & Act so that actionable intelligence can deliver meaningful business impact

Billy Cripe is the founder of the Minneapolis, USA based company BloomThink
The Picture Desk
Oracle Enterprise Manager Cloud Control 12c: Managing Data Center Chaos
Porus Homi Havewala
Packt Publishing
ISBN: 978-1-84968-478-1
Published: December 2012
Rating:

Being in charge of the Oracle Enterprise Manager line of business for the ASEAN region, Mr. Havewala has certainly been close to the action when it comes to the topic of this book. Released in December 2012, Oracle Enterprise Manager Cloud Control 12c: Managing Data Center Chaos by Packt Publishing is one of the first titles on the subject of version 12c of the administration tool of choice for an Oracle environment.

The book is well written and, because of the style it is written in, a pretty easy read, despite the in-depth topics it covers. This is something we rarely see in technical books, and both the author and the publisher can take this as a compliment.

Oracle Enterprise Manager Cloud Control 12c: Managing Data Center Chaos offers the reader some critical insights into how EM is supposed to help you handle the everyday chaos in your data center(s). With chapter titles like “Ease the Chaos with Performance Management” and “Ease the Chaos with Automated Provisioning” it is all about managing some sort of awful situation in your data centers. And it works. It actually offers a lot of insight into what you can do with the help of Enterprise Manager. It shows us how to look further than just the basic elements of EM that we have used for some while now, and it certainly helps in finding those functions of EM that were introduced in 12c.

The downside is that it only shows us how things are supposed to work. The book barely goes beyond what might or should be possible. There are no real-life examples, and all the examples that are worked out properly are taken from the demo grounds at Oracle. But hey, that is what you get when a business development manager writes a book. All in all it is a pretty nice read.
Especially for those of us who are still trying to figure out where to position Oracle Enterprise Manager and want some insight into how all this is implemented. It does offer the reader some good and qualified information on the topic at hand: how do I create a more manageable environment in my data center?

Oracle WebLogic Server 12c Advanced Administration Cookbook
Dalton Iwazaki
Packt Publishing
ISBN: 978-1-84968-684-6
Published: June 2013
Rating:

A cookbook. What’s up with that? The idea seems handy: short articles on how to do a specific job. And most of the time it works. It works fine. But what about when it covers a subject that has already been described at length? Well, then it might be just a tiny bit too much.

Last June Packt Publishing released a title on Oracle WebLogic Server 12c in their popular cookbook series. The book covers some 60+ ‘recipes’ that teach readers how to install, configure and run Oracle WebLogic Server 12c.

The recipes, as Packt tends to call the chapters, about installation and running truly remind us of the installation and configuration chapters in the official Oracle documentation. Does that mean that these recipes add nothing to the knowledge of the reader? Of course not; such material is actually necessary in a book that tries to be complete, and it shows us that the Oracle documentation is correct on some points.

Some of the articles cover configuring for high availability, troubleshooting and stability & performance. And this is where the value of the book kicks in. Because, for a main product in the Oracle stack, WebLogic isn’t always the easiest of systems to understand. If a book shows us where to look for stability and when trouble is under way, it pays off immediately.

So, does this book add something to the overall knowledge of the professional who works with Oracle WebLogic Server 12c? Definitely, even if it’s just as a convenient reference.
Oracle Enterprise Manager 12c Administration Cookbook
Dhananjay Papde, Tushar Nath & Vipul Patel
Packt Publishing
ISBN: 978-1-84968-740-9
Published: March 2013
Rating:

Oracle Enterprise Manager is certainly gaining momentum, and the times that it was just a toolset to manage single database instances are definitely in the past. That also means that the product is gaining a larger fan base, as it probably should.

In this book a total of three authors wrote some 50 recipes on managing the Oracle stack using the latest version of Enterprise Manager. What you notice right away is that it is not only about managing the Oracle database; there is a bit about managing middleware as well. The last chapter of the book is completely reserved for a description of the iPhone/iPad app for Oracle Enterprise Manager. This is all exciting stuff, but probably not the most interesting for in-depth administrators of an extensive Oracle stack.

What is really missing in this book is the entire ‘Cloud Management’ part of the latest edition of Enterprise Manager. Oracle’s promise that the toolset is the perfect companion for the private or public cloud, and for the data centers managing those, is not seen anywhere in this book. Because cloud management is the main focus of Oracle Enterprise Manager 12c, it is really a shame that it is not part of this book. Had the authors shared the focus of the other Packt title on Oracle Enterprise Manager, it would really have added to the overall reading experience.

The Book Club
The Picture Desk
It could just be the key to easy adoption

This article is written based on our experience of working on ADF projects together with a User Experience designer (UX designer) and his value in this team. In this article we will explain what UX is and how it is used in a big project to get more value out of our software. We will also explain how we have used UX and ADF in an Agile environment. In this article we are going to talk mainly about the front-end of software. When reading this article you should have a basic understanding of the Scrum process; however, you need no knowledge about User Experience design.

What is UX?

In this part of our article we will explain what UX is and what the role of the UX designer in a software project is. A software project can be a web, mobile or a desktop application. In almost any IT project a business analyst is responsible for understanding and translating the business needs into clear software specifications. The business analyst is the eyes and ears of the business; they make clear what solution should be built. So we have IT and business involved in our project together. But who takes care of the end users? Is there anybody who cares about them? This is where the UX designer comes in.

UX is an acronym for “user experience”. It is almost always followed by the word “design”. By the nature of the term, people who perform the work become “UX designers”. But these designers aren’t designing things in the same sense as a visual or interface designer. UX is the intangible design of a strategy that brings us to a solution. So what does this actually mean? This solution can be divided into several layers. Each of these layers describes a more detailed part of the system. A UX designer creates the total user experience by designing and thinking about each of the layers and validating the results with the end users of the system.
In the following example the techniques we used for our solution are described. For this example we have used the “Elements of User Experience” developed by a renowned user experience professional called Jesse James Garrett. Our “solution” is an ADF web application built on a BPM middle layer to create a case management system with workflow.

The Elements of User Experience

Surface – the visual style end users will see when they use the application. We have used the standard corporate design rules called “de Rijkshuisstijl”, offered by the Dutch government for designing web applications.

Skeleton – describes the interactive components needed on the pages, like buttons, list boxes etc. We have used component descriptions in wireframes to visualize the interactive components for the system. A component is a functional piece of software used on a screen, for example a search box.

Structure – describes the pages needed in the application and their navigation flows. For each step in the process we have used flow diagrams and standard page layouts to structure the application screens.

Scope – describes the scope of the project that needs to be built.
• We have used a process design for scoping the screens which were needed.
• A product backlog with user stories to scope and prioritize the needed functionality.

Strategy – describes the underlying application strategy to align with business and user needs.
• For strategy we have used workshops to determine standards to be used throughout the whole application.
• We have talked to the future users to understand their needs.
• We have talked with the business to determine their business goals.

So we have gone over some background on what a UX designer is and what kind of work he does. But why do you need a UX designer in your ADF project?

• There is somebody who cares about your users and wants to make them happy. Happy employees are more productive and less often ill.
• A UX designer has the skills to test your application for usability issues early in the build process.
• A UX designer keeps asking critical questions about functionality, like "is this really needed for our users?". This can lead to less functionality to build.
• In a Scrum project the UX designer helps to get user stories ready for inclusion in a sprint. Visualizing the software saves the developers time.
• A UX designer improves user acceptance by including the end users early in the design process and using their feedback to improve the product.
• A UX designer is an objective hub between the business, the development team and the end user. He tries to balance these to get the most usable and economical product possible.

Bear in mind that when you use a user-centered design process you need to continually invest time in improving your application, not only in adding more functionality. So give the UX designer room to organize user sessions and to work this feedback back into the application.

Oracle ADF
Successfully combining UX with ADF
Marcel Maas, AMIS Services
Sander Haaksma, UX Company
A real world case

The goal of the project in which I worked with a UX designer was to create a case management system using Oracle BPM and an optimized worklist application. Since a great deal of productivity could be gained by improving the screens and work processes used by the end users, we flew in a UX designer. His assignment was to think up a really usable interface that would be smart and supportive to the user. We wanted to use contextually aware widgets to provide extra information to the end user at every step of a case, as well as define our own navigation for the application.

We quickly realized we needed to create a new worklist application from scratch instead of customizing the BPM workspace. Therefore two ADF developers were hired, including me. For the realization of the BPM processes two specialists were hired, as well as a tester, a process analyst and a project manager. When we arrived it had already been decided to use Scrum to manage the project, which suited us well. At that time we knew little of the requirements of the system and had a limited budget at our disposal, so we settled for two-week sprints and went to work on the first iteration. We made sure the UX designer and the BPM developers were always a sprint ahead of the ADF developers in terms of functionality, so that the ADF team could rely on tasks created by the BPM team and designs made by the UX designer, and only had to focus on the technology. During a sprint the UX designer would hold sessions with end users to define the UI, which would then be validated by the ADF team and eventually end up in one of the next sprints. At the time of writing we are still going strong and are almost ready for the first release. In the next few paragraphs we will dive more deeply into various parts of the process.
Converting wishes to screens

The starting point of a project is always the user's wishes, which in our case took the form of a backlog of user stories. These stories describe functionality from the perspective of the end user. The UX designer takes them as a starting point for determining the general structure and flow of the application. The stories themselves provide no hint as to how screens should look; they describe functionality. It is the "what", not the "how". The designer takes a step back and analyses the stories to find overlap and get a general feel for what the user wants. This is done by talking to the users themselves. The general flow and structure of the application are hereby determined.

From here on the designer takes a number of stories from the backlog and uses them to create a sketch of the screen and its components. These are functional components such as buttons, panels, images etc. The sketches are then discussed with the end user to validate them, and modified if needed until all stakeholders are satisfied with the result. The trick is not to drive a sketch to perfection but to specify just enough for a developer to start building what the user wants. This makes it possible to stay flexible when new insights arise. Now the screens are ready to be implemented for real in a sprint; in the next paragraph we will dig into this a bit more.

After the screens have been created the designer hosts a usability lab to validate the usability of the screen and its components. A usability lab is a session in which users are asked to complete tasks with the new software. During these sessions stakeholders observe the behavior of the test users and together decide which issues are important. The usability issues found during the lab are logged as new user stories, prioritized on the backlog, and find their way back into a sprint to improve the functionality.
These iterations greatly improve the usability of the product. Involvement of the business is absolutely necessary for this process to be a success.

Example screen and components wire-framed in the tool Axure RP.

Successfully combining ADF with UX design

In the previous paragraph the process of designing screens from user stories was explained, but nothing was said about the implementation in ADF. In this part we will describe how to leverage the power of ADF and combine it with effective UX design.

The availability of a UX designer for creating screens saves a developer a lot of time, because he no longer needs to think about and design the screens himself. However, when you release a UX designer on a product and he gets to work as a lone wolf, usually the most beautiful and intuitive design is created. That design still needs to be implemented in a specific technology, with its own pros and cons. This means it can take a lot of time to create specific components when the technology itself probably supports the same functionality in a different way.
ADF is no different. A UX design for ADF only helps when there is communication between the UX designer and the developer. The UX designer can explain what the user and the business want, and the developer can explain how the solution can best be implemented in ADF, leveraging its strengths. In this case the designer comes to the developer and, using the sketch, explains what is needed. The developer can then point out the ADF components to use, making sure that most of the functionality can be achieved with standard components and patterns. This ensures a balance between the user's needs, the cost of development and the technical feasibility.

Luckily we don't need to think up everything ourselves. A lot of ways of interacting with users through ADF have already been thought out and tested by Oracle. They have bundled these ADF UX patterns and published them on the web: http://www.oracle.com/technetwork/topics/ux/applications/gps-1601227.html. Some of these design patterns have already been implemented in ADF components; others you can implement yourself. This website can be a great help, though it is not strictly necessary.

The next step is to validate the design made by the UX designer against ADF's capabilities. We must try to design screens that are easily built using standard ADF components, because this saves us from developing components that look great but take twice as much time to create. For example: one goal of the UX designer was to show input fields on the screen only when they are actually needed; if they weren't needed, they would not be shown. This could easily be realized with ADF. However, when we add validations to these input components and the validations fire while a component is hidden, strange things happen. So we modified the designs to always show required input fields, as well as fields with other validations.
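The hidden-field situation described above can be sketched in an ADF Faces page fragment. This is an illustrative sketch, not the project's actual code: the binding names (Reason, ShowReason) are hypothetical, and the exact lifecycle behavior depends on the ADF version and on whether you hide a component via its visible or its rendered attribute.

```
<!-- A minimal sketch (binding names are hypothetical).
     With visible="false" an ADF Faces component is only hidden on the
     client; it still exists server-side, so a required check or validator
     can fire against a field the user cannot see. -->
<af:inputText label="Reason"
              value="#{bindings.Reason.inputValue}"
              visible="#{bindings.ShowReason.inputValue}"
              required="true"/>
```

Using rendered="#{...}" instead removes the component from the component tree entirely, which avoids the phantom validation but changes how partial page rendering must be set up. In our project the simpler compromise was to always show fields that carry validations.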
Sometimes we could see and avoid such problems beforehand; others we learned the hard way. As you can see, your own experience can really make a difference here, so think hard about the possible difficulties that could arise when implementing a screen design in ADF. Finding issues at this point and thinking up an alternative design that works just as well for the UX designer as for ADF can save you a lot of time later on. This way 80% of the functionality is realized in 20% of the time. Iteration is key here too: the solution gets created, tested by users and improved where necessary. By using standard components and patterns time is saved on development, which can then be spent on improvements after receiving feedback.

To sum it all up

In this article you have read about UX design in combination with Oracle ADF. The question is whether it pays to hire a UX designer for your next project. The answer can be short: yes. I think that when interfacing with humans is involved, that alone is reason enough to hire a UX designer. However, it is not enough to hire a designer and let him go about his business. The designer must talk not only with the business side of your project but with the development team as well, to make sure the solution envisioned by the designer is feasible with the technology chosen to implement it. In the case of ADF this is the main point: when a UX designer is paired with ADF developers and they mix their knowledge, the greatest potential is unlocked. Try to create a UX design that uses design patterns already supported in ADF; that way you get the best usability and design possible, realized in a minimal amount of time. Another key point is iteration. It does not matter whether you are running an agile project or any other: validate your work with end users and improve your designs from there.
Agile projects are best suited for this, but you can implement it in other projects as well. Because you have included end users from very early on, they are more willing to adopt the product and there is less of a learning curve. When done right you get a happy customer and happy end users, because the application is easy to use and was made in less time and for less money.

Marcel Maas is Senior Oracle Developer at the Dutch company AMIS Services.
Sander Haaksma is UX Designer at the Dutch company UX Company.
The Picture Desk
Ten years ago it was common to hear the adage that every Oracle product has a database in it somewhere, and even today, with Oracle's portfolio of thousands of products, there's probably still some truth in it! You would expect this to be the case for the Oracle Database Appliance (ODA) of course, but what is new is that it can now run virtual machines, in particular those for WebLogic Server and Oracle Traffic Director (OTD, Oracle's software load balancer with iPlanet heritage, acquired via Sun).

Recently I was fortunate enough to work with a favorite mid-sized customer considering the ODA as part of a hardware refresh, and got hands-on experience of the latest ODA X3-2 model during a Proof of Concept (POC). I have been sharing those experiences on my blog but here, in OTech Magazine, I'm combining them for the first time along with some additional analysis.

Hardware

First, let us step back for a moment. For those of you who have not come across it before, the ODA (apparently pronounced to rhyme with "Yoda") is the smallest of Oracle's Engineered Systems. Whilst not boasting exotic components such as InfiniBand fabric, it is comparable in spirit to the other engineered systems like Exalogic and Exadata: you are buying not just hardware but also the software installation design and the maintenance approach.

Regarding the hardware itself, the ODA consists of two 1U "pizza box" X3-2 servers connected to one or two small storage arrays (sold as a package), including software for provisioning and applying updates. Each of the servers has two Intel E5-2690 2.9GHz processors, giving 16 cores per server, and 256GB RAM. The storage array(s) are directly attached to the servers via SAS2 and have 24 2.5" bays, populated with twenty 900GB 10k RPM spinning disks and four 200GB SSDs. For networking each server has four on-board 10GbE ports (copper, as you might expect) and a PCI card with dual 10GbE ports.
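As a quick sanity check, the headline capacity of one appliance can be totted up from the figures above. Note the storage total is raw capacity, before any mirroring or ASM redundancy the appliance applies.

```python
# Aggregate capacity of one ODA X3-2, using the figures quoted in the text.
servers = 2
cores_per_server = 16        # two Intel E5-2690 CPUs, 8 cores each
ram_gb_per_server = 256

spinning_disks = 20          # 900GB 10k RPM disks
ssds = 4                     # 200GB SSDs

total_cores = servers * cores_per_server
total_ram_gb = servers * ram_gb_per_server
raw_storage_gb = spinning_disks * 900 + ssds * 200  # raw, pre-redundancy

print(total_cores, total_ram_gb, raw_storage_gb)  # 32 512 18800
```

So a single box offers 32 cores, 512GB RAM and roughly 18.8TB of raw disk, which puts the "could run a mid-sized middleware estate on two servers" remark below into perspective.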
I have been specifying this sort of two-socket server, especially for middleware, for a number of years now and still consider it to be the sweet spot in x86-64 sizing. Intel has recently announced a new version of this processor family that has up to 12 cores and which, following a process shrink from 32nm to 22nm, promises even more performance. With virtualization I expect most, if not all, of my (mid-sized) customers could run their middleware production estates on a handful of, or maybe even just two, servers like these.

So that's the hardware. We could debate in particular whether the storage is sufficiently well-specified for Oracle enterprise deployments, but I've come to realize that, for the ODA's target customers, the performance is most probably "adequate" (as Rolls-Royce apparently used to say, in an understated manner, whenever asked by the motoring press!).

Virtualized Platform

Originally the ODA ran Oracle Linux in a purely physical manner, but since early 2013 it has had the ability to run virtual machines instead. Two of these, called "ODA Base", run the database (typically RAC); what you do with the remainder of the resources is up to you. In this mode the ODA runs Oracle VM on each server as the hypervisor and provides a command line interface called OAK CLI (i.e. oakcli) – not OVMM – to manage these two hypervisors and their associated VM repositories. The SAS storage controllers are passed through directly to the ODA Base VM (also known as oakDom1), so it is essentially connected to the I/O the same way a physical host would be. This only leaves the mirrored (boot) disks in each server to provide space for the virtual machines, though it is supported to connect to NFS storage.

The rear of the ODA X3-2 is a mass of cables. Note this was the POC system, so several cables weren't connected – in production for this customer there would be another 9 redundant power and data cables to fit into the same space.
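A short oakcli session gives a feel for the interface. This is a sketch only: the subcommands below follow the oakcli interface as documented for the Virtualized Platform, but the exact commands and flags vary by ODA software version, and the VM name is hypothetical, so consult `oakcli -h` and the ODA documentation rather than copying these verbatim.

```
# Check the appliance software version
oakcli show version

# List the VM templates and guest VMs known to the repository
oakcli show vmtemplate
oakcli show vm

# Start a guest VM (the name "wls_vm1" is hypothetical)
oakcli start vm wls_vm1
```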
The installation and configuration of the ODA Virtualized Platform is straightforward: you re-image with the virtualized system, copy over the ODA Base template and run a utility called the ODA Appliance Manager. By this point you will have a system ready to run databases or other virtual machines.

Oracle Database Appliance
WebLogic on the Oracle Database Appliance Virtualized Platform
Simon Haslam, Veriton Limited