What is a Data Warehouse?
• A data warehouse is a database used for
reporting and analysis.
• The data stored in the warehouse is uploaded
from the operational systems.
• The data may pass through an operational data
store for additional operations before it is used
in the data warehouse for reporting.
Benefits of a Data Warehouse
A data warehouse maintains a copy of information from the source
transaction systems. This architectural complexity provides the
opportunity to:
• Maintain data history.
• Integrate data from multiple source systems.
• Improve data quality.
• Present the organisation's information consistently.
• Provide a single common data model for all data of interest regardless of
the data's source.
• Restructure the data so that it makes sense to the business users.
• Restructure the data so that it delivers excellent query performance, even
for complex analytic queries.
• Add value to operational business applications.
History of Data Warehousing
• 1990 — Red Brick Systems, founded by Ralph Kimball,
introduces Red Brick Warehouse, a database management
system specifically for data warehousing.
• 1991 — Prism Solutions, founded by Bill Inmon, introduces
Prism Warehouse Manager, software for developing a data
warehouse.
• 1992 — Bill Inmon publishes the book Building the Data
Warehouse.
• 1995 — The Data Warehousing Institute, a not-for-profit
organisation that promotes data warehousing, is founded.
• 1996 — Ralph Kimball publishes the book The Data
Warehouse Toolkit.
• 2000 — Daniel Linstedt releases the Data Vault, enabling
real-time, auditable data warehouses.
Dimensional v Normalised
There are two leading approaches to storing data in a data warehouse
— the dimensional approach and the normalised approach.
• The dimensional approach, whose supporters are referred to as
“Kimballites”, follows Ralph Kimball’s view that the data
warehouse should be modelled using a Dimensional Model (DM).
For example, a sales transaction can be broken up into facts,
such as the number of products ordered and the price paid for
the products, and into dimensions, such as order date, customer
name, product number, order ship-to and bill-to locations, and
the salesperson responsible for receiving the order.
• The normalised approach, also called the 3NF model, whose
supporters are referred to as “Inmonites”, follows Bill Inmon's
view that the data warehouse should be modelled using Peter
Chen’s Entity-Relationship (ER) model, with which, of course,
we are all familiar!
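Kimball’s sales example above can be sketched as a tiny star schema. This is an illustrative sketch only, using plain Python dictionaries in place of database tables; all table and column names are made up for the example.

```python
# Dimension tables: descriptive context, keyed by surrogate keys.
dim_date = {1: {"order_date": "2024-01-15"}}
dim_customer = {1: {"customer_name": "Acme Ltd"}}
dim_product = {1: {"product_number": "P-100"}}
dim_salesperson = {1: {"salesperson": "J. Smith"}}

# Fact table: numeric measures plus foreign keys into each dimension.
fact_sales = [
    {"date_key": 1, "customer_key": 1, "product_key": 1,
     "salesperson_key": 1, "quantity_ordered": 3, "price_paid": 29.97},
]

def sales_by_customer(name):
    # A query "slices" the facts by joining out to a dimension.
    keys = {k for k, v in dim_customer.items() if v["customer_name"] == name}
    return sum(f["price_paid"] for f in fact_sales if f["customer_key"] in keys)

print(sales_by_customer("Acme Ltd"))  # 29.97
```

The point of the shape: measures live once in the fact table, while each dimension holds the descriptive attributes a business user filters and groups by.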
Kimball’s Bottom-Up Design
• In the bottom-up approach data marts are first
created to provide reporting and analytical
capabilities for specific business processes.
• Data marts contain, primarily, dimensions and facts.
• Facts can contain atomic data and, if necessary,
summarised data.
• A single data mart often models a specific business
area such as "Sales" or "Production."
• These data marts can eventually be integrated to
create a comprehensive data warehouse.
Inmon’s Top-Down Design
Inmon states that the data warehouse is:
• Subject-oriented: The data in the data warehouse is
organised so that all the data elements relating to
the same real-world event or object are linked together.
• Non-volatile: Data in the data warehouse are never
over-written or deleted — once committed, the data
are static, read-only, and retained for future reporting.
• Integrated: The data warehouse contains data from
most or all of an organisation's operational systems
and these data are made consistent.
• Data warehouse (DW) solutions often resemble hub
and spoke architecture.
• Legacy systems feeding the DW solution often
include customer relationship management (CRM)
and enterprise resource planning solutions (ERP),
generating large amounts of data.
• To consolidate these various data models, and
facilitate the extract, transform, load (ETL) process,
DW solutions often make use of an operational data
store (ODS).
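The consolidation step above can be sketched as a minimal ETL pass. The CRM and ERP record shapes here are hypothetical, and the ODS is reduced to a staging list where records from both feeds are made consistent before loading.

```python
crm_rows = [{"cust": "acme ltd", "rev": "100.50"}]       # extracted from CRM
erp_rows = [{"customer": "ACME LTD", "revenue": 49.50}]  # extracted from ERP

def transform(rows, name_key, rev_key):
    # Normalise field names, casing and types into one common model.
    return [{"customer": r[name_key].title(), "revenue": float(r[rev_key])}
            for r in rows]

# Stage both feeds in the ODS, then load the consolidated result.
ods = transform(crm_rows, "cust", "rev") + transform(erp_rows, "customer", "revenue")

warehouse = {}
for row in ods:
    warehouse[row["customer"]] = warehouse.get(row["customer"], 0.0) + row["revenue"]

print(warehouse)  # {'Acme Ltd': 150.0}
```

Real ETL tools add scheduling, error handling and change tracking, but the shape is the same: extract per source, transform into one common model, load into the warehouse.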
Data Warehouse Appliances
• IBM Netezza
• Oracle Exadata
• Kognitio 360
Demystifying the Data Warehouse
What is Data Mining?
• Data mining is the analysis step of the
Knowledge Discovery in Databases (KDD) process.
• It is a relatively young and interdisciplinary
field of computer science.
• It is the process of discovering new patterns
from large data sets involving methods at the
intersection of artificial intelligence, machine
learning, statistics and database systems.
The KDD Process
The knowledge discovery in databases (KDD)
process is commonly defined in five stages:
(1) Selection
(2) Pre-processing
(3) Transformation
(4) Data Mining
(5) Interpretation/Evaluation
The CRISP-DM Process
The CRoss Industry Standard Process for Data Mining
(CRISP-DM) defines six phases:
(1) Business Understanding
(2) Data Understanding
(3) Data Preparation
(4) Modelling
(5) Evaluation
(6) Deployment
The simplified process is (1) Pre-processing, (2) Data
mining and (3) Results validation
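The simplified three-step process can be sketched end to end on a toy market-basket data set. The transactions, the frequent-pair pattern and the support threshold are all illustrative choices, not part of any standard.

```python
from itertools import combinations
from collections import Counter

raw = [["bread", "milk", ""], ["bread", "butter"], ["milk", "bread", "butter"]]

# (1) Pre-processing: drop empty values, deduplicate within a basket.
baskets = [sorted(set(t) - {""}) for t in raw]

# (2) Data mining: count how often each item pair co-occurs.
pair_counts = Counter(p for b in baskets for p in combinations(b, 2))

# (3) Results validation: keep pairs meeting a minimum support of 2/3.
min_support = 2 / 3
frequent = {p: c for p, c in pair_counts.items()
            if c / len(baskets) >= min_support}

print(frequent)  # {('bread', 'milk'): 2, ('bread', 'butter'): 2}
```

Each stage feeds the next: cleaning affects what patterns are even countable, and validation decides which mined patterns count as knowledge.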
Spatial Data Mining
• Spatial data mining is the application of data mining methods
to spatial data.
• Spatial data mining follows along the same functions in data
mining, with the end objective to find patterns in geography.
• So far, data mining and Geographic Information Systems (GIS)
have existed as two separate technologies, each with its own
methods, traditions and approaches to visualisation and data
analysis.
• The immense explosion in geographically referenced data
occasioned by developments in IT, digital mapping, remote
sensing, and the global diffusion of GIS emphasises the
importance of developing data driven inductive approaches
to geographical analysis and modelling.
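One simple spatial-mining idea is to bin point locations into a coarse grid and report the densest cell as a "hotspot". The coordinates and the one-degree cell size below are illustrative choices.

```python
from collections import Counter

points = [(53.34, -6.26), (53.35, -6.25), (53.33, -6.27), (51.90, -8.47)]

def cell(lat, lon, size=1.0):
    # Snap a coordinate to its grid cell (floor division handles negatives).
    return (int(lat // size), int(lon // size))

counts = Counter(cell(lat, lon) for lat, lon in points)
hotspot, n = counts.most_common(1)[0]
print(hotspot, n)  # (53, -7) 3
```

Real spatial data mining uses proper spatial indexes and distance measures, but grid aggregation shows the core idea: patterns emerge only once observations are grouped by geography.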
Build a KPI Dashboard in 5 Minutes
Build a KPI Dashboard in 5 minutes
with no programming in Excel 2010
Choose six of the keywords above!
Data Visualisation Defined
Data visualisation is the study of the visual
representation of data, meaning information that has
been abstracted in some schematic form, including
attributes or variables for the units of information.
Tufte and Data Visualisation
‘The success of a visualisation is based on deep
knowledge and care about the substance, and the
quality, relevance and integrity of the content.’
5 Principles of Graphic Display
1. Above all else, show the data.
2. Maximise the data-ink ratio.
3. Erase non-data-ink.
4. Erase redundant data-ink.
5. Revise and edit.
The Beauty of Data Visualisation
A Data Mining & Data Visualisation Tool
• The Gapminder application is the brain-child
of Hans Rosling.
• He thought of the title when he heard the
prompt ‘mind the gap’ on the London Underground.
• He is Professor of International Health at the
Karolinska Institute, Stockholm, Sweden.
• He is a Doctor of Medicine and a Doctor of Philosophy.
• Hans uses Gapminder Desktop, which allows you to
show animated statistics from your own laptop.
• Use Gapminder World offline.
• Save a list of your own graphs.
• Updates automatically when new data is available.