Managing the data explosion
The causes, effects and solutions of growing data volumes
By Julian Stuhler
Director, Triton Consulting & IDUG President
Drowning in data
As Information Technology becomes ever more prevalent in nearly every aspect of our lives, the
amount of data generated and stored continues to grow at an astounding rate. According to IBM,
worldwide data volumes are currently doubling every two years. IDC estimates that 45GB of data
currently exists for each person on the planet: that’s a mind-blowing 281 Billion Gigabytes (281
Exabytes) in total. While a mere 5% of that data will end up on Enterprise data servers, it is forecast
to grow at a staggering 60% per year, resulting in 14 Exabytes of corporate data by 2011.
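To put that growth rate into perspective, here is a minimal sketch in plain Python, using only the figures quoted above, that shows how quickly a 60% annual growth rate compounds (purely illustrative, not a forecast of any particular estate):

    # Purely illustrative: how quickly a data volume compounds at the quoted
    # 60% annual growth rate, starting from the 14 EB corporate figure above.
    volume_eb = 14.0            # exabytes of corporate data (figure quoted above)
    annual_growth = 0.60        # forecast 60% growth per year

    for year in range(1, 6):
        volume_eb *= 1 + annual_growth
        print(f"After year {year}: {volume_eb:.0f} EB")

    # At 60% a year the volume roughly doubles every 18 months and grows
    # more than ten-fold in five years (1.6 ** 5 is about 10.5).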
This article takes a look at some of the reasons behind this data explosion, and some of the possible
effects if the growth is not managed. We’ll also examine some of the ways in which these problems
can be avoided.
Industry Trends
A major trend over the last few years has seen many organisations implementing ERP and CRM
solutions. This in turn has caused a dramatic increase in the amount of data we are storing about
our customers, prospects, partners and suppliers.
Companies are also investing in ever more sophisticated business intelligence and analytics. In an
increasingly competitive marketplace, the ability to base business decisions on solid, reliable and
timely management information is becoming a key differentiator, but trend analysis can require very
large amounts of historical data to be stored and managed.
The trend towards company consolidation is not a new one, but the current economic situation has
inevitably resulted in a significant increase in the number of mergers and acquisitions. This is
creating a huge increase in data volumes, with the associated data duplication and application
retirement issues. Organisations are faced with not only managing all of their own data, both
historic and current, but also absorbing this influx of additional data from other parties. Imagine the “data
headache” of combining all of the ERP, CRM, Business Intelligence and Analytic systems from
different organisations into one manageable enterprise system.
Legislation
Corporate compliance legislation has had a major effect on how we use, store and maintain our
data. The requirements placed on organisations by HIPAA, SOX, Basel II and others mean that many
companies are having to keep hold of more data, and for longer periods. Just as importantly, that
retained data rapidly transforms from a corporate asset to a liability once the legal minimum
retention period has expired, making it vital that such data can be accurately identified and deleted.
Organisations must adhere to this legislation in order to avoid the cost of court appearances,
heavy fines and the resultant damage to their brand.
Technical Trends
New capabilities within the databases used to store corporate information are another major driver
of data growth. For example, DB2 now supports XML and LOBs (“large objects” such as audio, video,
images, etc). The ability to store this kind of data alongside more traditional structured information
can be very useful, but can also have a huge impact on the overall size of the database.
Other technical trends that are contributing to database growth include storage of data in Unicode
format (which can often expand overall database size by 10%-50% depending on the data), and
duplication of databases due to replication requirements and/or backup strategies.
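As a rough, self-contained illustration of the Unicode point (plain Python, not a measurement of any particular database), the same western European text occupies noticeably more space once it is stored in a Unicode encoding:

    # Illustrative only: compare the storage footprint of the same western
    # European text in a single-byte encoding and in two Unicode encodings.
    sample = "Größe, café, naïve - typical western European text" * 1000

    latin1_size = len(sample.encode("latin-1"))
    utf8_size = len(sample.encode("utf-8"))
    utf16_size = len(sample.encode("utf-16-le"))

    print(f"latin-1: {latin1_size:,} bytes")
    print(f"UTF-8:   {utf8_size:,} bytes ({utf8_size / latin1_size:.0%} of latin-1)")
    print(f"UTF-16:  {utf16_size:,} bytes ({utf16_size / latin1_size:.0%} of latin-1)")

    # The expansion depends entirely on the data: pure ASCII does not grow at
    # all in UTF-8, while fixed two-byte encodings roughly double character data.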
Finally, there’s the perennial problem of removing old or obsolete data once it has reached the end
of its useful life. Application data archiving is often considered an optional extra, and even if it is
included in the initial project plan it is often the first item to be postponed to a later release.
Effects of rapid data growth
This unprecedented growth in data volumes is having a significant effect on many organisations.
Perhaps the most obvious impact is on operational costs. More staff time is required for routine
maintenance and data-related exception handling such as out-of-space conditions and re-
partitioning. As the database increases in size, so too does the CPU cost of running batch operations
and routine housekeeping. Ongoing running costs also increase due to the additional disk space
required, and storage and processing capacity upgrades may be needed even though they often
haven’t been budgeted for.
Painful though they may be, increases in operational costs aren’t the end of the story. What price
can you place on customer satisfaction? Performance for critical application processes can degrade
as data volumes increase, resulting in missed service level objectives. Teams across the whole
organisation may be affected, with call centre staff unable to access the information they need
quickly enough to satisfy customer demand.
Coping with the data explosion
Various coping strategies are available to address the issues associated with rapid data growth.
Measures such as implementing database partitioning and data compression, or purchasing extra
CPU/DASD, can help (a rough illustration of the space savings compression can offer follows the
list below). However, these measures have their own costs, and many issues still remain, including:
– Disaster recovery times
– Legal risk of exceeding minimum data retention periods (data as a liability, not an
asset)
– DBA effort to manage/tune workloads and databases
– Cost of spending IT budget on maintaining current capacity, not innovating
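For a sense of what compression alone can buy, here is a minimal sketch using Python’s standard-library zlib on synthetic invoice-style rows. Real databases use their own compression schemes, so the ratio below says nothing about any particular system, but the principle of trading CPU for disk space is the same:

    import zlib

    # Illustrative only: the sample rows are synthetic, so the ratio simply
    # shows the principle rather than what a real database would achieve.
    rows = [
        f"{1000 + i}|2008-06-{(i % 28) + 1:02d}|ACME Ltd|INVOICE|GBP|{i * 7.13:.2f}|PAID\n"
        for i in range(10_000)
    ]
    data = "".join(rows).encode("utf-8")

    compressed = zlib.compress(data, level=6)
    print(f"raw:        {len(data):,} bytes")
    print(f"compressed: {len(compressed):,} bytes")
    print(f"ratio:      {len(data) / len(compressed):.1f}:1")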
So, what are the alternatives?
Implement a data archiving strategy!
According to a recent Gartner report, “database archiving significantly lowers storage costs for
primary storage by moving older data to less-costly storage”. The report goes on to say that
“archiving reduces the size of primary storage, resulting in improved application performance and
lower storage requirements for copies of the database for testing, backup and other purposes”.
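In essence, the mechanism described here is simple: periodically move rows that have passed an agreed age or business threshold out of the live tables and into cheaper archive storage, then remove them from the primary database. Below is a minimal sketch using Python’s built-in sqlite3 module purely for illustration; the table names, columns and cutoff date are invented for the example, and a real implementation would run against the enterprise database and archive store.

    import sqlite3

    # Minimal, self-contained sketch of an archive-and-purge step.  All names
    # and the cutoff date are invented for illustration.
    CUTOFF = "2002-01-01"   # e.g. anything older than the agreed retention threshold

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE orders  (id INTEGER PRIMARY KEY, order_date TEXT, detail TEXT);
        CREATE TABLE archive (id INTEGER PRIMARY KEY, order_date TEXT, detail TEXT);
        INSERT INTO orders VALUES (1, '1999-03-14', 'historic order'),
                                  (2, '2008-11-02', 'current order');
    """)

    with con:  # one transaction: copy qualifying rows to the archive, then purge them
        con.execute("INSERT INTO archive SELECT * FROM orders WHERE order_date < ?", (CUTOFF,))
        con.execute("DELETE FROM orders WHERE order_date < ?", (CUTOFF,))

    print(con.execute("SELECT COUNT(*) FROM archive").fetchone()[0], "row(s) archived")
    print(con.execute("SELECT COUNT(*) FROM orders").fetchone()[0], "row(s) left in the live table")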
Also, you may think that archiving is only applicable to the largest of applications, but in the same
report Gartner states that “Performance and cost improvements can be sizeable, even with
applications that have less than 200GB of data”.
So, it would appear that a data archiving strategy is the best way for organisations to cope with
growing data, giving cost savings and improved application performance. However, once the need
to archive has been agreed, many new questions arise:
• Build vs. buy
• Flexibility vs. speed
• Software expenditure vs. staff time costs
These are the tough decisions which need to be made before a data archiving strategy can be put
into place. While the temptation to build in-house may be strong, is there really justification for
doing so? Can staff be spared to work on this project? Although the up-front cost may be lower,
what about the long-term cost, not just in staff time for the initial project but also ongoing, as
expertise is lost through staff movement? What about the need to implement the strategy across
multiple platforms within the same organisation? Can project staff be spared from each area of the
organisation to work on developing a bespoke solution for their operating platform?
The answer may well be a bought-in solution that works across multiple platforms, bringing a
scalable solution to the enterprise without diverting precious staff time into separate, long-term
development and test projects to create a bespoke solution.
So, it seems that there are ways to control data growth before it controls us. By implementing a
thorough archiving policy and an “intelligent archiving” system, we can manage data throughout its
lifecycle.
Triton Consulting are Information Management Specialists and IBM Premier Business Partners. For
more information on Triton and the solutions they provide visit www.triton.co.uk