According to a recent Gartner survey, 62% of respondents consider data quality the top priority in data and analytics initiatives, yet only 27% of practitioners completely trust their data. Do these statistics align with how you feel about your organization’s data? If so, how can you boost confidence in your data and ensure you can trust it for fast, confident decision-making? It’s time to take away the doubts about your data and reimagine data quality. After all, the quantity of data we create and manage continues to explode, with no signs of slowing down.
Join us for this fireside chat as we explore how technology advancements are requiring us to reimagine data quality. We’ll cover the many benefits of this modern approach, including how these advances enable you to:
• Harness the explosion of artificial intelligence and machine learning to move faster and gain the biggest ROI
• Improve data quality efficiencies with hybrid cloud operations
• Empower new users to onboard and ramp up initiatives faster
• Break down data silos with a modern data catalog and easily access the datasets most relevant to your needs – so you can get to analysis and reporting faster
Reimagining Data Quality: Key Modern Considerations
1. Reimagining Data Quality: Key Modern Considerations
Emily Washington
SVP, Product Management
Precisely
Scott Arnett
Sr. Director, Product Management
Precisely
A conversation with
2. Moderated by Mike Meriton
Co-Founder & COO, EDM Council
• Joined EDM Council full-time 2015 to lead Industry Engagement
• EDM Council Co-Founder & First Chairman (2005-2007)
• EDM Council Finance Board Chair (2007-2015)
• Former CEO GoldenSource (2002-2015)
• Former Executive for D&B Software and Oracle
• FinTech Innovation Lab – Executive Mentor (2011 – Present)
4. Poll 1: What is the current level of data quality maturity in your organization?
• Not Initiated
• Early Stage
• In Progress
• Mature
5. Organizational needs are changing…
THEN
• Responsive – Data Mgmt. / IT teams cleaning data post-entry
• Operational in use case – Focused on supporting business function efficiency and effectiveness
• On-premises data stores – On-prem databases supporting operational systems and BI

NOW
• Proactive – Data engineering embedding data quality to build and maintain data pipelines
• Analytics-driven – Focus on analytics, artificial intelligence & machine learning, and decision-intelligence use cases
• Data cloud – Companies now migrating and centralizing data in the cloud
6. …and so are data quality needs
THEN
• Manual deployment processes – Manually deploy and maintain software and data quality processes
• Technical SME to manage DQ – Dedicated resources to configure and manage data quality
• Data replication to validate – Replicate data within the data quality tool to identify data issues

NOW
• Automated deployment processes – Automated access to the latest features and data quality process deployments
• Intelligent data quality and usability – Leverage semantics, profiles, and observations in a seamless user experience to enable more users
• Native data quality execution – Run data quality natively within the environment where the data is stored
7. Poll 2: Which of these trends is most impacting your business and related data quality initiatives in 2023?
• Rapidly increasing volume and variety of data sources
• Data-driven decision-making culture
• Artificial Intelligence and Machine Learning applications
• Data integration and interoperability
• Data Democratization
9. The leader in data integrity
Our software, data enrichment products and
strategic services deliver accuracy, consistency, and
context in your data, powering confident decisions.
• 99 of the Fortune 100
• 100 countries
• 2,500 employees
• 12,000 customers
Brands you trust, trust us
Data leaders partner with us
10. Join EDM Council and our membership community of companies…
• Worldwide – Americas, Europe, Africa, Asia, Australia
• 350+ Member Firms – Cross-industry, including Regulators
• 25,000+ Professionals
edmcouncil.org
These organizational trends have a significant impact on how the data quality needs of the business are addressed.
We see business teams not just participating in initiatives but jointly leading them. Business teams need to ensure that data can be trusted, and they are now active participants in the resolution process.
These processes can now become more automated through modern deployment methods and built-in intelligence.
This intelligence can be used to guide users in building the appropriate rules, to perform sophisticated matching, and to monitor the data.
This observation process can proactively alert users to potential issues so they can be investigated and resolved before any decisions are made based on that data. All of this provides a more seamless user experience by leveraging semantics and metadata as part of the process.
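The observation-and-alert idea described here can be sketched very simply: profile a batch of records, then compare the observed metrics against thresholds before the data is used downstream. The field name, thresholds, and sample records below are hypothetical illustrations, not any vendor's actual API.

```python
# Minimal sketch of proactive data observation: compute basic profile
# metrics for a batch and raise alerts *before* the data is consumed.
# Field names and thresholds are invented for illustration.

def profile(records, field):
    """Return basic observations for one field: row count and null rate."""
    total = len(records)
    nulls = sum(1 for r in records if r.get(field) is None)
    return {"rows": total, "null_rate": nulls / total if total else 0.0}

def check(observations, max_null_rate=0.05, min_rows=100):
    """Compare observations against thresholds and collect alert messages."""
    alerts = []
    if observations["rows"] < min_rows:
        alerts.append(f"low volume: {observations['rows']} rows")
    if observations["null_rate"] > max_null_rate:
        alerts.append(f"null rate {observations['null_rate']:.1%} exceeds threshold")
    return alerts

batch = [{"email": "a@example.com"}, {"email": None}, {"email": "b@example.com"}]
obs = profile(batch, "email")
print(check(obs, max_null_rate=0.05, min_rows=100))
```

A real observability tool would track these metrics over time and learn baselines; the point of the sketch is only that the alert fires before anyone makes a decision on the bad batch.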
In addition, because companies are migrating and centralizing data in the cloud, they want data quality validations to occur where the data resides, without extracting it from the centralized location to apply the rules. This requires data quality processes that can run natively where the data sits.
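Running checks natively where the data sits usually means pushing the validation down as a query, so only violation counts leave the store, never the rows themselves. A minimal sketch, using an in-memory SQLite database as a stand-in for a cloud warehouse (table, column, and rule are hypothetical):

```python
import sqlite3

# Stand-in for a cloud data store: the quality rule is expressed as SQL
# and executed where the data lives, instead of replicating rows into a
# separate data quality tool.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, email TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [(1, "a@example.com"), (2, None), (3, "not-an-email")],
)

# Hypothetical rule: email must be present and contain '@'.
violations = conn.execute(
    "SELECT COUNT(*) FROM customers "
    "WHERE email IS NULL OR email NOT LIKE '%@%'"
).fetchone()[0]

print(f"{violations} of 3 rows fail the email rule")  # 2 of 3
```

Against a warehouse such as Snowflake or Databricks the connection object would differ, but the pattern is the same: the rule travels to the data, not the other way around.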
It used to be that organizations had data management projects that were led by IT teams and focused primarily on responding to issues.
They would target operational use cases and look to improve efficiency and effectiveness.
They typically addressed data that resided on premises at the organization.
Now we are seeing proactive data engineering with data engineers embedding data quality within data pipelines.
There is a huge focus on analytics, including artificial intelligence & machine learning use cases.
In addition, companies are now migrating and centralizing data in the cloud using cloud data providers such as Snowflake and Databricks.
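The pipeline-embedded data quality that data engineers now build can be sketched as a validation gate: each rule is a predicate, and a batch only moves on to load if its rows pass. The rules, stage names, and records below are invented for illustration, not a specific product's interface.

```python
# Minimal sketch of a pipeline stage with embedded data quality:
# rows that fail any rule are quarantined instead of loaded.
# All rules and records here are illustrative.

RULES = {
    "id_present": lambda row: row.get("id") is not None,
    "amount_non_negative": lambda row: row.get("amount", 0) >= 0,
}

def validate(batch):
    """Split a batch into rows passing all rules and rows failing any."""
    good, bad = [], []
    for row in batch:
        failed = [name for name, rule in RULES.items() if not rule(row)]
        (bad if failed else good).append((row, failed))
    return [r for r, _ in good], bad

def run_pipeline(batch):
    good, bad = validate(batch)
    # A real pipeline would load `good`, quarantine `bad`, and alert;
    # here we just report the counts.
    return {"loaded": len(good), "quarantined": len(bad)}

batch = [{"id": 1, "amount": 10}, {"id": None, "amount": 5}, {"id": 3, "amount": -2}]
print(run_pipeline(batch))  # {'loaded': 1, 'quarantined': 2}
```

Placing the gate inside the pipeline, rather than cleaning data after the fact, is exactly the "responsive vs. proactive" shift the slides describe.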