While OCR technology has evolved substantially, it still struggles with handwriting recognition. However, modern machine learning offers new tools to solve long-standing challenges. New software solutions can now optimize complex business processes by turning even hand-filled paper documents into business-ready data automatically.
Learn how leading insurance, financial services, and healthcare companies are leveraging new technology to automate their paper-to-digital operations (including handwritten forms). You’ll hear how New York Life and five of the top 10 U.S. insurers have gained efficiencies and enabled critical business improvements with a 99.9% accuracy rate.
7. Problems Organizations Face
01 • The majority of critical business processes still rely on paper.
02 • It is still common practice to have people type HANDWRITTEN information from paper forms into downstream systems.
03 • Manual data entry processes are error-prone, time-consuming, and expensive.
04 • Analytics teams are unable to access the most valuable legacy customer data from documents.
8. Manual Data Entry
• Prone to human error
• Requires multiple reviews to avoid rework
• Slow responses contribute to poor service
• Expensive to train and hard to scale
• Only works during regular business hours
OCR
• Requires >95% accuracy
• Fails to effectively recognize handwriting
• Expensive, license-based model (hard to scale)
• Maintenance and updates require IT resources
• Lacks end-to-end business/workflow logic
The Challenges With Traditional Solutions
9. Captiva/Datacap/Kofax
• Typically on premise
• Complex deployment and configuration
• Requires a team for Q/A due to 70–75% accuracy
• High annual support costs
• Costly and complex back-end integrations
Captricity & Zia
• Cloud based with high security standards
• In production in weeks, not months
• 99.5%+ accuracy on structured, semi-structured, and unstructured content
• Straight-through processing
• Simple pricing model
• Easy integrations
How Do Our Solutions Compare? They Don’t.
Captiva/Datacap/Kofax: use to index your documents. Captricity & Zia: use to process your documents.
10. Use Cases: Operations and Analytics
• Contract change requests
• Collateral forms
• Withdrawal forms
• Death certificates
• Applications/Enrollment forms (new business)
• Change forms
• Claims forms
• Electronic payment authorizations
Analytics use cases involve using Captricity on archived/historic documents to unlock missing or unavailable data.
Operational use cases involve using Captricity day-to-day, eliminating manual entry.
Familiar forms and workflows:
11. Value Delivered
• Same-day turnaround time; time-to-value <30 days
• Cost savings of 50–70%+; return on investment <3–8 months
• Enterprise-grade security; 100% HIPAA compliant
• Infinite, elastic platform scalability
• Significant data accuracy improvement (99.5%+, even on handwriting)
12. The True Value of Captricity
Processing work with traditional capture and indexing:
• Case Creation, Data Entry, & Clerical: 38% (Low Value)
• Information Compilation, Inventory, & Performance Management: 42.5% (Moderate Value)
• Analysis & Decision Making: 19.5% (High Value Utilization)
Processing work after Captricity:
• Case Creation, Data Entry, & Clerical: 10% (Low Value)
• Information Compilation, Inventory, & Performance Management: 17% (Moderate Value)
• Analysis & Decision Making: 73% (High Value Utilization)
14. CHALLENGE:
Digitize large volumes of life insurance application forms during peak business season, when thousands arrive per month through scans and emailed PDFs. Processing teams are bogged down reviewing simple applications that don’t really need their attention.
SOLUTION:
Provide straight-through processing of applications, allowing them to skip the queue and freeing processors to focus on higher-value tasks.
Operational Efficiency: Streamlining the Onboarding Process
16. CHALLENGE:
No easy access to cause-of-death data from death certificates, which are submitted along with death claims but typically filed away without further analysis. (Hundreds of potential templates to identify, sort, and capture.)
SOLUTION:
Enabled big-data analytics for underwriting innovation, specifically improving fraud-detection capabilities by leveraging patterns found in historical customer data.
Analytics Enablement: Extracting Data from 1M+ Death Claims
23. Discover Data Remediation Opportunities: Addresses
Example:
Incorrect: 28263 HH Williams Rd, Angie, LA 70436
Correct: 28263 H H Williams Rd, Angie, LA 70426-1823
Address verification and correction via web API
Able to improve:
• Unstandardized addresses
• ZIP codes
• Street, city, and state matching
• Full-address verification
Key Insight: 41% of addresses were improvable; 12% were invalid.
Handwritten Address Data:
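The remediation logic above can be sketched in code. This is a minimal, hypothetical triage step, not the actual address-verification API: it only classifies the ZIP portion of a captured address the way a verification service would decide whether an address field is already complete, improvable (e.g. missing its ZIP+4), or invalid.

```python
import re

def classify_zip(zip_code: str) -> str:
    """Triage a captured ZIP code before calling a verification API:
    'valid' for a full ZIP+4, 'improvable' for a bare 5-digit ZIP
    (a verification service could append the +4), 'invalid' otherwise."""
    if re.fullmatch(r"\d{5}-\d{4}", zip_code):
        return "valid"
    if re.fullmatch(r"\d{5}", zip_code):
        return "improvable"
    return "invalid"

# The example above: captured "70436" vs. corrected "70426-1823"
print(classify_zip("70436"))       # improvable: could gain a ZIP+4
print(classify_zip("70426-1823"))  # valid
```

In production this check would gate which records are sent to the web API for correction, rather than doing the correction itself.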
24. Discover Data Quality Issues: Phone
Phone Number Data:
Key Insight: 91% of phone numbers listed above “home phone number” were not landlines.
Phone number verification via web phone validation API
Able to verify:
• Mobile vs. landline
• Active vs. inactive
• Registered name
• Carrier
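Once the validation API has returned its lookups, the “share of non-landlines” insight is a simple aggregation. The function below is a sketch; the record schema with a `line_type` key is an assumption for illustration, not the actual API response format.

```python
def share_non_landline(lookups):
    """Fraction of phone-lookup results whose line type is not 'landline'.
    Each lookup is assumed to be a dict with a 'line_type' key, e.g.
    'landline', 'mobile', or 'voip' (hypothetical schema)."""
    if not lookups:
        return 0.0
    non_landline = sum(1 for r in lookups if r.get("line_type") != "landline")
    return non_landline / len(lookups)

# Toy sample: 3 of 4 "home" numbers turn out not to be landlines
records = [
    {"number": "555-0100", "line_type": "mobile"},
    {"number": "555-0101", "line_type": "landline"},
    {"number": "555-0102", "line_type": "voip"},
    {"number": "555-0103", "line_type": "mobile"},
]
print(share_non_landline(records))  # 0.75
```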
26. What is Business Process Automation?
Process model to orchestrate an on-site patient visit, designed with a BPMN editor and powered by a BPM engine
Data Enrichment: link input data to enrichment/validation actions
28. A Model for Process Optimization: Process Analytics
Identify Workflow → Collect Inputs → Run POC → Analyze Data
Identify Workflow: Customer selects a workflow to use as a starting point to build a clear business case around.
Collect Inputs: Customer provides a batch of sample forms (1K+); both parties determine critical data fields and discover basic business rules.
Run POC: Sample forms run through Captricity’s crowd-guided digitization platform.
Analyze Data: Captricity’s data science team performs an analysis of the sample data:
• Form design
• Metadata analysis
• Errored submissions
• Data validation
• Data enrichment
• Fraud detection
• Intelligent delivery
Step-by-Step Project: 1–2 Weeks
The problems Captricity found many organizations face are:
The majority of critical business processes still rely on paper.
Common practice is (still) to have people type handwritten information from paper forms into downstream systems.
Manual data entry processes are error-prone, time-consuming, and expensive.
Analytics teams are unable to access the most valuable legacy customer data from documents.
NIGO Problems
Complex workflows
What we hear in the market:
We give you great (and getting greater) data, but it gets “stuck.”
Need to be “automation ready.”
Compare to IBM, EMC, Kofax
Up to 5K crowd-sourced workers working for us: a “global elastic pool of workers.”
Same-day turnaround for any volume of documents
As much as 50% less expensive than existing solutions
100% HIPAA compliant
Infinite scalability for all document types
Policy Change Forms
SLF example.
E-Apps aren’t always e-apps
Here is how your data flows through Captricity.
1. Once document images (i.e., scans or photos) have been uploaded to Captricity, they are captured and sorted via Captricity’s Intelligent Document ID capabilities, and the data is then put through Shreddr™ to isolate individual pieces of information, or data fields (see: HIPAA compliance). We call this process “shredding,” and the resulting small data pieces are referred to as “shreds.”
2. Next, shreds are sent through Captricity’s machine learning digitization engines to generate predicted values for each piece of data. To ensure 99.9 percent accuracy, the machine-predicted values are then sent to crowdsourced workers to VALIDATE that shreds have been digitized correctly.
Once validated, the data can then be VERIFIED using the customer’s existing, internal systems of record (policy admin systems, claims management systems, CRMs, beneficiary databases, etc.) or via third-party web services such as LexisNexis (address verification), Google (confirming an obituary for life claims), etc.
Data can also be “enriched” via third-party web services such as LexisNexis (the latest personal contact information, for example) to ensure your customer data is business-ready for all lines of business in your organization.
3. Finally, business-ready data is formatted and delivered back to the customer; we’ve worked with nearly all common industry formats, from basic CSV to ACORD.
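The three numbered steps above can be sketched as a small pipeline. All of the names here (`shred`, `digitize`, `deliver`, and the stand-in predict/validate callables) are illustrative only, not Captricity’s actual API:

```python
import csv
import io

def shred(form_id, fields):
    """Step 1: break one captured form into context-free 'shreds',
    one per data field (field name -> image reference)."""
    return [{"form_id": form_id, "field": name, "image_ref": ref}
            for name, ref in fields.items()]

def digitize(item, predict, validate):
    """Step 2: machine prediction followed by (crowd) validation."""
    value = predict(item["image_ref"])
    return {**item, "value": validate(item, value)}

def deliver(rows, fieldnames):
    """Step 3: format business-ready data; CSV is one common target."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# Stand-ins for the ML engine and the crowd-validation step:
predictions = {"img-17": "Smith", "img-18": "70436"}
shreds = shred("F-001", {"last_name": "img-17", "zip": "img-18"})
done = [digitize(s, predictions.get, lambda s, v: v) for s in shreds]
print(deliver(done, ["form_id", "field", "value"]))
```

In a real deployment the predict callable would be the ML digitization engine and validate would route the shred image to a crowdsourced worker; the pipeline shape stays the same.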
Captricity ensures a secure environment for your data when submitting shreds to our crowdsourced workers.
Each shred of data is pulled from our database via secure API commands and delivered to our crowdsourcing platform to be validated.
The worker will be able to view a non-contextual image of the shred on their screen (delivered via HTML page), but no data will actually be stored on their computer or a server outside of the U.S.
The data validation process is designed so that each worker is assigned to verify a given class or type of data, such as last name, from many forms rather than a group of complete forms. This ensures that every worker sees only one piece of data, or shred, from a single form.
The worker receives no additional information along with the shred image that would correlate the image to a user, image file, organization or API token. Once the worker has completed their work on a single shred of data, the API terminates the connection and all data associated with that shred becomes permanently unavailable to the worker.
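That assignment rule can be sketched as follows. The function is a hypothetical illustration of the routing described above (every shred of a given field type, across many forms, goes to a single worker), not the production dispatcher; it assumes at least as many workers as field types so the one-field-per-worker guarantee holds.

```python
from collections import defaultdict

def assign_by_field_type(shreds, workers):
    """Route every shred of a given field type (e.g. all 'last_name'
    shreds across many forms) to one worker, so no worker ever sees
    two fields from the same form."""
    field_types = sorted({s["field"] for s in shreds})
    assert len(workers) >= len(field_types), "need one worker per field type"
    assignments = defaultdict(list)
    for worker, field in zip(workers, field_types):
        assignments[worker] = [s for s in shreds if s["field"] == field]
    return dict(assignments)

# Two forms, two field types each:
shreds = [{"form_id": f, "field": fld} for f in ("F1", "F2")
          for fld in ("last_name", "ssn")]
assigned = assign_by_field_type(shreds, ["worker_a", "worker_b"])
# worker_a sees only last_name shreds; worker_b sees only ssn shreds
```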
Address verification & correction via SmartyStreets
SmartyStreets was able to remediate 41% of incoming addresses that were incorrect.
Need to know whether phone numbers are landline vs. mobile
Insight: 91.06% of phone numbers listed above “home” were not landlines
Great way to stand-up a mobile (SMS) customer engagement program using data you already have or are collecting
Should add the Collect Data section, maybe up until Build Requirements, to talk about collecting not just forms to run the POC but also the many types of business rules (system, process, human, etc.).
Also add business rules analysis to the Analyze Data section.
Great case for Hoover / microservices to grab and use business rules.
For robotics: “We want to deliver automation-ready data so that you can train the robots to hit the 80/20 rule. So we send you the enriched, validated data, but also the associated metadata, so that you can better train the bot.”
Should we include the type of training data in the “Intelligent Delivery” side of our analysis? (JDF: Can talk about this in terms of RPA and its use of the metadata we provide as part of Automation Ready Data to manage rules for intelligent delivery.)