Debugging AI is (almost) nothing like verifying and debugging traditionally engineered software systems, and the discipline is still in its infancy. In his talk, Dr Christian Betz will present a brief overview of the field of AI, explaining the properties of different approaches to AI, the contexts in which AI systems shine, and the reasons for the current hype. From there, Christian will discuss prominent failures of AI systems and show ways to hack them. He'll explain why verifying and testing AI is hard, and will guide us through different approaches to tackling this problem.
7.
Chapter 2 – Why is AI important?
AI is important because, for the first time, traditionally human capabilities can be undertaken in software inexpensively and at scale. AI can be applied to every sector to enable new possibilities and efficiencies.

Chapter 3 – Why has AI come of age?
Specialised hardware, availability of training data, new algorithms and increased investment, among other factors, have enabled an inflection point in AI capability. After seven false dawns since the 1950s, AI technology has come of age.
https://www.mmcventures.com/wp-content/uploads/2019/02/The-State-of-AI-2019-Divergence.pdf
8. Speed of development
https://www.mmcventures.com/wp-content/uploads/2019/02/The-State-of-AI-2019-Divergence.pdf
Venture capital investment in AI has increased fifteen-fold in five years (Fig. 23), to an estimated $15bn in 2018 (CB Insights, MMC Ventures).

Today's leading technology companies – including Apple, Amazon, Facebook, Google, IBM, Microsoft and Salesforce – are also spending heavily on research and personnel to develop and deploy AI. Internal corporate investment in AI, among just the top 35 high-tech and advanced-manufacturing companies investing in AI, may be 2.0x to 4.5x greater than the capital invested by venture capital firms, private equity firms and other sources of external funding combined (McKinsey), further catalysing progress.

Fig. 23: Venture capital investment in AI has increased 15-fold in five years. Source: CB Insights, MMC Ventures.
[Fig. 23 chart: number of AI deals (left axis, 0–1,200) and disclosed AI deal investment in $bn (right axis, 0–16), 2012–2018E.]
9. High valuation
10. It's important to understand…
that these factors impact your AI project.
11. What is AI?
Strong AI vs. Weak AI
AI is used as a generic term for a set of tools to cope with a certain set of problems.
Machine Learning is a subset of this AI toolset: "programming by example".
Other subsets are knowledge representation, planning and reasoning.
AI uses probabilistic logic instead of Boolean logic.
12.
"The brown quick fox jumps over the lazy dog"
https://www.mcohen.io/2017/machine-learning-explained-in-three-easy-steps/
brown quick
13. Properties of AI problems
Hard to code "by hand"
• Requires non-formalized knowledge (experiential knowledge)
• Or even not-yet-existing knowledge
Afflicted with uncertainty (or missing information)
Changes rapidly (making it unreasonable to adapt the software manually)
16. Recap: Properties of AI problems
Hard to code "by hand"
• Requires non-formalized knowledge (experiential knowledge)
• Or even not-yet-existing knowledge
Afflicted with uncertainty (or missing information)
Changes rapidly (making it unreasonable to adapt the software manually)
17.
These properties make verification hard "by design".
Recap: Properties of AI problems
Hard to code "by hand"
• Requires non-formalized knowledge (experiential knowledge)
• Or even not-yet-existing knowledge
18. Problems with unknown truth
Photo by Pixabay from Pexels
For example: a medical classification problem.
What is the correct diagnosis? What is the correct therapy?
You won't know (maybe until your patient either recovers or dies).
The same is true for customer support systems. Is your customer satisfied?
You'll probably only know by losing him/her as a customer.
19. ML replicates bias in the data
https://towardsdatascience.com/gender-bias-word-embeddings-76d9806a0e17, Photo by rawpixel.com from Pexels
Example: conceptual similarities from word embeddings.
Because the model is trained on words that are typically collocated, you can ask it for conceptual similarities:
king - man + woman ⇾ queen
Depending on your input corpus, your model will give you
doctor - man + woman ⇾ nurse
Do you really run a sexist, racist chatbot on your website?
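The analogy arithmetic above can be sketched with toy vectors. This is a minimal illustration, not a real embedding model: the 3-d vectors are invented for the example (real models such as word2vec learn hundreds of dimensions from a corpus), with a gender component deliberately baked into the "doctor"/"nurse" vectors to mimic a biased corpus.

```python
import numpy as np

# Hypothetical 3-d "embeddings", invented for illustration.
# Dimensions roughly encode [royalty, gender, medical-profession].
vecs = {
    "king":   np.array([1.0,  1.0, 0.0]),
    "queen":  np.array([1.0, -1.0, 0.0]),
    "man":    np.array([0.0,  1.0, 0.0]),
    "woman":  np.array([0.0, -1.0, 0.0]),
    "doctor": np.array([0.0,  0.4, 1.0]),  # gender bias baked in
    "nurse":  np.array([0.0, -0.4, 1.0]),  # to mimic a biased corpus
}

def analogy(a, b, c):
    """Return the word nearest (by cosine similarity) to
    vec(a) - vec(b) + vec(c), excluding the query words themselves."""
    target = vecs[a] - vecs[b] + vecs[c]
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    candidates = {w: v for w, v in vecs.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cos(candidates[w], target))

print(analogy("king", "man", "woman"))    # -> queen
print(analogy("doctor", "man", "woman"))  # -> nurse: the bias, replicated
```

The model never "decides" to be sexist; the nearest-neighbour arithmetic simply reproduces whatever regularities the training corpus contains.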
20. AI systems break fundamental patterns we developed as an industry
• No (or very little) isolation: you need to verify and retrain the whole system.
• Higher-dimensional failure space.
• Time-intensive training cycles (instead of immediate feedback cycles).
• Non-stationary nature of ML systems.
http://ai.stanford.edu/~zayd/why-is-machine-learning-hard.html
23. Quality management on input data
Implement quality management for your input data. Visualize it and use statistical metrics on it. Identify bias in the input data.
Establish a panel of judges both on input-data labelling and on outcomes. Due to the non-stationary nature, this is an ongoing task, not a closed project.
Use generated test data with known patterns, because otherwise you won't know if you miss whole categories.
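One of the simplest statistical metrics on input data is the label distribution. The sketch below (a hypothetical helper, not from the talk) flags under-represented classes, the kind of imbalance that can hide bias or missing categories; real pipelines would also inspect per-feature distributions.

```python
from collections import Counter

def label_balance(labels, min_share=0.05):
    """Return each label's share of the dataset and whether it is
    under-represented (share below min_share). A crude first check
    for bias in the input data."""
    counts = Counter(labels)
    n = len(labels)
    return {lab: (c / n, c / n < min_share)
            for lab, c in sorted(counts.items())}

labels = ["cat"] * 90 + ["dog"] * 8 + ["bird"] * 2
print(label_balance(labels))
# "bird" is flagged: only 2% of the data, below the 5% threshold.
```

The threshold (`min_share=0.05`) is an arbitrary illustration value; in practice it depends on the problem and should itself be reviewed by the panel of judges mentioned above.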
24. Work with test sets
To test for non-binary outcomes (i.e., results with a confidence level), you need to handle test sets as opposed to sets of single test outcomes: for example, test the distribution of outcome confidences.
For non-stationary systems: establish test monitoring to accept bounded regression. "Accept the new model if its result is at least 95% of the last model's."
Do not only test outcomes, but implement inspection tools. E.g., in RoboCup simulation, map agent movement paths.
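The acceptance rule quoted above can be written as a tiny regression gate. A sketch under stated assumptions: `accept_new_model` is a hypothetical name, and scores are taken to be "higher is better" (e.g. accuracy on a fixed test set).

```python
def accept_new_model(new_score, old_score, floor=0.95):
    """Regression gate for non-stationary systems: accept the retrained
    model only if its test-set score is at least `floor` (default 95%)
    of the previous model's score. Scores are 'higher is better'."""
    return new_score >= floor * old_score

print(accept_new_model(0.91, 0.94))  # -> True: 0.91 is within 95% of 0.94
print(accept_new_model(0.85, 0.94))  # -> False: regressed too far
```

The point is not the arithmetic but the monitoring discipline: the gate runs on every retraining cycle, so a slowly degrading model is caught instead of silently deployed.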
25. Add a safety net
Implement multi-layer security fallbacks for subsets of your problem (also to be used while testing), like "emergency brake systems". Test these, and use them for testing: if you need your security fallback too often, your model may be bad.
Still research-stage: add explainability; verify local properties via black-box tests on the model to check the "anchor rules" (https://homes.cs.washington.edu/~marcotcr/aaai18.pdf).
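A safety net of this kind can be sketched as a wrapper that routes low-confidence predictions to a rule-based fallback and tracks how often it fires. The interface is hypothetical (the model is assumed to return a `(label, confidence)` pair); the fallback rate doubles as the model-quality signal the slide describes.

```python
class SafetyNet:
    """Wrap a probabilistic model with a rule-based fallback.
    If the fallback fires too often, treat that as a signal that
    the model itself may be bad."""

    def __init__(self, model, fallback, min_confidence=0.8):
        self.model, self.fallback = model, fallback
        self.min_confidence = min_confidence
        self.calls = self.fallbacks = 0

    def predict(self, x):
        self.calls += 1
        label, confidence = self.model(x)  # model returns (label, confidence)
        if confidence < self.min_confidence:
            self.fallbacks += 1
            return self.fallback(x)        # emergency-brake path
        return label

    @property
    def fallback_rate(self):
        """Share of predictions handled by the fallback; monitor this."""
        return self.fallbacks / self.calls if self.calls else 0.0

# Hypothetical model: confident on non-negative inputs only.
net = SafetyNet(lambda x: ("ok", 0.95 if x >= 0 else 0.4),
                lambda x: "fallback")
print(net.predict(1), net.predict(-1), net.fallback_rate)  # -> ok fallback 0.5
```

Because the wrapper is also usable during testing, the same fallback counter serves both as a runtime brake and as a verification metric.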