This presentation was given by Christopher Konrad at the Interaction19 Redux event in San Diego. It examines aspects of human behavior that can influence the Artificial Intelligence (AI) systems we are designing.
7. TWO-SYSTEM MODEL OF THE MIND
SYSTEM 1 (FAST): Operates automatically and quickly, with little effort and no sense of voluntary control.
Example: detecting that one object is more distant than another.
SYSTEM 2 (SLOW): Controlled; allocates attention to the effortful mental activities that demand it, including complex computations.
Example: comparing two washing machines for overall value.
12. ANSWERING AN EASIER QUESTION
The mood heuristic for happiness
Group 1 was asked: "How happy are you these days?" followed by "How many dates did you have last month?"
Any correlation between the number of dates and being happier? No.
Group 2 was asked: "How many dates did you have last month?" followed by "How happy are you these days?"
Any correlation between the number of dates and being happier? Yes.
13. ANSWERING AN EASIER QUESTION
As shown on the screen, is the figure on the right larger than the figure on the left?
The substitution of 3-D size for 2-D size occurred automatically.
14. ANSWERING AN EASIER QUESTION
Substituting Questions
Target question: How much would you contribute to save an endangered species?
Easier question: How much emotion do I feel when I think of dying polar bears?
Target question: How happy are you with your life these days?
Easier question: What is my mood right now?
Target question: How should financial advisers who prey on the elderly be punished?
Easier question: How much anger do I feel when I think of financial predators?
16. REMEMBER THIS… ASK YOURSELF THIS…
Remember: System 1 generates impressions, feelings, and inclinations; when endorsed by System 2, these become beliefs, attitudes, and intentions.
Ask yourself: Should AI have fast and slow complementary systems with characteristics similar to the human mind's?
Remember: System 1 is a machine for jumping to conclusions.
Ask yourself: When do you want AI to trust its gut and jump to conclusions based on feelings and inclinations?
Remember: We are rarely stumped, because System 1 will simply answer an easier question using the available information.
Ask yourself: Do you want AI to substitute easier questions for hard ones? It's a very human quality.
DO WE POSSESS THE INTELLIGENCE TO DESIGN ARTIFICIAL INTELLIGENCE?
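The "fast and slow complementary systems" question can be made concrete. Below is a hypothetical sketch, not anything proposed in the talk: a cheap heuristic path answers first, and a slower, effortful path is consulted only when the fast path's confidence is too low to endorse. All names, the toy classifier, and the 0.9 threshold are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    label: str
    confidence: float

def fast_system(x: float) -> Answer:
    # System 1 stand-in: an instant heuristic guess (here, the sign of x).
    # Inputs near the decision boundary get low confidence.
    conf = 0.95 if abs(x) >= 1 else 0.6
    return Answer("positive" if x >= 0 else "negative", conf)

def slow_system(x: float) -> Answer:
    # System 2 stand-in: slower, effortful computation with high confidence.
    return Answer("positive" if x >= 0 else "negative", 0.99)

def answer(x: float, endorse_threshold: float = 0.9) -> Answer:
    """Fast impression first; escalate to slow deliberation when the
    fast system's confidence is too low to be endorsed."""
    impression = fast_system(x)
    if impression.confidence >= endorse_threshold:
        return impression      # the impression is endorsed as-is
    return slow_system(x)      # effortful override

print(answer(5.0))   # clear case: handled by the fast path
print(answer(0.3))   # ambiguous case: escalated to the slow path
```

The design choice here mirrors the slide's framing: the slow system does not run on every input, only when the fast system's output fails the endorsement check.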
22. THE LAW OF SMALL NUMBERS
System 1 is adept at making causal connections, but those connections can be wrong.
We fall into the trap of believing "what you see is all there is."
Our minds have a difficult relationship with statistics.
Small samples yield extreme results more often than large samples do.
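The claim that small samples produce extreme results more often than large ones is easy to verify with a quick simulation. This sketch uses an invented setup: a fair 50/50 process (think births in a small hospital versus a large one), counting how often a sample looks "extreme" (70% or more one way).

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def extreme_rate(sample_size: int, trials: int = 10_000,
                 threshold: float = 0.7) -> float:
    """Fraction of samples from a fair 50/50 process whose observed
    proportion is extreme (>= threshold or <= 1 - threshold)."""
    extreme = 0
    for _ in range(trials):
        hits = sum(random.random() < 0.5 for _ in range(sample_size))
        p = hits / sample_size
        if p >= threshold or p <= 1 - threshold:
            extreme += 1
    return extreme / trials

small = extreme_rate(10)    # small samples
large = extreme_rate(100)   # large samples
print(f"extreme results: n=10 -> {small:.1%}, n=100 -> {large:.1%}")
```

Roughly a third of size-10 samples come out 70/30 or worse, while size-100 samples almost never do; the underlying process is identical in both cases.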
24. REGRESSION TO THE MEAN
           PLAYER 1   PLAYER 2   PLAYER 3   PLAYER 4
ROUND 1       66         78         70         69
ROUND 2       77         67         69         71
Success = Talent + Luck
Regression effects are ubiquitous, and so are misguided causal stories to explain them.
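The formula Success = Talent + Luck is enough to reproduce regression to the mean in code. This is an illustrative simulation with invented parameters (golf-style scores where lower is better): talent is fixed per player, luck is redrawn each round, and the round-one leaders drift back toward the field average in round two with no causal story required.

```python
import random

random.seed(0)  # reproducible run

# Success = Talent + Luck (lower score is better, as in golf).
talents = [random.gauss(70, 2) for _ in range(1000)]  # fixed per player

def play(talent: float) -> float:
    return talent + random.gauss(0, 3)  # fresh luck every round

round1 = [play(t) for t in talents]
round2 = [play(t) for t in talents]

# The 100 best (lowest) scorers in round 1...
best = sorted(range(1000), key=lambda i: round1[i])[:100]
r1_avg = sum(round1[i] for i in best) / 100
r2_avg = sum(round2[i] for i in best) / 100

# ...score worse on average in round 2: their luck was partly
# responsible for round 1, and luck does not repeat.
print(f"round-1 leaders: round 1 avg {r1_avg:.1f}, round 2 avg {r2_avg:.1f}")
```

The round-two average of the leaders moves back toward 70 purely because extreme round-one scores mix talent with good luck.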
25. REGRESSION TO THE MEAN
STORE    2017 SALES      2018 FORECAST
1        $11,000,000
2        $12,000,000
3        $18,000,000
4        $29,000,000
Total    $61,000,000     $67,100,000
Regressive forecasts are more accurate: given 10% overall growth, the low performers are likely to grow by more than 10% while the high performers are likely to grow by less than 10%.
26. REMEMBER THIS… ASK YOURSELF THIS…
Remember: System 1 is adept at making causal connections on the basis of scraps of evidence. We fall into the trap of believing "what you see is all there is."
Ask yourself: Are ML models trained on data that are too limited or biased? Does the dataset accurately represent the diversity of the customer base?
Remember: Sustaining doubt is harder work than sliding into certainty.
Ask yourself: How can AI systems help us make better causal judgements?
Remember: Causal explanations will be falsely invoked when regression to the mean is the real explanation.
Ask yourself: Should we be concerned that our attempts to humanize AI systems will pass along bogus or biased causal explanations?
DO WE POSSESS THE INTELLIGENCE TO DESIGN ARTIFICIAL INTELLIGENCE?
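The question "does the dataset accurately represent the diversity of the customer base?" admits a minimal automated check. This sketch compares group proportions in a training set against known customer demographics; all numbers, group names, and the 5-percentage-point tolerance are invented for illustration.

```python
from collections import Counter

# ASSUMED customer-base demographics (illustrative only).
population = {"18-30": 0.30, "31-50": 0.45, "51+": 0.25}

# ASSUMED training set: heavily skewed toward younger customers.
training_rows = ["18-30"] * 700 + ["31-50"] * 250 + ["51+"] * 50

counts = Counter(training_rows)
n = len(training_rows)

for group, target in population.items():
    observed = counts[group] / n
    gap = observed - target
    flag = "SKEWED" if abs(gap) > 0.05 else "ok"  # 5-pt tolerance (assumed)
    print(f"{group}: dataset {observed:.0%} vs customers {target:.0%} [{flag}]")
```

A check like this only catches representation gaps on attributes you thought to measure; it is a floor, not a guarantee against bias.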
29. EXAGGERATE SKILL & UNDERESTIMATE LUCK
The Narrative Fallacy
Willing to sell for less than $1 million; the buyer said that was too high.
Lucky? What else didn't happen?
31. REMEMBER THIS… ASK YOURSELF THIS…
Remember: We exaggerate the role of skill and underestimate the part that luck played in the outcome.
Ask yourself: Will we say, "That robot was lucky"?
Remember: We get the causal relationship backwards because of the halo effect.
Ask yourself: When is an algorithm better than human judgement?
DO WE POSSESS THE INTELLIGENCE TO DESIGN ARTIFICIAL INTELLIGENCE?
35. RISK SEEKING
People become risk seeking when all their options are bad.
PROBLEM 1: WHICH DO YOU CHOOSE?
Get $900 for sure, or a 90% chance to get $1,000.
PROBLEM 2: WHICH DO YOU CHOOSE?
Lose $900 for sure, or a 90% chance to lose $1,000.
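Both answers fall out of the prospect-theory value function. The sketch below uses the standard published Kahneman-Tversky parameters (alpha = 0.88, lambda = 2.25); it deliberately omits probability weighting, which the full theory also includes, so treat it as a simplified illustration of the two problems on the slide.

```python
ALPHA = 0.88    # diminishing sensitivity to larger amounts
LAMBDA = 2.25   # losses loom larger than gains

def value(x: float) -> float:
    """Subjective value of an outcome x (gain if positive, loss if negative)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

# Problem 1 (gains): sure $900 vs a 90% chance of $1,000.
sure_gain = value(900)
gamble_gain = 0.9 * value(1000)
print(f"gains:  sure {sure_gain:.1f} vs gamble {gamble_gain:.1f}")
# The sure gain has the higher subjective value: risk averse for gains.

# Problem 2 (losses): sure -$900 vs a 90% chance of -$1,000.
sure_loss = value(-900)
gamble_loss = 0.9 * value(-1000)
print(f"losses: sure {sure_loss:.1f} vs gamble {gamble_loss:.1f}")
# The gamble is less bad subjectively: risk seeking for losses.
```

The asymmetry comes from the concave value curve for gains and the steeper, convex curve for losses; the same 90% gamble flips from unattractive to attractive when the sign flips.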
43. REMEMBER THIS… ASK YOURSELF THIS…
Remember: Losses loom larger than gains, which makes us loss averse.
Ask yourself: Will AI choices factor in the psychological benefits of gains and the costs of losses for each choice?
Remember: We become risk seeking when all the options are bad.
Ask yourself: Will our personal AI assistants inherit our individual morality and make choices accordingly?
DO WE POSSESS THE INTELLIGENCE TO DESIGN ARTIFICIAL INTELLIGENCE?