A computational model that predicts human criminal choices - Presentation at PRIMA 2013
1. A Computational Model
of Affective Moral
Decision Making
that predicts
Human Criminal Choices
Matthijs Pontier, matthijspon@gmail.com
Jean-Louis Van Gelder
Reinout E. de Vries
2. Overview of this presentation
• SELEMCA
• Moral Reasoning
• Silicon Coppelia: Model of Emotional Intelligence
• Moral Reasoning + Silicon Coppelia = Moral Coppelia
• Predicting Crime with Moral Coppelia
• Conclusions
• Future Work
Dunedin, 04-12-2013
PRIMA 2013
3. SELEMCA
• Develop ‘Caredroids’: Robots or Computer Agents
that assist Patients and Care-deliverers
• Focus on patients who stay in long-term care facilities
4. Possible functionalities
• Care-broker: Find care that matches the patient's needs
• Companion: Become friends with the patient to
prevent loneliness and activate the patient
• Coach: Assist the patient in making healthy choices:
Exercising, Eating healthy, Taking medicine, etc.
7. Background Machine Ethics
• Machines are becoming more autonomous
Rosalind Picard (1997): ‘‘The greater the freedom of
a machine, the more it will need moral standards.’’
• Machines interact more with people
We should ensure that machines do not harm us or
threaten our autonomy
• Machine ethics is important for establishing
users' trust
8. Moral reasoning system
We developed a rational moral reasoning system that
is capable of balancing conflicting moral goals.
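As an illustration, balancing conflicting moral goals can be sketched as a weighted sum over the goals each action serves. This is a minimal sketch: the goal names, scores, and weights below are hypothetical, not taken from the actual system.

```python
# Each candidate action contributes to each moral goal with a score in [-1, 1].
# Actions, goals, scores, and weights are all illustrative assumptions.
actions = {
    "tell_truth": {"non_maleficence": -0.3, "autonomy": 0.8, "beneficence": 0.1},
    "white_lie":  {"non_maleficence":  0.6, "autonomy": -0.5, "beneficence": 0.4},
}

# Relative importance of each moral goal.
weights = {"non_maleficence": 0.5, "autonomy": 0.3, "beneficence": 0.2}

def moral_score(action):
    """Weighted sum of the action's contributions to each moral goal."""
    return sum(weights[goal] * v for goal, v in actions[action].items())

# Balancing conflicting goals = picking the action with the highest score.
best = max(actions, key=moral_score)
```

With these hypothetical numbers, avoiding harm outweighs respecting autonomy, so the "white_lie" action wins; shifting the weights flips the outcome, which is what "balancing" means here.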
9. Limitations of rational moral reasoning
• Moral reasoning alone results in very cold decision-making, purely in terms of rights and duties
• Wallach, Franklin & Allen (2010): “Ethical agents
require Emotional Intelligence as well as other
‘supra-rational’ faculties, such as a sense of self and
a Theory of Mind”
• Tronto (1993): “Care is only thought of as good care
when it is personalized”
10. Solution: Add Emotional Processing
• Previously, we developed Silicon Coppelia,
a model of Emotional Intelligence.
• This can be projected onto others for Theory of Mind
• Learns from experience → Personalization
Connect Moral Reasoning to Silicon Coppelia
• More human-like moral reasoning
• Personalize moral decisions and communication
about moral reasoning
14. Moral Coppelia
Decisions based on:
1. Rational influences
• Does action help me to reach my goals?
2. Affective influences
• Does action lead to desired emotions?
• Does action reflect Involvement I feel towards user?
• Does action reflect Distance I feel towards user?
3. Moral reasoning
• Is this action morally good?
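A minimal sketch of how these three influences might be combined into a single action value. The linear form, the weight values, and the input names are illustrative assumptions, not the actual Moral Coppelia equations.

```python
def action_value(expected_utility, affect, moral_goodness,
                 w_rational=0.4, w_affect=0.3, w_moral=0.3):
    """Combine rational, affective, and moral influences into one value.

    expected_utility: does the action help the agent reach its goals?
    affect:           desired emotions, reflecting felt involvement/distance
    moral_goodness:   output of the moral reasoning component
    The weights are hypothetical and would be tuned per agent.
    """
    return (w_rational * expected_utility
            + w_affect * affect
            + w_moral * moral_goodness)

# Compare two candidate actions for the agent:
v_help   = action_value(expected_utility=0.2, affect=0.9, moral_goodness=0.9)
v_refuse = action_value(expected_utility=0.7, affect=0.1, moral_goodness=0.3)
```

In this toy comparison, the morally good and affectively rewarding action beats the one with the higher raw utility, which is the point of adding affective and moral influences to a purely rational decision rule.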
15. Background Criminology
• Substantial evidence that emotions are fundamental
in criminal decision making
• But emotions rarely feature in criminal choice models
Study the relation between Ratio + Emotions + Morality:
Apply Moral Coppelia to criminology data
Predict the criminal decisions of participants
16. Matching data to model
Match:
• Honesty/Humility to Weight_morality
• Perceived Risk to Expected Utility
• Negative State Affect to EESA
Parameter Tuning:
1. Find optimal fits for initial sample
2. Predict decisions for holdout sample
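The two tuning steps can be sketched as follows, using simulated toy data in place of the criminology sample. The threshold model, the grid search, and all values are illustrative assumptions, not the actual fitting procedure.

```python
import random

random.seed(0)

# Toy stand-in for the criminology data: each participant has a
# Honesty/Humility score, an expected utility of offending, and a
# binary criminal choice. The linear form is an illustrative assumption.
def simulate(n):
    data = []
    for _ in range(n):
        hh = random.random()   # Honesty/Humility
        eu = random.random()   # expected utility of the offense
        choice = 1 if eu - 0.8 * hh > 0 else 0
        data.append((hh, eu, choice))
    return data

train, holdout = simulate(200), simulate(100)

def accuracy(w_morality, data):
    """Fraction of decisions reproduced when Honesty/Humility is weighted
    by w_morality in a simple threshold model."""
    hits = sum((1 if eu - w_morality * hh > 0 else 0) == c
               for hh, eu, c in data)
    return hits / len(data)

# 1. Find the optimal fit for the initial sample (here: a grid search)...
best_w = max((w / 10 for w in range(21)), key=lambda w: accuracy(w, train))
# 2. ...then predict decisions for the holdout sample.
holdout_acc = accuracy(best_w, holdout)
```

Fitting on one sample and validating on a holdout sample guards against overfitting the morality weight to idiosyncrasies of the initial participants.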
18. Conclusions
• Validation of Moral Coppelia:
Moral Coppelia can be used to predict
human criminal choices
• Adds to criminological theory
• Useful in applications
• Serious games: virtual crook
• Entertainment
• Theory of Mind
• Predict crime?
19. Future Work
• Publish Book: “Machine Medical Ethics”
• More detailed model of Autonomy
• Develop Applications that require
Affective Moral Behavior, such as
• Persuasive Technology:
Moral dilemmas about Helping vs Manipulating
• Integrate current system with
Health Care Intervention models
Science + Health Care + Creative Industry
Triangle Patient / Care-deliverer / Robot
Robot: Repetitive tasks, so that
Care-deliverer has time for: Medical + Social tasks
Functionalities can all be in the same robot
Same functionality can be in different kinds of robots (physical robot, agent, app)
Results: The behavior of the system matches the decisions of medical-ethical experts
Wallach, Franklin & Allen: Ratio / Logic alone is not enough for ethical behavior towards humans
Silicon Coppelia emotional intelligence, theory of mind, personalization (through adaptation / learning from interaction)
Model from media perception, turned around to let the medium perceive the user
Walk through the model; also explain emotion regulation
Moral and Affective Decision
Robot (my models) vs Human
Multiple Choice & Emotions
What did this character think of you?
Participants see no difference
One could call this a passed Turing Test
Mental integrity, Physical integrity, Privacy, Capability to make autonomous decisions: Cognitive Functioning, Adequate Information, Reflection