
Toward machines that behave ethically better than humans do - Poster

With the increasing dependence on autonomously operating agents
and robots, the need for ethical machine behavior grows. This
paper presents a moral reasoner that combines connectionism,
utilitarianism, and ethical theory about moral duties. Its moral
decision-making matches the analysis of expert ethicists in the
health domain. This may be useful in many applications, especially
where machines interact with humans in a medical context.
Additionally, when the reasoner is connected to a cognitive model
of emotional intelligence and affective decision making, we can
explore how moral decision making impacts affective behavior.



Toward machines that behave ethically better than humans do
Matthijs Pontier (1, 2), Johan F. Hoorn (1)
1 VU University, Amsterdam
2 http://camera-vu.nl/matthijs/
matthijspon@gmail.com

[Figure: the moral reasoner as a network in which moral goals (Autonomy, Beneficence, Non-maleficence) and belief strengths feed into candidate actions (Action1, Action2) and determine the output.]

Abstract
Increasing dependence on autonomously operating systems calls for ethical machine behavior. Our moral reasoner combines connectionism, utilitarianism, and ethical theory about moral duties. Its moral decision-making matches the analysis of expert ethicists in the health domain. This is particularly useful when machines interact with humans in a medical context. Connected to a model of emotional intelligence and affective decision making, we can explore how moral decision making impacts affective behavior and vice versa.

Background
Rosalind Picard (1997): "The greater the freedom of a machine, the more it will need moral standards."
Wallach, Franklin, and Allen (2010) argue that agents that adhere to a deontological ethic or that are utilitarians also require emotional intelligence, a sense of self, and a theory of mind.
We connected the moral system to Silicon Coppélia (Hoorn, Pontier, & Siddiqui, 2011), a model of emotional intelligence and affective decision making. Silicon Coppélia contains a feedback loop that learns the preferences of an individual patient so as to personalize its behavior.

Results and discussion
Sample Experiment 5: A patient with incurable cancer refuses chemotherapy that would let him live a few months longer, almost without pain, because he is convinced he is cancer-free. According to Buchanan and Brock (1989), the ethically preferable answer is to "try again": the patient seems less than fully autonomous, and his decision leads to harm by denying him the chance of a longer life (a violation of the duty of beneficence), which he might regret later. Our moral reasoner comes to the same conclusion as the ethical experts.
However, even among doctors there is no consensus about the interpretation of values, their ranking, and their meaning. Van Wynsberghe (2012) found that this depends on the type of care (social vs. physical care), the task (e.g., bathing vs. lifting vs. socializing), the care-givers and their style, and the care-receivers and their specific needs.
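The poster gives the reasoner's structure (moral duties, weights, and belief strengths feeding into an action choice) but not its equations. The sketch below, a plain weighted-sum evaluator in Python, only illustrates that structure and is not the authors' implementation: the duty names follow the poster, while the weights, belief strengths, and the decide helper are assumptions made for the example.

```python
# Minimal sketch of a duty-weighted moral evaluator, assuming each action is
# scored by a weighted sum of belief strengths: how strongly the action is
# believed to facilitate (+) or violate (-) each moral duty.
# Duty names come from the poster; all numbers below are illustrative.

DUTY_WEIGHTS = {          # hypothetical importance of each moral duty
    "autonomy": 1.0,
    "beneficence": 1.6,
    "non_maleficence": 1.8,
}

def moral_value(belief_strengths):
    """Weighted sum of per-duty belief strengths in [-1, 1]."""
    return sum(DUTY_WEIGHTS[duty] * strength
               for duty, strength in belief_strengths.items())

def decide(actions):
    """Return the action whose duty profile has the highest moral value."""
    return max(actions, key=lambda name: moral_value(actions[name]))

# Illustrative encoding of Sample Experiment 5: accepting the refusal respects
# autonomy a little but harms the patient; trying again does the opposite.
actions = {
    "accept_refusal": {"autonomy": 0.3, "beneficence": -0.7, "non_maleficence": -0.6},
    "try_again":      {"autonomy": -0.1, "beneficence": 0.6, "non_maleficence": 0.4},
}

if __name__ == "__main__":
    for name, beliefs in actions.items():
        print(f"{name}: {moral_value(beliefs):+.2f}")
    print("chosen:", decide(actions))  # 'try_again' under these numbers
```

Under these assumed numbers the evaluator reproduces the experts' "try again" verdict from Sample Experiment 5; in the system described on the poster, this kind of output is additionally connected to Silicon Coppélia's affective decision loop.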

