
A koene humaint_march2018



Presentation on the IEEE P7000 series of standards, and on the P7003 standard for Algorithmic Bias Considerations, at the EC JRC HUman behaviour and MAchine INTelligence (HUMAINT) project kick-off workshop, March 2018.



1. The IEEE P7003 Standard for Algorithmic Bias Considerations
   Ansgar Koene, Horizon Digital Economy Research Institute, University of Nottingham
   HUMAINT, Barcelona, 5-6 March 2018
   http://unbias.wp.horizon.ac.uk/
2. (image-only slide)
3. (image-only slide)
4. Ethically Aligned Design
5. Ethically Aligned Design, v2
   • More than one hundred pragmatic recommendations for technologists, policy makers and academics
   • Created by 250+ global cross-disciplinary thought leaders
6. IEEE P70xx Standards Projects
   • IEEE P7000: Model Process for Addressing Ethical Concerns During System Design
   • IEEE P7001: Transparency of Autonomous Systems
   • IEEE P7002: Data Privacy Process
   • IEEE P7003: Algorithmic Bias Considerations
   • IEEE P7004: Standard on Child and Student Data Governance
   • IEEE P7005: Standard on Employer Data Governance
   • IEEE P7006: Standard on Personal Data AI Agent Working Group
   • IEEE P7007: Ontological Standard for Ethically Driven Robotics and Automation Systems
   • IEEE P7008: Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems
   • IEEE P7009: Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems
   • IEEE P7010: Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems
7. Levels of AI autonomy (P7001)
   Descriptions adapted from Endsley & Kaber (1999), "Level of automation effects on performance, situation awareness and workload in a dynamic control task", Ergonomics, 42(3), 462-492.
   • Manual Control: Human monitors, generates options, makes decisions, and physically carries out options.
   • Action Support: System assists human with execution of selected action. Human performs some control actions.
   • Batch Processing: Human generates and selects options; turns them over to the system to carry out (e.g., cruise control in automobiles).
   • Shared Control: Human and system both generate possible decision options. Human has control to select which options to implement; carrying out the options is a shared task.
   • Decision Support: System generates decision options that human can select. Once option is selected, system implements it.
   • Blended Decision Making: System generates option, selects it, and executes it if human consents. Human may approve of the option selected by the system, select another, or generate another option.
   • Rigid System: System provides set of options and human has to select. Once selected, system carries it out.
   • Automated Decision Making: System selects and carries out option. Human can have input in the alternatives generated by the system.
   • Supervisory Control: System generates options, selects, and carries out desired option. Human monitors and intervenes if needed (in which case the level of autonomy becomes Decision Support).
   • Full Automation: System carries out all actions. System itself decides if human intervention is needed.
   We can also consider three components of supervised autonomy: 'direction' (telling it what to do), 'monitoring' (watching what it is doing), and 'control' (being able to intervene and change what it is doing).
8. http://sites.ieee.org/sagroups-7003/
9. P7003 working group participants
   75 working group members (25 active contributors)
   • Areas of expertise: Computer Science (18), Engineering (8), Law (6), Business (6), Policy (6), Humanities (4), Social Sciences (3), Arts (2), Natural Sciences (1)
   • Sectors: Academics (40), NGO (13), Industry (14), Gov (2)
   • Countries: USA (11), UK (6), Canada (3), Germany (3), Brazil (2), India (2), Japan (2), the Netherlands (2), Australia (1), Belgium (1), Israel (1), Pakistan (1), Peru (1), Philippines (1), S. Korea (1), Taiwan (1) and Uganda (1)
10. P7003 foundational sections
   • Taxonomy of Algorithmic Bias
   • Legal frameworks related to Bias
   • Psychology of Bias
   • Cultural aspects
11. P7003 algorithm development sections
   • Algorithmic system design stages
   • Person categorization and identifying affected population groups
   • Assurance of representativeness of testing/training data
   • Evaluation of system outcomes
   • Evaluation of algorithmic processing
   • Assessment of resilience against external manipulation to bias
   • Documentation of criteria, scope and justifications of choices
   • Use Cases
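The "assurance of representativeness of testing/training data" step above can be made concrete with a minimal sketch. P7003 does not prescribe a specific test; the function name, reference proportions and flagging threshold below are illustrative assumptions only:

```python
from collections import Counter

def representativeness_gaps(sample_groups, population_props):
    """Compare group proportions in a data sample against reference
    population proportions; return the absolute gap per group.

    sample_groups: list of group labels, one per training example.
    population_props: dict mapping group label -> expected proportion.
    """
    counts = Counter(sample_groups)
    total = len(sample_groups)
    return {
        group: abs(counts.get(group, 0) / total - expected)
        for group, expected in population_props.items()
    }

# Hypothetical example: training data over-represents group "A"
# (80% in the sample vs. 60% in the reference population).
sample = ["A"] * 80 + ["B"] * 20
reference = {"A": 0.6, "B": 0.4}
gaps = representativeness_gaps(sample, reference)  # both gaps ≈ 0.2
flagged = [g for g, gap in gaps.items() if gap > 0.1]  # ["A", "B"]
```

In practice the reference proportions would come from the affected-population analysis in the preceding step, and the acceptable gap would be a documented design choice.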
12. Related AI standards activities
   • British Standards Institute (BSI): BS 8611, Ethics design and application of robots
   • ISO/IEC JTC1 SC42: Artificial Intelligence Concepts and Terminology; Framework for Artificial Intelligence Systems Using Machine Learning
   • Jan 2018: China published the "Artificial Intelligence Standardization White Paper"
13. ACM Principles on Algorithmic Transparency and Accountability
   • Awareness
   • Access and Redress
   • Accountability
   • Explanation
   • Data Provenance
   • Auditability
   • Validation and Testing
14. FAT/ML: Principles for Accountable Algorithms and a Social Impact Statement for Algorithms
   • Responsibility: Provide externally visible avenues of redress for adverse individual or societal effects, and designate an internal role for the person responsible for the timely remedy of such issues.
   • Explainability: Ensure that algorithmic decisions, as well as any data driving those decisions, can be explained to end-users and other stakeholders in non-technical terms.
   • Accuracy: Identify, log, and articulate sources of error and uncertainty so that expected and worst-case implications can be understood and inform mitigation procedures.
   • Auditability: Enable interested third parties to probe, understand, and review the behavior of the algorithm through disclosure of information that enables monitoring, checking, or criticism, including detailed documentation, technically suitable APIs, and permissive terms of use.
   • Fairness: Ensure that algorithmic decisions do not create discriminatory or unjust impacts when comparing across different demographics (e.g., race, sex, etc.).
   https://www.fatml.org/resources/principles-for-accountable-algorithms
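The fairness principle above, comparing decisions across demographics, is commonly operationalized as a demographic-parity check: compare the positive-outcome rate per group. This sketch is not part of the FAT/ML text itself; the function names and example data are hypothetical:

```python
def positive_rates(decisions, groups):
    """Positive-outcome rate per demographic group.

    decisions: list of 0/1 outcomes.
    groups: parallel list of group labels, one per decision.
    """
    totals, positives = {}, {}
    for d, g in zip(decisions, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + d
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive rates between any two groups."""
    rates = positive_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit: group "X" gets a positive decision 75% of
# the time, group "Y" only 25% of the time.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["X", "X", "X", "X", "Y", "Y", "Y", "Y"]
gap = demographic_parity_gap(decisions, groups)  # 0.5
```

Demographic parity is only one of several competing fairness criteria; which one is appropriate, and what gap is acceptable, depends on the application and should be documented, as the Responsibility and Auditability principles require.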
15. Thank you – questions?
   http://unbias.wp.horizon.ac.uk/
   ansgar.koene@nottingham.ac.uk

Editor's Notes

  • Endsley, M. R., & Kaber, D. B. (1999). Level of automation effects on performance, situation awareness and workload in a dynamic control task. Ergonomics, 42(3), 462-492 https://people.engr.ncsu.edu/dbkaber/papers/Endsley_Kaber_Ergo_99.pdf
