
AI Governance and Ethics - Industry Standards

Presentation on the potential for ethics-based industry standards to function as a vehicle for addressing socio-technical challenges from AI.
Presentation given at the 1st Austrian IFIP forum on "AI and future society".


  1. AI Governance and Ethics: Industry Standards as a vehicle to address socio-technical challenges from AI. Ansgar Koene, University of Nottingham. IEEE P7003 Standard for Algorithmic Bias Considerations
  2. Undesired impact: the need for ethical, legal, social and economic intervention
  3. Algorithms in the news
  4. Algorithmic Discrimination. Machine Bias: "There's software used across the country to predict future criminals." ProPublica
  5. Algorithmic systems are socio-technical ▸Algorithmic systems do not exist in a vacuum ▸They are built, deployed and used: - by people, - within organizations, - within a social, political, legal and cultural context. ▸The outcomes of algorithmic decisions can have significant impacts on real, and possibly vulnerable, people.
  6. Governance frameworks for algorithmic systems: public- and private-sector responses to undesired impacts of AI
  7. EU response (in addition to GDPR): EU Parliament Science and Technology Options Assessment (STOA) panel request for a study on "Algorithmic Opportunities and Accountability"
  8. Governance options. Florian Saurwein, Natascha Just, Michael Latzer (2015), "Governance of algorithms: options and limitations", info, Vol. 17, Issue 6, pp. 35-49, doi: 10.1108/info-05-2015-0025
  9. Demand-side market solutions
  10. Supply-side market solutions
  11. Companies' self-organization (aka CSR)
  12. Branches' self-regulation
  13. Branches' self-regulation: industry standards ▸British Standards Institution (BSI) – BS 8611 Ethical design and application of robots ▸IEEE P70xx Ethics of Autonomous and Intelligent Systems standards ▸ISO/IEC JTC1 SC42 (launched at the start of 2018) - Artificial Intelligence Concepts and Terminology - Framework for Artificial Intelligence Systems Using Machine Learning • SG 2 on Trustworthiness: transparency, verifiability, explainability, controllability, etc.; robustness, resiliency, reliability, accuracy, safety, security, privacy, etc. ▸The EU standards bodies CEN and CENELEC officially created a Focus Group on AI in support of ISO/IEC SC42 (December 2018). ▸In January 2018 China published its "Artificial Intelligence Standardization White Paper."
  14. Co-regulation
  15. State intervention
  16. Oversight by regulatory organisations ▸An FDA for algorithms – Andrew Tutt (2016) ▸An FAA for algorithms – paraphrasing Alan Winfield (Chair of IEEE P7001, Standard for Algorithm Transparency)
  17. Setting legal requirements
  18. Industry standards: IEEE P70xx standards, developed as part of the Global Initiative on Ethics of Autonomous and Intelligent Systems
  19. [image slide]
  20. IEEE P70xx Standards Projects: IEEE P7000: Model Process for Addressing Ethical Concerns During System Design; IEEE P7001: Transparency of Autonomous Systems; IEEE P7002: Data Privacy Process; IEEE P7003: Algorithmic Bias Considerations; IEEE P7004: Child and Student Data Governance; IEEE P7005: Employer Data Governance; IEEE P7006: Personal Data AI Agent Working Group; IEEE P7007: Ontological Standard for Ethically Driven Robotics and Automation Systems; IEEE P7008: Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems; IEEE P7009: Fail-Safe Design of Autonomous and Semi-Autonomous Systems; IEEE P7010: Wellbeing Metrics Standard for Ethical AI and Autonomous Systems; IEEE P7011: Process of Identifying and Rating the Trustworthiness of News Sources; IEEE P7012: Standard for Machine Readable Personal Privacy Terms; IEEE P7013: Inclusion and Application Standards for Automated Facial Analysis Technology
  21. P7003 – Algorithmic Bias Considerations ▸All non-trivial* decisions are biased ▸We seek to minimize bias that is: - Unintended - Unjustified - Unacceptable ▸as defined by the context where the system is used. *Non-trivial means the decision space has more than one possible outcome and the choice is not uniformly random.
  22. Causes of algorithmic bias ▸Insufficient understanding of the context of use. ▸Failure to rigorously map decision criteria. ▸Failure to have explicit justifications for the chosen criteria.
  23. Key questions when developing or deploying an algorithmic system (made concrete in the code sketch after the slide transcript): ▸Who will be affected? ▸What are the decision/optimization criteria? ▸How are these criteria justified? ▸Are these justifications acceptable in the context where the system is used?
  24. IEEE P7003 general structure
  25. 5Rights Universal Standards for childhood and adolescence – a collaboration between 5Rights and IEEE-SA
  26. Why these standards are needed ▸The presence of children in the digital environment must be anticipated - by design - by default ▸Technology must be provided to children in a way that upholds their rights and meets their needs. - Not simply about content - The ways it uses their data - The behaviours it encourages - The responsibility it takes for the impact of its services.
  27. The approach: look systemically for the drivers and inhibitors in technological systems, to create a good environment for children ▸How does the system impact on, or promote, the autonomy of a child? ▸What effect might a child's engagement have on their health or wellbeing? ▸Have you considered both their physical and emotional wellbeing? ▸What processes are in place to inform the child of the likely impact of using your service? From the big questions – 'is it fair?' and 'does it uphold the rights of the child?' – to the entirely granular, such as where on the screen a button might be better placed.
  28. A suite of standards/guidance – industry connections group ▸Age Appropriate Contract – determining what terms and conditions, or community rules, a service should offer when the end user is a child. ▸Standards that cover: - security of IoT, - child online protection issues, - privacy differentials, - context capacity authentication, - guidance for duty of care, - appropriate governance structures, - reporting standards, - flagging systems, - data minimization standards, - best-practice geolocation, - etc.
  29. Interplay between standards and legislation ▸5Rights Founder and Chair, Baroness Beeban Kidron, is the architect of the ground-breaking new Age Appropriate Design Code, an enhanced GDPR for children under 18. ▸5Rights as an organization believes that standards should both anticipate and be an alternative to legislation. ▸Creating standards from a trusted source gives all businesses – small and big – access to thoughtful and ethical digital services for children.
  30. Thank you! ansgar.koene@Nottingham.ac.uk IEEE P7003 Standard for Algorithmic Bias Considerations project site: http://sites.ieee.org/sagroups-7003/ https://5rightsfoundation.com/
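
The four questions on slide 23 can be made operational as a routine audit of a system's outputs. Below is a minimal Python sketch, not part of the original deck: the decision log, the group labels, and the four-fifths disparity threshold are illustrative assumptions, not P7003 requirements.

    from collections import defaultdict

    # Hypothetical decision log of (group, got_positive_outcome) pairs;
    # the groups and records are invented for illustration.
    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False),
    ]

    # "Who will be affected?" -- tally outcomes per affected group.
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome  # True counts as 1

    rates = {g: positives[g] / totals[g] for g in totals}
    print("positive-outcome rate per group:", rates)

    # Flag disparity with the conventional "four-fifths" heuristic (an
    # assumed benchmark): the lowest group's rate should be at least
    # 80% of the highest. A flag is a prompt to revisit the criteria
    # and their justification, not a verdict.
    lo, hi = min(rates.values()), max(rates.values())
    if hi > 0 and lo / hi < 0.8:
        print("disparity flagged: are the decision criteria justified"
              " and acceptable in this context of use?")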

Editor's Notes

  • Automated decisions are not defined by algorithms alone. Rather, they emerge from automated systems that mix human judgment, conventional software, and statistical models, all designed to serve human goals and purposes. Discerning and debating the social impact of these systems requires a holistic approach that considers:
    - computational and statistical aspects of the algorithmic processing;
    - power dynamics between the service provider and the customer;
    - the social-political-legal-cultural context within which the system is used.
  • All non-trivial decisions are biased. For example, good results from a search engine should be biased to match the interests of the user as expressed by the search term, and possibly refined based on personalization data.

    When we say we want 'no bias', we mean we want to minimize unintended, unjustified and unacceptable bias, as defined by the context within which the algorithmic system is being used.
  • In the absence of malicious intent, bias in algorithmic systems is generally caused by:

    Insufficient understanding of the context that the system is part of. This includes a lack of understanding of who will be affected by the algorithmic decision outcomes, resulting in a failure to test how the system performs for specific groups, who are often minorities. Diversity in the development team can partially help to address this.
    Failure to rigorously map decision criteria. When people think of algorithmic decisions as being more 'objectively trustworthy' than human decisions, more often than not they are referring to the idea that algorithmic systems follow a clearly defined set of criteria with no 'hidden agenda'. The complexity of system development, however, can easily introduce 'hidden decision criteria', added as a quick fix during debugging or embedded within machine-learning training data.
    Failure to explicitly define and examine the justifications for the decision criteria. Given the context within which the system is used, are these justifications acceptable? For example, in a given context, is it OK to treat high correlation as evidence of causation? (See the sketch below.)
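
As a concrete illustration of the last two failure modes (hidden decision criteria, and correlation treated as causation), the following minimal Python sketch uses synthetic data; the 'postcode_score' feature, the group labels, and the 0.5 threshold are assumptions invented for illustration. A facially neutral criterion built on a proxy that merely correlates with group membership reproduces the group disparity:

    import random

    random.seed(0)

    # Synthetic population: 'postcode_score' never mentions group
    # membership but is strongly correlated with it by construction.
    population = []
    for _ in range(1000):
        group = random.choice(["a", "b"])
        postcode_score = random.gauss(0.7 if group == "a" else 0.3, 0.1)
        population.append((group, postcode_score))

    # A 'hidden' decision criterion: approve whenever the proxy clears
    # a threshold chosen for convenience during development.
    approved = [(g, s) for g, s in population if s > 0.5]

    def rate(g):
        n = sum(1 for gg, _ in population if gg == g)
        k = sum(1 for gg, _ in approved if gg == g)
        return k / n

    print("approval rate, group a:", round(rate("a"), 2))  # close to 1
    print("approval rate, group b:", round(rate("b"), 2))  # close to 0
    # The criterion is facially neutral, yet the correlated proxy
    # reproduces the group disparity: high correlation is not a
    # justification by itself.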
