Algorithmic Decision-Making:
Fairness, Bias and the Role of
Ethics Standards
Ansgar Koene
Senior Research Fellow
University of Nottingham
INDEX
1. UnBias project
2. IEEE Global Initiative for Ethical
Considerations in AI and AS
SECTION 1
UnBias: Emancipating Users Against Algorithmic Biases for
a Trusted Digital Economy.
Mission: Develop co-designed recommendations for design,
regulation and education to mitigate unjustified bias in algorithmic
systems.
http://unbias.wp.horizon.ac.uk/
Types and sources of Algorithmic Bias
Types and sources of Algorithmic Bias
WP1: ‘Youth Juries’ deliberation process to capture experiences,
concerns and recommendations from teen-aged digital natives
Participants
N: 144   Age: 13-23 (av. 15)   Gender: 67 F / 77 M
Youth Juries: Concerns & Recommendations
Concerns:
• Security (over privacy)
• Privacy settings and location
• Lack of control over personal data sharing
• Online identity
• Lack of transparency
• Automatic decision making
Recommendations:
• Plug-ins to control the level of tracking
• More control over personal data
• Control over the level of personalisation
• More accessible T&Cs
• Accessible information about how algorithms rule the Web
• Engaging educational programmes
• 30 participants from academia, education, NGOs, industry
• Four key case studies: fake news, personalisation, gaming the system, and
transparency
• What constitutes a fair algorithm?
• What (legal and ethical) responsibilities do Internet companies have to ensure
their algorithms produce results that are fair and unbiased?
WP4: Multi-Stakeholder Workshop on fairness in
relation to algorithmic design and practice
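One common way the workshop question "what constitutes a fair algorithm?" is formalised in practice is demographic parity, often checked with the four-fifths (80%) disparate-impact rule. The sketch below is a minimal illustration of that check; the data, function names, and threshold are invented for the example and are not part of the UnBias project's outputs.

```python
# Hypothetical illustration of demographic parity, checked via the
# four-fifths (80%) disparate-impact rule. Data below is invented.

def selection_rate(outcomes):
    """Fraction of positive decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Invented example: binary decisions for two demographic groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # selection rate 5/8 = 0.625
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # selection rate 3/8 = 0.375

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # prints 0.60
print("passes 80% rule:", ratio >= 0.8)        # prints False
```

Demographic parity is only one of several competing fairness definitions, which is precisely why the workshop treated "fairness" as contested rather than settled.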
Criteria relating to social norms and values:
1. Disparate outcomes are sometimes acceptable if based on individual lifestyle
choices over which people have control.
2. Ethical precautions are more important than higher accuracy.
3. There needs to be a balancing of individual values and socio-cultural values.
Problem: how to weigh the relevant socio-cultural values?
Criteria relating to system reliability:
1. Results must be balanced with due regard for trustworthiness.
2. Need for independent system evaluation and monitoring over time.
Criteria relating to (non-)interference with user control: (next slide)
Participant recommendations
1. The subjective experience of fairness depends on the user's objectives at the time
of use, and therefore requires the ability to tune the data and the algorithm.
2. Users should be able to limit data collection about them and its use. Inferred personal
data is still personal data. Meaning assigned to the data must be justified towards the
user.
3. Functioning of algorithm should be demonstrated/explained in a way that can be
understood by the data subject.
4. If not vital to the task, there should be an option to opt out of the algorithm.
5. Users must have the freedom to explore algorithm effects, even if this would increase
the ability to "game the system".
6. Need for clear means of appeal/redress for impact of the algorithmic system.
Criteria relating to (non-)interference with user control:
1. Societal impact assessment
  1. Pre-development/implementation
  2. Re-assess as soon as the service reaches a large user base
  3. Include collateral impact (e.g. indirect impact on non-users)
  • Risk assessment matrix
2. Algorithmic system development must include clear documentation of:
  1. Decision criteria that are used
  2. Justification of the decision criteria
  3. Context limitations within which system behaviour has been assessed
Preliminary recommendations.
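The documentation requirements in the recommendations above could be captured in a simple machine-readable record. The sketch below is illustrative only: the class, field names, and example values are invented for this example and are not drawn from the UnBias recommendations or any standard.

```python
# Illustrative record covering the three documentation points above:
# decision criteria, their justification, and assessed context limits.
from dataclasses import dataclass

@dataclass
class AlgorithmDocumentation:
    """Hypothetical documentation record; field names are invented."""
    decision_criteria: list        # criteria the system actually uses
    criteria_justification: dict   # criterion -> rationale for including it
    assessed_contexts: list        # contexts in which behaviour was assessed

doc = AlgorithmDocumentation(
    decision_criteria=["repayment_history", "income_stability"],
    criteria_justification={
        "repayment_history": "directly predictive of the decision target",
        "income_stability": "reviewed for proxy effects before inclusion",
    },
    assessed_contexts=["UK retail lending, 2017 pilot data"],
)

# A context absent from assessed_contexts flags the limits of the assessment.
print("assessed for 'US mortgages'?",
      "US mortgages" in doc.assessed_contexts)  # prints False
```

Keeping such a record alongside the system would let the independent evaluation and re-assessment called for earlier check the documentation against the deployed behaviour.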
SECTION 2
IEEE Global Initiative for Ethical Considerations in Artificial
Intelligence and Autonomous Systems.
• IEEE P7000: Model Process for Addressing Ethical Concerns During System Design
• IEEE P7001: Transparency of Autonomous Systems
• IEEE P7002: Data Privacy Process
• IEEE P7003: Algorithmic Bias Considerations
• IEEE P7004: Standard on Child and Student Data Governance
• IEEE P7005: Standard on Employer Data Governance
• IEEE P7006: Standard on Personal Data AI Agent Working Group
• IEEE P7007: Ontological Standard for Ethically Driven Robotics and Automation
Systems
• IEEE P7008: Standard for Ethically Driven Nudging for Robotic, Intelligent and
Autonomous Systems
• IEEE P7009: Standard for Fail-Safe Design of Autonomous and Semi-Autonomous
Systems
• IEEE P7010: Wellbeing Metrics Standard for Ethical Artificial Intelligence and
Autonomous Systems
IEEE-SA Standards Projects
Open invitation to join the P7003 working group
http://sites.ieee.org/sagroups-7003/
THANK YOU
