Presentation at the European Big Data Value Forum on Fairness, Bias and the Role of Ethics Standards in Algorithmic Decision Making. Part of the Data & Society session.
3. SECTION 1
UnBias: Emancipating Users Against Algorithmic Biases for
a Trusted Digital Economy.
Mission: Develop co-designed recommendations for design,
regulation and education to mitigate unjustified bias in algorithmic
systems.
http://unbias.wp.horizon.ac.uk/
6. WP1: ‘Youth Juries’ deliberation process to capture experiences,
concerns and recommendations from teenage digital natives
Participants: N = 144; age 13-23 (average 15); gender: 67 female, 77 male.
7. Youth Juries: Concerns & Recommendations
Concerns:
• Security (over privacy)
• Privacy settings and location
• Lack of control over personal data sharing
• Online identity
• Lack of transparency
• Automatic decision making
Recommendations:
• Plug-ins to control the level of tracking
• More control over personal data
• Control over the level of personalisation
• More accessible T&Cs
• Accessible information about how algorithms rule the Web
• Engaging educational programmes
8. WP4: Multi-Stakeholder Workshop on fairness in relation to
algorithmic design and practice
• 30 participants from academia, education, NGOs, industry
• Four key case studies: fake news, personalisation, gaming the system, and
transparency
• What constitutes a fair algorithm?
• What kinds of (legal and ethical) responsibilities do Internet companies have
to ensure their algorithms produce results that are fair and without bias?
9. Participant recommendations
Criteria relating to social norms and values:
1. Sometimes disparate outcomes are acceptable if they are based on individual
lifestyle choices over which people have control.
2. Ethical precautions are more important than higher accuracy.
3. There needs to be a balancing of individual values and socio-cultural values.
Problem: how to weigh the relevant socio-cultural values?
Criteria relating to system reliability:
1. Results must be balanced with due regard for trustworthiness.
2. Need for independent system evaluation and monitoring over time.
Criteria relating to (non-)interference with user control: (next slide)
10. Criteria relating to (non-)interference with user control:
1. The subjective experience of fairness depends on user objectives at the time of
use, and therefore requires an ability to tune the data and algorithm.
2. Users should be able to limit data collection about them and its use. Inferred
personal data is still personal data. The meaning assigned to the data must be
justified to the user.
3. The functioning of the algorithm should be demonstrated/explained in a way that
can be understood by the data subject.
4. If not vital to the task, there should be an option to opt out of the algorithm.
5. Users must have the freedom to explore algorithm effects, even if this would
increase the ability to “game the system”.
6. Need for clear means of appeal/redress for the impact of the algorithmic system.
11. Preliminary recommendations
1. Societal impact assessment
   1. Pre-development/implementation
   2. Re-assess as soon as the service reaches a large user base
   3. Include collateral impact (e.g. indirect impact on non-users)
   • Risk assessment matrix
2. Algorithmic system development must include clear documentation of:
   1. Decision criteria that are used
   2. Justification of decision criteria
   3. Context limitations within which system behaviour has been assessed
12. SECTION 2
IEEE Global Initiative for Ethical Considerations in Artificial
Intelligence and Autonomous Systems.
14. IEEE-SA Standards Projects
• IEEE P7000: Model Process for Addressing Ethical Concerns During System Design
• IEEE P7001: Transparency of Autonomous Systems
• IEEE P7002: Data Privacy Process
• IEEE P7003: Algorithmic Bias Considerations
• IEEE P7004: Standard on Child and Student Data Governance
• IEEE P7005: Standard on Employer Data Governance
• IEEE P7006: Standard on Personal Data AI Agent Working Group
• IEEE P7007: Ontological Standard for Ethically Driven Robotics and Automation
Systems
• IEEE P7008: Standard for Ethically Driven Nudging for Robotic, Intelligent and
Autonomous Systems
• IEEE P7009: Standard for Fail-Safe Design of Autonomous and Semi-Autonomous
Systems
• IEEE P7010: Wellbeing Metrics Standard for Ethical Artificial Intelligence and
Autonomous Systems