Presentation outlining the stakeholder engagement activities of the UnBias project, including a case-study-driven debate with participants at the Winchester TRILcon conference on May 3rd 2017
2. UnBias: Emancipating Users Against Algorithmic
Biases for a Trusted Digital Economy
Mission: Develop co-designed recommendations for design, regulation and
education to mitigate unjustified bias in algorithmic systems.
Aim: A ‘fairness toolkit’ consisting of three co-designed tools
◦ a consciousness raising tool for young internet users to help them understand
online environments;
◦ an empowerment tool to help users navigate through online environments;
◦ an empathy tool for online providers and other stakeholders to help them
understand the concerns and rights of young internet users.
3. UnBias: work-packages
WP1 [led by Nottingham] ‘Youth Juries’ with 13-17 year old “digital natives” to
co-produce citizen education materials on properties of information
filtering/recommendation algorithms;
WP2 [led by Edinburgh] co-design workshops/hackathons & double-blind testing
to produce user-friendly open source tools for benchmarking and visualizing
biases in filtering/recommendation algorithms;
WP3 [led by Oxford] user observation & interviews to identify design
requirements for filtering/recommender algorithms that satisfy subjective
criteria of bias avoidance;
WP4 [co-led by Oxford and Nottingham] stakeholder engagement to develop
policy briefs for information and education governance;
4. Workshop outline
14:05-14:15 – Introduction by Ansgar (including task description)
14:15-14:40 – Group discussion of case study
14:40-15:10 – Groups report back, plenary discussion with summary of key points by Helena
15:10-15:15 – Overview of further activities for WP4 by Menisha
15:15 – 15:20 – Overview of preliminary findings from WP1 by Virginia
6. Stakeholder Engagement Workshops
Fairness in relation to algorithmic design and practice
What constitutes a fair algorithm?
What kinds of (legal and ethical) responsibilities do Internet
companies have to ensure their algorithms produce results that are
fair and without bias?
What factors might serve to enhance users’ awareness of, and trust
in, the role of algorithms in their online experience?
How might concepts of fairness be built into algorithmic design?
7. First Multi-Stakeholder Workshop,
Digital Catapult, London. 3rd Feb 2017
Multiple stakeholders: academia, education, NGOs, enterprises
30 participants
Four key case studies: fake news, personalisation, gaming the system, and
transparency
8. The Conceptualisation of Fairness
“a context-dependent evaluation of the algorithm processes and/or
outcomes against socio-cultural values. Typical examples might
include evaluating: the disparity between best and worst outcomes;
the sum-total of outcomes; worst-case scenarios; everyone is
treated/processed equally without prejudice or advantage due to
task-irrelevant factors”.
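To make these example evaluations concrete, here is a minimal Python sketch (an illustration only, assuming each user's outcome can be summarised as a single numeric benefit score, which the definition itself does not require):

```python
# Minimal sketch of the example fairness evaluations named in the
# definition above. Assumes each user's outcome is a single number
# (a hypothetical per-user benefit score) -- a simplification.

def outcome_disparity(outcomes):
    """Disparity between the best and worst outcomes."""
    return max(outcomes) - min(outcomes)

def total_welfare(outcomes):
    """Sum-total of outcomes (a utilitarian evaluation)."""
    return sum(outcomes)

def worst_case(outcomes):
    """Worst-case scenario (judge the system by its worst-off user)."""
    return min(outcomes)

outcomes = [90, 70, 20]  # hypothetical benefit scores for three users
print(outcome_disparity(outcomes))  # 70 -> large gap between users
print(total_welfare(outcomes))      # 180 -> high total benefit
print(worst_case(outcomes))         # 20 -> poor outcome for worst-off
```

Which of these evaluations should carry the most weight is precisely the context-dependent, socio-cultural judgement the definition points to: the same outcomes can score well on total welfare while scoring poorly on disparity and worst case.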
9. Transparency Discussion
Difference between transparency and
meaningful transparency
Transparency of an algorithmic process does not
overcome bias if the data the algorithm works on
is biased.
Wariness that transparency can lead to ‘gaming
the system’
10. Transparency Solutions
1) An intermediate organisation to audit and analyse algorithms,
trusted by end users to deliver opinion about what the
operation of an algorithm means in practice
2) Certification for algorithms – showing agreement to submit
to regular testing, guarantees of good practice etc.
3) Voluntary certificates of fairness, e.g. on the model of Fairtrade.
11. Summary Points
1) the ‘problem’ of potential bias and unfairness in algorithmic practice is broad
in scope and has the potential to disproportionately affect vulnerable users;
2) the problem is also nuanced as the presence of algorithms on online
platforms can be of great benefit to assist users in achieving their goals;
3) finding solutions to the problem is highly complex.
13. Youth Juries
Format: Stimulus → Discussion → Recommendations
Topics: Personalisation algorithms; Search Results & News; Transparency & Regulation
Participants: N = 144, age 13-23 (av. 15), gender 67 F / 77 M
14. Pre-session survey results
No clear preference between a more personalised and a more neutral internet
experience: A.: 53% More personalised, 47% More neutral
Lack of awareness about the way search engines rank information, but jurors
believe it's important for people to know:
• How much do you know? – A.: 36% Not much, 58% A little, 6% Quite a lot
• Do you think it's important to know? – A.: 62% Yes, 16% Not really, 22% Don't know
Regulation role: Who makes sure that the Internet and digital world is safe and
neutral? A.: 4% Police, 23% Nobody, 29% Government, 44% The big tech companies
15. Post-session survey results
Q.: Did you learn anything new today about:
• Algorithm fairness? A.: 68% Yes, a lot; 31% Yes, a little; 1% No
• How the Internet affects your life? A.: 55% Yes, a lot; 39% Yes, a little; 5% No; 1% Don't know
“Social media sites should not influence the information shown to their users”
A.: 49% Agree, 25% Neutral, 20% Disagree, 5% NR
“When I use digital technologies I would like to have more control of what happens to
me and my personal data” A.: 82% Agree, 7% Neutral, 8% Disagree, 3% NR
“It should be made easier for people to remove digital content about themselves and
recreate their online profiles” A.: 74% Agree, 18% Neutral, 4% Disagree, 4% NR
“The big tech companies are accountable enough to users of digital technologies”
A.: 42% Agree, 35% Neutral, 16% Disagree, 7% NR
16. Attitudinal change
Increase in participants’ confidence as social actors:
I can influence the way that digital technologies work for young people
Before: 4% Yes, a lot; 52% Yes, a little; 44% No
After: 32% Agree, 36% Neutral, 27% No, 5% NR
13-24 year olds should influence how digital technologies and services are run
Before: 42% Yes, 24% Not bothered, 28% Don’t know, 6% No
After: 58% Agree, 27% Neutral, 10% No, 5% NR
17. Summary
Nearly all participants expressed learning something new about
algorithm fairness (99%) and how the Internet affects their lives (94%)
No preference regarding personalisation vs neutrality, but 50% agreed
social media should not influence the information shown to their users
Nearly half of the jurors (44%) believe the big tech companies are
responsible for keeping the Internet and digital world safe and neutral
Young people would like to have more control over their online lives
(82%) and should influence how digital technologies and services are run
(58%)
Editor's Notes
An important aspect of the UnBias project is stakeholder engagement, largely planned in workshop formats such as these.
I'd like to tell you about the first workshop, which we held recently.
This is so you can get an initial idea of the kinds of issues that have been raised by our multi-stakeholder group, and how they may align with some of the discussion we have had here today.
As part of the stakeholder engagement work in UnBias, we hope to undertake a range of workshops to investigate issues of fairness in relation to algorithmic design and practice.
The aim of these workshops is to bring together individuals from a range of professional backgrounds who are likely to have differing perspectives on this.
The hope is that the workshops are opportunities for stakeholders to share perspectives and seek answers to key project questions such as:
Our first stakeholder workshop was held on February 3rd 2017, at the Digital Catapult in London.
The first workshop brought together participants from academia, education, NGOs and enterprises. We were fortunate to have 30 participants on the day, which was a great turnout.
The workshop itself focused on four case studies, each chosen because it concerned a key current debate surrounding the use of algorithms and fairness.
The case studies centred on: fake news, personalisation, gaming the system, and transparency.
Before I go on to describe some of the key outcomes in relation to one of these case studies, it is worth touching on the conceptualisation of fairness that was used to underpin discussion during the workshop.
In a pre-workshop questionnaire, most stakeholders rated this definition as a good or reasonable starting point.
It is crucial that we see this as a starting point: only through discussion with stakeholders, and the other engagement work being done through this project and elsewhere, can we begin to grasp what fairness means in this context, and the reasoning behind it.
This grounded understanding is one of the aims of the UnBias project.
As mentioned, there were four case studies, one of which related to transparency and fairness in relation to algorithms.
I'll just briefly touch on some of the key points raised in this discussion:
Stakeholders raised a key concern about the difference between transparency and meaningful transparency: in making algorithms transparent, posting source code is not real transparency if most people are unable to understand it. Transparency has to be meaningful, so that it can be understood by everyone. How this can be accomplished was debated.
Transparency of an algorithmic process does not overcome bias if the data the algorithm works on is biased. So, there were concerns surrounding the data sets that are used to train algorithms, and how these can then engender biased algorithms.
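To illustrate this point (a toy example of my own, not one from the workshop): even a fully transparent, trivially simple ranking algorithm reproduces whatever skew is present in its input data, as in this minimal Python sketch with a hypothetical click log:

```python
# Toy illustration: a fully transparent algorithm still produces
# skewed output when its input data is skewed. The "algorithm" is
# just a most-popular-first ranking -- nothing about it is hidden.
from collections import Counter

# Hypothetical click log in which one story already dominates,
# e.g. because earlier recommendations favoured it.
click_log = ["story_A"] * 80 + ["story_B"] * 15 + ["story_C"] * 5

def rank_by_popularity(log):
    """Completely transparent ranking: sort items by click count."""
    return [item for item, _ in Counter(log).most_common()]

print(rank_by_popularity(click_log))  # ['story_A', 'story_B', 'story_C']
# Reading this source code tells a user exactly how the ranking works,
# but nothing about the skew already baked into the log it consumes.
```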
There was wariness that transparency can lead to ‘gaming the system’: people may deliberately enter certain criteria to elicit outcomes that they would like.
As well as considering and discussing issues in relation to transparency and algorithmic fairness, participants were asked to consider potential solutions for facilitating meaningful transparency:
There was the suggestion of an intermediate organisation, some kind of impartial body, that could audit and analyse algorithms and then inform end users about how fair an algorithm is in practice.
Related to the point above, there was the suggestion of certifying algorithms, again a way to provide some verification of an algorithm's fairness without users having to understand its intricacies.
And finally, organisations using algorithms could apply for voluntary certificates of fairness, on the model of Fairtrade, as a marker of their good practice.
Solutions tended away from explaining the technical workings of algorithms to the general public, and towards having trusted bodies/institutions vet algorithms on behalf of end users and provide a marker of good practice where this was evident.
Discussion of the four case studies raised a number of recurrent key points that were returned to in the plenary discussion that closed the workshop.
Debate amongst the participants was very productive and highlighted that:
1) the ‘problem’ of potential bias and unfairness in algorithmic practice is broad in scope, has the potential to disproportionately affect vulnerable users (such as children), and may be worsened by the current absence of effective regulation of the use of algorithms;
2) the problem is also nuanced, as the presence of algorithms on online platforms can be of great benefit in assisting users to achieve their goals, so it is important to avoid any implication that their role is always harmful;
3) finding solutions to the problem is highly complex, involving multiple stakeholders with varying perspectives on what fairness means, and other tensions, such as that between openness and commercial interests.
YJ format: youth-led focus groups where, given a stimulus, participants express their views, discuss, and suggest recommendations.
Participants: jurors
Format of the YJ (2-hour session with a 10-minute break):
- Pre-session questionnaire
- Introductions, ice-breaking question: “What do you use the internet for most?”
- Stimulus: 3 topics in this 2nd wave:
Personalisation algorithms: filter bubbles, privacy, personal data (what do companies do with your data?), T&Cs, echo chambers
Search Results & News: autocomplete – gaming the system, discrimination?; news feeds & fake news
10-min break
Transparency & Regulation: What do participants want to know? What kind of transparency, if any, do they want from algorithms? What should be regulated, and who should regulate?
- Recommendations – post tips
- Post-session questionnaire
We are particularly interested in the ATTITUDINAL CHANGE throughout the session.
Would you like your internet experience to be more personalised or more neutral?
Do you know about the way search engines rank the info?
Comparison of similar questions on pre- vs. post- questionnaires
2 – 50% Agree, 25% Neutral, 20% Disagree, 5% NR
3 – 44% Big tech companies, 29% Government, 23% Nobody, 4% Police
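As a minimal sketch of such a pre/post comparison (the response categories and percentages below are illustrative placeholders, not the study's data):

```python
# Minimal sketch of a pre- vs. post-session attitude comparison.
# The categories and percentages are illustrative placeholders only.

pre  = {"Agree": 42, "Neutral": 30, "Disagree": 28}   # pre-session %
post = {"Agree": 58, "Neutral": 27, "Disagree": 15}   # post-session %

# Report the shift in each response category in percentage points.
for option in pre:
    shift = post[option] - pre[option]
    print(f"{option}: {pre[option]}% -> {post[option]}% ({shift:+d} pts)")
```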
Ongoing work…
Qualitative data analysis: deliberation process and recommendations
Surveys’ efficacy shown by raising awareness and
Next…
Survey data analysis by participants’ age, gender, and session location
Third wave of UnBias Youth Juries, Jan.-Feb. 2018