Algorithmically Mediated Online Information Access workshop at WebSci17

This was a half-day UnBias project workshop at the WebSci'17 conference presenting some of the interim UnBias project results and engaging the audience in debate on issues related to the role of algorithms in mediated access to online information.


  1. Algorithm Mediated Online Information Access: user trust, transparency, control and responsibility 25TH JUNE 2017 WEBSCI’17
  2. Workshop outline
     9:00 – 9:10 Introduction
     9:10 – 9:30 Overview of the ongoing UnBias project: Youth Juries deliberations, user observation studies, stakeholder engagement workshops
     10:10 – 10:30 “Platforms: Do we trust them”, by Rene Arnold
     10:30 – 10:50 “IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems”, by John Havens
     10:50 – 11:10 Break
     11:10 – 11:50 Case study 1: News recommendation & Fake News
     11:50 – 12:30 Case study 2: Business models, CSR/RRI: a role for WebScience
     12:30 – 12:50 Break
     12:50 – 13:30 Case study 3: Algorithmic discrimination
     13:30 – 14:00 Summary of outcomes
  3. UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy. User experience of algorithm-driven internet platforms and the processes of algorithm design.
  4. UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy
     Mission: Develop co-designed recommendations for design, regulation and education to mitigate unjustified bias in algorithmic systems.
     Aim: A ‘fairness toolkit’ consisting of three co-designed tools:
     ◦ a consciousness-raising tool for young internet users to help them understand online environments;
     ◦ an empowerment tool to help users navigate through online environments;
     ◦ an empathy tool for online providers and other stakeholders to help them understand the concerns and rights of internet users.
  5. Project team and institutions
     University of Nottingham: Derek McAuley, Ansgar Koene, Elvira Perez Vallejos, Virginia Portillo, Monica Cano Gomez, Liz Douthwaite
     University of Edinburgh: Michael Rovatsos, Sofia Ceppi
     University of Oxford: Marina Jirotka, Helena Webb, Menisha Patel
     Giles Lane (Proboscis)
     http://unbias.wp.horizon.ac.uk/
  6. UnBias: work-packages
     WP1 [led by Nottingham]: ‘Youth Juries’ with 13-17 year old “digital natives” to co-produce citizen education materials on properties of information filtering/recommendation algorithms;
     WP2 [led by Edinburgh]: co-design workshops/hackathons & double-blind testing to produce user-friendly open source tools for benchmarking and visualising biases in filtering/recommendation algorithms;
     WP3 [led by Oxford]: user observation & interviews to explore human-algorithm interaction;
     WP4 [co-led by Oxford and Nottingham]: stakeholder engagement involving informed professionals from key groups to discuss issues of algorithmic bias and fairness. Workshop outcomes to inform project recommendations and outputs.
  7. http://unbias.wp.horizon.ac.uk/
  8. WP1: Youth Juries
     Process: Stimulus, Discussion, Recommendations
     Topics: Personalisation algorithms; Search Results & News; Transparency & Regulation
     Participants: N = 144, aged 13-23 (av. 15), 67 female / 77 male
  9. Pre-session survey results
     No clear preference between a more personalised or a more neutral internet experience. A.: 53% More personalised, 47% More neutral
     Lack of awareness about the way search engines rank information, but jurors believe it’s important for people to know:
     • How much do you know? A.: 36% Not much, 58% A little, 6% Quite a lot
     • Do you think it’s important to know? A.: 62% Yes, 16% Not really, 22% Don’t know
     Regulation role: Who makes sure that the Internet and digital world is safe and neutral? A.: 4% Police, 23% Nobody, 29% Government, 44% The big tech companies
  10. Post-session survey results
     Q.: Did you learn anything new today about:
     • Algorithm fairness? A.: 68% Yes, a lot; 31% Yes, a little; 1% No
     • How the Internet affects your life? A.: 55% Yes, a lot; 39% Yes, a little; 5% No; 1% Don’t know
     “Social media sites should not influence the information to their users” A.: 49% Agree, 25% Neutral, 20% Disagree, 5% No Response
     “When I use digital technologies I would like to have more control of what happens to me and my personal data” A.: 82% Agree, 7% Neutral, 8% Disagree, 3% NR
     “It should be made easier for people to remove digital content about themselves and recreate their online profiles” A.: 74% Agree, 18% Neutral, 4% Disagree, 4% NR
     “The big tech companies are accountable enough to users of digital technologies” A.: 42% Agree, 35% Neutral, 16% Disagree, 7% NR
  11. Attitudinal change: increase in participants’ confidence as social actors
     “I can influence the way that digital technologies work for young people” Before: 4% Yes, a lot; 52% Yes, a little; 44% No. After: 32% Agree, 36% Neutral, 27% No, 5% NR
     “13-24 year olds should influence how digital technologies and services are run” Before: 42% Yes, 24% Not bothered, 28% Don’t know, 6% No. After: 58% Agree, 27% Neutral, 10% No, 5% NR
  12. Summary
     Nearly all participants expressed learning something new about algorithm fairness (99%) and how the Internet affects their lives (94%)
     No clear preference regarding personalisation vs neutrality, but 50% agreed social media should not influence the information to their users
     Nearly half of the jurors (44%) believe the big tech companies are responsible for keeping the Internet and digital world safe and neutral
     Young people would like to have more control over their online lives (82%) and should influence how digital technologies and services are run (58%)
  13. WP2: Algorithm ‘fairness’
  14. Evaluating fairness from outputs only (chart: outcomes ranked from most preferred to least preferred)
  15. Evaluating fairness with knowledge about the algorithm decision principles (chart: algorithms ranked from most preferred to least preferred)
     A1: minimise total distance while guaranteeing at least 70% of maximum possible utility
     A2: maximise the minimum individual student utility while guaranteeing at least 70% of maximum possible total utility
     A3: maximise total utility
     A4: maximise the minimum individual student utility
     A5: minimise total distance
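The five decision principles can be expressed as objective functions over candidate allocations. The sketch below is only an illustration of the idea, not the project's actual implementation: the utility and distance numbers are made up, and the allocation task is reduced to a one-to-one matching solved by brute force.

```python
from itertools import permutations

# Hypothetical data: utility[s][r] is the utility student s gets from resource r,
# distance[s][r] is the distance student s must travel to use resource r.
utility  = [[8, 3, 5], [4, 7, 2], [6, 1, 9]]
distance = [[1, 4, 2], [3, 1, 5], [2, 6, 1]]

def total_utility(alloc):
    return sum(utility[s][r] for s, r in enumerate(alloc))

def min_utility(alloc):
    return min(utility[s][r] for s, r in enumerate(alloc))

def total_distance(alloc):
    return sum(distance[s][r] for s, r in enumerate(alloc))

# Each candidate allocation gives every student exactly one resource.
allocations = list(permutations(range(3)))
max_total = max(total_utility(a) for a in allocations)

# A1/A2 only consider allocations achieving at least 70% of max total utility.
feasible = [a for a in allocations if total_utility(a) >= 0.7 * max_total]

a1 = min(feasible, key=total_distance)      # A1: min distance, 70% utility floor
a2 = max(feasible, key=min_utility)         # A2: max-min utility, 70% utility floor
a3 = max(allocations, key=total_utility)    # A3: max total utility
a4 = max(allocations, key=min_utility)      # A4: max-min utility
a5 = min(allocations, key=total_distance)   # A5: min total distance
```

The same data can rank very differently under each principle, which is exactly what made the workshop task a useful prompt for discussing fairness.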
  16. WP3: User behaviour
  17. WP4: First Multi-Stakeholder Workshop, 3rd Feb 2017
     Multiple stakeholders: academia, education, NGOs, enterprises; 30 participants
     Fairness in relation to algorithmic design and practice
     Four key case studies: fake news, personalisation, gaming the system, and transparency
     What constitutes a fair algorithm? What kinds of (legal and ethical) responsibilities do Internet companies have to ensure their algorithms produce results that are fair and without bias?
  18. The Conceptualisation of Fairness: “a context-dependent evaluation of the algorithm processes and/or outcomes against socio-cultural values. Typical examples might include evaluating: the disparity between best and worst outcomes; the sum-total of outcomes; worst-case scenarios; everyone is treated/processed equally without prejudice or advantage due to task-irrelevant factors”.
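The example evaluations in this working definition translate directly into simple quantitative metrics. A minimal sketch, using hypothetical outcome scores for the individuals affected by an algorithmic decision:

```python
# Hypothetical outcome scores (0 = worst possible, 1 = best possible) for five
# individuals affected by some algorithmic decision.
outcomes = [0.9, 0.6, 0.4, 0.8, 0.3]

disparity  = max(outcomes) - min(outcomes)  # gap between best and worst outcomes
sum_total  = sum(outcomes)                  # sum-total of outcomes
worst_case = min(outcomes)                  # worst-case scenario
```

Which of these metrics matters, and how they should be weighed against each other, is precisely the context-dependent part of the definition.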
  19. Criteria relating to social norms and values:
     (i) Sometimes disparate outcomes are acceptable if based on individual lifestyle choices over which people have control.
     (ii) Ethical precautions are more important than higher accuracy.
     (iii) There needs to be a balancing of individual values and socio-cultural values.
     Problem: How to weigh relevant socio-cultural values?
  20. Criteria relating to system reliability:
     (i) Results must be balanced with due regard for trustworthiness.
     (ii) Need for independent system evaluation and monitoring over time.
  21. Criteria relating to (non-)interference with user control:
     (i) Subjective fairness experience depends on user objectives at time of use, and therefore requires an ability to tune the data and algorithm.
     (ii) Users should be able to limit data collection about them and its use. Inferred personal data is still personal data. Meaning assigned to the data must be justified towards the user.
     (iii) Reasoning/behaviour of the algorithm should be demonstrated/explained in a way that can be understood by the data subject.
     (iv) If not vital to the task, there should be an opt-out of the algorithm.
     (v) Users must have freedom to explore algorithm effects, even if this would increase the ability to “game the system”.
     (vi) Need for clear means of appeal/redress for impact of the algorithmic system.
  22. Transparency Discussion
     Difference between transparency and meaningful transparency
     Transparency of an algorithmic process does not overcome bias if the data the algorithm works on is biased.
     Wariness that transparency can lead to ‘gaming the system’
  23. Transparency Solutions
     1) An intermediate organisation to audit and analyse algorithms, trusted by end users to deliver opinion about what the operation of an algorithm means in practice
     2) Certification for algorithms – showing agreement to submit to regular testing, guarantees of good practice etc.
     3) Voluntary certificates of fairness etc., e.g. Fairtrade.
  24. WP4: Second Multi-Stakeholder Workshop, June 2017
     25 participants: academia, law, enterprise, NGOs
     Fair resource allocation task
     Empathy tool for developers of algorithms
  25. Key points
     Discussion of algorithms inevitably leads to discussion of the context in which the algorithm will be applied
     Discussion of the preferred or ‘best’ algorithm for allocation inevitably leads to moral debates over fairness, equality and what individuals ‘deserve’
     Empathy can be a positive as it promotes understanding and may be followed by positive action. Empathy from developers of algorithms towards users might entail respect for users’ time, attention, content and contact with platforms.
     However, empathy cannot be guaranteed to lead to positive action; it might make users more vulnerable to manipulation
     Is empathy compatible with a business model based on advertising? (cf. Facebook targeting of vulnerable teenagers.)
  26. Case Study 1: The role of recommender algorithms in online hoaxes and fake news
  27. Questions to consider
     1. How easy is it for a) individual users and b) groups of users to influence the order of responses in a web search?
     2. How could search engines weight their search results towards more authoritative results ahead of more popular ones? Should they?
     3. To what extent should web search platforms manually manipulate their own algorithms, and in what instances? NB Google has made a number of adjustments re anti-Semitism etc. and places a suicide help line at the top of searches about how to kill oneself.
     4. To what extent should public opinion influence the ways in which platforms design and adjust their autocomplete and search algorithms?
     5. What other features should and should not have a role in influencing the design of autocomplete and search algorithms?
  28. Case Study 2: Business models: how can WebScience boost CSR/RRI?
     Responsible Research and Innovation (RRI) emerged from concerns about the increasingly potent and transformative potential of research and innovation, and the societal and ethical implications that these may engender. A responsible approach to the design, development and appropriation of technologies through the lens of RRI entails multi-stakeholder involvement through the processes and outcomes of research and innovation.
     YouTube ad delivery algorithm controversy
  29. Questions to consider:
  30. Case Study 3: Unintended algorithmic discrimination online – towards detection and prevention
     Personalisation can be very helpful. However, there are concerns:
     1. The creation of online echo chambers or filter bubbles.
     2. The results of personalisation algorithms may be inaccurate and even discriminatory.
     3. Personalisation algorithms function to collate and act on information collected about the online user.
     4. Algorithms typically lack transparency.
  31. Questions to consider:
     1. What is your response to this comment from Mark Zuckerberg to explain the value of personalisation on the platform? “A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa”
     2. What (legal or ethical) responsibilities do internet platforms have to ensure their personalisation algorithms are a) not inaccurate or discriminatory and b) transparent?
     3. To what extent should users be able to determine how much or how little personal data internet platforms collect about them?
     4. To what extent would algorithmic transparency help to address concerns raised about the negative impacts of personalisation algorithms?
     5. Is there any point to algorithmic transparency? What might be some useful alternatives?
  32. UnBias workshop report available at http://unbias.wp.horizon.ac.uk/
  33. Summary Points
     1) the ‘problem’ of potential bias and unfairness in algorithmic practice is broad in scope and has the potential to disproportionately affect vulnerable users;
     2) the problem is also nuanced, as the presence of algorithms on online platforms can be of great benefit to assist users in achieving their goals;
     3) finding solutions to the problem is highly complex.

Editor’s Notes

  • Background to the project – rising concern over the ubiquity of algorithms in our online experience
  • YJ Format: youth-led focus groups where, given a stimulus, participants express their views, discuss and suggest recommendations.
    Participants: jurors

    Format of the YJ (2hr session with a 10 min break):

    -Pre session questionnaire
    -Introductions, ice-breaking question: “What do you use the internet for most?”
    -Stimulus: 3 Topics in this 2nd wave:
    Personalisation algorithms: Filter bubble, Privacy, personal data (what do companies do with your data?), T&C, Echo chambers
    Search Results & News: Autocomplete - gaming the system, discrimination?; News feeds & Fake News.
    10 MIN Break
    Transparency & Regulation: What do participants want to know? What kind of transparency do they want from the algorithms, if any? What and who should be regulating?

    Recommendations – post tips

    Post-session questionnaire

    We are particularly interested in the ATTITUDINAL CHANGE throughout the session
  • Would you like your internet experience to be more personalised or more neutral?
    Do you know about the way search engines rank the info?

  • Comparison of similar questions on pre- vs. post- questionnaires
  • 2- 50% Agree, 25% Neutral, 20% Disagree, 5% NR
    3- 44% Big tech, 29% Government, 23% Nobody, 4% Police
    Ongoing work…
    Qualitative data analysis: deliberation process and recommendations
    Surveys’ efficacy shown by raising awareness and
    Survey’s data analysis by participants’ age, gender, session location
    Third wave of UnBias Youth Juries Jan.-Febr. 2018
  • This WP aims to develop a methodology and the necessary IT and techniques for revealing the impact of algorithmic biases in personalisation-based platforms to non-experts (e.g. youths), and for co-developing “fairer” algorithms in close collaboration with specialists and non-expert users.

    In Year 1, Sofia and Michael have been running a task that asks participants to make task allocation decisions. In a situation in which resources are limited, different algorithms might be used to determine who receives what. Participants are asked to determine which algorithm is best suited to make the allocation, and this inevitably brings up issues of fairness. Discussion reveals different models of fairness. These findings will be put towards further work on the processes of algorithm design and the possibility of developing a fair algorithm.
  • Data collection is ongoing and involves participants working together in pairs to undertake some simple browsing experiments that require them to interact with algorithms. For instance to search for some information online, find a recommendation for a good restaurant or find a piece of fake news.

    These activities are video recorded and screen capture software is used to record their browsing activities.

    We are attempting to identify how users interact with a particular algorithm and how this shapes their browsing behaviour.

    For instance, when users conduct a search, how far down the results do they look before selecting one? What filters do they apply when browsing on a recommendation platform? How do they incorporate the autocomplete suggestions of the search bar into their activities?

    We are seeking to identify moments at which user browsing behaviour is shaped through interaction with the algorithm. Can we pinpoint exact moments in which action is consolidated, altered or transformed through this interaction?

    This data collection and analysis is ongoing. In future work we will also conduct disruption experiments in which we ask participants to conduct similar tasks but put them in a situation in which their normal experience is changed. For instance, they conduct searches within a Google profile that is very different to their own identity in terms of location, demographics etc., so they receive results that appear strange. Or they might use a recommendation platform that appears familiar but gives them unexpected or irrelevant results. These sessions will also be recorded to identify how participants respond to an unusual browsing and algorithmic experience.
  • Our first stakeholder workshop was held on February 3rd 2017, at the Digital Catapult in London.

    The first workshop brought together participants from academia, education, NGOs and enterprises. We were fortunate to have 30 participants on the day, which was a great turnout.

    The workshop itself focused on four case studies each chosen as it concerned a key current debate surrounding the use of algorithms and fairness.

    The case studies centred around: fake news, personalisation, gaming the system, and transparency.
  • Before I go on to describe some of the key outcomes in relation to one of these case studies, it is worth touching on the conceptualization of fairness that was used to underpin discussion during the workshop.

    This definition was rated by most stakeholders in a pre-workshop questionnaire as a good or reasonable starting point.

    It’s crucial that we see this as a starting point, as it is only with discussion from stakeholders and the other engagement work being done through this project and elsewhere that we can begin to grasp what fairness means in this context, and the reasoning behind this.

    And this grounded understanding is one of the aims of the UnBias project.

  • As mentioned there were four case studies, of which one was related to transparency and fairness in relation to algorithms.

    I’ll just briefly touch on some of the key points raised in this discussion:

    Stakeholders raised a key concern in relation to the difference between transparency and meaningful transparency, i.e. in making algorithms transparent, posting source code is not real transparency if most people are unable to understand it. Transparency has to be meaningful so that it can be understood by everyone. How this can be accomplished was debated.

    Transparency of an algorithmic process does not overcome bias if the data the algorithm works on is biased. So, there were concerns surrounding the data sets that are used to train algorithms, and how they may then engender biased algorithms.

    Wariness that transparency can lead to ‘gaming the system’; so people may deliberately enter certain criteria to elicit outcomes that they would like
  • As well as considering and discussing issues in relation to transparency and algorithmic fairness, participants were asked to consider what potential solutions to facilitate meaningful transparency may be:

    There was the suggestion of an intermediate organisation, some kind of impartial body, that could audit and analyse algorithms and then inform end users about how fair the algorithm is.

    In relation to the point above, there was the suggestion of certifying algorithms… again a way to provide some verification of the fairness of the algorithm without users having to understand its intricacies

    And finally, for organisations using algorithms to apply for certificates of fairness, akin to Fairtrade, and this would be a marker of their good practice.

    Solutions tended away from explaining the technical workings of algorithms on a general basis, and towards getting trusted bodies/institutions to somehow vet algorithms on behalf of end users and provide a marker to suggest good practice if this was evident.
  • 4 December 2016: Journalist Carole Cadwalladr reports in The Observer that when typing ‘are jews’ into a Google search bar the platform autocompletes the search request with suggestions including ‘are Jews evil’. Selecting this query leads to a page of responses; 9 out of the 10 first responses are from anti-Semitic sites (e.g. Stormfront) that ‘confirm’ that Jews are evil. Cadwalladr reports that searches beginning ‘are women’ and ‘are muslims’ also autocomplete with suggestions for ‘…evil’ with the results of searches listing websites confirming they are evil.
    Cadwalladr suggests that some users in the alt-right community are able to ‘game’ Google’s PageRank algorithm in order to drive traffic to anti-Semitic sites and reinforce bigotry.
    5 December 2016 A Google spokesperson states: “We took action within hours of being notified on Friday of the autocomplete result. Our search results are a reflection of the content across the web. This means that sometimes unpleasant portrayals of sensitive subject matter online can affect what search results appear for a given query.” Reports suggest that the auto complete terms mentioned by Cadwalladr relating to women and Jews have been removed but not those relating to Muslims. Search results have not been altered.
    11 December 2016 Carole Cadwalladr reports that typing ‘did the hol’ into a Google search bar leads to an auto-complete suggestion ‘did the Holocaust happen’ and that the first item returned from this search is an article from Stormfront ‘Top 10 reasons why the Holocaust didn’t happen.’ She argues that this is enabling the spread of racist propaganda.
    16 December 2016 The Guardian (sister paper to The Observer) reports that it has found ‘a dozen additional examples of biased search results’, stating that its autocomplete function and search algorithm prioritise websites that claim climate change is a hoax, the Sandy Hook shooting did not happen and that being gay is a sin.
    17 December 2016 Cadwalladr reports that she has been able to replace Stormfront’s article at the top of search results by paying to place an advertisement at the top of the page. She concludes that Google are prioritising commercial gain over fact.
    20 December 2016 Search Engine Land reports that Google has now made alterations to its algorithm so that Holocaust denial sites are no longer the first results from a search for ‘Did the Holocaust really happen?’ It quotes Google as saying:
    “Judging which pages on the web best answer a query is a challenging problem and we don’t always get it right. When non-authoritative information ranks too high in our search results, we develop scalable, automated approaches to fix the problems, rather than manually removing these one-by-one. We recently made improvements to our algorithm that will help surface more high quality, credible content on the web. We’ll continue to change our algorithms over time in order to tackle these challenges.”
    The denial sites remain in search returns but are lower down the order.
    27th December 2016 Search Engine Land reports that after a few days on the second page, the Stormfront site has returned to the first page of results for the search ‘Did the Holocaust happen’.
    Elaine’s Idle Mind blog site notes that Stormfront still tops searches with slightly different terms such as ‘Did the Holocaust really happen’ and ‘Did the Holocaust ever happen’
    “Google can’t optimize its search algorithm to account for the 0.000001% of problematic queries that people might type in. There are infinite combinations of words that will inevitably offend somebody somewhere. In this case, Google put forth the minimum possible effort to get people to shut up and go away…”


    May 3rd 2016 Gizmodo.com reports on the role of human curators in determining what appears in Facebook’s ‘Trending news’ section.
    “We choose what’s trending,” said one. “There was no real standard for measuring what qualified as news and what didn’t. It was up to the news curator to decide.”
    May 9th 2016 Gizmodo.com reveals that a former member of the trending news team has told them that “Facebook workers routinely suppressed news stories of interest to conservative readers from the social network’s influential “trending” news section” and that “they were instructed to artificially “inject” selected stories into the trending news module, even if they weren’t popular enough to warrant inclusion—or in some cases weren’t trending at all”
    26 August 2016 Facebook announces that its trending feature will become more automated and no longer require people to write descriptions for trending topics. It emerges that all human members of the team have been fired.
    29th August 2016 Facebook’s fully automated trending news module is referred to as a ‘disaster’ which has been ‘pushing out’ false news, controversial stories and sexually provocative content.
    Early November 2016 Following Donald Trump’s victory in the US presidential elections, various news outlets begin speculating on the role of ‘fake news’ in his success. The New York Times publishes an article ‘Donald Trump won because of Facebook’. It states:
    “The most obvious way in which Facebook enabled a Trump victory has been its inability (or refusal) to address the problem of hoax or fake news. Fake news is not a problem unique to Facebook, but Facebook’s enormous audience, and the mechanisms of distribution on which the site relies — i.e., the emotionally charged activity of sharing, and the show-me-more-like-this feedback loop of the news feed algorithm — makes it the only site to support a genuinely lucrative market in which shady publishers arbitrage traffic by enticing people off of Facebook and onto ad-festooned websites, …The valiant efforts of Snopes and other debunking organizations were insufficient; Facebook’s labyrinthine sharing and privacy settings mean that fact-checks get lost in the shuffle. Often, no one would even need to click on and read the story for the headline itself to become a widely distributed talking point, repeated elsewhere online, or, sometimes, in real life…Tens of millions of people…were served up or shared emotionally charged news stories about the candidates, because Facebook’s sorting algorithm understood from experience that they were seeking such stories.”
    10th November 2016 Facebook CEO Mark Zuckerberg calls claims that fake news on the platform influenced the election ‘pretty crazy’.
    12th November 2016 The New York Times reports that despite Zuckerberg’s public statement, senior members at Facebook are worried about the platform’s role in the outcome of the election and are taking confidential steps to investigate it. On the same day Mark Zuckerberg posts an update on his own Facebook page stating that “Of all the content on Facebook, more than 99% of what people see is authentic”.
    Throughout November 2016 news media report potential solutions to the fake news ‘problem’ on Facebook. The Guardian lists three kinds of solution:
    “Most of the solutions fall into three general categories: the hiring of human editors; crowdsourcing, and technological or algorithmic solutions.”
    Bloomberg technology suggests that:
    “Facebook could tweak its algorithm to promote related articles from sites like FactCheck.org so they show up next to questionable stories on the same topic in the news feed”
    Vox.com suggest that Facebook are exercising caution as they fear being seen as biased against conservatives following on from the algorithm controversy in spring 2016.
    15th December 2016 Facebook unveils plans to stop the spread of fake news. Changes include:
    new ways for users to report suspected fake news stories – which will then be sent to fact checking organisations. Some stories will then carry the label ‘disputed’
    adjustment to the News Feed algorithm to decrease the reach of fake news stories – e.g. shares made when only a headline has been read will receive less prominence than shares made when a full article has been read.
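    Facebook's actual ranking change is not public, but the headline-only adjustment described above can be sketched as a toy scoring rule. Everything here is hypothetical (function names, weight, story data); it only illustrates the idea of counting headline-only shares for less than full-read shares:

```python
# Toy illustration of down-weighting shares made after reading only the headline.
# All names, weights and numbers are hypothetical.
def story_score(full_read_shares, headline_only_shares, headline_weight=0.1):
    # A share after a full read counts in full; a headline-only share counts for less.
    return full_read_shares + headline_weight * headline_only_shares

stories = [
    {"id": "in-depth report", "full": 120, "headline_only": 30},
    {"id": "viral hoax",      "full": 20,  "headline_only": 500},  # shared, rarely read
]
ranked = sorted(stories, reverse=True,
                key=lambda s: story_score(s["full"], s["headline_only"]))
```

    Under this rule the widely shared but rarely read story drops below the story people actually read in full.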
    The creation of online echo chambers or filter bubbles. On a social network such as Facebook, personalisation algorithms ensure that we are more likely to see content similar to what we have previously ‘liked’ or commented on. This can mean that we repeatedly see content that reaffirms our existing views and we are not exposed to anything that might challenge our own thinking. Recent political events such as the election of Donald Trump to the US presidency have led to much debate over the role of echo chambers in modern democratic societies. Concerns over the impact of these echo chambers or filter bubbles have grown so much over the past year that various digital platforms and media organisations have started developing tools to help users counteract them by making sure they are exposed to a wider spectrum of content.
    The results of personalisation algorithms may be inaccurate and even discriminatory. Despite the sophisticated calculations underpinning them, the algorithms that recommend or advertise a purchase to us or present us with content we might want to see, might not in fact reflect our own interests. This can be an annoyance or distraction. More seriously, algorithms might alternatively curate content for different users in ways that can be perceived as discriminatory against particular social groups. For instance researchers at Carnegie Mellon University ran experimental online searches with various simulated user profiles and found that significantly fewer female users than males were shown advertisements promising them help getting high paid jobs. A member of the research team commented “Many important decisions about the ads we see are being made by online systems. Oversight of these ‘black boxes’ is necessary to make sure they don’t compromise our values.”
    Personalisation algorithms function to collate and act on information collected about the online user. Many users may feel uncomfortable about this, for instance feeling that it constitutes a breach of their privacy. The impact of this perception can be seen in the emergence of options to opt out of personalisation advertisements on platforms such as Google and the growth of platforms that claim not to track you.
    Algorithms typically lack transparency. In most cases, the algorithms used to drive internet platforms are treated as proprietary and commercially sensitive. In recent times there have been calls for algorithmic transparency on the basis that since algorithms have the capacity to impact many areas of people’s lives, it is important that we are able to know how they function and hold them accountable. Transparency of this kind might help us to challenge apparent (discriminatory) bias in the outcomes of search engine queries or work to address the negative consequences of online echo chambers. However, concerns have also been raised that the positive impact of transparency is likely to be limited and could also have negative effects. In the first instance, algorithms are typically so complex that very few would be able to understand them fully. They also tend to undergo continuous change, making them hard to measure and assess. Furthermore, transparency can enable some more knowledgeable users to ‘game’ the system to their own ends, as in the case of Reddit’s move to become open source.
    In December 2015 the Commissioner of the US Federal Trade Commission noted the potential for algorithmic calculations to have effects that might be unfair, unethical or discriminatory but also acknowledged the difficulties involved in making those algorithms transparent to the public. Consequently, she called for companies to carry out proactive and thorough internal auditing of their own data use.
  • Discussion of the four case studies raised a number of recurrent key points that were returned to in the plenary discussion that closed the workshop.

    Debate amongst the participants was very productive and highlighted that:
    1) the ‘problem’ of potential bias and unfairness in algorithmic practice is broad in scope, has the potential to disproportionately affect vulnerable users (such as children), and may be worsened by the current absence of effective regulation in relation to the use of algorithms;
    2) the problem is also nuanced, as the presence of algorithms on online platforms can be of great benefit to assist users in achieving their goals; so it is important to avoid any implication that their role is always harmful;
    3) finding solutions to the problem is highly complex. This involves multiple stakeholders with varying perspectives on what fairness means, and other tensions such as that between openness and commercial interests.