
Do No Harm: Do Technologists Need a Code of Ethics?



Nothing is neutral, and the technology we design and build isn't objective. How do we ensure that what starts out as a great idea doesn't unintentionally (or intentionally) harm? Trolling, racially biased algorithms, surveillance capitalism: how do we assess our creations through an ethical lens so our products don't amplify social biases? Do we need a code of ethics? How do we build ethics into our practice?

In this talk Sofia explores these questions and builds on the conversations happening globally within the technology community. She also talks about the Responsible Tech Playbook that ThoughtWorks is building, which collates ethical frameworks and explores how to use them in the design and delivery of software.

SPEAKER:
Sofia Woods, Senior Experience Designer, ThoughtWorks
Sofia has over 10 years' experience solving complex problems and designing digital products, experiences and services across government, financial services, transport and the private sector. She's a multi-disciplinary designer, experienced with the whole gamut of Human Centred Design approaches, including UX research, user interface design and prototyping/testing, and can apply this approach in large-scale software delivery environments. Blending human-centred design with strategy and technology, she creates meaningful experiences that transform.



  1. DO NO HARM: Do technologists need a code of ethics? Sofia Woods, Experience Designer, @sofiadisrupts
  2. In 2017, an MIT researcher was doing analysis on facial recognition software for a project.
  3. It was late, and she was getting frustrated because she couldn't get the software to recognise her face.
  4. Her friends convinced her to take a break. It was Halloween; she bought a costume mask and went out.
  5. Hours later, coming back to her computer, she kept trying, but still nothing worked.
  6. Then she had an idea...
  7. [image-only slide]
  8. Joy Buolamwini: computer scientist, poet and digital activist, PhD student at the MIT Media Lab. TED talk: Joy Buolamwini, Fighting Algorithmic Bias
  9. Arts and Culture
  10. Ain't I a Woman
  11. TECHNOLOGY IS NOT NEUTRAL
  12. VOICE ASSISTANTS: IS ALEXA NEUTRAL? ● Voice tech reinforces the master/servant dynamic ● The gendering reinforces women in servitude roles ● It doesn't respond appropriately to abuse (Recoding Voice Technology: Is a Feminist Alexa Possible?)
  13. TECHNOLOGY IS NOT NEUTRAL
  14. HOW BIASES ENTER: there are multiple levels — at the industry level, the data level and the personal level. © 2020 ThoughtWorks
  15. [image-only slide]
  16. REDUCING HARM IS ABOUT EFFECTIVELY MANAGING RISK. ACTING ETHICALLY IS ABOUT MANAGING HARM.
  17. A CONSCIENTIOUS DESIGNER
  18. ROLE OF THE CONSCIENTIOUS DESIGNER ● Know that they have the power to shape people's engagement with technology through their values ● Know that digital technologies embody ethical and political values ● Accepting this means they can consider ethical and political concerns alongside typical design and engineering ideals/practices (Mary Flanagan and Helen Nissenbaum, Values at Play in Digital Games)
  19. Who do you work for?
  20. Conscientious designers... "work for the people affected by the problem. Under no circumstances do we put those people at risk." - Mike Monteiro (Ruined by Design)
  21. "We need to value the consequences of our actions more than the cleverness of our ideas" - Mike Monteiro, Ruined by Design
  22. CODE OF ETHICS: a set of guiding principles to help us make sound professional judgements (A Designer's Code of Ethics, Mule Design)
  23. NUREMBERG CODE (signalled the start of modern medical ethics): 1. Voluntary human consent is essential 2. Experiment results should benefit society 3. Experiments should be based on animal experimentation, and anticipated results should justify the experiment or research 4. Avoid any unnecessary physical or mental suffering 5. An experiment shouldn't be run if there's any chance of death or disability 6. Minimise risk to subjects 7. Proper preparations and facilities to protect people 8. Experiments conducted only by qualified persons 9. Subjects can withdraw or terminate at any time 10. Terminate the experiment if results are known or with best judgement. https://en.wikipedia.org/wiki/Nuremberg_trials
  24. A DESIGNER'S CODE OF ETHICS: 1. A designer is first and foremost a human being. 2. A designer is responsible for the work they put into the world. 3. A designer values impact over form. 4. A designer owes the people who hire them not just their labour, but their counsel. 5. A designer welcomes criticism. 6. A designer strives to know their audience. 7. A designer does not believe in edge cases. 8. A designer is part of a professional community. 9. A designer welcomes a diverse and competitive field. 10. A designer takes time for self-reflection. https://muledesign.com/2017/07/a-designers-code-of-ethics
  25. If we were to have a code... "applying the Code to your daily work doesn't mean just reading it once, but considering it holistically" https://www.infoq.com/articles/acm-code-ethics/
  26. GAPS BETWEEN INTENTION AND OUTPUT [diagram: Vision + Intention → Implementation → Final Output, with Terms + Conditions and Consent tracked at each stage]
  27. ETHICS APPLICATIONS: a framework from an industry governing body to help research meet ethical guidelines.
  28. SAMPLE ETHICS APPLICATION QUESTIONS: 1. Data collection/gathering: what information are you going to collect/gather? 2. Data collection/gathering: what is the impact of, and response to, participant withdrawal (opt-out)? 3. Data management: how will you store, provide access to, disclose, use/re-use, transfer, destroy or archive the information that you collect/gather? 4. Data analysis: how will you measure, manipulate and/or analyse the information that you collect/gather? 5. Other potential uses: what uses outside of this project will the data be put to?
  29. ETHICAL TECH FRAMEWORKS: activities and processes to help assess products, features and decisions in technology.
  30. [image-only slide]
  31. EthicalOS: what is it? ● A toolkit to help shape strategy or tech ● Assess your product against 8 risk zones ● Activities that lead to better-informed outcomes ● Useful in pre-conception and discovery (Reference: EthicalOS)
  32. ACTIVITY: YOUR TURN
  33. Let's all practise being 'conscientious designers'
  34. [image-only slide]
  35. SCENARIO: You're tasked with creating a new automated voice system for a major telecommunications company. The voice AI will handle all calls to customers who are overdue, to remind them of payment and negotiate a payment plan if needed. The company wants to use Google's Natural Language AI (Duplex). Early testing has shown it's sophisticated enough to use and will integrate well with current systems.
  36. GOALS: The main objective is to reduce operational costs. It's been identified that $1.5M/year is spent on direct administration costs associated with managing late payments.
  37. ACTIVITY 1: PARKING LOT. Individual activity (2 mins)
  38. ACTIVITY 2: CHOOSE. Choose risk areas (5 mins)
  39. ACTIVITY 3: DISCUSS. Discuss risk areas (15 mins)
  40. NEXT STEPS: Prioritise and strategise
  41. Responsible Tech Playbook
  42. Rather than only considering how we can reduce harm, how can we actively do good?
  43. QUESTIONS?
  44. PHOTO CREDITS + SOURCES ● https://artsandculture.google.com/exhibit/joy-buolamwini-examining-racial-and-gender-bias-in-facial-analysis-software-barbican-centre/LgKCaNKAVWQPJg?hl=en ● https://www.npr.org/sections/codeswitch/2020/02/08/770174171/when-bias-is-coded-into-our-technology ● http://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212 ● http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf ● https://www.refinery29.com/en-ca/2019/08/240757/meal-tracking-apps-eating-disorder
  45. ACTIVITY 2: PRIORITISE. Prioritise risks (5 mins)
  46. PRIORITISE RISKS (5 min): 1. Use the checklists to choose the most critical risks 2. Once you've identified them, write them on post-its and put them up 3. Write the questions/prompts and put them underneath the risks ○ Discuss the questions ○ Put outcomes on the wall
  47. ACTIVITY 3: STOP OR GO. Should you continue? (5 mins)
  48. STOP OR GO: Should you continue? ● What is the overall impact you've identified? ● ?????
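Slide 14's point that bias enters at the data level can be made concrete. The study linked in the photo credits (the Gender Shades paper) evaluated facial-analysis systems by reporting accuracy per demographic group rather than a single aggregate number, which is how the disparities on slides 2-8 were surfaced. A minimal sketch of that disaggregated-evaluation idea; the function name and the data are illustrative, not taken from the talk:

```python
from collections import defaultdict

def disaggregated_accuracy(records):
    """Compute accuracy separately for each demographic group.

    records: iterable of (group, predicted, actual) tuples.
    Returns a dict mapping each group to its accuracy.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Made-up example: the aggregate accuracy is 75%, which hides the
# fact that the model works perfectly for one group and fails half
# the time for the other.
results = [
    ("group_a", "face", "face"), ("group_a", "face", "face"),
    ("group_a", "face", "face"), ("group_a", "face", "face"),
    ("group_b", "none", "face"), ("group_b", "none", "face"),
    ("group_b", "face", "face"), ("group_b", "face", "face"),
]
print(disaggregated_accuracy(results))  # {'group_a': 1.0, 'group_b': 0.5}
```

An aggregate metric alone would have passed this model; breaking the evaluation down by group is what reveals the harm.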
