In this presentation, Nazanin Gifani discusses some of the ethical and legal issues of automated decision-making, including algorithmic fairness, transparency and explainability. The big question here is: can AI help us make fairer decisions?
3. GDPR and Rights Related to Automated Decision-Making & Profiling
● Automated decision-making: making a decision solely by automated means, without any human involvement.
● Profiling: automated processing of personal data to evaluate specific aspects of an individual.
● Data subjects have the right to request human intervention, to express their views, or to contest a decision made solely by automated means.
● Data subjects have the right to be informed about, and have access to, the logic of the algorithms involved.
4. Why Do Algorithmic Fairness and
Transparency Matter?
Bias and Discrimination
5. Audience's Jury
● You want to renovate your house:
○ You contact your bank to see if you can
take out a loan for your project.
○ You are directed to use an algorithm
connected to your bank account to check
your eligibility.
○ The algorithm tells you that you are not
eligible to receive the requested amount or
only at a higher interest rate.
○ As an experiment, you edit your address in
your profile. You are suddenly eligible to
borrow your requested amount at a lower
interest rate.
Why do you think this could happen?
6. Audience's Jury
● A large multi-national company uses an
algorithm to select the employees who should
be considered for a promotion.
● The idea behind using the algorithm is to
remove human bias from the screening
process.
● Sophie realizes that she has worked for the company for five years but has never been promoted.
● She sends an email to HR with her resume,
endorsements and achievements and requests
a promotion.
● HR explains that there is nothing they can do: the algorithm decides promotion matters impartially, and they cannot act on or review any employee requests!
Is HR making a fair decision?
7. Bias and Discrimination
● Data can be biased. Biases can be the result of
many different factors.
● Even unbiased data collection could yield biased
algorithms, for example, when the underlying
data reflects historical discrimination.
● Biased data and algorithms can lead to
discrimination and can affect people’s lives.
● Even when bias does not lead to direct
discrimination, it can lead to unfair results or
reduce the accuracy of the results.
11. Imbalances in Training Data
● Lack of representation in certain sectors
● Insufficient data on certain groups of data subjects
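The imbalance problem above can be made concrete with a quick group-count check. This is a minimal sketch, not from the presentation; the group labels and the 10% threshold are illustrative assumptions.

```python
from collections import Counter

# Hypothetical protected-attribute values in a training set.
records = ["group_a"] * 90 + ["group_b"] * 8 + ["group_c"] * 2

counts = Counter(records)
total = len(records)

# Flag any group whose share falls below an (illustrative) 10% cutoff.
underrepresented = [g for g, n in counts.items() if n / total < 0.10]
print(underrepresented)  # → ['group_b', 'group_c']
```

A check like this only reveals imbalance on attributes that were collected; groups absent from the data entirely stay invisible.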
12. Data Quality
Food for thought: Is more data always better? How
can data minimization and the accuracy principle be
reconciled with big data analytics?
14. Technical Solutions
● Add more data, or design separate algorithms for different groups
● Change the learning process or the training data, or modify the model after training
● Technical solutions can be divided into:
○ Anti-classification: exclude protected characteristics from consideration.
○ Outcome and error parity: examine whether members of different groups receive the same rates of outcomes and errors from a model.
○ Equal calibration: the model's estimated likelihood of an event matches the actual frequency of that event, for every group.
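The group-parity checks above reduce to comparing simple rates between groups. The sketch below is illustrative, not from the presentation: the loan decisions and repayment outcomes are made-up data, and it measures outcome parity (positive-decision rates) and error parity (false-negative rates) for two groups.

```python
def positive_rate(decisions):
    """Share of individuals who received a positive decision (1 = approved)."""
    return sum(decisions) / len(decisions)

def false_negative_rate(decisions, truths):
    """Share of truly creditworthy individuals (truth = 1) who were rejected."""
    misses = sum(1 for d, t in zip(decisions, truths) if t == 1 and d == 0)
    positives = sum(truths)
    return misses / positives if positives else 0.0

# Hypothetical loan decisions and actual repayment outcomes per group.
group_a = {"decisions": [1, 1, 0, 1], "truths": [1, 1, 0, 1]}
group_b = {"decisions": [0, 1, 0, 0], "truths": [1, 1, 0, 1]}

gap_outcome = abs(positive_rate(group_a["decisions"]) - positive_rate(group_b["decisions"]))
gap_error = abs(false_negative_rate(**group_a) - false_negative_rate(**group_b))
print(gap_outcome, gap_error)  # outcome gap 0.5, error gap ~0.67
```

Non-zero gaps like these are exactly what an outcome- or error-parity audit would flag; the fairness literature shows that the different parity criteria generally cannot all be satisfied at once.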
15. Data Protection Impact Assessment
● Explain why and how AI is being used for
processing, including:
○ Data collection, storage and use
○ Volume, variety and sensitivity of the
input data
○ Nature of the data controller’s
relationship with data subjects
● Explain how the legitimate interest of the
company is balanced with the rights of the
data subject, including:
○ Risks and mitigation
○ Bias and discrimination
○ Trade-offs
16. The Quality of Data Sets Matters
● Representation: Does the data set offer a balanced representation of all relevant
groups?
● Adequate Quantity: Is the data set large enough?
● Reliability: How much confidence do you have in the accuracy of data? Does the
set include noisy or badly-labeled data?
● Relevant features: Does the data set have relevant features for the purpose of
processing?
● Necessity: Are all the attributes in the data set necessary for the purpose of
processing?
● Impact: What is the impact of the algorithm on the rights and interests of data
subjects?
18. Transparency and Explainable AI
● Under the GDPR, data subjects have the right to access the
logic involved in automated decision-making when the
decision made by an algorithm has a significant impact on
them.
● Why should these algorithms be explainable? Data subjects have the right to understand the logic, and its consequences, in order to challenge the legality of a decision made solely by automated means.
● Data subjects should also be able to predict the consequences of their behaviour, and to decide how to change that behaviour to obtain the decision they want.
19. The Good
● A good explanation should provide data subjects with information that allows them to change their behaviour to achieve a desirable outcome, or to challenge the decision.
○ What data is used?
○ With what impact?
○ By whom?
○ Which security considerations are applied?
○ Which fairness considerations are applied?
21. The Ugly
Impact explanation: the impact that
the use of an AI system and its
decisions has or may have on an
individual and on the wider society.
22. Context Matters
● Who do you explain the logic to and
for what purpose?
● What kind of algorithm? (decision tree, black box)
● What is the purpose of processing
data? (marketing, diagnosis,
recruitment?)
24. Privacy vs Accuracy
● More data = less privacy
● Less data = less accuracy
● No system can simultaneously achieve a high degree of privacy and a high degree of accuracy.
● Sometimes accuracy is more important and provides legitimate grounds for processing more data, or more sensitive data: medical diagnosis, the justice system
● The choice of technical solutions has to
reflect your understanding of trade-offs.
● Risk mitigation techniques are necessary.
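One standard way to make this trade-off concrete is differential-privacy-style noise addition. The sketch below is illustrative, not from the presentation: the income values and epsilon settings are assumptions, and Laplace noise is sampled as the difference of two exponential draws.

```python
import random

random.seed(0)  # deterministic for the demo

def noisy_mean(values, epsilon, sensitivity=1.0):
    """Mean plus Laplace noise with scale sensitivity / (epsilon * n).
    Smaller epsilon = stronger privacy = noisier, less accurate answer."""
    true_mean = sum(values) / len(values)
    scale = sensitivity / (epsilon * len(values))
    # A Laplace sample is the difference of two exponential samples.
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_mean + noise

incomes = [0.4, 0.7, 0.2, 0.9, 0.5]  # hypothetical, scaled to [0, 1]
for eps in (0.1, 1.0, 10.0):
    print(eps, noisy_mean(incomes, eps))
```

As epsilon grows, the noise scale shrinks and the reported mean converges on the true value (0.54): exactly the "more privacy, less accuracy" tension described above.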
25. Privacy vs Fairness
● Lack of data on certain groups of society
● Lack of the sensitive features needed to audit algorithms for discrimination and bias
26. Explainability vs Accuracy
● A false dichotomy?
● Most algorithms are explainable
● Some algorithms are difficult to
explain/understand
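The difference between an explainable model and a black box can be illustrated with a tiny example. This sketch is not from the presentation; the features, weights and threshold are hypothetical. A linear scoring model is explainable because each feature's contribution to the decision can be read off directly.

```python
# Hypothetical linear credit-scoring model: weights and inputs are assumptions.
weights = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 0.9, "debt": 0.4, "years_employed": 0.5}

# Each feature's contribution is visible, so the decision is self-explaining.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())
decision = "approve" if score >= 0.3 else "reject"
print(contributions, round(score, 2), decision)
```

A deep network making the same decision offers no such per-feature readout, which is why it needs separate explanation techniques.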
27. Explainability vs Security
● Risks
○ Revealing other people’s
personal data
○ Revealing trade secrets
and confidential data
○ Making the system vulnerable to manipulation
28. How to Manage Trade-offs
● Identify the trade-offs
● Choose appropriate technical solutions
that mitigate trade-offs
● Be transparent
● Consult the views of the data subject
● Conduct a data protection impact assessment
30. ● Human-in-the-loop (HITL) refers to the
capability for human intervention in
every decision cycle of the system,
which in many cases is neither possible
nor desirable.
● Human-on-the-loop (HOTL) refers to
the capability for human intervention
during the design cycle of the system
and in monitoring the system’s
operation.
● Human-in-command (HIC) refers to the
capability to oversee the overall activity
of the AI system (including its broader
economic, societal, legal and ethical
impact) and the ability to decide when
and how to use the system in any
particular situation.
31. Can AI Help Us Make Fairer
Decisions?
Human-Machine Interactions
34. Summary
● The right to AI explainability exists in the GDPR, though in a limited way.
● Bias, discrimination and inaccuracies in data can render the results of algorithms unreliable, discriminatory and unlawful.
● Building and sustaining trust is essential for the wider adoption of AI.
● Bringing humans into the loop can help address some of the legal and ethical issues regarding the fairness and transparency of AI algorithms.
● AI can help us detect patterns of discrimination and bias in our behaviour, and human-AI interaction can help enrich AI algorithms and improve human decision-making.