1. Towards an Evidence-Based Policy Framework for Trustworthy AI
Dr. Iulia Cioroianu
Institute for Policy Research, University of Bath
United Kingdom
2. “Trustworthy AI”
Number of articles in international English-language publications mentioning “trustworthy AI” between 2011 and 2019. Data compiled using MediaCloud.
• Markedly relevant in media and public AI debates within the last two years.
• Consistently in the news in recent months.
3. Named entities in “trustworthy AI” media articles
• The European Union is currently dominating the debate.
• Along with mentions of private companies.
[Bar chart: entities ranked by share of articles mentioning them, from EU, European Commission, Google, European Union, Microsoft and Amazon at the top down to Huawei, Bloomberg, Congress and UN at the bottom; x-axis from 0 to 0.35]
Percentage of articles mentioning each entity in international English-language publications that include “trustworthy AI” in the text between 2011 and 2019. Data compiled using MediaCloud.
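The entity shares in the chart above are document frequencies: the fraction of articles mentioning each entity at least once. A minimal sketch of that computation in Python, using made-up article texts for illustration (the actual analysis drew on the MediaCloud corpus and a named-entity recognition step not shown here):

```python
import re
from collections import Counter

def entity_share(articles, entities):
    """Fraction of articles mentioning each entity at least once.

    Word-boundary matching avoids counting 'EU' inside 'European'.
    """
    counts = Counter()
    for text in articles:
        for entity in entities:
            if re.search(r"\b" + re.escape(entity) + r"\b", text, re.IGNORECASE):
                counts[entity] += 1
    return {e: counts[e] / len(articles) for e in entities}

# Hypothetical mini-corpus, for illustration only.
articles = [
    "The European Commission published guidelines on trustworthy AI.",
    "Google and Microsoft responded to the EU ethics guidelines.",
    "Trustworthy AI remains a debate within the EU.",
]
shares = entity_share(articles, ["European Commission", "Google", "EU"])
```

On this toy corpus, “EU” appears in two of the three articles, so its share is 2/3, while “European Commission” and “Google” each score 1/3.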
4. Most common terms in “trustworthy AI” media articles
• Focus on the principles outlined in the European Commission guidelines.
• But also mentions of private-sector companies.
Main terms in international English-language publications that include “trustworthy AI” in the text between 2011 and 2019. Data compiled using MediaCloud.
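A term-frequency analysis like the one behind this slide can be sketched as a simple tokenise-filter-count pipeline. The stopword list and article texts below are placeholders; the real analysis used the MediaCloud corpus:

```python
import re
from collections import Counter

# Illustrative stopword list; a real analysis would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "on", "is", "for"}

def top_terms(articles, n=5):
    """Most frequent non-stopword terms across a collection of article texts."""
    counter = Counter()
    for text in articles:
        tokens = re.findall(r"[a-z]+", text.lower())
        counter.update(t for t in tokens if t not in STOPWORDS)
    return counter.most_common(n)

# Hypothetical mini-corpus, for illustration only.
articles = [
    "The European Commission guidelines stress transparency and accountability.",
    "Transparency is a core principle of trustworthy AI.",
    "Companies welcomed the guidelines on trustworthy AI.",
]
terms = top_terms(articles, n=3)
```

On this toy corpus the top terms are a three-way tie among words appearing twice (“guidelines”, “transparency”, “trustworthy”, “ai”), which mirrors how the slide’s chart surfaces guideline-principle vocabulary.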
5. What is trustworthy AI?
• European Commission High-Level Expert Group on AI – 2019 Ethics Guidelines for Trustworthy Artificial Intelligence.
• Trustworthy AI should be:
• lawful - respecting all applicable laws and regulations
• ethical - respecting ethical principles and values
• robust - from a technical and social perspective
6. Trustworthy AI vs. trust in AI (Bryson, 2020)
• Trust and trustworthiness are fundamentally different concepts.
• Three dimensions of trust:
• who trusts
• who is trusted
• what the objective of the trust relation is
• Do you trust this AI-based product to achieve objective X?
• Trust by users or citizens is a human, socially built relation.
• Only humans can be held accountable.
• The main requirement is transparency, so that responsibility can be assigned.
• No need for blind trust then!
• Trustworthiness is a property of products, processes or a company.
7. Why should we aim to develop trustworthy AI?
• A prerequisite for public uptake of AI.
• Economic reasons
• realising the potential of AI
• while avoiding detrimental consequences
• a frictionless internal market for the further development and uptake of AI
• European goal: strengthening Europe’s position in AI
• AI increasingly used by governments
• Loss of trust in government
8. Current governance framework
• Driven by European Union efforts
• Requirements for building trustworthy AI:
• Human agency and oversight
• Technical robustness and safety
• Privacy and data governance
• Transparency
• Diversity, non-discrimination and fairness
• Societal and environmental well-being
• Accountability
• Assessment list to evaluate whether these key requirements are satisfied.
9. Enforcement powers?
• Not yet. Compliance is currently voluntary.
• But experts view the current framework as the starting point towards EU regulation of AI.
• Expected to “inspire new and specific regulatory developments”.
• Towards the development of AI standards and certification.
• Key question: Is this AI?
• “systems that display intelligent behavior by analyzing their environment and taking actions - with some degree of autonomy - to achieve specific goals” – ML specifically mentioned.
10. Main issue:
• Tension between public and media debates and academic research.
• A quarter of “trustworthy AI” media articles focus on governments and politics, and a large share on economic and financial aspects.
• Strong social element, driven by societal responses to specific issues and scandals.
• But the research which has so far informed the development of the regulatory framework consists mainly of work by computer scientists and by law, philosophy and ethics scholars.
• Very little input from sociologists, economists, political scientists and public policy scholars.
• This is reflected in a relative lack of evidence-based recommendations.
11. Main issue (continued):
• AI scandals affecting both the private sector and, more importantly, the government sector reveal the deeply social aspects of these issues.
• Understanding the process through which trust is built and lost requires carefully designed social science research.
• Social scientists can provide:
• an evidence-based understanding of the social world.
• an arsenal of trialled, rigorous methods for data collection and analysis which can be used to inform AI regulation, standardization and certification.
12. University of Bath CDT in Accountable, Responsible and Transparent AI (ART-AI)
• Truly interdisciplinary approach – training PhD students in computer science, engineering and social science research.
• Integrating social science research into the development of trustworthy AI policies.
• Wide range of topics: algorithms and ethics; robotics safety; computational and public policy; probabilistic machine learning; symbolic AI; provenance, transparency and uncertainty quantification; intelligibility and trust in heterogeneous intelligent systems; reinforcement learning; emotion in human-machine interaction; etc.
13. THANK YOU!
Please email us if you are interested in collaborating on any of these topics!
i.cioroianu@bath.ac.uk