This document summarizes a presentation on responsible use of AI in governance. It discusses the legal impacts of AI, including on legislation, legal professions, and legal subjects. It also examines AI concepts/methods and the debate around AI hype vs. concerns. The EU and member state initiatives on AI ethics and regulations are outlined, as well as international "soft law" approaches. It concludes by questioning whether traditional lawmaking can adequately address AI and the future of normative frameworks.
1. Responsible Use of AI in Governance
Günther Schefbeck
Samos Summit 2021
28 June 2021
2. Outline
• Legal impacts of the AI revolution
• AI concepts and methods
• AI between hype and hysteria
• Regulations vs. ethics
• Commission and EP initiatives
• Examples from Member States and U.S.
• International “soft law” approaches
• Conclusions
• Normative horizons
3. Legal impacts of the AI revolution
• AI impact on legislation
• Wide scope of legislative challenges, e.g.
• Liability
• Privacy
• Security
• “E-person”
• AI impact on legal professions and legal subjects
• Legal information (“Legal Analytics”)
• Legal procedure, e.g.
• E-governance
• Smart contracts
• “E-judge”
4. Impacts of AI on law-making
• AI challenges to law-making
• Substantive challenges: aligning AI development and applications with basic
legal values
• Procedural challenges: mitigating the risk that democratic decision-making
processes are influenced by AI-based manipulation of public opinion
• AI opportunities for law-making
• “Evidence-based” legislation requires evidence that can be drawn from rapidly
growing amounts of data by turning them into knowledge with AI-based
methods
• Legal impact assessment may be facilitated by AI-based socio-economic and
normative modelling
6. AI between hype and hysteria
• AI hype
• “AI is probably the most important thing humanity has ever worked on. I think of it as
something more profound than electricity or fire.” (Sundar Pichai, World Economic Forum
Annual Meeting 2018)
• Ray Kurzweil announces the “singularity” (merger of man and machine) that “will allow us to
transcend” the “limitations of our biological bodies and brains” (Mark O’Connell, To be a
Machine, New York 2017)
• “If someone can secure a monopoly in the field of artificial intelligence, then the
consequences are clear to all of us: he will rule the world.” (Vladimir Putin, 2017)
• AI hysteria
• “The development of full artificial intelligence could spell the end of the human race.”
(Stephen Hawking, 2014)
• “Before the prospect of an intelligence explosion, we humans are like small children playing
with a bomb.” (Nick Bostrom, Superintelligence, Oxford 2014, p. 259)
• “According to our estimates around 47 percent of total US employment is in the high risk
category.” (Carl B. Frey/Michael A. Osborne, The Future of Employment, Oxford 2013, p. 44)
7. AI between hype and concerns
• Political reaction to AI hype
• IT strategies to promote and support AI development
• China has announced plans to invest €150 billion in AI by 2025 (analysts
estimate up to €400 billion)
• EU annual investment in AI: €1.5 billion for 2018-20, reduced to €1 billion for
2021-27 (aim: overall public and private annual investment of the EU and
member states of €20 billion over the next decade)
• Commission initiative on Building a European data economy
• Political reaction to AI concerns
• Regulatory frameworks vs. ethical codes
• Typical example: Communication on Building Trust in Human-Centric Artificial
Intelligence, 8 April 2019 (COM(2019) 168 final)
8. EP initiative
• EP called on the Commission to assess the impact of AI and made
recommendations on civil law rules on robotics, to be complemented by a
guiding ethical framework (report A8-0005/2017, resolution adopted on 16
February 2017)
• Definitions and classifications
• Creating a specific legal status of electronic persons for robots
• Establishing a designated EU Agency for Robotics and Artificial Intelligence
• Commission reply (A8-0005/2017/P8_TA-PROV(2017)0051)
• “It is … important to examine whether and how to adapt civil law liability rules to the
needs of the digital economy.”
• “More analysis is necessary …”
• Establishment of a designated Agency rejected
9. Commission Communication of 2019
• Reference to existing regulatory framework (e.g., GDPR [cf. art. 22]) “that
will set the global standard for human-centric AI”
• New challenges of AI technology to be met by ethics guidelines “that
should be applied by developers, suppliers and users of AI in the internal
market, establishing an ethical level playing field across all Member States”
• Key requirements for trustworthy AI
• Human agency and oversight
• Technical robustness and safety
• Privacy and data governance
• Transparency
• Diversity, non-discrimination, and fairness
• Societal and environmental well-being
• Accountability
10. AI HLEG
• High-Level Expert Group on Artificial Intelligence (AI HLEG) established by
the Commission in June 2018 (mandate ended in July 2020)
• 52 experts representing academia, civil society, and industry
• Deliverables:
• Ethics Guidelines on Artificial Intelligence delivered to the Commission in March
2019 (published April 2019): 7 key requirements on trustworthy AI
• A Definition of AI (April 2019)
• Policy and Investment Recommendations for Trustworthy AI (June 2019)
• Assessment List for Trustworthy AI (ALTAI) (July 2020)
• Sectoral Considerations on the Policy and Investment Recommendations (July 2020)
• European AI Alliance (multi-stakeholder forum with over 4000 members)
• AI Alliance Assembly met in June 2019 and October 2020
13. Commission White Paper
• Published on 19 February 2020 “to foster a European ecosystem of
excellence and trust in AI” (COM(2020) 65 final), together with Report on
the Safety and Liability Aspects of AI, IoT and Robotics (COM(2020) 64 final)
• Main building blocks:
• Policy framework “to mobilise resources to achieve an ‘ecosystem of excellence’ along
the entire value chain”
• Key elements of a future regulatory framework “that will create a unique ‘ecosystem
of trust’”
• “... there is a need to examine whether current legislation is able to address the risks of AI and
can be effectively enforced, whether adaptations of the legislation are needed, or whether new
legislation is needed.”
• Mandatory requirements discussed for “high-risk” AI applications, voluntary labelling scheme
for others
• Member States, stakeholders, and citizens invited to take part in public
consultation (February to June 2020)
15. Commission White Paper
• Participation in the public consultation: 1215 contributions (406 individuals,
352 companies/business organisations, 160 NGOs/trade unions, 152
academic/research institutions, 73 public authorities)
• Main concerns about AI:
• 90 % breach of fundamental rights
• 87 % discriminatory outcomes
• 82 % danger to safety
• Addressing concerns:
• 42 % need for new legislation
• 33 % need for modification of current legislation
• 3 % current legislation sufficient
• Divergence between stakeholder types as to need for new legislation:
citizens 56 % vs. business 19 %
16. EU draft regulation
• Commission AI “package” of 21 April 2021
• Communication on Fostering a European Approach to Artificial Intelligence
(COM(2021) 205 final)
• Coordinated Plan with Member States: 2021 update (annex to COM(2021) 205 final)
• Proposal for a Regulation laying down harmonised rules on artificial intelligence
(Artificial Intelligence Act) (COM(2021) 206 final)
• Announcement of additional regulatory instruments:
• EU rules to address liability issues related to new technologies, including AI systems
(last quarter 2021-first quarter 2022)
• Revision of sectoral safety legislation (e.g., Machinery Regulation, General Product
Safety Directive) (second quarter 2021)
• Ordinary legislative procedure 2021/0106/COD started on 22 April 2021: so
far, discussions within the Council’s preparatory bodies
18. EU draft regulation
• Out of five policy options outlined, decision in favour of a proportionate
risk-based approach with a regulatory framework for high-risk AI systems
only, along with voluntary codes of conduct for non-high-risk AI systems
• Four risk levels:
• Unacceptable risk: contravening Union values, e.g. manipulation of behavior, “social
scoring” by public authorities, real-time remote biometric identification for law
enforcement purposes in publicly accessible spaces (with exceptions)
• High risk: AI used in law enforcement, justice, democratic processes, access to
education, employment, migration management, essential services, critical
infrastructures, product safety
• Limited risk: AI systems with specific transparency obligations, e.g. chatbots
• Minimal risk
• AI systems for military purposes excluded from the scope
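The four-tier classification above can be illustrated with a minimal sketch. Note that this is purely illustrative: the category names follow the draft Act, but the keyword sets and the `classify` helper are my own simplification, not the legal definitions.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "permitted subject to mandatory requirements and conformity assessment"
    LIMITED = "permitted subject to specific transparency obligations"
    MINIMAL = "permitted without additional obligations"

# Illustrative, non-exhaustive sets loosely based on the draft Act's examples
UNACCEPTABLE_PRACTICES = {"social scoring by public authorities",
                          "behavioural manipulation"}
HIGH_RISK_AREAS = {"law enforcement", "justice", "education", "employment",
                   "migration management", "essential services",
                   "critical infrastructure", "product safety"}
TRANSPARENCY_ONLY = {"chatbot", "emotion recognition", "content generation"}

def classify(ai_system: str) -> RiskLevel:
    """Map an AI system description to a draft-AI-Act-style risk tier,
    checking the most severe category first."""
    if ai_system in UNACCEPTABLE_PRACTICES:
        return RiskLevel.UNACCEPTABLE
    if ai_system in HIGH_RISK_AREAS:
        return RiskLevel.HIGH
    if ai_system in TRANSPARENCY_ONLY:
        return RiskLevel.LIMITED
    return RiskLevel.MINIMAL

print(classify("law enforcement").name)  # HIGH
print(classify("chatbot").name)          # LIMITED
print(classify("spam filter").name)      # MINIMAL
```

The ordering of the checks matters: a system matching the “unacceptable” list is prohibited outright, regardless of any other category it might fall into.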
19. EU draft regulation
• AI systems listed in the “unacceptable risk” category prohibited
• High-risk AI systems permitted on the European market subject to compliance
with certain mandatory requirements and an ex-ante conformity assessment
• Legal requirements for high-risk AI systems based on HLEG Ethics Guidelines
• Technical solutions to achieve compliance with requirements to be provided by
standards or other technical specifications
• Compliance and enforcement system:
• Existing mechanisms for safety components with third party conformity assessment
procedures already established under the relevant sectoral product safety legislation
• New system for stand-alone high-risk AI systems: internal ex-ante conformity assessment by
provider, registration in an EU database, ex-post supervision by competent authorities
• The Commission may expand the list of high-risk AI systems used within certain
pre-defined areas, by applying a set of criteria and a risk assessment methodology
20. EU draft regulation
• Transparency obligations for AI systems
• interacting with humans
• detecting emotions or characteristics based on biometric data
• generating or manipulating content
• “Regulatory sandboxes”: controlled environments to test innovative technologies
on the basis of agreements with national competent authorities
• Governance system:
• European Artificial Intelligence Board (coordination, advice to Commission)
• One or more national competent authorities, in particular the national supervisory authority
• European Data Protection Supervisor as supervisory authority for EU bodies
• Monitoring and reporting obligations for providers of AI systems
• GDPR-style penalties
• Providers of non-high-risk AI systems may create and implement codes of conduct
21. Reactions
• Mostly ambivalent, e.g.
• “Ambitious but disappointingly vague” (TechMonitor)
• “A major step with major gaps” (AlgorithmWatch)
• “Mixed bag” (EDRi – European Digital Rights)
• “Weak on consumer protection” (BEUC – The European Consumer
Organisation)
• “More work required” (German Federal Commissioner for Data Protection
and Freedom of Information)
• While civil rights organisations criticize, e.g., the exemptions for public
security and the lack of discrimination prevention, business lobbyists
complain, e.g., that the scope of the (implicit) definition of AI goes too far
22. Examples from Member States
• Estonia: debate focuses on the “black box” issue and liability; tendency
to avoid sector-specific regulations (e.g., autonomous driving) and opt
for general algorithmic liability
• France: Commission Nationale de l’Informatique et des Libertés in
December 2017 delivered report on ethical issues with regard to AI
• Germany: Ethics Commission on Automated and Connected Driving in
August 2017 published report containing 20 ethical guidelines; Data
Ethics Commission provided opinion in October 2019
• Italy: Agency for Digital Italy in March 2018 published White Book
also recognizing need to update the regulatory framework for AI
23. Examples from U.S.
• Classical Easterbrook-Lessig debate over internet regulation now transferred to
AI: applying/adapting existing rules on privacy, discrimination, vehicle safety, etc.
to AI vs. designing legislation specifically directed at the use of AI (cf. Karl
Manheim/Lyric Kaplan, Artificial Intelligence: Risks to Privacy & Democracy,
Yale J.L. & Tech. 21, 2019)
• California legislature in 2018 formally adopted the Asilomar Principles on
beneficial AI (Assemb. Con. Res. 215, 2017-18 Leg. (Cal. 2018)), thus laying the
foundation for a regulatory framework taking into account values like information
privacy, accountability, judicial transparency, and respect for human dignity
• Bill for an Algorithmic Accountability Act (116s1108is) failed in both Houses of the
U.S. Congress in 2019
• Virginia Consumer Data Protection Act of March 2021 (sections 59.1-571 through
581 of the Code of Virginia) requires assessments for certain types of high-risk
algorithms
24. International “soft law” approaches
• OECD Council at Ministerial level on 22 May 2019 adopted a
Recommendation on Artificial Intelligence (OECD/LEGAL/0449)
• One-year preparation process: draft by expert group, internal consultation
• Five principles for “responsible stewardship of trustworthy AI”
• G20 AI Principles of June 2019 based on OECD
• OECD AI Policy Observatory launched in February 2020
• UNESCO is preparing a Recommendation on the ethics of artificial
intelligence
• Two-year process: Ad-hoc expert group produced draft text, multi-stakeholder
consultation, currently revision by governmental experts
• Envisaged adoption by General Conference at the end of 2021
• “Member States should …”
25. Conclusions
• Awareness of the requirements of responsible use of AI has increased
significantly over the past few years
• Still no universally agreed definition of the scope of “AI”
• Catalogues of ethical standards, e.g. EU HLEG and OECD/G20, share core
concepts
• EU: shift from ethics towards regulation, following a risk-based approach
• U.S.: still wary of “overregulation”
• China: “governmentality” approach
• The EU’s claim to be “spearheading the development of new global norms”
(Margrethe Vestager, 21 April 2021) is questionable
26. Normative horizons
• Is traditional law-making still adequate to meet the requirements of a
globalized economy and information society?
• Will national jurisdictions be superseded by trans-/supranational ones?
• Will the territorial principle of application of laws be replaced/complemented
by the personal principle?
• Will “soft law” prevail over “hard law”?
• Will normative text be turned into algorithms (or the other way round)?
• Will legislative procedure become more (or even less?) deliberative?
• How will the demand for “evidence-based” legislation be met?
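The question whether normative text can be turned into algorithms is the core of the “rules as code” idea: drafting a provision so that it can be executed as well as read. As a hypothetical illustration (the rule, the thresholds, and all names below are invented for the sketch, not drawn from any actual statute):

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    """Facts relevant to the (invented) benefit rule."""
    age: int
    annual_income: float
    resident: bool

def benefit_eligible(a: Applicant) -> bool:
    """Hypothetical statutory rule: a resident aged 65 or older with an
    annual income below 20,000 is eligible for the benefit."""
    return a.resident and a.age >= 65 and a.annual_income < 20_000

print(benefit_eligible(Applicant(age=70, annual_income=15_000, resident=True)))   # True
print(benefit_eligible(Applicant(age=40, annual_income=15_000, resident=True)))   # False
```

Even this toy example shows the trade-off the slide hints at: the coded rule is unambiguous and testable, but it hard-wires one interpretation, whereas natural-language norms deliberately leave room for judicial construction.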
27. Thank you for your attention!
More information:
guenther.schefbeck@parlament.gv.at