This presentation discusses the hard legal questions raised by the research, development and implementation of Artificial Intelligence technologies. It covers ethics, ex post v ex ante regulation, data biases, and the legal framework for various categories of social harms.
Nicolas Petit, 27 September 2018 - Hard Questions of Law and AI
1. HARD QUESTIONS OF LAW AND REGULATION OF ARTIFICIAL INTELLIGENCE (AI)
Future Law – Budapest, 27 September 2018
Prof Nicolas Petit, ULiege (BE) and UniSA (AU)
2. www.lcii.eu
EU Commission High Level Expert Group on AI
Communication on AI
HLG: 2 subgroups
One tasked with drafting ethical guidelines on AI
The other tasked with recommendations for policy and investment in AI
Through mid-2019 + EU AI Alliance: https://ec.europa.eu/futurium/en/eu-ai-alliance
Appetite for regulation?
Parliament resolution, 16 Feb 2017, Civil Law Rules on Robotics
“Shortcomings” of the legal framework:
“questions whether the ordinary rules on liability are sufficient or whether it calls for new principles and rules to provide clarity on the legal liability”
Supports the “designation of a European Agency for Robotics and Artificial Intelligence in order to provide the technical, ethical and regulatory expertise needed to support the relevant public actors”
3. Hard questions of law & AI
1. AI in a nutshell
2. Ex ante v ex post regulation?
3. Normative issues: illustration from ALE (Automated Law Enforcement)
4. Ethics (before law): selected issues
5. Proposed framework: who, when and how to regulate?
4. 1. AI in a nutshell
Core ideas
Any physical process, including the mind process, can be modelled as a computable algorithm (Church-Turing thesis)
Machines can learn: “Learning is any process by which a system improves performance from experience” (Herbert Simon)
Today
Brute-force computational power now available (due to Moore's law)
Zettabytes (or yottabytes) of data now available, distributed, and pre-labelled (cloud)
« End of theory »
Deep learning, neural networks, etc.
Use cases: autonomous vehicles, predictive justice, predictive medicine, automated law enforcement
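Simon's definition of learning can be made concrete with a minimal sketch (a hypothetical toy example, not from the slides): a system whose performance (prediction error) improves with experience (training examples), here a one-parameter model fitted by gradient descent.

```python
# Minimal illustration of Herbert Simon's definition of learning:
# a system improves performance (lower prediction error) from
# experience (training examples). Toy example, purely illustrative.

def train(samples, epochs=50, lr=0.05):
    """Fit y = w * x by gradient descent; return error before and after."""
    w = 0.0  # untrained model

    def mean_squared_error(w):
        return sum((w * x - y) ** 2 for x, y in samples) / len(samples)

    error_before = mean_squared_error(w)
    for _ in range(epochs):
        for x, y in samples:
            w -= lr * 2 * (w * x - y) * x  # gradient step on one example
    return error_before, mean_squared_error(w), w

# "Experience": examples generated by the hidden rule y = 2x
data = [(x, 2 * x) for x in [0.5, 1.0, 1.5, 2.0]]
before, after, w = train(data)
print(f"error before: {before:.3f}, after: {after:.6f}, learned w: {w:.3f}")
```

After training, the error has dropped and the learned weight approaches the hidden rule's coefficient: performance improved from experience alone.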
5. 2. Ex ante v ex post regulation of AI?
Ex post (reactive)
Mostly a common law process
By courts
Standards (content of law unspecified ex ante)
Ex ante (preemptive)
Mostly a civil law process
By technocratic or democratic, expert or non-expert organs
Rules (content of law specified ex ante)
« Collingridge paradox »: a too-early / too-late quandary?
6. Paradoxes of ex ante regulation
Disciplinary approach
Lawyers need frictional case studies to apply deductive reasoning
One problem: we end up looking at sci-fi to generate case studies (EP, 2017), or looking at the past to generate case studies
Techno-speculation: driverless cars were not the dominant hypothesis in 1970s-2000s sci-fi; flying cars were
Human-speculation: teleconferencing v conferencing
Risk of irrelevant law
Technological approach
Emergent behavior hypothesis of AIs
Inevitable assumption that there are gaps in the legal system and that new law is needed ("law of the horse")
Two problems:
The specificity of AI and robot emergent conduct is to emulate human-level intelligence (Minsky, 1984) + emergent behavior relates to human conduct => existing law OK
Tendency "to attribute to programs far more intelligence than they actually possess" (Yazdani, 1986)
Risk of redundant or extreme law
7. Paradoxes of ex post regulation
Can the law ever be agile? 5 to 10 years to regulate tech that's already 5 to 10 years old
« Treacherous turn » (Bostrom)
Dominant design problem
9. Case study: Automated Law Enforcement (ALE)
“any computer-based system that uses input from unattended sensors to algorithmically determine that a crime has been, or is about to be, committed and then takes some responsive action, such as to prevent the crime, to inform the appropriate law-enforcement agency, or to impose some form of punishment” (Shay et al.)
Surveillance, analysis, aggregation and action
Police (telemetric policing), but other applications too
UK: Connect (tax); US: Loomis case
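The ALE pipeline quoted above (unattended sensor input, algorithmic determination, responsive action) can be sketched as a toy speed-camera rule. All names and thresholds here are hypothetical, purely for illustration:

```python
# Toy sketch of the ALE pipeline described by Shay et al.:
# unattended sensor input -> algorithmic determination of an
# infringement -> automated responsive action. Hypothetical example.

SPEED_LIMIT_KMH = 50  # assumed limit for this illustration

def enforce(sensor_readings):
    """sensor_readings: list of (plate, speed_kmh) from an unattended sensor.
    Returns the automated 'responsive actions' (here, fine notices)."""
    actions = []
    for plate, speed in sensor_readings:
        if speed > SPEED_LIMIT_KMH:  # algorithmic determination of a crime
            actions.append(f"fine notice: {plate} at {speed} km/h")  # punishment
    return actions

print(enforce([("AB-123", 48), ("CD-456", 67)]))
```

Even this trivial sketch shows why the next slide's question matters: the rule enforces every detected infringement, with no discretion, context, or opportunity for the offender to be heard.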
10. Should « full enforcement » be the goal?
« Full enforcement » means perfect surveillance and punishment of infringements + perfect pre-emption
Legal: undercover criminality by law enforcement; civil disobedience; whistleblowing
Logical: paradox of the disappearance of crime, and relaxation of the social consensus against crime
Economics: imperfect enforcement is useful (Stigler, 1970)
Sociological: acceptability of technology is driven by context; full enforcement is more desirable in countries with endemic crime and corruption (Honduras v Greece)
Psychological: the offender wishes “to 'have his say' on the matter”
11. 4. Ethics (before law): selected issues
Natural ethics: rules that apply to researchers
Artificial ethics: hardwired by design in robots
Human dignity
Transparency and informed consent
Democracy
Autonomy and self-determination
Safety and cybersecurity
Transparency, legibility and explainability
Non-bias
Responsibility and accountability
Compliance with human rights
Red lines? But red rights?
12. Ongoing developments
Private ordering
Standardization: data practices certification
Unilateral products: IBM just launched a software service that scans AI systems as they work, in order to detect bias and provide explanations; it works with Watson tech, as well as TensorFlow, SparkML, AWS SageMaker, and AzureML
Public ordering: soft law and guidelines
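The kind of bias scan described above can be illustrated with a simple demographic-parity check, a metric commonly used in fairness toolkits (this is a toy sketch, not IBM's actual service):

```python
# Toy fairness check in the spirit of the bias-scanning services
# mentioned above: compare a model's favourable-outcome rates across
# two groups (disparate-impact ratio). Illustrative only.

def disparate_impact(decisions):
    """decisions: list of (group, approved) pairs.
    Returns the ratio of approval rates (group B / group A).
    A common rule of thumb flags ratios below 0.8 ('four-fifths rule')."""
    rates = {}
    for group in ("A", "B"):
        outcomes = [ok for g, ok in decisions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates["B"] / rates["A"]

# Hypothetical loan decisions: group A approved 8/10, group B 5/10
decisions = [("A", True)] * 8 + [("A", False)] * 2 \
          + [("B", True)] * 5 + [("B", False)] * 5
ratio = disparate_impact(decisions)
print(f"disparate impact ratio: {ratio:.3f}")  # 0.5 / 0.8 = 0.625
if ratio < 0.8:
    print("potential bias flagged")
```

Real services add explanation and monitoring layers on top, but the core idea is the same: a measurable, auditable statistic rather than an opaque judgment.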
13. 5. Proposed framework
Discrete externality (micro; personal)
Negative: harm
Positive: benefits
For harms: ex post resolution, abstract standards, impact assessment
Systemic externality (macro; societal)
Negative: substitution effect in jobs; privacy
Positive: complementarity effect; generative or General Purpose Technologies
For harms: more ex ante, prescriptive, impact assessment
« Existernality »
Negative: existential risk (Terminator)
Positive: pure human enhancement
For harms: more ex ante, proscriptive, deontological (no impact assessment for fat-tail risks)
More at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2931339
14. Liege Competition and Innovation Institute (LCII)
University of Liege (ULg)
Quartier Agora | Place des Orateurs, 1, Bât. B 33, 4000 Liege, BELGIUM
Thank you
Nicolas.petit@uliege.be
Twitter: @CompetitionProf