1.
TRANSPARENT AI
OpenEthics.ai
November 2020
charliepownall.com
CHARLIE POWNALL
2.
About Charlie Pownall
• Independent reputation and communications advisor, trainer, speaker focused on AI, cyber, and social media
• Technology & telecoms; healthcare, pharma & life sciences; business & professional services; start-up & early stage
• UK-based, with Europe, Middle East and Asia-Pacific experience and footprint
• Author, Managing Online Reputation (Palgrave Macmillan, 2015)
• Fellow, Royal Society of Arts
• Faculty, Center for Leadership & Learning, Johnson & Johnson
• Chairman, Communications & Marketing Committee, American Chamber of Commerce Hong Kong, 2012-2015
• Formerly: European Commission, Reuters, SYZYGY AG, WPP plc, Burson-Marsteller
3.
Perceptions of AI
• Widely divergent views on AI between different stakeholders, notably industry and the general public/consumers
• High general public/consumer awareness of AI, but low understanding
• Widespread concerns about privacy, cyber attacks, manipulation, dis/misinformation, equality and human rights, and (un)employment
5.
Why trust in AI is low
• ‘Black box’ AI/algorithmic systems and opaque research
• Inadequate understanding of AI functionality, competence, risks and limitations
• Sensationalist/alarmist media coverage of (un)employment, surveillance, killer robots, geopolitics, etc.
• Many myths and misconceptions
7.
Changing AI landscape
• AI use is fully embedded in everyday life
• AI/algorithms are regular headline news
• More general public/consumer/end user backlashes
• Plethora of active NGOs/civil society organisations
• Pressure on AI developers/managers and academics/researchers to be responsible and accountable
• Prospect of government intervention
• Broader, deeper understanding of AI limitations
8.
Components of trustworthy AI are emerging
• Traceability/verifiability and explainability/interpretability help experts make AI systems safer and fairer, and understand AI decision-making
• Strong governance and ethics help organisations develop and manage more appropriate AI systems and be more accountable
• AI/algorithms remain fundamentally opaque and confusing to the general public/consumers, and perceived accountability remains low
9.
Principles of AI transparency
• Put values, ethics, and transparency at the centre of AI governance
• Involve directly impacted stakeholders in AI design and testing
• Monitor and strengthen AI governance and technology continuously
• Communicate clearly, honestly, consistently, regularly, from the start
• Think laterally about unexpected consequences
10.
AI transparency in practice – communications
• Governance communication
  – People, policies, protocols, process, strategy, etc.
  – Stakeholder involvement, incl. suppliers
  – Datasets, toolkits, tools
• Product/service communication
  – Purpose, objectives, context, technology, data processing, outcomes
  – Risks/limitations, privacy, human oversight, ownership
• Incident and crisis management
  – Preparation, response, recovery
  – Learning, and acting upon lessons learned
11.
The dangers of AI transparency
• Loss of IP
• Loss of competitive advantage
• Failure to meet higher expectations
• Reputational risk
12.
Useful resources
• AI and robotics perception research
  Recent research studies on perceptions of and trust in AI and robotics amongst the general public, consumers, patients, politicians, businesses, employees and other stakeholders
• AI and algorithmic incident and controversy repository
  An open registry of AI-driven incidents and controversies used by journalists, researchers, NGOs, businesses and others for reference, research and product/service development