13. IEEE Global Initiative for Ethical
Considerations in Artificial Intelligence and
Autonomous Systems
• Personal Data and Individual Access Control
• Digital Personas
• Regional Jurisdiction
• Agency and Control
• Transparency and Access
• Symmetry
• Children’s Issues
• Appendices
15. • Working groups such as the following are drafting the standards:
• IEEE P7002: Data Privacy Process
• IEEE P7004: Standard on Child and Student Data Governance
• IEEE P7005: Standard on Employer Data Governance
• IEEE P7006: Standard on Personal Data AI Agent
26. Digital Persona: Birth to Death
• Pre-birth to post-life digital records (health data)
• Birth and the right to claim citizenship (government data)
• Enrolment in school (education data)
• Travel and services (transport data)
• Cross border access and visas (immigration data)
• Consumption of goods and services (consumer and loyalty data)
• Connected devices, IoT and wearables (telecommunications data)
• Social and news networks (media and content data)
• Professional training, internship and work (tax and employment data)
• Societal participation (online forums, voting and party affiliation data)
• Contracts, assets and accidents (insurance and legal data)
• Financial participation (banking and finance data)
• Death (digital inheritance data).
27. • Issue:
• How can AI interact with government authorities to facilitate law enforcement and intelligence collection while respecting rule of law
and transparency for users?
• Background:
• Government mass surveillance has been a major issue since allegations of collaboration between technology firms and signals intelligence
agencies such as the US National Security Agency and the UK Government Communications Headquarters were revealed. Further
attempts to acquire personal data by law enforcement agencies such as the US Federal Bureau of Investigation have complicated settled
legal methods of search and seizure. A major source of the problem is the current framework of data collection and storage, which
places corporate organizations in custody of personal data, detached from the individuals who generate that information. Further complicating this
concern is the legitimate interest that security services have in trying to deter and defeat criminal and national security threats.
• Candidate Recommendations:
• Personal privacy AIs have the potential to change the data paradigm and put the generators of personal information at the centre of
collection. This would return the security services’ investigative methods to pre-Internet approaches, wherein individuals would be able
to control their information while providing custody to corporate entities under defined and transparent policies. (Note: applications as
described below could also be performed by an AI agent or Guardian as described above, and will be assessed for efficacy by the IEEE P7006
working group.)
Such a construct would mirror pre-Internet days in which individuals would deposit information in narrow circumstances such as banking,
healthcare, or transactions.
The personal privacy AI agent would include root-level settings that would automatically provide data to authorities after they have
satisfied sufficiently specific warrants, subpoenas, or other court issued orders, unless authority has been vested in other agencies by local
or national law. Further, since corporately held information would be used under the negotiated terms that the AI agent facilitates,
authorities would not have access unless legal exceptions were satisfied. This would force authorities to avoid mass collection in favor of
particularized efforts.
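The warrant-gated release described above can be sketched as a simple policy check. Everything here — the class names, the order fields, and the set of recognized legal instruments — is a hypothetical illustration, not part of the IEEE P7006 drafts:

```python
from dataclasses import dataclass

# Hypothetical sketch of a personal privacy AI agent's root-level release
# policy: data leaves only when a legal order is sufficiently specific.

@dataclass
class LegalOrder:
    kind: str               # "warrant", "subpoena", or "court_order"
    issuing_court: str      # must be non-empty: no anonymous authority
    named_subject: str      # the particular person the order targets
    data_categories: tuple  # particularized categories, e.g. ("location",)

class PrivacyAgent:
    VALID_KINDS = {"warrant", "subpoena", "court_order"}

    def __init__(self, subject, records):
        self.subject = subject
        self.records = records  # {category: data}

    def release(self, order: LegalOrder):
        """Return only the particularized data an order lawfully covers."""
        if order.kind not in self.VALID_KINDS:
            raise PermissionError("not a recognized legal instrument")
        if not order.issuing_court:
            raise PermissionError("order lacks an issuing court")
        if order.named_subject != self.subject:
            raise PermissionError("order does not name this subject")
        if not order.data_categories:
            raise PermissionError("order is not particularized")
        # Mass collection is impossible by construction: only the
        # categories the order names ever leave the agent.
        return {c: self.records[c] for c in order.data_categories
                if c in self.records}
```

Under this sketch, an over-broad order (one naming no categories) is refused outright, while a specific warrant yields only the named data — the "particularized efforts" the text calls for.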
28. Symmetry and Consequences
• Issue:
• Could a person have a personalized privacy AI or
algorithmic Agent or Guardian?
• Candidate Recommendations:
• Algorithmic guardian platforms should be
developed for individuals to curate and share their
personal data.
29. • Issue:
• Consent is vital to information exchange and innovation in the
digital age. How can we redefine consent regarding personal
data so it respects individual autonomy and dignity?
• Candidate Recommendations:
• The asymmetric power of institutions (including the public interest)
over individuals should not force the use of personal data when
alternatives exist, such as personal guardians, personal agents,
law-enforcement-restricted registries, and other designs that do not
depend on a loss of agency. When a loss of agency is required by
technical expedience, transparency must be stressed in order
to mitigate the asymmetric power relationship.
30. • Issue:
• Data that is shared easily or haphazardly can be used to make
inferences that an individual may not wish to share.
• Candidate Recommendation:
• The same AI/AS that parses and analyzes data should also help
individuals understand how personal information can be used. AI can
provide granular-level consent in real time. Specific information must be
provided at or near the point (or time) of initial data collection to
give individuals the knowledge to gauge potential long-term privacy
risks. When the user has direct contact with a system, data
controllers, platform operators, and system designers must monitor for
consequences. Positive, negative, and unpredictable impacts of
accessing and collecting data should be made explicitly known to an
individual to enable meaningful consent ahead of collection.
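As an illustration of granular, point-of-collection consent, the sketch below discloses a data field's known impacts before asking for a decision, and collects nothing for fields with no disclosure. The field names and disclosure texts are invented for the example:

```python
# Hypothetical sketch: granular, point-of-collection consent. Before any
# field is collected, its known positive and negative impacts are
# disclosed, and the individual grants or withholds consent per field.

DISCLOSURES = {
    "location": {"positive": "local recommendations",
                 "negative": "movement patterns can be inferred"},
    "contacts": {"positive": "easier sharing",
                 "negative": "social graph can be inferred"},
}

def collect(requested_fields, decide):
    """Collect only fields the individual consents to, after disclosure.

    `decide(field, disclosure)` is the individual's (or their agent's)
    consent decision, made with the impacts in front of them.
    """
    granted = {}
    for field in requested_fields:
        disclosure = DISCLOSURES.get(field)
        if disclosure is None:
            continue  # impacts unknown: do not collect at all
        if decide(field, disclosure):
            granted[field] = True
    return granted

# Example: an individual who accepts location tracking but not contacts.
choices = collect(["location", "contacts", "gait"],
                  lambda f, d: f == "location")
```

The design choice worth noting is the `None` branch: a field whose impacts cannot be disclosed is treated as unconsentable, which mirrors the recommendation that impacts be made explicitly known ahead of collection.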
31. Agency and Control
• To define the scope of an Agent’s duties, the definition of
personally identifiable information (PII) must be clarified.
• The collection and transfer of personal data should rely on
policies aligned with the spirit of the GDPR.
• Most of the Western Hemisphere is expected to
rely indirectly on GDPR compliance requirements to
correct corporate policy contrary to consent ethics
in the collection and transfer of personal data.
32. Individuals should have access to
trusted identity verification
• AI that guarantees individuals can receive financial,
government, telecommunications, and other services
with a verified identity
• Individuals should have access to trusted identity
verification services to validate, prove, and support
the context-specific use of their identity. Regulated
industries and sectors such as banking, government,
and telecommunications should provide data
verification services to citizens and consumers to
give individuals the greatest usage and control.
33. Transparency and Access
• Issue:
• It is often difficult for users to determine what information
a service provider collects about them, when such
aggregation/collection occurs (at installation,
during usage, even when not in use, after deletion), and
how to correct, amend, or manage that information.
• Candidate Recommendation:
• Service providers should ensure that personal data
management tools are easy to find and use within their
service interface.
34. Obtaining Consent
• Issue:
• Many AI/AS systems will collect data from individuals who
do not have a direct relationship with, and are not
interacting directly with, the system. How can
meaningful consent be provided in these situations?
• Candidate Recommendations:
• Where the subject does not have a direct relationship with
the system, consent should be dynamic and must not rely
entirely on initial Terms of Service or other instruction
provided by the data collector to someone other than the
subject. We recommend AI/AS systems be designed to
interpret the data preferences, verbal or otherwise, of all
users signalling limitations on collection and use, discussed
further below.
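One way to read "interpret the data preferences ... of all users signalling limitations on collection and use" is a registry of machine-readable preference signals, consulted dynamically at use time rather than once at installation. The signal names and the no-collection default below are assumptions for the sketch, not an existing standard:

```python
# Hypothetical sketch: honoring data-preference signals from subjects who
# have no direct relationship with a system (e.g. bystanders of a sensor).

class SignalRegistry:
    """Preference signals broadcast by subjects' devices or agents."""

    def __init__(self):
        self._signals = {}  # subject_id -> set of limitation strings

    def broadcast(self, subject_id, limitations):
        self._signals[subject_id] = set(limitations)

    def limitations_for(self, subject_id):
        # With no direct relationship and no signal, default to no
        # collection at all, rather than relying on someone else's
        # Terms of Service to cover the subject.
        return self._signals.get(subject_id, {"no-collection"})

def may_use(registry, subject_id, purpose):
    """Dynamic consent check, made at the moment of use."""
    limits = registry.limitations_for(subject_id)
    if "no-collection" in limits:
        return False
    return f"no-{purpose}" not in limits

registry = SignalRegistry()
registry.broadcast("alice", {"no-advertising"})
```

Here a subject who has signalled a limitation is honored per purpose, and an unknown subject is never collected from — making consent dynamic instead of frozen into initial Terms of Service.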
35. • Issue:
• How do we make better user experience and consent education
available to consumers as a standard means of expressing
meaningful consent?
• Candidate Recommendation:
• Tools, settings, and consumer education are increasingly available
and can be used now to develop, apply, and enforce consumer consent.
• Provide ‘Privacy Offsets’ as a business alternative to the personal
data exchange.
• Apply ‘Consent’ to further certify Artificial Intelligence in legal
and ethics doctrine.
36. • Issue:
• In most corporate settings, employees do not give clear consent
to how their personal information (including health and other
data) is used by employers. Given the power differential
between employees and employers, this is an area in need of
clear best practice.
• Candidate Recommendation:
• In the same way that companies perform Privacy Impact
Assessments for how individual data is used, companies need to
create Employee Data Impact Assessments to deal with the
specific nuances of corporation-specific situations. It should be clear
that no data is collected without the consent of the employee.
37. • Issue:
• People who are losing the ability to understand what kinds of processing
IT services perform server-side on their private data cannot
meaningfully consent to online terms. The elderly and mentally
impaired adults are vulnerable with respect to consent, with
consequences for data privacy.
• Candidate Recommendations:
• Researchers and developers of AI/AS must take the situation of
vulnerable people into account and work toward AI/AS that alleviates
their helplessness and prevents possible harm from misuse of their
personal data.
• Build an AI advisory commission, comprised of elder advocacy and mental
health self-advocacy groups, to help developers produce tools and
comprehension metrics that enable meaningful and pragmatic consent
applications.