4. Artificial Intelligence (“AI”) Regulation
A vast array of AI principles has been published over the years as ethical guides to the use of AI.
DEFINING AI
Artificial Intelligence is a term shaped by socio-behavioural notions of human capability –
essentially, the expectation that machines could emulate human cognition and behaviour. Artificial
intelligence is the intelligence machines display in the way they use data to solve problems.
VISIT US AT - WWW.INCILLATION.COM
5. AI and Intelligence
Intelligence in the world of AI “is not merely book learning, a narrow academic skill, or test-taking
smarts. Rather, it reflects a broader and deeper capability for comprehending our
surroundings—‘catching on’, ‘making sense’ of things, or ‘figuring out’ what to do” (Gottfredson,
1997).
This characterises AI as a suite of technologies exhibiting some degree of autonomous learning.
It does this through:
● Pattern detection - recognising regularities and irregularities in data
● Decision-making - generating rules from general data and applying specific profiles against
those rules
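The two bullets above can be sketched as a toy example. Everything here is an illustrative assumption – the function names, the outlier threshold, and the profile fields are invented for the sketch, not taken from any real system.

```python
# Hypothetical sketch of the two AI capabilities above.
# All names and thresholds are illustrative assumptions.

def detect_outliers(values, tolerance=2.0):
    """Pattern detection: flag values that deviate from the regular range."""
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    std = variance ** 0.5
    return [v for v in values if abs(v - mean) > tolerance * std]

def decide(profile, rules):
    """Decision-making: apply a specific profile against general rules."""
    return all(rule(profile) for rule in rules)

transactions = [10, 12, 11, 9, 10, 250]      # one irregular value
print(detect_outliers(transactions))          # -> [250]

rules = [lambda p: p["age"] >= 18, lambda p: p["verified"]]
print(decide({"age": 30, "verified": True}, rules))   # -> True
```

Real systems would learn the rules from data rather than hard-code them, but the shape is the same: find regularities, then apply a profile against them.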
6. Trustworthy AI
Trust is a principle that affects the use and adoption of technology across the world.
Trust shapes how we perceive our data being used, and whether we accept technology making
decisions about us without infringing our rights or compromising security.
The EU recently released a paper on Trustworthy AI, intended as a guide to AI policy and
regulatory development. The paper identifies a series of principles underpinning trust in AI.
7. Key Principles – Trustworthy AI
1. Lawfulness
2. Ethics
3. Accountability
4. Safety
5. Human Oversight
6. Diversity / Fairness
7. Transparency
8. Privacy
9. Non-Discrimination
10. Societal wellbeing
8. How is this relevant?
The key principles identified around the use of AI relate to factors that affect people's ability to trust its usage.
They are also key to data design aimed at protecting personal data and complying with rules on personal data
processing.
These principles are interconnected and take into account the major risks and problems surrounding the use of
AI.
E.g. – Privacy.
Privacy is important because all data relates to an identifiable thing and, most of the time, to identifiable persons.
This raises the need to balance the collection of information against the rights of the person whose information
is being collected and used. Forms on the internet routinely require the collection of substantial personal
information, and access to services requires this as a matter of operational fact. Privacy is therefore an important
principle to consider and is often cited, together with accountability and safety (trust), as one of the overarching
principles surrounding the ethical use of AI in today's society.
9. Societal Wellbeing
Autonomy is one of the key factors that affect the use and governance of AI today. Autonomy is important
because it determines a course of action. In the context of AI, autonomy is crucial because it affects how
decisions are made.
There are two modes of decision making in AI.
◦ Human Decision Making
◦ Automated Decision making
Autonomy around automated decision-making affects how data is used or processed, and is a key concern of
regulations like the GDPR, which protects basic rights around data usage.
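The distinction between the two decision modes can be sketched as a simple routing function. This is a hypothetical illustration: the field names, the threshold, and the significance test are invented, and the real legal test under GDPR Article 22 ("legal or similarly significant effect") is a legal judgment, not a boolean field.

```python
# Illustrative sketch of the two decision modes above. The GDPR restricts
# solely automated decisions that have legal or similarly significant effects;
# one common control is routing such cases to a human reviewer.
# Field names and the significance flag are assumptions for illustration.

def route_decision(case):
    """Return ('automated', result) or ('human_review', None)."""
    if case.get("significant_effect"):       # e.g. a credit refusal
        return ("human_review", None)        # keep a human in the loop
    result = case["score"] >= 0.5            # trivial automated rule
    return ("automated", result)

print(route_decision({"score": 0.8, "significant_effect": False}))
# -> ('automated', True)
print(route_decision({"score": 0.8, "significant_effect": True}))
# -> ('human_review', None)
```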
Societal well-being reflects the impact of AI use on society, and therefore involves impacts on rights such as privacy,
the ability of humans to retain control of AI, the ethical use of AI, and the ability to retain accountability around AI
usage. These are the areas in which AI is indirectly regulated under the auspices of data protection.
10. Regulation
The UK financial services regulator requires (FCA Handbook, SYSC 13.7) that “A firm should establish and maintain appropriate systems and controls for managing
operational risks that can arise from inadequacies or failures in its processes and systems (and, as appropriate, the systems and processes
of third party suppliers, agents and others). In doing so a firm should have regard to:
(1) the importance and complexity of processes and systems used in the end-to-end operating cycle for products and activities (for
example, the level of integration of systems);
(2) controls that will help it to prevent system and process failures or identify them to permit prompt rectification (including pre-approval
or reconciliation processes);
(3) whether the design and use of its processes and systems allow it to comply adequately with regulatory and other requirements;
(4) its arrangements for the continuity of operations in the event that a significant process or system becomes unavailable or is destroyed;
and
(5) the importance of monitoring indicators of process or system risk (including reconciliation exceptions, compensation payments for
client losses and documentation errors) and experience of operational losses and exposures”.
11. Compliance
Compliance with regulations like SYSC 13.7 of the FCA Handbook, quoted above, means that firms
using or contemplating the use of AI need to understand the principles underpinning its ethical use,
the key regulations affecting its use in operations, and the controls that mitigate the key risks
attached to its use.
For most firms this means having a strong grasp of their data management systems and infrastructure,
as well as the rules around the use of data generally.
The use of non-personal data is mainly unregulated.
However, personal data, which exposes individuals to the possibility of breaches and rights
infringements, is increasingly regulated. Documents that assess risks to personal data and record
how companies comply with regulations are key controls mitigating the risk of inappropriate
systems and controls around the operational use of AI technology.
13. DATA INPUT AND OUTPUT
Input refers to what we put into the computers
that we use. This is usually in the form of data,
including personal data and operational data
(data needed to run a system, e.g. algorithms).
Output refers to the finished product after data
is processed – it could be websites, databases,
platforms, etc.
14. What is a Data Management System?
A data management system refers to the infrastructure used to manage data in an organisation.
It includes the IT system, filing systems, the software used for data input and its outputs, and all
of the controls used by a company to ensure that it meets legal requirements around the
processing of data and IT security.
Data management systems deal with infrastructure, data quality, and compliance or governance
around the data.
15. INPUT AND DATA QUALITY
Input consists of data around objects, subjects, variables, and operations.
This means that data needed to run a system can refer to any one of the above. As such to
ensure that the system has adequate data, there is a need to map out data requirements and
ensure that relevant data is made available to the system.
Simply put – you need the right input for the right output. This ensures accuracy in the system.
Data Quality: deals with the accuracy, usefulness, and breadth (representativeness) of the data
that you use. It addresses whether the data is understandable, easy to refer to, and extensive
enough to cover the areas it needs to cover to make it fit for purpose.
Issues around data quality touch on ideas such as data labelling, diversity, and data design.
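The quality dimensions above (labelling, completeness, representativeness) can be turned into concrete checks. This is a minimal sketch under stated assumptions: the record fields, the `group` key used as a proxy for representativeness, and the thresholds are all invented for illustration.

```python
# Hypothetical data-quality checks for the dimensions discussed above.
# Field names ("label", "group") and thresholds are illustrative assumptions.

def check_quality(records, required_fields, min_groups=2):
    issues = []
    # Completeness / labelling: every record carries the required fields.
    for i, rec in enumerate(records):
        missing = [f for f in required_fields if rec.get(f) is None]
        if missing:
            issues.append(f"record {i}: missing {missing}")
    # Breadth / representativeness: data covers enough distinct groups.
    groups = {rec.get("group") for rec in records if rec.get("group")}
    if len(groups) < min_groups:
        issues.append(f"only {len(groups)} group(s) represented")
    return issues

data = [
    {"group": "A", "label": "approved", "amount": 100},
    {"group": "A", "label": None, "amount": 50},   # unlabelled record
]
for issue in check_quality(data, ["label", "amount"]):
    print(issue)
# -> record 1: missing ['label']
# -> only 1 group(s) represented
```

Right input for the right output: checks like these run before data reaches the system, so that inaccurate or unrepresentative data is caught at the input stage.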
17. Data Compliance Infrastructure
This refers to the infrastructure that supports the data management system. It includes the IT
systems, the people who run them, the rules that impact the system, and the ways in which
the system is run to make it accountable – such as:
◦ Software
◦ Hardware
◦ Policies
◦ Procedures
◦ Regulatory risk assessments
◦ Governance
◦ Data Protection Officer (DPO)
◦ Data protection teams (analysts / managers / the DPO)
18. Governance
Governance comprises all the processes of interaction, whether through the laws, norms, power, or language of an
organized society, by which conduct is imposed on and regulated within a social system.
In the world of AI, software creation and programming are not regulated; however, the use of personal data is
increasingly subject to regulation. Software development, while unregulated, is subject to assurance.
Regulation: within the EU, personal data is protected by the GDPR (implemented in the UK by the Data Protection
Act 2018). This brings it within the regulatory scope of all companies, where it is usually covered by data protection teams.
Assurance: software testing helps find and fix existing mistakes, while quality assurance helps avoid them. Both
processes are important, and to meet the best quality criteria you cannot have one without the other. This is a form
of governance around the use of AI: AI-enabled software can be developed and tested to ensure that it works and
meets operational principles and guidelines.
Governance focuses on data quality assurance and data compliance.
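The testing-as-assurance idea above can be made concrete with a small example: an AI-enabled function is exercised against expected behaviour before it is relied upon. The scoring function, its cap, and the test cases are all invented for this sketch.

```python
# Sketch of testing as a governance control: a (toy) AI-enabled scoring
# function is checked against expected behaviour before deployment.
# The model and thresholds are illustrative assumptions.

def risk_score(transactions):
    """Toy model: score rises with total transaction value, capped at 1.0."""
    if not transactions:
        return 0.0
    return min(1.0, sum(transactions) / 10_000)

def test_risk_score():
    assert risk_score([]) == 0.0                  # safe default on no data
    assert 0.0 <= risk_score([100, 200]) <= 1.0   # output stays in range
    assert risk_score([20_000]) == 1.0            # capped, never exceeds 1
    print("all checks passed")

test_risk_score()   # -> all checks passed
```

Tests like these give assurance that the software "works and meets operational principles" even though the software itself is not regulated.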
19. The key questions to ask around data use
Who uses the data? – this question reveals the data controller (data owner) and the processor
(user).
When and where is it used? – this question deals operationally with the issues of time and
jurisdiction. Jurisdiction determines the law affecting the processing of personal data.
Why is it used? – this question refers to the legality of usage; the reason needs to be legal and
fall within a lawful basis for data processing, such as consent or legitimate interest.
How is it used? – this question touches on data processing principles such as data accuracy,
minimisation of data use, and the protection of personal data through data rights for the subjects
of data processing.
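The four questions above map naturally onto a structured record, loosely in the spirit of the GDPR's records of processing activities. The field names and example values below are assumptions for illustration, not a prescribed format.

```python
# Hypothetical record capturing the who / when-where / why / how questions
# above. Field names and values are illustrative assumptions only.

from dataclasses import dataclass, field

@dataclass
class ProcessingRecord:
    controller: str      # who: the data controller (owner)
    processor: str       # who: the party processing on its behalf
    jurisdiction: str    # where: determines the applicable law
    lawful_basis: str    # why: e.g. "consent", "legitimate interest"
    principles: list = field(default_factory=list)  # how: accuracy, minimisation...

record = ProcessingRecord(
    controller="Example Ltd",          # hypothetical names
    processor="CloudHost Inc",
    jurisdiction="UK",
    lawful_basis="consent",
    principles=["accuracy", "data minimisation"],
)
print(record.lawful_basis)   # -> consent
```

Answering the questions in a structured form like this is itself a compliance control: the record documents who processes what, where, and on what basis.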
20. Data Quality / Data Compliance
Data quality
◦ This refers to how fit for purpose the data held by a controller is.
◦ Assurance systems such as data quality assurance
◦ Software testing
◦ Data design principles
◦ Data quality policies
Data Compliance
◦ Refers to the meeting of regulatory rules and requirements.
◦ Governance processes such as data protection risk reviews – e.g. DPIAs
◦ Data protection policies and infrastructure aimed at upholding data rights and meeting data protection principles.
◦ Regulations protecting the processing or use of data – e.g. personal data.
21. Data rights
Data rights focus on principles that resonate both with AI trustworthiness and its ethical use, and with
personal data protection. Within the GDPR these rights are:
◦ the right to be informed about the collection and the use of their personal data
◦ the right to access personal data and supplementary information
◦ the right to have inaccurate personal data rectified, or completed if it is incomplete
◦ the right to erasure (to be forgotten) in certain circumstances
◦ the right to restrict processing in certain circumstances
◦ the right to data portability, which allows the data subject to obtain and reuse their personal data for their own
purposes across different services
◦ the right to object to processing in certain circumstances
◦ rights in relation to automated decision making and profiling
◦ the right to withdraw consent at any time (where relevant)
◦ the right to complain to the Information Commissioner
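A few of the rights listed above can be sketched as a request-routing function. This is a deliberately minimal illustration: the in-memory store, the handler behaviour, and the subject names are invented assumptions, and a real implementation would involve identity verification, logging, and legal review.

```python
# Minimal sketch of routing some GDPR data rights to handlers.
# The store, subjects, and handler behaviour are invented for illustration.

store = {"alice": {"email": "alice@example.com"}}

def handle_request(subject, right):
    if right == "access":
        return store.get(subject)           # right of access
    if right == "erasure":
        store.pop(subject, None)            # right to erasure ("forgotten")
        return "erased"
    if right == "rectification":
        return "pending review"             # corrected after verification
    return "unsupported right"

print(handle_request("alice", "access"))   # -> {'email': 'alice@example.com'}
print(handle_request("alice", "erasure"))  # -> erased
print("alice" in store)                    # -> False
```

Even this toy version shows why rights are a systems question as much as a legal one: the infrastructure must be able to locate, return, and delete a subject's data on demand.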
22. Corresponding AI principles
Transparency: this involves the data protection principles of purpose limitation, data minimisation, and storage limitation. Data rights such as
the right to be informed, the right to access personal data, and the rights to erasure and to object to processing all come within the principle of
transparency, as they are all designed to aid transparency in the use of personal data.
Accountability: this involves the data protection principle that makes controllers and processors accountable for the personal data they process.
Corresponding data rights include the rights to restrict processing, to access processed data, and rights in relation to automated decision-making;
all of these are underpinned by the data protection principle that requires a lawful basis for the processing of personal data.
Safety: this involves the security principle of data protection, such as the GDPR requirement that “you must ensure that you have appropriate
security measures in place to protect the personal data you hold”. It requires that breaches of data protection, such as unauthorised use of data,
unauthorised access to data, and hacking, are reported to data regulators. Companies can be fined for breaches of personal data protection,
making them more likely to comply with legal requirements. Safety measures around the protection of personal data include data protection
risk/impact assessments, data protection policies, a clear desk policy, cyber/antivirus protection, and IT security policies.
In conclusion, much of the input that goes into AI, and therefore AI itself, remains unregulated. However, personal data protection means that there is an element of
governance which affects the use of AI today. Data protection is largely where AI meets compliance in today's regulated world. There have been many arguments for
the governance of AI itself, with some commentators arguing that AI as such is ungovernable and that it is its usage which should be the subject of governance.
While there is agreement on key principles around trustworthiness and the ethical use of AI, this is a changing space which could become subject to governance in the
near future.