[DSC Europe 22] AI Ethics and AI Quality By Design - Muthu Ramachandran
AI Ethics and AI Quality
Dr Muthu Ramachandran PhD FBCS Senior Fellow of Advance HE, MIEEE, MACM
Visiting Professor @ University of Southampton
Research & Educational Consultant @ AI Tech, UK.
Google Scholar: https://scholar.google.com/citations?user=RLmKWYYAAAAJ&hl=en
Amazon Author, https://tinyurl.com/Muthu-Amazon-Author
Artificial Intelligence and machine learning applications have reached adoption maturity, as
we have witnessed across the applications and devices we use. However, we need to integrate AI Ethics
and AI Quality into AI software by design. Therefore, we need a strategic and business-driven
framework for the design, development, implementation, testing, and quality of AI products. Hence,
I propose a software engineering framework for AI & ML applications (SEF-AI&ML).
AI & DS for SE & SE for AI & DS: Rationale
AI Application Characteristics
AI Ethics Principles, Guidelines & Governance
Traditional Software Quality vs. AI Quality Attributes/Characteristics
AI Application Development Process: AIOps and MLOps
AI Project Management, Strategic SE & Reference Architecture for AI Applications
Evaluation of AI Architecture: Chatbot Case Study
• One of the aims is to provoke some thoughts on Software Engineering challenges for AI
& ML applications development, and for data & model validation & verification.
Some of those challenges differ from the traditional software development we have
seen for the past 50 years or so.
• Challenge 1: Learning from real-time data that is fed back into ML & AI models makes
their behaviour difficult to predict & specify
• Challenge 2: It is difficult to test data and to debug AI, ML & DL systems
• Challenge 3: Even more challenging is Deep Learning (DL), where not only the number of
parameters can be of the order of millions, but where, typically, the representation of
the data is learned separately from the inferential models and can consist of different
nested levels of abstraction.
• Challenge 4: what is the Software Engineering process for developing & delivering AI,
ML, & DL (AMD) applications?
• Challenge 5: Some of the key attributes of AI quality are fairness, accountability,
explainability, responsibility, and transparency; how do we specify & validate all of these?
• How should software development teams integrate the AI model lifecycle (training, testing, deploying,
evolving, etc.) into their software development process?
• What new roles, artifacts, and activities of the ML development process come into play, and how do they affect the software development process?
• How do we distinguish between SE for ML vs. ML for SE?
• How do those new roles, artifacts, and activities tie into existing agile or DevOps processes?
• How is SE research for AI-based systems characterized?
• What are the characteristics of AI-based systems (used terms, scope, and quality goals)?
• Which SE approaches for AI-based systems have been reported in the scientific literature?
• What are the existing challenges associated with SE for AI-based systems?
• Challenge 6: How do we integrate AI Ethics and AI Quality attributes into AI Software?
• 60 software development methodologies
• 50 static analysis tools
• 40 software design methods
• 37 benchmark organizations
• 25 size metrics
• 20 kinds of project management tools
• 22 kinds of testing and dozens of other tool variations.
• Software has been built with a minimum of 3,000 programming
languages, even though only about 100 are frequently used. New
programming languages are announced every 2 weeks, new tools are
released more than once a month, and new methodologies emerge
roughly every 10 months.
• Established ML & AI Algorithms (there are four types of machine
learning algorithms: supervised, semi-supervised, unsupervised, and reinforcement)
• Statistics and Visualisation Techniques
• How can SE help Data Science, AI, ML, RL & DL application
development achieve the desired quality?
• How can AI help to further develop SE?
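Of the four algorithm families listed above, supervised learning is the most common; the sketch below is a minimal pure-Python 1-nearest-neighbour classifier on invented data (no established library's API is assumed), illustrating the supervised setting of learning a mapping from labelled examples:

```python
# Minimal supervised-learning sketch: 1-nearest-neighbour classification.
# All data points and labels here are hypothetical, for illustration only.

def euclidean(a, b):
    # Straight-line distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict_1nn(train_points, train_labels, query):
    # Label the query with the label of its closest training example.
    nearest = min(range(len(train_points)),
                  key=lambda i: euclidean(train_points[i], query))
    return train_labels[nearest]

# Labelled training data: two clusters in 2-D.
X = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
y = ["low", "low", "high", "high"]

print(predict_1nn(X, y, (0.05, 0.1)))   # query near the "low" cluster
print(predict_1nn(X, y, (0.95, 1.0)))   # query near the "high" cluster
```

Unsupervised and reinforcement variants differ only in what supervision signal (none, or a reward) replaces the labels `y`.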
MLOps: AI, ML, SE, DevOps, and IT
What if there was a better way? Machine Learning Operations
(MLOps) will get your AI projects out of the lab and into production
where they can generate value and help transform your business. In
this instalment of four Data Science Central Podcasts on MLOps, we
explore best practices in Production Model Monitoring. To make AI
and ML successful, they need to be continuously integrated into
software or into any production environment, such as automated
manufacturing. Podcast link,
The Three Ways of DevOps Principles
Kim, G (2022) The Three Ways: The Principles Underpinning DevOps, https://itrevolution.com/articles/the-three-ways-principles-underpinning-devops/
The First Way emphasizes the performance of the entire system, as opposed to the performance of a
specific silo of work or department. Outcomes of putting the First Way into practice include never passing a
known defect to downstream work centres, never allowing local optimization to create global
degradation, always seeking to increase flow, and always seeking to achieve profound understanding of
the system (as per Deming).
The Second Way is about creating right-to-left feedback loops. The goal of almost any process
improvement initiative is to shorten and amplify feedback loops so necessary corrections can be
continually made. Outcome: Understanding and responding to all customers, internal and external,
shortening and amplifying all feedback loops, and embedding knowledge where we need it.
The Third Way is about creating a culture that fosters two things: continual experimentation, taking risks
and learning from failure; and understanding that repetition and practice are the prerequisites to mastery.
Outcome: allocating time for the improvement of daily work, creating rituals that reward the team for
taking risks, and introducing faults into the system to increase resilience.
Figure: the worldwide popularity scores of the four types of ML algorithms (supervised, unsupervised, semi-supervised, and reinforcement), on a scale of 0 (min) to 100 (max) over time, where the x-axis represents the timestamp and the y-axis the corresponding score.
Sarker, I. H. (2021) Machine Learning: Algorithms, Real-World Applications and Research Directions,
SN Computer Science (2021) 2:160 https://doi.org/10.1007/s42979-021-00592-x
Human Intelligence vs AI
Isaac Asimov once said, “The saddest aspect of
life right now is that science gathers knowledge
faster than society gathers wisdom.” For a
change, let’s use our AI knowledge to improve
the components of intelligence. Can’t we improve
human intelligence by using the lessons
learned from AI? We cannot change the
architecture, but we can improve the training.
AI Ethics disciplinary landscape
It is imperative to build ethics into algorithms, otherwise AI will make unethical choices by design (Gov.au)
Zhou, J. et al (2020) A Survey on Ethical Principles of AI and Implementations, IEEE ETHAI 2020, December, DOI:
Towards Achieving Integrated Frameworks for Ethical AI
•Build Ethics In (BEI) with six pillars of
trustworthiness: safety, security, privacy,
reliability, business integrity & resiliency, and
formalised and standardised ethical & risk assessment
•Formal methods verification & validation for
robotics and high-integrity AI applications
•Foolproof AI ethics assessment
•Ethics Maturity Models
•Ethical Management of AI Frameworks
•Verify & validate the five pillars of trustworthiness
(security, privacy, reliability, business integrity, &
formalised ethical & risk assessment)
•Human-Level AI (Representation and Computation
of Meaning in Natural Language, Jackson, C. P
(2019)) – problem solving, learning and gaining
knowledge, self-improving, self-recovery, creative,
inventive and discovery. Artificial general intelligence
(AGI), machine learning approaches toward
achieving fully general artificial intelligence
•Follow Agile & DevOps
•Verify & validate security, safety
Frameworks for AI
• Explainable AI (XAI) needs to be
designed keeping in mind the
business, end users, and stakeholders
• Explainable features
• Explainable selection of models
• Explainable what-if scenarios, verified
formally and validated using simulation
models such as BPMN
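The "explainable features" item above can be illustrated with permutation importance, a common model-agnostic explanation technique. This is only a hedged pure-Python sketch with an invented toy model and data, not the API of any specific XAI library:

```python
import random

# Hypothetical model: score = 3*x0 + 0.1*x1 (feature 0 dominates by design).
def model(row):
    return 3.0 * row[0] + 0.1 * row[1]

def mse(rows, targets):
    # Mean squared error of the model over a dataset.
    return sum((model(r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(rows, targets, feature, rng):
    # Shuffle one feature column and measure how much the error grows:
    # a large increase means the model relies heavily on that feature.
    baseline = mse(rows, targets)
    column = [r[feature] for r in rows]
    rng.shuffle(column)
    permuted = [list(r) for r in rows]
    for r, v in zip(permuted, column):
        r[feature] = v
    return mse(permuted, targets) - baseline

rng = random.Random(0)
rows = [[i, 10 - i] for i in range(10)]
targets = [model(r) for r in rows]   # perfect fit, so the baseline error is 0
print(permutation_importance(rows, targets, 0, rng))   # large: feature 0 matters
print(permutation_importance(rows, targets, 1, rng))   # small: feature 1 barely matters
```

Reporting such importances alongside predictions is one concrete way to give end users and stakeholders an explanation of which inputs drove a model's output.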
• Integrating ethics into AI/ML/DL
requirements, Design, & Test (AIDevOps) &
SE for AI approaches
• Establishing & integrating accountability,
fairness, security, safety, & trustworthiness
• Data source authenticity, cleaning, & validation
• Model identification, validation, & verification
• Algorithm identification, validation, & verification
Human-Level AI
3S AI Ethics: Safety AI, Security AI, and Sustainable AI
Software Engineering for AI
Integrated Ethical AI: Conversational AI, Human-Centred AI, Explainable AI, Responsible AI, Trustworthy AI (CHERT AIs), Safety AI, Security AI, and AIDevSecOps
AI ETHICS GUIDELINES: Among the eight key themes were:
safety and security,
fairness and non-discrimination,
human control of technology, and
promotion of human values
11 Normative Principles: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence,
freedom and autonomy, trust, sustainability, dignity and solidarity
AI Ethics: Right to Intelligence
• Right to Intelligence: Use of our Data by AI Applications & Businesses
AI Ethics: 5 Principles
•Right to Intelligence -The right to protect human intellectual capabilities from being displaced
by intelligent innovative systems.
•Purpose Driven - Purpose driven design to augment people’s current capabilities, innovation
should solve unexplored and unaddressed problems thereby ensuring innovations add value.
•Disruption Prevention - Measures for displacement protection should be in place to minimise
social disruption by creating seamless and sustainable innovations
•Risk Evaluated - To reassure that innovations are designed for safe, ethical, and inclusive adoption,
and that the risks are managed by the designer and the user equally
•Accountable Re-design - Accountable Redesign allows every innovation a fair opportunity to
be people-centric, sustainable, and accountable by ensuring the designer and innovators
produce robust technology. Also, allowing existing innovations to embed accountability in their
design thinking process and reverse the existing damage.
Examples of Capturing AI Ethics Requirements Based on Ethics
Principles & Guidelines
• The European Commission’s draft ethical guidelines for trustworthy AI 
lists five such principles: Autonomy (respect for human dignity),
Beneficence (doing good to others), Nonmaleficence (doing no harm to
others), Justice (treating others fairly), Explicability (behaving transparently
towards others). For example,
• from the Principle of Autonomy one may derive “Respect for a person’s
• and from that an ethical requirement “Take a photo of someone only after
her/his consent” for a phone camera.
• As another example, from Nonmaleficence, we may derive a functional
requirement “Do not drive fast past a bystander” for a driverless car.
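The phone-camera example above can be made concrete in code: a hypothetical sketch (all class and method names are invented for illustration) of turning the ethical requirement into an enforced precondition:

```python
# Hypothetical encoding of the ethical requirement "Take a photo of
# someone only after her/his consent" as an enforced precondition.

class ConsentError(Exception):
    pass

class PhoneCamera:
    def __init__(self):
        self.consents = set()   # subjects who have granted consent

    def record_consent(self, subject):
        self.consents.add(subject)

    def take_photo(self, subject):
        # The ethical requirement becomes a guard that blocks the action.
        if subject not in self.consents:
            raise ConsentError(f"no consent recorded for {subject}")
        return f"photo of {subject}"

cam = PhoneCamera()
cam.record_consent("alice")
print(cam.take_photo("alice"))          # allowed: consent was recorded
try:
    cam.take_photo("bob")               # blocked: no consent
except ConsentError as e:
    print("blocked:", e)
```

Expressing an ethics principle as a testable precondition like this is what makes it possible to verify & validate the requirement rather than merely state it.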
Datta, A (2022) AI Quality – the Key to Driving Business Value
What are AI Quality Attributes?
• In short, AI Quality encompasses not just model performance metrics, but a
much richer set of attributes that capture how well the model will
generalize, including its conceptual soundness, explainability, stability,
robustness, fairness, reliability, unbiased outcomes, human-centredness,
and data quality. Model performance or accuracy is a key attribute of AI
Quality.
AI Quality Attributes:
• Conceptual soundness (model performance and accuracy)
• Reproducibility and auditability
• Compliance with governance
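One of the attributes above, fairness, can be quantified directly. Below is a minimal sketch (the predictions and group labels are invented) of the demographic-parity difference, i.e. the gap in positive-prediction rates between groups:

```python
# Demographic parity difference: the gap between the highest and lowest
# positive-prediction rates across groups. Data below is hypothetical.

def positive_rate(preds, groups, group):
    # Fraction of members of `group` that received a positive prediction.
    members = [p for p, g in zip(preds, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_diff(preds, groups):
    rates = {g: positive_rate(preds, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

preds  = [1, 0, 1, 1, 0, 0, 1, 0]           # model decisions (1 = approve)
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_diff(preds, groups))   # 0.75 - 0.25 = 0.5
```

A value near 0 indicates parity; a large gap like this one is the kind of signal an AI quality gate would flag for review.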
Therefore, a new perspective is presented: quality engineering fields.
Breu, Kuntzmann-Combelles, and Felderer (2014) proposed four fields in which to measure quality: knowledge management, automation,
data analysis, and collaborative processes.
In addition, they insist on continuous delivery as a mandatory attribute for business agility.
Surprisingly, more recently, the term assurance has changed its meaning completely for modern complex
systems and disruptive technologies such as AI and ML.
Bloomfield (2019) defines assurance through the claims, arguments, and evidence (CAE) framework, where claims are assertions put forward
for general acceptance. Another key property of disruptive technologies such as AI and ML systems is behavioural uncertainty in
real-time scenarios, bias based on existing data, safety in self-driving cars, etc.
AI and ML Systems Characteristics vs. SE Best Practices
To this end, Martínez-Fernández et al. (2021) have identified the
characteristics of AI-based systems
Agile vs. AI & Machine Learning Lifecycle
[Diagram: the Agile software engineering lifecycle compared with the AI & ML model lifecycle (model- and data-oriented), covering business & ethical requirements, data cleaning, and data validation.]
AI/ML Requirements Engineering
Business Process Modelling Notations (BPMN) & simulation to validate key performance
requirements such as resource, effort & cost. BPMN allows us to identify service
requirements as well as non-functional requirements and decision points at a business
level with emphasis on organizational factors and business strategies.
RE4AI (Requirements Engineering for AI) approaches identified by Ahmad et al. [1 & 10] include modelling notations such as UML and SysML.
Non-Functional Requirements for AI & ML Applications
Performance requirements for probabilistic models and algorithms
Informed Decision making
Project Management & Cost Estimation: Modified COCOMO Model
for AI, ML & NLP Applications and Apps
It is important to calculate the effort required for specified AI applications. In the era of cloud and mobile computing, most AI applications are integrated
with a cloud and social media. Therefore, we can adopt weighting factors identified for cloud computing when adopting the COCOMO cost estimation model.
In addition, Guha (2014) has proposed a modified cloud COCOMO model whose weightings for service-oriented projects are a = 4, b = 1.2, c = 2.5, d = 0.3.
Therefore, the effort and cost estimation equations are:
AI/ML project effort applied: EA = a × (Number of AI&ML Service Points)^b × (Number of Automated AI&ML Decision Points) × (Number of Human Decision Points) person-months ---- (1)
AI/ML development time: DT = c × (Effort Applied EA)^d months ---- (2)
Number of AI&ML & Software Engineers required = Effort Applied (EA) / Development Time (DT) ---- (3)
AI&ML size = AI&ML features × cloud constants (modified COCOMO) × AI&ML automated decision points × human decision points × AI&ML algorithmic complexity
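Equations (1)–(3) can be sketched in Python using Guha's quoted weightings (a = 4, b = 1.2, c = 2.5, d = 0.3); the input counts below are purely illustrative, not taken from any real project:

```python
# Modified COCOMO sketch for AI/ML projects, following equations (1)-(3).
A, B, C, D = 4.0, 1.2, 2.5, 0.3   # Guha (2014) cloud COCOMO weightings

def effort_applied(service_points, auto_decision_points, human_decision_points):
    # Eq. (1): effort applied EA, in person-months.
    return A * service_points ** B * auto_decision_points * human_decision_points

def development_time(ea):
    # Eq. (2): development time DT, in months.
    return C * ea ** D

def engineers_required(ea, dt):
    # Eq. (3): average team size over the project.
    return ea / dt

# Illustrative inputs: 10 service points, 3 automated and 2 human decision points.
ea = effort_applied(service_points=10, auto_decision_points=3, human_decision_points=2)
dt = development_time(ea)
print(f"effort = {ea:.1f} person-months")
print(f"time   = {dt:.1f} months")
print(f"team   = {engineers_required(ea, dt):.1f} engineers")
```

Note the nonlinearity: doubling the service points more than doubles effort (exponent b = 1.2), while development time grows much more slowly (exponent d = 0.3).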
Strategic Software Engineering Framework for AI & ML Applications
AI Business: AI Strategies and success factors, Information Architecture (Ontologies), Knowledge Engineering, Domain-
Specific modelling & needs analysis, and Business Risk Analysis
Process: Business Process Driven Service Development Lifecycle (BPD-SDL)
AI Class Identification and Analysis (Conversational AI, Human Centred AI, Explainable AI, Responsible AI, Trustworthy AI
(CHERT AIs )
Methods and Design Principles: service components with SoaML
SE Lifecycle for AI (SE4AI) and Reference Architecture for ML/AI Applications (REF4ML)
SE Tools (SAS, Visual Paradigm, BonitaSoft, Bizagi Studio, Tableau, Mathematica, Azure ML)
Application of AI in SE: SOSE4BD as a Service (SOSEaaS), BDaaS, Big Data Adoption Framework as a Service (BAaaS),
Software Engineering Analytics as a Service(SEAaaS), SE Prediction Model as a Service (SEPaaS), Bug Prediction as a
Service with Azure/ML (MLaaS), BD Metrics as a Service (BDMaaS)
SE4AI Adoption Models
Evaluation & Improvement
Reference Architecture for AI & ML Applications
User Applications Services: Chatbots, Autonomous Driving, Image Recognition as a Service, Service Security & Safety, etc.
Analytics & Predictive Modelling
AI & Machine Learning Models
Data Acquisitions & Validation
Cloud, Edge, IoT, IIOT, Blockchain Services
Knowledge, Patterns, Solutions, & Reuse
AI Categories: Responsible AI, Explainable AI, Human-Centered & Trustworthy AI, and Cognitive & Conversational AI
REF4AIML Service Bus
BPMN Simulation: Performance Metrics for Chatbot Application
For 100 users, accessing this simulated chatbot took approximately 0.17 seconds. In addition, there is a promising result
showing about 98% resource utilization for most of the tasks of AI Scientists, AI Security, Chatbots, Cloud resources,
Knowledge Discovery, Machine Learning Analytics, and AI models. In addition, we can also use the BPMN models in our cost &
effort estimation equations to assess the exact complexity and cost-benefit analysis.
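The reported figures can be sanity-checked with simple arithmetic; in the sketch below only the 100-user and 0.17-second numbers come from the text, while the busy-time input is an invented stand-in for BPMN simulation output:

```python
# Back-of-the-envelope checks on the simulated chatbot figures.
# USERS and TOTAL_TIME_S are quoted in the text; the busy time is hypothetical.
USERS = 100
TOTAL_TIME_S = 0.17

per_user_s = TOTAL_TIME_S / USERS          # average simulated time per user
throughput = USERS / TOTAL_TIME_S          # users served per second

def utilization(busy_s, observed_s):
    # Resource utilization = fraction of the observed window a resource is busy.
    return busy_s / observed_s

print(f"avg time per user: {per_user_s * 1000:.1f} ms")
print(f"throughput:        {throughput:.0f} users/s")
print(f"utilization:       {utilization(0.1666, TOTAL_TIME_S):.0%}")
```

Per-resource busy times of this kind are exactly what a BPMN simulator reports, so the same ratio can be applied to each task to reproduce the quoted ~98% figure.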
• Artificial Intelligence and machine learning
applications have reached adoption maturity,
as we have witnessed across the applications
and devices we use.
• However, we need a systematic approach for the
design, development, implementation, and testing
of AI products. Hence this article proposes a
software engineering framework for AI & ML applications.
• The framework has been validated using a case
study on a chatbot, a conversational AI, using BPMN
modelling and simulation; the results confirm the
performance and resource requirements for AI
chatbot cloud-driven services with 98% utilization
and time efficiency.
• The results show a promising outcome for the
application of systematic software engineering
principles to achieve the desired AI quality.
Sommerville, I (2016) Software Engineering, 10th Edition, Pearson
Breu, R., Kuntzmann-Combelles, A., and Felderer, M (2014) New perspective on software quality, IEEE Software, January/February 2014
Bloomfield, R. et al. (2019) Disruptive Innovations and Disruptive Assurance: Assuring Machine Learning and Autonomy, IEEE Computer, August 2019.
Al Alamin, M. A. and Uddin, G (2021) Quality assurance challenges for machine learning software applications during software development life cycle phases, 2021 IEEE International Conference on Autonomous Systems (ICAS), 11-13 August 2021, IEEE Proceedings, Montreal, QC, Canada
Ramachandran, M (2019) SOSE4BD: Service-oriented software engineering framework for big data applications. In: Proceedings of the 4th International Conference on Internet of Things, Big Data and Security - Volume 1: IoTBDS. SciTePress, pp. 248-254. ISBN 9789897583698 DOI:
Khomh, F et al. (2018) Software Engineering for Machine-Learning Applications, The Road Ahead, IEEE Software, September/October 2018
SEMLA (2018) The First Symposium on Software Engineering for Machine Learning Applications (SEMLA), http://semla.polymtl.ca/program/, Polytechnique Montreal, June 12 – 13, 2018
Rashid, E., Patnayak, S., and Bhattacherjee, V (2012) A Survey in the Area of Machine Learning and Its Application for Software Quality Prediction, ACM SIGSOFT Software Engineering Notes, September 2012 Volume 37 Number 5
Zhang, D and Tsai, J.J.P (2007) (editors), Advances in machine learning applications in software engineering, IGI
Microsoft (2018) “The Team Data Science Process,” https://docs.microsoft.com/enus/ azure/machine-learning/team-data-science-process/, accessed: 2018-09-24.
U. Fayyad, G. Piatetsky-Shapiro, and P. Smyth, “The KDD process for extracting useful knowledge from volumes of data,” Communications of the ACM, vol. 39, no. 11, pp. 27–34, 1996.
R. Wirth and J. Hipp, “CRISP-DM: Towards a standard process model for data mining,” in Proc. 4th Intl. Conference on Practical Applications of Knowledge Discovery and Data mining, 2000, pp. 29–39.
Amershi S, Begel A, Bird C, DeLine R, Gall H, Kamar E, Nagappan N, Nushi B, Zimmermann T (2019) Software Engineering for Machine Learning: A Case Study. International Conference on Software Engineering (ICSE 2019) - Software Engineering in Practice track
Colomo-Palacios, R (2019) Towards a Software Engineering Framework for the Design, Construction and Deployment of Machine Learning-Based Solutions in Digitalization Processes, Research & Innovation Forum 2019, Visvizi, A and Lytras, M. D (eds), Springer
Martínez-Fernández, S. et al. (2021) Software Engineering for AI-Based Systems: A Survey, https://arxiv.org/abs/2105.01984
Mahmood, Y., et al. (2022) Software Effort Estimation Accuracy Prediction of Machine Learning Techniques: A Systematic Performance Evaluation, https://arxiv.org/ftp/arxiv/papers/2101/2101.10658.pdf, accessed on 02/05/22
Bencomo, N., et al. (2022) The Secret to Better AI and Better Software (Is Requirements Engineering), IEEE Software, January/February 2022
Ahmad, B. F., and Ibrahim, M. L (2022) Software Development Effort Estimation Techniques Using Long Short Term Memory, International Conference on Computer Science and Software Engineering CSASE, Duhok, Kurdistan Region – Iraq, 2022
Ryan, M., and Stahl, B. C (2020) Artificial intelligence ethics guidelines for developers and users: clarifying their content and normative implications, AI Ethics Guidelines, https://www.emerald.com/insight/1477-996X.htm
R.Guizzardi, G.Amaral, G.Guizzardi and J.Mylopoulos (2020) Ethical Requirements for AI Systems, /https://www.researchgate.net/profile/Giancarlo-Guizzardi-2/publication/339886423_Ethical_Requirements_for_AI_Systems/links/5e6a76e9458515e555762ce0/Ethical-Requirements-for-
SEERENE (2022) Study: AI for Software Engineering: AI-based approaches for the management of complex software projects, https://www.seerene.com/ai-for-software-engineering, Video Talk, https://www.youtube.com/watch?v=gRLJapTvj24&t=177s
SEI (2022) Artificial Intelligence Engineering, https://www.sei.cmu.edu/our-work/artificial-intelligence-engineering/
Barenkamp et al. (2020) Applications of AI in classical software engineering, AI Perspectives (2020) 2:1, https://doi.org/10.1186/s42467-020-00005-4