2. The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into a concern with the moral behavior of humans as they design, make, use, and treat artificially intelligent systems, and a concern with the behavior of the machines themselves, known as machine ethics.
3. The Major Steps Involved in Applying HCD to AI System Design
1. Recognize the need for people to define the problem.
2. Ask whether AI adds any value to the possible solutions.
3. Consider the potential for AI to cause harm.
4. Start with non-AI solutions when prototyping.
5. Make it possible for people to genuinely contest the system.
6. Build in safety measures.
4. SIX TYPES OF BIAS
1. Historical bias: It occurs when the state of the world in which the data was
generated is flawed.
2. Representation Bias: It occurs when building datasets for training a model, if those
datasets poorly represent the people that the model will serve.
3. Measurement Bias: It occurs when the accuracy of the data varies across groups.
This can happen when working with proxy variables, if the quality of the proxy
varies in different groups.
4. Aggregation Bias : It occurs when groups are inappropriately combined, resulting
in a model that does not perform well for any group or only performs well for the
majority group.
5. Evaluation Bias : Evaluation bias occurs when evaluating a model, if the benchmark
data does not represent the population that the model will serve.
6. Deployment Bias: It occurs when the problem the model is intended to solve is
different from the way it is actually used. If end-users don't use the model in
the way its designers intended, there is no guarantee that it will perform well.
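Representation bias, in particular, can be probed numerically. The sketch below (function names and toy data are hypothetical, not from the source) compares each group's share of a training set against its share of the population the model will serve:

```python
from collections import Counter

def group_shares(labels):
    """Return each group's fractional share of the dataset."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

def representation_gaps(train_groups, population_shares):
    """Difference between each group's share in the training data and its
    share in the served population. Large negative gaps flag groups the
    dataset under-represents."""
    train_shares = group_shares(train_groups)
    return {g: round(train_shares.get(g, 0.0) - pop, 3)
            for g, pop in population_shares.items()}

# Hypothetical example: group B is 40% of the population but only
# 10% of the training data.
train = ["A"] * 90 + ["B"] * 10
population = {"A": 0.60, "B": 0.40}
print(representation_gaps(train, population))  # {'A': 0.3, 'B': -0.3}
```

A gap of -0.3 for group B signals that a model trained on this data may serve that group poorly.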
5. Criteria for Fairness
• Demographic parity: the people the model selects should reflect the group
membership percentages of the applicant pool.
• Equal opportunity: this criterion equalizes the model's sensitivity, or true
positive rate (TPR). Equal opportunity fairness ensures that, within each group,
the proportion of people who should be selected ("positives") and actually are
selected by the model is the same.
• Equal Accuracy: We may also verify that the model is equally accurate for each
group. That is, for each group, the percentage of correct classifications
(people who should be refused and are, plus people who should be accepted and
are) should be the same. If a model is 90% accurate for individuals in one group,
it should be 90% accurate for other groups as well.
• Group Unaware (Fairness via Ignorance): Group unaware fairness removes all
information about group membership from the dataset. For example, we can
remove gender variables from the model to make it more gender-neutral. We can
also erase information regarding race and age. The same applies to any other
sensitive variable.
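The first three criteria can be checked per group from a model's decisions. The sketch below is illustrative (function names and the toy data are hypothetical, not from the source): it computes, for each group, the quantities compared under demographic parity, equal opportunity, and equal accuracy.

```python
def group_metrics(y_true, y_pred, groups):
    """Per-group selection rate, true positive rate (TPR), and accuracy:
    the quantities compared under demographic parity, equal opportunity,
    and equal accuracy, respectively."""
    out = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        t = [y_true[i] for i in idx]   # true labels for this group
        p = [y_pred[i] for i in idx]   # model decisions for this group
        positives = sum(t)
        tp = sum(1 for ti, pi in zip(t, p) if ti == 1 and pi == 1)
        correct = sum(1 for ti, pi in zip(t, p) if ti == pi)
        out[g] = {
            "selection_rate": sum(p) / len(p),             # demographic parity
            "tpr": tp / positives if positives else None,  # equal opportunity
            "accuracy": correct / len(t),                  # equal accuracy
        }
    return out

# Toy data: two groups with identical true labels but different decisions.
metrics = group_metrics(
    y_true=[1, 1, 0, 0, 1, 1, 0, 0],
    y_pred=[1, 0, 0, 0, 1, 1, 1, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
# Both groups get 0.75 accuracy (equal accuracy holds), but selection
# rates (0.25 vs 0.75) and TPRs (0.5 vs 1.0) differ, so demographic
# parity and equal opportunity are violated.
```

As the toy data shows, a model can satisfy one fairness criterion while violating others, which is why the criterion must be chosen to fit the application.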
6. Model Cards
Model Cards are a method by which teams share crucial information about their AI model with a
large audience. They are useful for increasing the model's transparency.
Model details: This covers details such as the developer and model version.
Intended use: This clarifies which use cases are in scope, who the target users are, and which use
cases are out of scope, among other things.
Factors: What factors influence the model's effectiveness? The findings of the smiling detection model,
for example, are affected by demographic characteristics such as age, gender, and ethnicity, as well as
environmental elements such as illumination and rain, and equipment such as camera type.
Metrics: What metrics are used to measure the performance of the model, and why were those
metrics chosen?
Evaluation Data: Which datasets were used for evaluation, and are they a true representative of
anticipated test cases?
Training data: Which data was the model trained on?
Quantitative Analyses: How did the model perform on the metrics you chose? Break down performance
by important factors and their intersections.
Ethical Considerations: Here we consider aspects such as sensitive data used to train the model,
whether the model has implications for human life, health, or safety, how risk was mitigated, and what
harms may be present in model usage.
Recommendations: Add anything important that was not covered elsewhere in the model card.
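The sections above can be kept in a simple structured form so a team notices any section it has left blank. The sketch below is a minimal illustration (the dict-based format and the "smile detector" entries are hypothetical, not a standard model-card API):

```python
# Section order follows the model-card outline described above.
MODEL_CARD_SECTIONS = [
    "Model details", "Intended use", "Factors", "Metrics",
    "Evaluation data", "Training data", "Quantitative analyses",
    "Ethical considerations", "Recommendations",
]

def render_model_card(card):
    """Render the card in the standard section order, flagging any
    section the team has not yet filled in."""
    lines = []
    for section in MODEL_CARD_SECTIONS:
        body = card.get(section, "(not provided)")
        lines.append(f"{section}: {body}")
    return "\n".join(lines)

# Hypothetical, partially filled card for an illustrative smile detector.
card = {
    "Model details": "smile-detector v0.1, developed by the demo team",
    "Intended use": "flagging smiling faces in consenting users' photos",
    "Factors": "age, gender, skin tone; lighting; camera type",
}
print(render_model_card(card))
```

Rendering the card makes the "(not provided)" placeholders visible, prompting the team to complete sections such as Metrics and Ethical considerations before release.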