1. AI risk assessment
In this lesson, we will examine the risks associated with AI and learn how to account for them when formulating an AI strategy.
2. Ethical guardrails
The United Nations Educational, Scientific and Cultural Organization, or UNESCO, emphasizes the importance of ethical guardrails: without them, AI models can amplify real-world biases and discrimination, threatening fundamental human rights.
3. AI risks
The National Institute of Standards and Technology, or NIST, defines AI risks as the “composite measure of an event’s probability of occurring and the magnitude or degree of the consequences of the corresponding event”.
4. Data risks
Most of these risks originate from data, the core of any AI endeavor. Data may contain sensitive information whose use infringes on user privacy. Moreover, if the data is biased, it can perpetuate societal biases, potentially leading to the misrepresentation of diverse groups.
5. AI gone wrong
One instance of biased data was a recruitment tool trained on an imbalanced dataset: it favored male candidates and inadvertently penalized female profiles.
Such biases have also led to racial discrimination, for example through delayed mortgage-loan approvals for certain groups.
6. Data privacy breach
A lapse in preserving user data privacy can significantly harm a business, as when an organization was fined heavily for illegally processing images scraped from the internet to develop facial recognition software, in breach of the GDPR.
7. Business implications
Hence, before making significant investments, organizations must be aware of these risks and their implications, which can lead to legal,
8. Business implications
reputational,
9. Business implications
and financial harms.
10. Ethical frameworks
Numerous frameworks, including those from the Ethics Centre,
11. Ethical frameworks
UNESCO,
12. Ethical frameworks
and Microsoft provide essential guardrails to realize AI benefits while minimizing societal harm.
Note: The Ethics Centre, an independent organization, provides a forum to promote and explore ethical decision-making.
13. Key pillars of risk assessment
Largely, these frameworks converge on four key pillars: data privacy, fairness, transparency, and accountability.
Let's see how to incorporate these principles by asking the right questions during AI strategy formulation, and explore ways to mitigate the associated risks.
14. Regulations and compliance
Regulations and compliance are crucial before initiating any AI project. Hence, we will discuss them before delving deeper into the three broad stages of an AI project lifecycle: data collection and preparation, model development and evaluation, and model deployment and monitoring.
The following questions help assess regulatory risk.
Who is accountable if AI systems go rogue? Answering this requires effective AI governance that clearly defines the roles and responsibilities of the associated stakeholders.
Is the model compliant with data privacy regulations such as GDPR? If not, establish compliance mechanisms by seeking user consent and anonymizing data.
15. Data collection and preparation
Let us check the questions to assess and mitigate the risks during the data stage.
Is the requisite data available internally in the organization, or must it be externally procured? Establish additional processes to guarantee data integrity if sourced from vendors.
Does the data include sensitive information such as Personally Identifiable Information, or PII? If so, adopt data anonymization techniques such as identifier encryption or synthetic data generation.
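As an illustration of the identifier-anonymization idea, here is a minimal Python sketch that pseudonymizes a direct identifier with a keyed hash (HMAC), one common variant of identifier encryption. The field names and key handling are hypothetical; a real system would keep the key in a secrets manager and follow the full GDPR pseudonymization requirements.

```python
import hashlib
import hmac

# Hypothetical secret key; in practice, load it from a secure secrets store.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (e.g., an email) with a keyed hash.

    HMAC-SHA256 with a secret key resists rainbow-table reversal while
    keeping the mapping consistent, so records can still be joined.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Illustrative record: the email is replaced, non-identifying fields are kept.
record = {"email": "jane.doe@example.com", "age": 34}
anonymized = {**record, "email": pseudonymize(record["email"])}
```

The same input always maps to the same token, so deduplication and joins still work, but the raw identifier never leaves the pipeline.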
Does the data appropriately include the underrepresented population? Augment data by oversampling or generating new data synthetically to create a balanced dataset.
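To make the oversampling idea concrete, here is a minimal sketch of random oversampling in plain Python: minority-class records are duplicated at random until every class matches the largest one. The record layout is illustrative; production work would more likely use a library such as imbalanced-learn, or synthetic methods like SMOTE.

```python
import random

def oversample(records, label_key="label", seed=42):
    """Randomly duplicate minority-class records until classes are balanced."""
    rng = random.Random(seed)
    by_class = {}
    for record in records:
        by_class.setdefault(record[label_key], []).append(record)
    target = max(len(group) for group in by_class.values())
    balanced = []
    for group in by_class.values():
        balanced.extend(group)
        # Draw extra copies at random from the underrepresented group.
        balanced.extend(rng.choices(group, k=target - len(group)))
    return balanced

# Toy imbalanced dataset: three records of one class, one of the other.
sample = [{"label": "male"}] * 3 + [{"label": "female"}]
balanced = oversample(sample)
```

After the call, both classes contribute three records each, giving the model an equal view of both groups.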
16. Model development and evaluation
Next, let's ensure model development and evaluation risk is assessed appropriately.
Can we explain how the model arrived at its outcomes? Interpretability frameworks can help us understand model decisions and identify potential sources of bias.
Evaluate its output across different categories: does it treat them fairly, or is it prone to biased results?
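One common way to quantify this fairness check is the demographic parity gap: the difference in positive-outcome rates between groups. A minimal sketch, with a hypothetical prediction format of (group, outcome) pairs:

```python
def positive_rates(predictions):
    """Positive-outcome rate per group.

    predictions: list of (group, outcome) pairs with outcome in {0, 1}.
    """
    totals, positives = {}, {}
    for group, outcome in predictions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions):
    """Largest difference in positive rates between any two groups.

    A gap of 0 means every group receives positive outcomes at the same rate.
    """
    rates = positive_rates(predictions).values()
    return max(rates) - min(rates)
```

For example, if group A is approved two times out of three and group B only once out of three, the gap is about 0.33, signaling a disparity worth investigating. Demographic parity is only one of several fairness definitions, so the right metric depends on the use case.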
17. Model deployment and monitoring
Let us monitor the deployed model with questions like: what is the degree of impact if the predictions go wrong?
For example, the cost of error is high when it directly impacts human lives, such as in the healthcare or finance sector.
When the cost of error is high, establish mitigation strategies to realign model behavior and ensure equitable results.
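One way monitoring can surface inequitable behavior is to track error rates per group and flag any group that does markedly worse than the overall average. A minimal sketch, assuming live outcomes arrive as (group, is_error) pairs and using an illustrative tolerance margin:

```python
def flag_inequitable_groups(outcomes, margin=0.05):
    """Flag groups whose error rate exceeds the overall rate by > margin.

    outcomes: list of (group, is_error) pairs from live predictions,
    with is_error in {0, 1}. The 0.05 margin is an assumed threshold.
    """
    overall = sum(err for _, err in outcomes) / len(outcomes)
    counts, errors = {}, {}
    for group, err in outcomes:
        counts[group] = counts.get(group, 0) + 1
        errors[group] = errors.get(group, 0) + err
    return sorted(g for g in counts
                  if errors[g] / counts[g] > overall + margin)
```

Any flagged group would then trigger the mitigation step, such as retraining on rebalanced data or adjusting decision thresholds for that group.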
18. Let's practice!
These questions evolve over time into a comprehensive framework that organizations can use to address AI-associated risks.
Let us check our understanding.