1. Additional tools for RAI
Welcome. In this video, we will explore the pivotal role of AI risk and impact assessments in the development of responsible AI systems.
2. Risk versus impact assessments
Risk assessments in AI are critical: they allow us to foresee potential issues and mitigate them before they escalate.
However, understanding the impact of AI requires a broader lens.
This is where AI impact assessments come into play.
While risk assessments focus on the potential negatives, impact assessments allow us to evaluate both the positive and negative consequences AI systems can have on society.
Let's look at risk assessments in detail first.
3. Risk assessments
Risk assessments help us anticipate harmful impacts on individuals and society and to mitigate them proactively.
The U.S. National Institute of Standards and Technology’s (NIST) Artificial Intelligence Risk Management Framework (AI RMF) is a prime example at a national level.
The accompanying AI RMF Playbook is a living guide that evolves with best practices; both are freely accessible online.
Regulations for AI assessment are also being formulated at the state and local level in the U.S., such as New York City's Local Law 144, which mandates annual audits of AI recruitment tools.
France’s CNIL has developed an AI risk self-assessment tool, while the OECD’s AI Risk Evaluation Framework offers a broader, more global perspective.
However, despite these advances, a universally recognized approach for AI risk assessments is still lacking.
Standards for risk assessment like ISO 31000 and IEEE 7010-2020 offer guidance but are not tailored to AI's unique compliance challenges.
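Although no single approach prevails, most risk assessment frameworks share a common core: enumerate risks, score them by likelihood and severity, and prioritize mitigations. A minimal sketch of such a risk register in Python (the field names, 1-to-5 scales, and example entries are illustrative assumptions, not taken from the NIST AI RMF or any other framework):

```python
from dataclasses import dataclass

@dataclass
class Risk:
    # Illustrative risk-register entry; scales are assumptions,
    # not prescribed by any specific framework.
    description: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    severity: int    # 1 (negligible) .. 5 (critical)
    mitigation: str = ""

    @property
    def score(self) -> int:
        # Classic likelihood-times-severity scoring.
        return self.likelihood * self.severity

def prioritize(risks):
    # Highest-scoring risks first, so mitigation effort goes where it matters.
    return sorted(risks, key=lambda r: r.score, reverse=True)

risks = [
    Risk("Model reinforces hiring bias", likelihood=4, severity=5,
         mitigation="Annual bias audit (cf. NYC Local Law 144)"),
    Risk("Training data leaks personal data", likelihood=2, severity=4),
]
for r in prioritize(risks):
    print(r.score, r.description)
```

A real register would add owners, review dates, and residual-risk tracking, but the ranking logic above is the shared backbone.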
4. Impact assessments
Beyond risk assessments, AI impact assessments are crucial because they help organizations:
Identify the broader effects of AI on different stakeholders and the environment.
Ensure alignment with human-centered values by evaluating how AI systems enhance or detract from the quality of life.
Foster trust and transparency, which are vital for public acceptance and the sustainable integration of AI into society.
Examples of impact assessments are:
The AI Impact Assessment (AIIA) of the Dutch Government.
Microsoft offers a Responsible AI Impact Assessment Template and Guide, companions to its Responsible AI Standard.
IBM® watsonx™ offers different tools regarding the impact of AI.
And AI impact assessments are also offered by a wide variety of consultancy firms.
AI impact can also be measured with a Human Rights Impact Assessment (HRIA), such as the one published by the Danish Institute for Human Rights and a more recent one from Oxfam.
The Responsible AI Institute launched its RAISE Benchmarks, all publicly accessible, to operationalize and scale responsible AI policies.
In support of the new ISO/IEC 42001 standard on AI governance, the draft ISO/IEC 42005 provides guidance on how to perform AI impact assessments.
5. Common pitfalls
Pitfalls are challenges that can undermine responsible AI if not addressed.
They represent the gaps between intention and practice, the misalignment of marketed ethical AI versus actual implementation, and the risks of deploying AI without thorough evaluation.
Ethical blind spots:
These are areas or issues that organizations and individuals fail to recognize as potential ethical problems, often due to implicit bias or cultural norms.
In the context of AI, this might involve overlooking how a system might reinforce stereotypes or disadvantage certain groups.
Bluewashing or ethics washing:
Similar to the concept of "greenwashing" in environmental contexts, bluewashing refers to a company portraying its products or services as ethically sound or responsible in AI practices when they may not meet those standards.
It's a façade of ethical commitment without substantive action.
AI shopping:
This term can describe the process of selecting AI solutions that appear to be the best or most cost-effective without due diligence on their ethical implications or suitability for the task they are meant to perform.
Shadow AI:
This refers to AI applications or systems developed and used within an organization without explicit organizational approval or oversight.
Shadow AI can lead to significant risks, including security vulnerabilities and non-compliance with ethical standards.
Research by Salesforce shows that about 49% of people have used generative AI, with over one-third using it daily, which makes governance decisions about what generative AI usage to permit or restrict essential.
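One lightweight control against shadow AI is an explicit allow/review/deny policy for generative AI tools. The sketch below shows the idea; the tool names, categories, and policy structure are hypothetical examples, not an implementation of any organization's actual governance process:

```python
# Illustrative allow/deny policy for generative AI tool usage.
# Tool names and policy categories are hypothetical.
POLICY = {
    "approved": {"internal-copilot", "translation-assistant"},
    "restricted": {"public-chatbot"},  # allowed only for non-confidential data
    "blocked": {"unvetted-plugin"},
}

def check_usage(tool: str, confidential: bool) -> str:
    """Return 'allow', 'review', or 'deny' for a proposed tool usage."""
    if tool in POLICY["blocked"]:
        return "deny"
    if tool in POLICY["restricted"]:
        return "deny" if confidential else "review"
    if tool in POLICY["approved"]:
        return "allow"
    # Unknown tools are shadow AI by definition: route them to governance review.
    return "review"

print(check_usage("internal-copilot", confidential=True))
print(check_usage("public-chatbot", confidential=True))
```

The key design choice is the final fallback: anything not explicitly listed defaults to review rather than silent use, which is exactly the gap shadow AI exploits.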
In conclusion, integrating AI impact assessments alongside risk assessments ensures a comprehensive evaluation of AI systems.
It encourages organizations to not only avoid harm but also actively contribute to societal good.
As AI becomes increasingly ubiquitous, the responsibility falls on us, the creators and implementers, to ensure it serves as a force for positive impact.
6. Let's practice!
Please join me for the next exercise!