1. Identifying AI risks
Great to see you back! We are going to focus on identifying the inherent risks throughout the AI life cycle. Understanding these risks is crucial for developing AI responsibly and ethically.
So let’s get started.
2. Stakeholders impacted by AI
AI affects everyone: individuals, groups, nations, and societal structures at large.
For instance, a model that makes hiring decisions could inadvertently discriminate, impacting both the individual applicant and the demographics they represent.
We therefore must consider how AI's benefits and risks are distributed across different demographics and communities, avoiding the creation of new divides or the exacerbation of existing inequalities.
3. Design, development, and deployment
An AI life cycle consists of three phases: Design, Development, and Deployment. Each phase presents different risks.
4. Design
The design phase involves defining the AI system's objectives and business value, planning the data architecture, assessing budget impact, and securing support from internal and external stakeholders.
It's about conceptualizing the solution, identifying key requirements, determining how the AI will fit into existing workflows or create new ones, and assessing potential risks.
5. Develop
In the development stage, the AI model is built, trained, and tested.
Developers collect and preprocess data, train the model to make predictions or decisions, and then validate its performance through testing to ensure it meets the predefined objectives.
This is the phase where risks such as overfitting and underfitting can arise, and where unrepresentative training data can lead to models that perform well in testing but fail in real-world scenarios, as the sketch below illustrates.
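As a minimal sketch of how a development-phase risk like overfitting can be surfaced, the following Python example (not from the course; the data and the 0.1 gap threshold are illustrative assumptions) compares training accuracy against held-out validation accuracy. A large gap between the two is a classic overfitting signal.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a real project's data
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# An unconstrained decision tree is prone to memorizing the training data
model = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)

train_acc = accuracy_score(y_train, model.predict(X_train))
val_acc = accuracy_score(y_val, model.predict(X_val))
print(f"Train accuracy: {train_acc:.2f}, Validation accuracy: {val_acc:.2f}")

# Near-perfect training accuracy with much lower validation accuracy
# suggests the model may fail on real-world data. The 0.1 threshold
# is an illustrative choice, not a standard value.
if train_acc - val_acc > 0.1:
    print("Warning: possible overfitting detected.")
```

Validating against data the model has never seen is exactly the kind of testing this phase calls for; passing on the training set alone tells you little about real-world performance.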
6. Deploy
This final phase involves integrating the AI model into the production environment where it starts processing live data.
The model's performance is monitored closely, and it may be updated or refined based on feedback and changing conditions to ensure it continues to meet its objectives effectively.
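One common way to monitor a deployed model is to check whether live input data has drifted away from the data the model was trained on. Here is a minimal sketch, assuming synthetic data and a plain two-sample Kolmogorov-Smirnov test; real monitoring pipelines are more elaborate, and the p-value cutoff below is an illustrative assumption.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical feature values: the training-time baseline and incoming live data
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)
live_feature = rng.normal(loc=0.4, scale=1.0, size=1000)  # shifted distribution

# Compare the two distributions with a two-sample KS test
statistic, p_value = ks_2samp(training_feature, live_feature)
print(f"KS statistic: {statistic:.3f}, p-value: {p_value:.4f}")

# A very small p-value indicates the live data no longer matches the
# training data, a common trigger for retraining or refining the model.
if p_value < 0.01:
    print("Drift detected: consider reviewing or retraining the model.")
```

A check like this, run on a schedule against production inputs, gives the feedback loop described above a concrete trigger for updating the model under changing conditions.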
7. The spectrum of AI risks
AI technologies pose risks across various domains, impacting organizations and individuals.
Here's a summary with examples:
Security and operational risks include:
Hallucinations: In the Avianca Airlines incident, lawyers submitted a 10-page brief citing more than half a dozen nonexistent court decisions. It turned out the lawyers had used ChatGPT, which had invented the cases, highlighting the need to verify AI output given its potential for generating inaccurate information.
Data poisoning: Microsoft's Tay chatbot was manipulated by Twitter users into producing racist output in less than a day, showing how user-supplied input can corrupt a model's behavior.
Data breaches and leakages: These carry enormous reputational and legal consequences. Sometimes only a small group is affected, but in cases like the breach at the Indian Council of Medical Research, 815 million medical records were compromised.
Privacy risks:
The use of facial recognition technology at London's King's Cross sparked privacy concerns, emphasizing the need for consent and regulation in AI applications.
Business risks:
Reputational risks: Amazon's AI recruitment tool's bias toward male candidates revealed the importance of addressing AI biases to maintain reputation and trust.
Financial risks: Amazon's Rekognition misidentified members of Congress, illustrating the financial and reputational dangers of algorithmic bias.
These examples highlight the importance of thorough testing, ethical standards, and compliance in AI development and use.
8. Let's practice!
The next time we meet, we'll delve into the rules and regulations that have been developed to manage these risks, providing structure and guidance in the realm of AI.
Join me for some exercises!