
Challenges of responsible AI

1. Challenges of responsible AI

Understanding responsible AI principles highlights the complexities and challenges in applying them.

2. Responsible data management practices in the real world

Responsible data dimensions offer guidance, but their real-world application is complex, involving trade-offs and challenges that demand professional judgment.

3. Common trade-offs

Trade-offs occur between responsible data management, technical metrics, and business factors such as market pressures and goals. Tensions can also arise within the dimensions themselves, for example between the push for more data to reduce bias and the limits imposed by privacy and consent.

4. Business factor trade-offs

Businesses strive to increase revenue, reduce costs, and outperform competitors, which often results in trade-offs across the responsible data dimensions. Companies may deploy AI solutions without thorough fairness testing or disregard privacy and consent issues, especially under time pressure to stay competitive. This can undermine bias mitigation, data security, and compliance with regulations.

5. Pre-trained models

Another challenge arises from using pre-trained models such as the ones listed here. Model training is expensive, and pre-trained models can significantly reduce costs compared with training from scratch, saving time and resources by eliminating the need for extensive data collection and training. They offer efficiency and advanced capabilities, but they may inherit biases from their training data, and they often lack transparency because details of their training data and processes are undisclosed.

6. Using pre-trained models

Working with pre-trained third-party models requires additional steps to ensure a responsible approach. Such steps may include due diligence on the model source to ensure a good reputation and credibility, a detailed review of the model documentation, and additional tests for fairness and bias.
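One of those additional tests can be sketched as a simple per-group accuracy audit of the third-party model's predictions. This is a minimal illustration, not a complete fairness evaluation; the group labels and the disparity threshold below are hypothetical.

```python
# Minimal sketch of a per-group accuracy audit for a pre-trained model's
# predictions. Group labels and the 0.1 disparity threshold are hypothetical.

def per_group_accuracy(y_true, y_pred, groups):
    """Return the accuracy of the predictions for each group label."""
    totals, correct = {}, {}
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] = totals.get(g, 0) + 1
        correct[g] = correct.get(g, 0) + (t == p)
    return {g: correct[g] / totals[g] for g in totals}

def flag_disparity(acc_by_group, max_gap=0.1):
    """Flag the model for review if per-group accuracy differs by more than max_gap."""
    gap = max(acc_by_group.values()) - min(acc_by_group.values())
    return gap > max_gap
```

For example, a model that is highly accurate for one group but much less accurate for another would be flagged for further review before deployment.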

7. Accuracy trade-offs

Balancing metrics like accuracy and responsible data management is a complex task. Often, prioritizing fairness in AI models can lead to a reduction in overall accuracy, especially when models are adjusted to represent minority groups fairly. This phenomenon can occur even in datasets that are balanced across protected groups.

8. Accuracy trade-offs

For example, the equal outcomes metric aims to ensure fairness by granting all eligible groups an equal chance of accurate identification. However, facial recognition AI systems may exhibit lower accuracy for specific groups, such as females with darker skin tones. While the metric aims to enhance fairness by ensuring equal true positive rates across groups, it may not account for variations in data quality or quantity for underrepresented groups. Another example is in medical diagnosis AI, where anonymizing patient data for privacy can decrease diagnostic accuracy, particularly for underrepresented groups. Enforcing privacy, especially through data minimization, reduces accuracy further.
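The equal true positive rates mentioned above can be expressed directly in code: for each group, compute the share of actual positives the model correctly identified, then compare the rates. This is a minimal sketch with hypothetical group names and toy-sized data; real evaluations would use much larger samples.

```python
def true_positive_rate(y_true, y_pred):
    """Share of actual positives (label 1) that the model correctly identified."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    return sum(p == 1 for _, p in positives) / len(positives)

def tpr_by_group(y_true, y_pred, groups):
    """True positive rate computed separately for each group."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        rates[g] = true_positive_rate([y_true[i] for i in idx],
                                      [y_pred[i] for i in idx])
    return rates
```

A large gap between the per-group rates indicates that one group's positives are being missed more often, which is exactly the disparity the equal outcomes metric targets.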

9. Robustness trade-offs

Balancing robustness can cause further trade-offs between technical metrics and responsible data management. Making a model robust by training it on adversarial or noisy data can make it less sensitive to input changes, but it might also make biases from the larger datasets more pronounced. Conversely, prioritizing fairness through balanced training data can diminish the model's robustness to real-world, uncurated data. For instance, OpenAI's GPT-3, tuned to reduce biases for fairer language generation, encountered robustness difficulties, struggling to interpret and respond accurately to complex queries. Given all these complexities, professional judgment and a code of conduct become essential.
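The noisy-data training idea above can be sketched as simple data augmentation: keep the original feature rows and add perturbed copies. The noise level and number of copies are hypothetical parameters; the sketch also illustrates the trade-off, since every noisy copy replicates whatever bias the original rows already carry.

```python
import random

def augment_with_noise(features, noise_std=0.1, copies=1, seed=42):
    """Return the original feature rows plus noisy copies of each row.

    Training on the augmented set makes a model less sensitive to small
    input perturbations, but it also duplicates any bias already present
    in the original rows. noise_std and copies are illustrative defaults.
    """
    rng = random.Random(seed)
    augmented = [list(row) for row in features]
    for _ in range(copies):
        for row in features:
            augmented.append([x + rng.gauss(0, noise_std) for x in row])
    return augmented
```

Tuning `noise_std` is where the trade-off appears in practice: more noise tends to improve robustness to perturbed inputs while degrading accuracy on clean, well-represented data.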

10. Professional conduct and duties of care

Responsible data management is part of the professional conduct and duties of care of AI professionals, as outlined in codes of ethics and professional conduct set by organizations like the Association for Computing Machinery (ACM), a leading international computing body. Enforcement of these guidelines varies by country and organization. Professional conduct and duties of care emphasize responsibility, non-harm, fairness, user privacy, and confidentiality, as well as a positive impact on society. AI professionals are expected to maintain high standards of competence and integrity, develop robust and secure systems, and foster an inclusive, non-discriminatory environment that contributes positively to the broader society.

11. Let's practice!

Real-world scenarios are complex, so let's practice!
