1. Rules and regulations
Welcome back as we continue our journey through Responsible AI Practices. This video explores the global tapestry of AI governance: more than 70 countries have put forward AI guidelines and regulations designed to make artificial intelligence robust, protect people's rights, and, ideally, encourage new ideas and advancements.
2. Global overview of AI regulations
Keep in mind that while AI legislation is in its early stages, AI is already subject to existing laws on data, privacy, healthcare, employment, product safety, and human rights.
Globally, there is a variety of regulatory approaches.
The European landscape is marked by its commitment to digital human rights and privacy, with the General Data Protection Regulation (GDPR) at its core.
The Digital Services Act (DSA) further expands this framework, focusing on digital transparency and accountability.
But on AI, the EU stands out with its comprehensive AI Act, which introduces a risk-based approach with extensive requirements for what are called providers (developers) and deployers (users) of high-risk AI.
Across the Atlantic, the American approach, as reflected in the Executive Order on AI, establishes new standards for AI safety and security and directs a major program of work across the U.S. government.
The proposed American Data Privacy and Protection Act serves as an important policy initiative, and the White House published a Blueprint for an AI Bill of Rights.
The Algorithmic Accountability Act was also reintroduced by a group of Senators at the end of 2023.
And this is just a sample of what was presented at a federal level.
In March 2023, Canada introduced the Artificial Intelligence and Data Act (Bill C-27). For generative AI, the voluntary guidelines laid down in the Canadian Code of Conduct offer important guidance.
That same year, the UK published an AI White Paper, updated its Guidance on AI and Data Protection, and presented the Bletchley Declaration, but it has yet to present its broader proposals on regulatory reform.
In China, multiple laws governing AI are already in place, and more were recently presented.
When comparing different regulatory environments, we see different approaches. The UK's post-Brexit strategy is poised between European alignment and a unique path, while China's state-driven, rules-based approach contrasts with the EU's rights-based approach.
Canada's focus on ethical AI sets a different tone, highlighting the diverse philosophies underpinning AI legislation. The American focus on stimulating innovation is leveraged with new standards for safety and security.
3. EU AI Act risk levels
The EU AI Act is a pioneering legislative framework designed to govern the development, deployment, and use of artificial intelligence. It categorizes AI systems based on their risk levels.
Minimal risk: The AI Act allows the free use of minimal-risk AI. Most AI systems fall into this category, such as AI-enabled video games or spam filters.
Limited risk: AI systems that require specific transparency obligations to users fall under this level. An example includes chatbots; users should be aware that they are interacting with an AI so they can make informed choices.
High risk: This category encompasses AI systems used in critical sectors (e.g., education, law enforcement, and employment) and those that can significantly affect individuals' rights or safety.
High-risk AI systems are subject to strict compliance requirements before they can be put on the market.
Unacceptable risk: This level identifies AI practices that pose a clear threat to people's safety, livelihoods, and rights, leading to a prohibition of such systems.
Examples include government social scoring systems and real-time biometric identification systems in publicly accessible spaces for law enforcement purposes, except in strictly defined circumstances.
The Act also sets minimum transparency standards for all foundation models, referred to as General-Purpose AI (GPAI) models, including the watermarking of AI-generated content and adherence to copyright provisions.
4. The next wave in AI regulation
Anticipating more AI regulation, we see trends towards structural oversight in the US and EU, focusing on licensing, liability reform, and taxation.
The feasibility of a global AI law for private and public organizations is debated, given the need for international cooperation.
Global AI regulations are evolving, with significant initiatives like the EU AI Act and varying approaches in over 70 countries.
5. Let's practice!
In the upcoming videos, we’ll continue to chart the course of Responsible AI Practices by looking at different frameworks.
But first, let’s practice what we’ve learned so far.