Applying governance requirements to systems
Now we'll move from understanding AI governance frameworks to their application. We'll cover how organizations identify "high-risk" AI under regulations, which legal governance actions are required, such as risk classification, conformity assessments, and documentation, and how companies put these into practice.

Jordan: Hi Simla, how do organizations apply governance requirements to the AI systems they're building or using? How do they know which rules apply?

Simla: That's the key next step. Under frameworks like the EU AI Act or Canada's AIDA, organizations must first determine whether an AI system qualifies as "high risk."

Jordan: How do they determine that? What criteria are used?

Simla: The regulations define the categories and criteria. The EU AI Act, for example, flags high-risk uses in areas such as critical infrastructure, education, employment, essential public services, law enforcement, and the administration of justice. If an AI system falls into one of these areas and poses potential harm, it is likely high risk.

Jordan: So it's not just the sector, but also the application and its potential impact?

Simla: Exactly. For instance, AI handling basic bank customer service might not be high risk, but a system making loan decisions likely is. Regulations define the features that trigger high-risk classification, such as producing legally binding decisions or affecting fundamental rights.

Jordan: So first there's a risk assessment based on use and impact. What happens next if a system is high risk? What governance steps are legally required?

Simla: Once a system is classified as high risk, legal obligations apply. The first is formal risk classification: documenting the assessment and the reasoning behind it. Next come conformity assessments.

Jordan: What are those?

Simla: They evaluate whether the system meets the regulatory requirements, either internally or through third parties. The goal is to confirm safety and compliance before the system is put into use.

Jordan: So it's like a check to ensure the AI meets the necessary standards before deployment?

Simla: Precisely. Another requirement is thorough documentation of the AI's design, data, algorithms, testing, and purpose, which must be available to regulators and sometimes to users.

Jordan: We talked about documentation as a component of governance earlier. Now it sounds like it can also be a legal requirement.

Simla: Exactly. Governance best practices often become legal obligations for higher-risk systems. Other legal requirements can include establishing human oversight mechanisms, ensuring transparency about the system's capabilities and limitations, maintaining accuracy and robustness, and ensuring cybersecurity. In some cases, registration with a regulatory body may also be mandatory before deployment.

Jordan: That's a lot for high-risk AI. How do companies apply these rules in practice? Any examples?

Simla: Sure. Say a company builds an AI for screening job applicants, which is likely high risk under the EU AI Act. They would start with a risk assessment and document the rationale. Then they would conduct a conformity assessment, including testing for algorithmic bias and recording the results.

Jordan: So they'd have to actively look for and try to mitigate bias?

Simla: Absolutely. They would also need documentation covering the training data, model architecture, evaluation methods, and limitations. They must define when and how human reviewers can step in, and ensure transparency by telling candidates how the AI is used in the hiring process. Furthermore, they would need robust data governance practices to ensure the quality and integrity of the data used to train and operate the AI. Depending on the regulation, they may also need to register the system before deployment.
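To make the bias-testing step of a conformity assessment more concrete, here is a minimal sketch of one kind of check an organization might run: comparing selection rates across demographic groups and computing a disparate-impact ratio. The function names, the sample data, and the 0.8 review threshold are illustrative assumptions, not requirements taken from the EU AI Act or any other regulation.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Share of positive screening outcomes per group.

    `decisions` is a list of (group, advanced) pairs, where `advanced`
    is True when the AI recommended moving the candidate forward.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, advanced in decisions:
        totals[group] += 1
        if advanced:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.

    Values well below 1.0 mean one group is screened out far more
    often than another and should trigger further review.
    """
    return min(rates.values()) / max(rates.values())

# Illustrative screening outcomes: (group label, advanced?).
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

rates = selection_rates(outcomes)
ratio = disparate_impact_ratio(rates)
print({g: round(r, 2) for g, r in rates.items()})
print(f"disparate impact ratio: {ratio:.2f}")

# A common (non-statutory) heuristic flags ratios below 0.8 for review.
if ratio < 0.8:
    print("Potential adverse impact: record findings and investigate.")
```

In practice such a check would run on held-out evaluation data for every attribute the organization is required to consider, and the results would be recorded as part of the conformity-assessment file.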
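The documentation obligation can likewise be handled as a structured record kept alongside the model. The sketch below shows one hypothetical shape for such a record; the class and field names are assumptions for illustration and would need to be mapped onto whatever technical-documentation template the applicable regulation or internal policy prescribes.

```python
from dataclasses import dataclass, asdict
from typing import List
import json

@dataclass
class TechnicalDocumentation:
    """Illustrative technical-documentation record for a high-risk system."""
    system_name: str
    intended_purpose: str
    training_data_sources: List[str]
    model_architecture: str
    evaluation_methods: List[str]
    known_limitations: List[str]
    human_oversight_measures: List[str]

doc = TechnicalDocumentation(
    system_name="applicant-screening-model",
    intended_purpose="Rank job applications for recruiter review",
    training_data_sources=["historical_applications_2019_2023"],
    model_architecture="gradient-boosted trees",
    evaluation_methods=["holdout accuracy", "selection-rate parity by group"],
    known_limitations=["not validated for roles outside the original job family"],
    human_oversight_measures=["recruiter reviews every rejection recommendation"],
)

# Persist the record so it can be produced for regulators or auditors.
print(json.dumps(asdict(doc), indent=2))
```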
Jordan: That clarifies how abstract rules turn into concrete company actions. It clearly requires serious investment in process and expertise. What about low-risk AI? Are there any governance requirements?

Simla: That's a good clarifying question. Legal obligations are lighter for low-risk AI, but that doesn't mean governance isn't needed. Even for systems with minimal user interaction or low potential for harm, responsible AI practices still apply. This might include basic documentation of the system's purpose and capabilities, following data privacy principles, and maintaining a level of transparency where appropriate.

Jordan: So low-risk AI is more about best practices than strict mandates?

Simla: Exactly. While the EU AI Act, for example, imposes only minimal transparency obligations on certain low-risk AI, such as AI-enabled games or spam filters, organizations might still choose to give users some information about how these systems work in order to build trust. And as AI technology evolves and our understanding of its potential impacts grows, even low-risk systems might face greater scrutiny. So it's wise to set a baseline of responsible practices across all AI systems.

Jordan: That makes sense. It's a tiered approach where the level of governance is proportional to the risk, but there's still a baseline of responsibility for all AI.

Simla: Precisely. And understanding this distinction is crucial for organizations to allocate their governance resources effectively, focusing the most rigorous processes on the systems that pose the greatest potential risks.
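As a closing illustration of this tiered approach, the sketch below encodes a simplified intake check: given a few facts about a proposed system, it assigns an indicative risk tier and lists the governance actions discussed above. The sector list, decision logic, and obligation wording are deliberately simplified assumptions for teaching purposes, not a substitute for reading the applicable regulation or obtaining legal advice.

```python
# Simplified intake check: assign an indicative risk tier and list the
# governance actions discussed above. Illustrative assumptions only.

HIGH_RISK_SECTORS = {
    "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "justice",
}

def classify(sector: str, affects_legal_rights: bool,
             impacts_fundamental_rights: bool) -> str:
    """Return an indicative risk tier for a proposed AI use case."""
    if sector in HIGH_RISK_SECTORS and (affects_legal_rights or impacts_fundamental_rights):
        return "high"
    return "low"

OBLIGATIONS = {
    "high": [
        "document the risk classification and reasoning",
        "conduct and record a conformity assessment",
        "maintain technical documentation",
        "establish human oversight mechanisms",
        "ensure transparency, accuracy, robustness, and cybersecurity",
        "register with the relevant authority where required",
    ],
    "low": [
        "document the system's purpose and capabilities",
        "follow data privacy principles",
        "provide transparency where appropriate",
    ],
}

tier = classify("employment", affects_legal_rights=True,
                impacts_fundamental_rights=True)
print(f"indicative tier: {tier}")
for action in OBLIGATIONS[tier]:
    print(" -", action)
```

Running this for the hiring example prints the high-risk obligation list, while a spam-filter-style use case would fall through to the lighter baseline, mirroring the proportionate allocation of governance effort Simla describes.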