AI frameworks
1. AI frameworks
Welcome back. In this video, we focus on the governance frameworks that guide the responsible design, development, and deployment of AI, categorized into three main types: voluntary guidelines (including impact assessments), standards, and certifications. Whereas in the previous chapter we discussed rules and regulations that are, or will become, mandatory within different nations and regions, here we will look at tools for building a governance structure that fits your organization.
2. Voluntary guidelines
Voluntary guidelines serve as compasses, guiding responsible AI without the legal enforcement that applies to legislation in force. But we need to keep in mind that this is not a case of "one size fits all": even though these frameworks and guidelines are very useful for building your organization's governance, they cannot be applied uniformly across all AI applications, irrespective of context. A large variety is available today, so let's take a look at a few important ones. The OECD's AI Principles promote innovation, trust, and respect for human rights. Based on these principles, the Global Partnership on Artificial Intelligence (GPAI), with 29 member countries, offers a network of experts and guidelines. UNESCO's ethical AI recommendations set global standards, advocating for AI that upholds human rights and diversity. The UN principles are outlined by its AI Advisory Body, which is pushing for a global AI framework. The Federal Trade Commission's guidelines in the U.S. emphasize consumer protection and product safety, advocating for transparency and fairness in AI applications. The Asilomar AI Principles, coordinated by the Future of Life Institute (FLI), are one of the earliest sets of AI governance principles. The Alan Turing Institute launched a series of workbooks to help the public sector apply AI ethics and safety to the design, development, and deployment of algorithmic systems; these are useful to any organization. The National Institute of Standards and Technology (NIST) in the U.S. developed an AI Risk Management Framework as a guideline for allocating roles, responsibilities, and authority. Finally, the HUDERIA methodology, outlined by the Council of Europe's Committee on Artificial Intelligence, offers guidance on human rights, democracy, and rule of law impact assessment.
3. AI standards
Standards in AI establish universal norms for quality and consistency, serving as a foundation for interoperability and best practices. In contrast, frameworks and guidelines offer more detailed, voluntary guidance focused on ethical considerations and operational specifics, tailored to diverse organizational and cultural contexts. The ISO's AI standards, for instance, offer benchmarks for ethical considerations, robustness, and safety in AI technologies. ISO/IEC 42001 specifies requirements and provides guidance for establishing, implementing, maintaining, and continually improving an AI management system. CEN and CENELEC, the European standardization committees, are currently developing European standards that, in the future, could provide manufacturers with a presumption of conformity with the upcoming Artificial Intelligence Act. The Institute of Electrical and Electronics Engineers (IEEE) Standards Association's Ethically Aligned Design provides a framework for aligning technological development and deployment with ethical principles, particularly in the realm of autonomous and intelligent systems.
4. AI certifications
Certifications are third-party validations that an AI system meets certain standards or ethical guidelines. ISO certifications provide assurance that AI products and services adhere to international best practices. The Data Protection Impact Assessments (DPIAs) required under the GDPR serve as a form of certification, ensuring that AI applications respect user privacy and data protection norms. Lastly, the Responsible AI Institute offers a certification program for AI systems, and AI governance professionals can achieve a certification through the International Association of Privacy Professionals (IAPP).
5. The path forward
As we navigate the complexities of AI governance, these frameworks offer valuable insights and tools for ensuring AI serves the greater good. They highlight the collective wisdom of the global community, emphasizing innovation, ethical responsibility, and human rights. In conclusion, understanding these international examples of voluntary guidelines, standards, and certifications is essential for anyone involved in AI. They not only guide responsible AI development but also shape the dialogue on how AI technologies can be developed and deployed in ways that are ethical, safe, and beneficial for all.
6. Let's practice!
Thank you for joining me. As we continue to explore the multifaceted world of AI, let's keep these frameworks and guidelines at the forefront of our efforts to ensure a responsible AI future. So, let's try this in an exercise.