The EU is nearing adoption of its first comprehensive horizontal AI legislation, with the draft EU AI Act pending a plenary vote scheduled for 10-11 April 2024. The legislation employs a tiered, risk-based approach, categorising AI applications by their potential risks to public safety and fundamental rights, ranging from unacceptable-risk systems, which are prohibited outright, to limited-risk systems subject only to minimal transparency obligations.
Australia is currently formulating its AI regulatory landscape, having published an interim response to its 2023 consultation on safe and responsible AI. The response summarises key stakeholder positions and sets out the government's plans to introduce measures for AI in high-risk settings, including establishing an advisory body to guide legislation on 'high-risk' AI applications. Australia emphasises that existing laws already apply to AI, and it is also conducting reviews and preparing new legislation on misinformation, alongside sector-specific guidelines, indicating a risk-based regulatory approach.
Brazil is actively shaping its AI legislative and regulatory framework, with AI Bill 2338/2023 at its core. The bill focuses on transparency, bias mitigation, and the regulation of high-risk AI applications, drawing inspiration from the EU AI Act. Existing internet and data protection laws, including the Marco Civil da Internet and the LGPD, complement the proposed AI framework. The draft bill leans towards a mixed, primarily horizontal regulatory model, without detailed provisions for key sectors or compliance mechanisms. On the international stage, Brazil's participation in forums such as the GPAI, G20, OECD, UN, ITU, WIPO, and BRICS reflects its commitment to aligning with global AI governance standards and to promoting ethical AI development and responsible innovation consistent with human rights and equitable growth.
Japan lacks a central AI law but has established various guidelines and principles, such as the Social Principles of Human-Centric AI and the AI Governance Guidelines, with draft AI Guidelines set for adoption in March 2024. Japan favours a vertical, sector-specific, soft-law approach and is actively reforming existing laws to address AI-related concerns.
The African Union (AU) is developing the African Union Artificial Intelligence (AU-AI) Continental Strategy to harness AI for socio-economic development, in alignment with Agenda 2063 and the Sustainable Development Goals (SDGs). The strategy, part of broader initiatives such as the Science, Technology and Innovation Strategy for Africa 2024 (STISA) and the Digital Transformation Strategy for Africa, aims to address AI's technological, ethical, economic, security, and social implications. Although no AI-specific laws are yet in place, the Malabo Convention governs the automated processing of personal data, laying a foundation for future AI regulation. The AU's approach signifies a unified, strategic effort to integrate AI in a way that is responsible, safe, and beneficial for the continent's development.
China lacks a nodal AI law. Instead, it has enacted multiple targeted regulations, including provisions on algorithmic recommendations, deep synthesis internet information services, and the management of generative AI services, employing a mix of risk-based and prohibitive approaches to mitigate AI-related harms. These regulations are vertical, each targeting a specific class of AI application.
India's approach to AI regulation is centred on principles of transparency, fairness, privacy, and security, as outlined in its National Strategy for Artificial Intelligence and in government position papers on Responsible AI. The regulatory landscape is further shaped by the Information Technology Act, 2000, and the IT Rules, 2021, which provide a legal framework for managing AI applications, while the Digital Personal Data Protection Act, 2023, and the Consumer Protection Act, 2019, regulate data handling and consumer rights in AI contexts. India thus adopts a mixed regulatory approach, combining risk-based and sector-specific guidelines to ensure the ethical use of AI across domains.
Canada is proactively shaping its legislative landscape to ensure the responsible development and use of AI through the Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27. The act requires developers and operators to adopt risk management practices, ensure transparency, and regularly assess their systems' impacts. Alongside AIDA, Canada's AI regulation strategy builds on existing laws covering personal data protection and consumer safety across sectors such as healthcare, finance, and vehicle safety. Canada has also introduced a Voluntary Code of Conduct for advanced generative AI systems and, in collaboration with international partners, established guiding principles for AI in medical devices.
The U.S. is shaping its AI regulatory environment through the Biden-Harris Administration's 2023 Executive Order, which focuses on responsible AI use in areas such as safety and privacy and builds on earlier efforts like the Blueprint for an AI Bill of Rights. Key legislative measures include the AI Training Act (2022), which mandates AI training for the acquisition workforce in executive agencies, and the National Artificial Intelligence Initiative Act of 2020, which bolsters U.S. leadership in AI R&D. Sector-specific actions, including the USPTO's 2024 guidance on AI-assisted inventions, the FCC's crackdown on AI-generated voice scams, the FTC's focus on fair AI practices, and NIST's AI Risk Management Framework, reflect a targeted yet comprehensive approach to AI governance that emphasises innovation alongside ethical use.
The United Kingdom does not have a specific nodal law for AI regulation. It follows a pro-innovation, context-based approach, emphasising the need to understand AI risks before legislating. Existing laws, including the UK General Data Protection Regulation, the Data Protection Act 2018, and the Equality Act 2010, apply to AI, with sector-specific work under way in competition/antitrust, data protection, anti-discrimination, IP rights, and safety. The government adopts a risk-based approach, promoting cross-sectoral principles and context-specific frameworks for AI governance, and key sectors such as public procurement, public administration, and law enforcement are encouraged to adopt AI with guidance from the relevant regulatory bodies.
Sign up for the AIKC newsletter to be at the forefront of understanding how AI is shaping our world and how policies are evolving to meet the challenges and opportunities it presents.