AI Knowledge Consortium
  • About Us
    • Members
    • Charter
    • Code of Conduct
  • Insights
    • AI Regulation Tracker
    • Publications
    • Newsletters
  • Events
  • Contact Us


B40, Block B, Soami Nagar South,
Soami Nagar, New Delhi,
Delhi 110017

[email protected]

Secretariat managed by Koan Advisory Group

© 2025 AI Knowledge Consortium

European Union

The EU is nearing adoption of its first comprehensive horizontal AI legislation: the draft EU AI Act is pending a plenary vote scheduled for 10-11 April 2024. The legislation employs a tiered, risk-based approach, categorising AI applications by the risks they pose to public safety and fundamental rights, ranging from unacceptable-risk systems, which are prohibited, to limited-risk systems, which face only minimal transparency obligations.

1. Legislative Development

  • Nodal AI Law Status
    • Status: Draft text in the process of adoption. The EU is in the final stages of adopting its first comprehensive legislation regulating AI.
    • Details: The draft text of the EU AI Act has reached political agreement and currently awaits a plenary vote provisionally scheduled for 10-11 April 2024.

2. Regulatory Framework Analysis

  • Internet Laws
    • Status: Applied
    • Details: The EU General Data Protection Regulation (GDPR) applies to AI and the processing of personal data. Other relevant laws and regulations include the Digital Services Act (DSA) and the Digital Markets Act (DMA).
  • Sectoral and Subject-Specific AI Laws
    • Presence: NA 
    • Sectors: NA 

3. Regulatory Approach

  • Risk-Based vs. Prohibition
    • Approach: Mix
    • Details: The EU has applied a risk-based approach to AI regulation, categorising AI applications by the risks they may pose to public safety and fundamental rights: (i) unacceptable risk; (ii) high risk; (iii) high-impact general-purpose and generative AI; and (iv) limited risk. Unacceptable-risk AI systems are considered a threat to people and are prohibited. High-risk AI systems, those that negatively affect safety or individual rights, are subject to assessment before and after deployment. High-impact general-purpose AI models that pose systemic risks must undergo evaluations. Limited-risk AI systems need only comply with minimal transparency obligations, enabling users to make informed decisions.
  • Horizontal vs. Vertical Regulation
    • Type: Horizontal 
    • Details: The EU AI Act will be the first comprehensive horizontal legislation pertaining to AI.
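
As a reading aid, the tiered structure described above can be sketched as a small lookup from risk tier to the obligation it attracts. The tier names and obligation summaries below are paraphrases of this tracker's own wording, not the Act's legal categories or text.

```python
# Reading aid only: the four risk tiers of the draft EU AI Act as
# summarised in this tracker, mapped to the obligation each attracts.
# Tier names and obligation summaries are paraphrases, not statutory text.
RISK_TIERS = {
    "unacceptable": "prohibited outright",
    "high": "assessment before and after deployment",
    "high-impact general-purpose": "evaluations for systemic risk",
    "limited": "minimal transparency obligations",
}

def obligation_for(tier: str) -> str:
    """Return the paraphrased obligation attached to a risk tier."""
    return RISK_TIERS[tier]
```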

4. Application in Key Sectors

  • Public Procurement and Usage by Public Authorities 
    • Status: NA
    • Details: The European Union has developed model contractual clauses for the procurement of Artificial Intelligence (AI) systems, tailored for use by public organisations. These clauses, known as the EU model contractual AI clauses, have been peer-reviewed and are available in two versions: one for high-risk AI systems and another for non-high-risk AI systems. The first and second iterations of these clauses were made available for public organisations to pilot and provide feedback.
  • Public Administration/Law Enforcement
    • Usage: NA
    • Regulations: NA

5. Compliance and Enforcement

  • Licensing Requirements
    • Requirement: NA
    • Details: NA
  • Monitoring and Reporting
    • Mechanisms: NA

6. International Alignment

  • Multilateral Forum: The European Union (EU) actively engages in several major international Artificial Intelligence (AI) initiatives, illustrating its commitment to shaping global norms and standards on AI. The EU is a member of the Global Partnership on AI (GPAI), a multi-stakeholder initiative that aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities. In the Organisation for Economic Co-operation and Development (OECD), the EU supports the OECD Recommendations on AI. The EU's involvement extends to the United Nations (UN), including UNESCO, where it contributes to discussions on ethical AI, helping to guide the development of global standards and frameworks. The EU also plays a vital role in G20 dialogues and agreements that aim to harness the benefits of AI for economic development while ensuring adequate safeguards against risks.
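
Every jurisdiction in this tracker is described under the same six headings, so entries can be compared field by field. A minimal sketch of that recurring structure as a dataclass; the field names are ours, chosen for illustration, not the Consortium's own schema:

```python
from dataclasses import dataclass, field

# Hypothetical model of the tracker's six-part template; field names are
# illustrative, not taken from the Consortium's published schema.
@dataclass
class TrackerEntry:
    jurisdiction: str
    nodal_law_status: str                            # 1. Legislative Development
    internet_laws: str                               # 2. Regulatory Framework Analysis
    sectoral_laws: str                               # 2. Sectoral/subject-specific laws
    risk_approach: str                               # 3. Risk-Based vs. Prohibition
    regulation_type: str                             # 3. Horizontal vs. Vertical
    key_sectors: dict = field(default_factory=dict)  # 4. Application in Key Sectors
    compliance: str = "NA"                           # 5. Compliance and Enforcement
    forums: list = field(default_factory=list)       # 6. International Alignment

# Example populated from the EU entry above.
eu = TrackerEntry(
    jurisdiction="European Union",
    nodal_law_status="Draft in adoption (EU AI Act)",
    internet_laws="Applied (GDPR, DSA, DMA)",
    sectoral_laws="NA",
    risk_approach="Mix",
    regulation_type="Horizontal",
    forums=["GPAI", "OECD", "UN/UNESCO", "G20"],
)
```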


Australia

Australia is currently formulating its AI regulatory landscape, having published an interim response to the 2023 consultation on safe and responsible AI. The response summarises key stakeholder positions and outlines the government's plans to introduce measures in high-risk settings, including an advisory body to guide legislation on 'high-risk' AI applications. Australia emphasises the applicability of existing laws to AI, and is also undertaking reviews, introducing new legislation on misinformation, and issuing sector-specific guidelines, indicating a risk-based regulatory approach.

1. Legislative Development

  • Nodal AI Law Status:  
    • Status: Unclear
    • Details: Australia has recently published its interim response to the safe and responsible AI consultation held in 2023. The interim response outlines the positions held by key stakeholders such as the public, academia, and the private sector. The government has stated that it will introduce testing, transparency and accountability measures to prevent harms from occurring in high-risk settings, and will clarify and strengthen laws to protect citizens. Further, the Australian government announced that it will set up an advisory body of industry and academic experts to work with the government to devise a legislative framework for 'high-risk' AI applications.

2. Regulatory Framework Analysis

  • Internet Laws
    • Status: Applied 
    • Details: Australia's Interim Response to the Safe and Responsible AI Consultation acknowledges that existing laws, such as the Privacy Act 1988 and IPR laws (Copyright Act 1968, Patents Act 1990, Data Availability and Transparency Act 2021), apply to address known harms from AI. The government is also in the process of reviewing the Online Safety Act 2021 and is introducing new laws on misinformation and disinformation.
  • Sectoral and Subject-Specific AI Laws
    • Presence: Exploratory Stage
    • Sectors: 
      • Education: The House Standing Committee on Employment, Education and Training adopted an inquiry into the use of generative AI in the Australian education system on 24 May 2023. The Australian Framework on Generative Artificial Intelligence in School has also been unveiled; it seeks to guide the responsible and ethical use of generative AI tools in ways that benefit students, schools, guardians and others.
      • Security, Cybersecurity and Crime Prevention: Australia's eSafety Commissioner will register an online safety code (Draft Industry Standards) requiring internet search engines to take steps to reduce the risk that material such as child abuse material is returned in search results, and to ensure that AI functionality integrated with search engines is not used to generate "synthetic" versions of such material. The code has reached the public consultation stage.
      • IPR: The Australian Government has established an AI and Copyright Reference Group to consider copyright issues in a careful and consultative manner.

3. Regulatory Approach

  • Risk-Based vs. Prohibition
    • Approach: Risk-Based
    • Details: Australia has adopted a risk-based approach to the governance of AI. This is evident from sectoral guidelines (such as the Australian Framework on Generative Artificial Intelligence in School, the online safety code, etc.), which primarily focus on addressing and reducing AI risks.
  • Horizontal vs. Vertical Regulation
    • Type: Mix
    • Details: Australia's interim response to the safe and responsible AI consultation shows an inclination towards regulating AI through testing, transparency and accountability measures to prevent harms in high-risk settings. In addition, sector-specific frameworks, codes and other standards (soft-law obligations) have been put in place. Australia will likely apply a mix of horizontal and vertical regulation.

4. Application in Key Sectors

  • Public Procurement and Usage by Public Authorities 
    • Status: Applied (Voluntary in nature)
    • Details: The Digital Transformation Agency of Australia has released an interim guidance on government use of public generative AI tools. 
  • Public Administration/ Law Enforcement
    • Usage: NA
    • Regulations: NA

5. Compliance and Enforcement

  • Licensing Requirements
    • Requirement: NA
    • Details: NA
  • Monitoring and Reporting
    • Mechanisms: NA

6. International Alignment

  • Multilateral Forum: Australia participates in multiple international AI initiatives, including the Global Partnership on AI (GPAI), the G20, the Indo-Pacific Economic Framework for Prosperity (IPEF), and the Quadrilateral Security Dialogue (Quad). Within GPAI, Australia collaborates with international partners to ensure AI advancements are human-centric and benefit society widely, focusing on innovation, data governance, and building trust in AI technologies. In the G20, Australia contributes to discussions on digital economy policies, emphasising the importance of an inclusive digital future and the ethical challenges of AI. Through the IPEF, Australia engages with Indo-Pacific nations to drive economic growth and digital innovation, including AI, within the region, ensuring that technological advancements support prosperity and security. Within the Quad framework, Australia works alongside the US, India, and Japan to address regional security challenges, including those emerging from AI and technology advancements, underscoring its role in promoting a stable and secure Indo-Pacific region.


Brazil

Brazil is actively shaping its AI legislative and regulatory framework, with the AI Bill 2338/2023 at its core, focusing on transparency, bias mitigation, and regulation of high-risk AI applications, drawing inspiration from the EU AI Act. Existing internet and data protection laws, including the Marco Civil da Internet and the LGPD, complement AI regulations. The draft AI Bill leans towards a mixed, primarily horizontal regulatory framework without detailed provisions for key sectors or compliance mechanisms. On the international stage, Brazil's participation in forums such as the GPAI, G20, OECD, UN, ITU, WIPO, and BRICS reflects its commitment to aligning with global AI governance standards, promoting ethical AI development and responsible innovation in line with human rights and equitable growth.

1. Legislative Development

  • Nodal AI Law Status:  
    • Status: Currently under consideration 
    • Details: The AI Bill 2338/2023 seeks to regulate AI in Brazil, focusing on transparency, bias mitigation, and public impact assessments for AI systems, especially those deemed high-risk. Drawing inspiration from the EU AI Act, it establishes a risk-based approach to regulation, outlines individual rights regarding AI interactions, and sets strict guidelines for high-risk applications in sectors like healthcare and criminal investigation. The bill bans excessively risky AI practices such as biometric identification and social scoring, and imposes significant penalties for violations, aiming to ensure safe, transparent, and accountable AI use.

2. Regulatory Framework Analysis

  • Internet Laws 
    • Status: Applied 
    • Details: 
      • Marco Civil da Internet (Civil Rights Framework for the Internet), General Data Protection Law (LGPD), and the Consumer Protection Code are applicable in relation to AI. 
  • Sectoral and Subject-Specific AI Laws
    • Presence: Unclear
    • Sectors: NA

3. Regulatory Approach

  • Risk-Based vs. Prohibition
    • Approach: Unclear (Mixed – As per the draft AI Bill)
  • Horizontal vs. Vertical Regulation
    • Type: Unclear (Horizontal – As per the draft AI Bill)

4. Application in Key Sectors

  • Public Procurement
    • Status: NA
    • Details: NA
  • Law Enforcement
    • Usage: NA
    • Regulations: NA

5. Compliance and Enforcement

  • Licensing Requirements
    • Requirement: NA 
    • Details: NA
  • Monitoring and Reporting
    • Mechanisms: NA

6. International Alignment

  • Multilateral Forums: Brazil’s engagement in international forums such as the Global Partnership on AI (GPAI), G20, OECD, and the United Nations (UN) showcases its dedication to collaborative efforts in AI governance and ethical development. By participating in these platforms, Brazil influences and aligns with global standards and policies on AI, emphasizing responsible innovation and adherence to human rights. Its involvement also extends to the International Telecommunication Union (ITU) and the World Intellectual Property Organization (WIPO), as well as economic and digital innovation discussions within the BRICS group. This broad participation ensures Brazil’s AI policies are informed by global best practices, promoting equitable and sustainable AI development worldwide.


Japan

Japan lacks a central AI law but has established various guidelines and principles, such as the Social Principles of Human Centric AI and AI Governance Guidelines, with Draft AI Guidelines set for adoption in March 2024. Japan prefers sector-specific, vertical soft regulations and is actively reforming laws to address AI-related concerns. 

1. Legislative Development

  • Nodal AI Law Status: No 
    • Status: NA
    • Details: Japan does not have a nodal law on AI. Official documents released on AI include the Social Principles of Human Centric AI and the AI Governance Guidelines for Practice of AI Principles. Further, Draft AI Guidelines have also been formulated and are due to be adopted in March 2024.

2. Regulatory Framework Analysis

  • Internet Laws 
    • Status: Applied 
    • Details: Laws pertaining to data protection (Act on the Protection of Personal Information), consumer protection (Unfair Competition Prevention Act), the Civil Code, the Product Liability Act, the Act on Prohibition of Private Monopolization and Maintenance of Fair Trade, and others govern AI.
  • Sectoral and Subject-Specific AI Laws
    • Presence: Sectoral Reforms Stage
    • Sectors: 
      • Competition Law: In 2019, amendments were made to the Unfair Competition Prevention Act; they regulate the use of raw data utilised by AI.
      • IPR: The Japan Patent Office has amended the Examination Guidelines for Patent and Utility Models several times to cover AI-related cases. The Copyright Act was amended in 2017 to promote the use of data in machine learning; the amendment clarified that downloading or processing data through the internet or other means to develop AI models does not infringe copyright.
      • Finance: The Financial Instruments and Exchange Act requires businesses engaged in algorithmic high-speed trading to register with the government, establish a risk management system, and maintain transaction records. Further, the Installment Sales Act was revised in 2020 to enable a "certified comprehensive credit purchase intermediary" to determine credit amounts using data and AI.
      • Transportation: In 2020, Japan enacted the revised Road Traffic Act and Road Transport Vehicle Act, which allowed Level 3 automated driving (i.e., conditional automation) on public roads.

3. Regulatory Approach

  • Risk-Based vs. Prohibition
    • Approach: Risk-Based
    • Details: Japan has no laws or regulations that constrain or prohibit the use of AI. Japan is currently reforming numerous laws (Copyright Act, Unfair Competition Prevention Act) and issuing guidance documents (METI's Governance Guidelines for Implementation of AI Principles, the Guidebook on Corporate Governance for Privacy in Digital Transformation, the Guidebook for Utilization of Camera Images, the Draft AI Guidelines for Business, etc.) to address specific issues and concerns.
  • Horizontal vs. Vertical Regulation
    • Type: Vertical Soft Regulations
    • Details: The AI Governance Guidelines for Practice of AI Principles (AI Governance in Japan Ver 1.1) state that legally-binding horizontal requirements for AI systems are deemed unnecessary at the moment. However, Japan's sector-specific laws (Digital Platform Transparency Act, Financial Instruments and Exchange Act) require the private sector to take certain measures and disclose information about risks.

4. Application in Key Sectors

  • Public Procurement and Usage by Public Authorities 
    • Status: NA 
    • Details: NA
  • Public Administration/Law Enforcement
    • Usage: NA
    • Regulations: NA 

5. Compliance and Enforcement

  • Licensing Requirements
    • Requirement: NA
    • Details: NA
  • Monitoring and Reporting
    • Mechanisms: NA 

6. International Alignment

  • Multilateral Forum:  Japan is showcasing a deep engagement in global AI initiatives, including the G7, G20, Global Partnership on AI (GPAI), Asia-Pacific Economic Cooperation (APEC), and the United Nations (UN). In the G7 and G20 forums, Japan contributes to high-level discussions on the economic, social, and ethical implications of AI, advocating for a harmonised approach to AI governance that balances innovation with the protection of human rights and societal values. At Global Partnership on AI (GPAI), Japan demonstrates its dedication to responsible and human-centric AI development. Within the Asia-Pacific Economic Cooperation (APEC), Japan engages with member economies to foster economic growth and prosperity in the Asia-Pacific region through digital innovation, including AI. At the United Nations, Japan supports initiatives and discussions related to AI and digital technologies, contributing to the development of global standards and ethical guidelines for AI.


African Union

The African Union (AU) is developing the African Union Artificial Intelligence (AU-AI) Continental Strategy to harness Artificial Intelligence for socio-economic development, aligning with Agenda 2063 and the Sustainable Development Goals (SDGs). This strategy, part of broader initiatives like Science, Technology and Innovation Strategy for Africa 2024 (STISA) and the Digital Transformation Strategy for Africa, aims to address AI’s technological, ethical, economic, security, and social implications. Although specific AI laws are not yet established, the Malabo Convention oversees the automated processing of personal data, laying a foundation for future AI regulation. The AU’s approach signifies a unified and strategic effort to integrate AI in a way that is responsible, safe, and beneficial for the continent’s development.

1. Legislative Development

  • Nodal AI Law Status: No 
    • Status: In process 
    • Details: The African Union (AU) is actively developing a comprehensive strategy, known as the African Union Artificial Intelligence (AU-AI) Continental Strategy, to harness the transformative power of Artificial Intelligence for socio-economic development across the continent. This strategy is designed to guide African nations towards an inclusive and sustainable AI-driven transformation that aligns with the broader objectives of Agenda 2063 and the Sustainable Development Goals (SDGs). The development of the AU-AI Continental Strategy is part of the broader Science, Technology and Innovation Strategy for Africa 2024 (STISA) and the Digital Transformation Strategy for Africa. This initiative is aligned with the goals of the AU Working Group on AI, which underscores the need for a unified, coordinated continental approach to AI. The proposed strategy focuses on:
      • Addressing multiple facets of AI: This includes technological, ethical, economic, security, and social aspects, ensuring AI’s use is responsible, safe, and beneficial.
      • Establishing foundational elements: The strategy aims to define guiding principles, vision, mission, pillars, and strategic objectives for a Continental AI Strategy.

2. Regulatory Framework Analysis

  • Conventional Law Regulation
    • Status: NA
    • Details: NA
  • Sectoral and Subject-Specific AI Laws
    • Presence: NA
    • Sectors: 
      • Automated Processing of Personal Data: The Malabo Convention, officially known as the African Union Convention on Cyber Security and Personal Data Protection, now in effect, serves as a crucial regulatory framework within the African Union. This convention plays a key role in overseeing aspects of Artificial Intelligence, particularly the automated processing of personal data. 

3. Regulatory Approach

  •  Risk-Based vs. Prohibition
    • Approach: NA
    • Details: NA
  • Horizontal vs. Vertical Regulation
    • Type: NA
    • Details: NA

4. Application in Key Sectors

  • Public Procurement and Usage by Public Authorities 
    • Status: NA
    • Details: NA
  •  Public Administration/ Law Enforcement
    • Usage: NA
    • Regulations: NA

5. Compliance and Enforcement

  • Licensing Requirements
    • Requirement: NA
    • Details: NA
  • Monitoring and Reporting
    • Mechanisms: NA

6. International Alignment

  • Multilateral Forums: The AU's involvement is expected to span established international bodies such as UNESCO, the ITU, and the WEF. Additionally, the Global Partnership on Artificial Intelligence (GPAI) represents a potential platform for the AU to contribute to dialogues on responsible AI development.


China

China lacks a nodal AI law. Instead, it has introduced multiple sector-specific AI regulations, covering algorithmic recommendations, deep synthesis internet information services, and the management of generative AI services, employing a mix of risk-based and prohibitive approaches to mitigate AI-related harms. These regulations are vertical, each targeting a specific AI application.

1. Legislative Development

  • Nodal AI Law Status: No 
    • Status:  NA 
    • Details: China does not have a nodal legislation for the regulation of AI.

2. Regulatory Framework Analysis

  • Conventional Law Regulation
    • Status: Applied 
    • Details: Existing legal regulations on data protection (the Personal Information Protection Law (PIPL)), cybersecurity (the Cybersecurity Law of the People's Republic of China), consumer protection (the Law of the People's Republic of China on the Protection of Consumer Rights and Interests) and other relevant laws apply to AI.
  • Sectoral and Subject-Specific AI Laws
    • Presence: Exploratory Stage 
    • Sectors: 
      • Provisions on the Management of Algorithmic Recommendations in Internet Information Services: The Provisions apply to any entity that uses algorithm recommendation technologies to provide internet information services within Mainland China. The regulation includes many provisions for content control, as well as protections for workers affected by algorithms, among others. It also created the "algorithm registry" used in subsequent regulations.
      • Provisions on the Administration of Deep Synthesis Internet Information Services: The Regulations impose obligations on the providers and users of so-called “deep synthesis technology” (deep learning, machine learning and other algorithmic processing systems). It prohibits the generation of “fake news” and requires synthetically generated content to be labelled. 
      • Measures for the Management of Generative Artificial Intelligence Services: These rules regulate 'generative AI' (GAI) services offered to the 'public' in mainland China. They require providers to ensure that both the training data and the generated content are "true and accurate."

3. Regulatory Approach

  •  Risk-Based vs. Prohibition
    • Approach: Mix
    • Details: The various regulations and norms identified in the above section use a mix of prohibition and risk-based approaches to address harms accruing from AI.
  • Horizontal vs. Vertical Regulation
    • Type: Vertical 
    • Details:  The various regulations and norms identified in the above section are vertical in their approach, each targeting a specific type of AI.

4. Application in Key Sectors

  • Public Procurement and Usage by Public Authorities
    • Status: NA
    • Details: NA
  • Public Administration/Law Enforcement
    • Usage: NA
    • Regulations: NA

5. Compliance and Enforcement

  • Licensing Requirements
    • Requirement: NA
    • Details: NA
  • Monitoring and Reporting
    • Mechanisms: NA

6. International Alignment

  • Multilateral Forum: China engages in international AI initiatives such as the United Nations (UN), the G20, BRICS, the Asia-Pacific Economic Cooperation (APEC), and the Shanghai Cooperation Organisation (SCO). At the UN, China actively contributes to discussions on AI governance, emphasising the importance of international collaboration in harnessing AI for sustainable development. In the G20, China participates in shaping digital economy and AI policies. BRICS (Brazil, Russia, India, China, South Africa) offers China another avenue to collaborate on AI development and digital infrastructure projects. Through APEC, China engages with economies around the Pacific Rim to discuss the integration of AI in promoting sustainable economic growth, enhancing the digital economy, and addressing social challenges through innovation. Within the SCO, China advances AI cooperation among member states, focusing on security, economic development, and addressing challenges arising from the digital transformation.


India

India’s approach to AI regulation is centred around principles of transparency, fairness, privacy, and security, as outlined in its National Strategy for Artificial Intelligence and various government-released position papers on Responsible AI. Moreover, the regulatory landscape for AI is shaped by the Information Technology Act, 2000, and the IT Rules, 2021, which provide a legal framework for managing AI applications. Additionally, the Digital Personal Data Protection Act, 2023, and the Consumer Protection Act, 2019, regulate data handling and consumer rights in AI applications. India adopts a mixed regulatory approach, integrating risk-based and sector-specific guidelines to ensure AI’s ethical use across various domains.

1. Legislative Development

  • Nodal AI Law Status: No
    • Details: India's National Strategy for Artificial Intelligence emphasises the development and deployment of AI tools and technologies grounded in principles of transparency, fairness, privacy, and security. India's commitment to Responsible AI policymaking is also evident in the various position papers released by the Government, namely:
      • Responsible AI Approach Document for India (Principles for Responsible AI- February 2021); 
      • Responsible AI for All – Approach Document for India – Part 2: Operationalizing Principles for Responsible AI published in August 2021; and 
      • Responsible AI for All – Adopting the Framework: A Use-Case Approach on Facial Recognition Technology published in November 2022.

2. Regulatory Framework Analysis

  • Internet Laws
    • Status: Yes
    • Details: 
      • Classification of AI systems based on risks: Under Section 69A of the Information Technology Act, 2000, the Indian government has the authority to block access to information on grounds of national sovereignty, security, and maintaining public order. Additionally, Section 79 of the IT Act provides a “safe harbour” protection to intermediaries, including AI platforms, shielding them from liability for third-party content they host, as long as they do not actively participate in modifying or selecting this content. However, this immunity is conditional upon the platform’s compliance with the due diligence requirements set by the IT Rules, 2021. 
      • Algorithmic Transparency: Rule 4(4) of the IT Rules, 2021, mandates significant social media intermediaries to implement mechanisms for periodically reviewing their automated tools. These reviews are aimed at proactively identifying content related to rape, child sexual abuse, or similar explicit acts, as well as content identical to previously removed or disabled information under Rule 3(1)(d). Additionally, these reviews must assess the automated tools’ accuracy, fairness, potential for bias and discrimination, and their impact on privacy and security. 
      • Data sharing, quality, access and use: Several provisions of the Digital Personal Data Protection Act, 2023 apply.
      • Consumer Protection: Any violation of consumer protection rights, i.e., unfair trade practices and deficiency in product or service (where an AI platform provides a particular service or a product) would be covered under the Consumer Protection Act 2019 and the rules thereto such as the Consumer Protection (E-Commerce) Rules, 2020. 
  • Sectoral and Subject-Specific AI Laws
    • Presence: Yes 
    • Sectors: NA

3. Regulatory Approach

  •  Risk-Based vs. Prohibition
    • Approach: Mix
    • Details: NA
  • Horizontal vs. Vertical Regulation
    • Type: Vertical 
    • Details: NA

4. Application in Key Sectors

  • Public Procurement and Usage by Public Authorities 
    • Status: NA
    • Details: NA
  • Public Administration/Law Enforcement
    • Usage: NA
    • Regulations: NA

5. Compliance and Enforcement

  • Licensing Requirements
    • Requirement: NA
    • Details: NA
  • Monitoring and Reporting
    • Mechanisms: NA

6. International Alignment

  • Multilateral Forum: India's active participation in multilateral forums such as the Global Partnership on AI (GPAI), the G20, the United Nations, the Indo-Pacific Economic Framework for Prosperity (IPEF), and the Quadrilateral Security Dialogue highlights its commitment to contributing to global discussions on Artificial Intelligence. Additionally, India's involvement extends to other collaborative efforts, including initiatives under the BRICS grouping and the Commonwealth, where it further emphasises the importance of cooperation in AI for technological innovation, economic growth, and security. Through these diverse international engagements, India positions itself as a key player in the global AI landscape, advocating for responsible and inclusive AI development.


Canada

Canada is proactively shaping its legislative landscape to ensure the responsible development and use of AI through the introduction of the Artificial Intelligence and Data Act (AIDA), part of Bill C-27. The Act requires developers and operators to adopt risk management practices, ensure transparency, and regularly evaluate their systems' impacts. Alongside AIDA, Canada's regulatory landscape builds on existing laws covering personal data protection and consumer safety across sectors such as healthcare, finance, and vehicle safety. Canada has also introduced a Voluntary Code of Conduct for advanced generative AI systems and, in collaboration with international partners, established guiding principles for AI in medical devices.

1. Legislative Development

  • Nodal AI Law Status: No
    • Status: In process (AIDA Act)
    • Details: The Artificial Intelligence and Data Act (AIDA), part of Canada's Bill C-27, marks the country's inaugural comprehensive legislation aimed at the responsible development and use of AI. The Act focuses on regulating "high-impact" AI systems, which are likely to pose significant risks in areas such as health, safety, or human rights. Under AIDA, developers and operators of high-impact AI systems are obligated to implement and maintain measures to identify, assess, and mitigate risks of harm or biased outcomes. They are also required to regularly evaluate the effectiveness of these measures, transparently describe their AI systems on a public platform, and promptly inform the Minister of Innovation, Science, and Industry if the AI system causes or is likely to cause substantial harm.

2. Regulatory Framework Analysis

  • Conventional Law Regulation
    • Status: Applied
    • Details: Canada’s legal landscape includes comprehensive frameworks that extend to various applications of Artificial Intelligence. Key among these is the Personal Information Protection and Electronic Documents Act, which sets standards for how businesses handle personal information. Additionally, several existing laws also apply to AI usage, ensuring consumer, health, safety, and rights protection. These include:
      • The Canada Consumer Product Safety Act
      • The Food and Drugs Act
      • The Motor Vehicle Safety Act
      • The Bank Act
      • The Canadian Human Rights Act 
      • The Criminal Code
  • Sectoral and Subject-Specific AI Laws
    • Presence: Yes  
    • Sectors: 
      • Generative AI: In September 2023, Canada announced a Voluntary Code of Conduct for the Responsible Development and Management of Advanced Generative AI systems. This interim code offers Canadian firms a set of common standards to demonstrate responsible AI usage prior to the enactment of formal regulations. Developed from consultation feedback, it aims to bolster public confidence in these systems. The code outlines key measures for accountability, safety, fairness, transparency, human oversight, and system robustness. Companies commit to these principles, ensuring their AI systems are developed and managed ethically and securely. 
      • Health: Health Canada, in collaboration with the U.S. FDA and the UK’s MHRA, has established a set of 10 guiding principles for the application of Good Machine Learning Practice (GMLP) in the development of medical devices. These principles aim to ensure the safe, effective, and high-quality development of medical devices using artificial intelligence and machine learning (AI/ML). They cover various aspects, including leveraging multi-disciplinary expertise, implementing good software engineering and security practices, ensuring representative clinical study participants and datasets, and maintaining independent training and test datasets. They also focus on model design tailored to available data, performance of the human-AI team, and thorough testing under clinically relevant conditions. Moreover, they emphasise clear user information provision and performance monitoring of deployed models. 
      • Finance: The Office of the Superintendent of Financial Institutions (OSFI) in Canada is in the process of revising its model risk management guidelines, as outlined in Guideline E-23. This update is specifically focused on addressing the challenges and risks posed by new technologies, including Artificial Intelligence. The goal is to ensure that federally regulated financial institutions and pension plans effectively manage the risks associated with the use of advanced and increasingly complex technologies in their operational models.

3. Regulatory Approach

  • Risk-Based vs. Prohibition
    • Approach: Risk-based 
    • Details: The AIDA mandates stringent measures for managing risks in high-impact AI systems. These measures encompass principles of human oversight, monitoring, transparency, fairness, equity, safety, accountability, validity, and robustness. Businesses are required to implement governance mechanisms for compliance and are accountable for internal policies ensuring adherence to AIDA. These requirements are tailored to the risks of specific activities in the AI system’s life cycle, with regulations being developed through consultation and based on international standards to balance innovation and risk management.
  • Horizontal vs. Vertical Regulation
    • Type: Horizontal 
    • Details: N/A

4. Application in Key Sectors

  • Public Procurement and Usage by Public Authorities 
    • Status:  N/A
    • Details: N/A
  • Public Administration/Law Enforcement
    • Usage: Yes (Directive on Automated Decision-Making)
    • Details: The Government of Canada’s directive focuses on employing artificial intelligence in administrative decisions to enhance service delivery while adhering to principles like transparency, accountability, and procedural fairness. The directive aims to ensure that automated decision systems are used responsibly, minimising risks and ensuring decisions comply with Canadian law. It expects results such as data-driven and fair decisions, reduced negative impacts of algorithms, and public availability of data on AI use in federal institutions. This directive is applicable only to AI systems in actual operation, excluding those in testing phases.

5. Compliance and Enforcement

  • Licensing Requirements
    • Requirement: N/A
    • Details: N/A
  • Monitoring and Reporting
    • Mechanisms: N/A

6. International Alignment

  • Multilateral Forums: Canada’s active participation in key multilateral forums related to AI underscores its commitment to shaping and aligning with international standards and practices in AI governance. By engaging with the Global Partnership on AI (GPAI), G20, G7, Organisation for Economic Co-operation and Development (OECD), United Nations (UN), World Economic Forum (WEF), International Panel on AI (IPAI), IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, and the North Atlantic Treaty Organization (NATO), Canada positions itself at the forefront of global AI discussions.


USA

The U.S. is shaping its AI regulatory environment with a 2023 Executive Order from the Biden-Harris Administration, focusing on responsible AI use in areas like safety and privacy. This initiative, building on existing efforts like the Blueprint for an AI Bill of Rights, signifies a comprehensive approach to AI governance. Key legislative measures include the AI Training Act (2021) for workforce training in executive agencies and the National Artificial Intelligence Initiative Act (2020) to bolster U.S. leadership in AI R&D. Sector-specific actions, including USPTO’s 2024 guidelines for AI inventions and the FCC’s crackdown on AI-generated voice scams, along with the FTC’s focus on fair AI practices and the NIST’s risk management framework, highlight a targeted and comprehensive approach to AI governance, emphasising innovation alongside ethical use.

1. Legislative Development

  • Nodal AI Law Status: No (Federal)
    • Status: Executive Order (EO) released
    • Details: In October 2023, the U.S. government, under the Biden-Harris Administration, issued an Executive Order to enhance AI regulation and management, building on previous initiatives such as voluntary company commitments and the Blueprint for an AI Bill of Rights. The EO covers a broad spectrum of AI-related areas: AI safety and security standards; privacy protection; equity and civil rights; consumer, patient, and student protection; workforce support; innovation and competition promotion; international leadership; and effective government use of AI. 

2. Regulatory Framework Analysis

  • Conventional Law Regulation
    • Status: Yes
    • Details: 
      • AI Training Act (2021): This Act requires the Office of Management and Budget to establish or otherwise provide an AI training program for the acquisition workforce of executive agencies (e.g., those responsible for program management or logistics), with exceptions.
      • National Artificial Intelligence Initiative Act (2020): This Act establishes the National Artificial Intelligence Initiative (NAII) to ensure continued U.S. leadership in AI R&D.
  • Sectoral and Subject-Specific AI Laws
    • Presence: Yes
    • Sectors: 
      • Patents: On February 13, 2024, the United States Patent and Trademark Office (USPTO) issued guidelines for patent inventorship issues regarding AI-assisted inventions. These guidelines clarify that while AI-assisted inventions are not automatically unpatentable, a human must significantly contribute to every aspect of the invention for it to be patentable. The USPTO emphasises the role of human ingenuity in the patent system, even as AI plays an increasing role in innovation. 
      • AI Generated Voices: On February 8, 2024, the Federal Communications Commission (FCC) unanimously passed a Declaratory Ruling under the Telephone Consumer Protection Act (TCPA), classifying calls made using AI-generated voices as “artificial”. This ruling, effective immediately, outlaws the use of voice cloning technology in widespread robocall scams. This development empowers State Attorneys General across the U.S. with additional means to prosecute the entities responsible for these malicious robocalls. 
      • Trade: The Federal Trade Commission (FTC) is actively addressing the challenges posed by artificial intelligence (AI) in various sectors, emphasizing the importance of fairness and transparency in AI-driven decision-making. The FTC enforces several laws that are crucial for AI developers and users:
        • FTC Act: This act prohibits unfair or deceptive practices, including the sale or use of racially biased algorithms.
        • Fair Credit Reporting Act (FCRA): Relevant when algorithms are used to deny people employment, housing, credit, insurance, or other benefits.
        • Equal Credit Opportunity Act (ECOA): Makes it illegal to use biased algorithms that result in credit discrimination based on race, color, religion, national origin, sex, marital status, age, or public assistance receipt.
      • Commerce: The NIST AI Risk Management Framework, developed by the National Institute of Standards and Technology (NIST), is a comprehensive guide to managing the risks associated with the design, development, use, and evaluation of artificial intelligence systems. The framework is designed to be adaptable and applicable across various sectors and AI technologies. Key components include AI lifecycle phases, risk management, ethical considerations, governance and culture, collaboration and communication, and continuous improvement. It serves as a valuable tool for organizations to responsibly manage AI risks and align their AI practices with ethical and legal standards.

3. Regulatory Approach

  • Risk-Based vs. Prohibition
    • Approach: N/A
    • Details: N/A
  • Horizontal vs. Vertical Regulation
    • Type: Mix
    • Details: N/A

4. Application in Key Sectors

  • Public Procurement and Usage by Public Authorities 
    • Status: N/A
    • Details: N/A
  • Public Administration/Law Enforcement
    • Usage: N/A
    • Details: N/A

5. Compliance and Enforcement

  • Licensing Requirements
    • Requirement: N/A
    • Details: N/A
  • Monitoring and Reporting
    • Mechanisms: N/A

6. International Alignment

  • Multilateral Forums: The United States plays a significant role in shaping global AI policy and standards through its active participation in various multilateral forums and initiatives. Its involvement in the Global Partnership on AI (GPAI), G20, and the OECD facilitates international collaboration on ethical AI development, governance, and innovation. The U.S. also engages in security-focused groups like the Five Eyes intelligence alliance and the US-EU Intelligence Collaboration, where AI’s implications for national and international security are key discussion points. Furthermore, its participation in the United Nations Convention on Certain Conventional Weapons (UN CCW) addresses the challenges and opportunities of AI in the context of global arms control. Beyond these, the U.S. contributes to discussions in other forums like the WEF and the ITU, which focus on the societal and technical aspects of AI, respectively.


United Kingdom

The United Kingdom does not have a specific nodal law for AI regulation. It follows a pro-innovation, context-based approach, emphasising the need to understand AI risks before implementing regulations. Existing laws such as the UK General Data Protection Regulation, the Data Protection Act 2018, and the Equality Act 2010 apply to AI, with sector-specific explorations in competition/antitrust, data protection, anti-discrimination, IP rights, and safety. The government adopts a risk-based approach, promoting cross-sectoral principles and context-specific frameworks for AI governance. Key sectors, including public procurement, public administration, and law enforcement, are encouraged to adopt AI with guidance from various regulatory bodies.

1. Legislative Development

  • Nodal AI Law Status: No 
    • Status: N/A 
    • Details: The United Kingdom does not have a nodal law to regulate AI. A Pro-innovation Approach to AI Regulation: Government Response (February 2024) favours a pro-innovation, context-based framework. The document recognises that some mandatory measures will be required across all jurisdictions to address potential AI-related harms, but states that regulation requires a clear understanding of AI risks and that the UK will proceed with rules when it deems appropriate. 

2. Regulatory Framework Analysis

  • Internet Laws 
    • Status: Applied 
    • Details: The UK General Data Protection Regulation, the Data Protection Act 2018, the Equality Act 2010, and laws pertaining to competition, consumer protection, law enforcement, and other areas apply in relation to AI. 
  • Sectoral and Subject-Specific AI Laws
    • Presence: Yes/Exploratory 
    • Sectors: 
      • Competition/Antitrust: The Competition and Markets Authority (CMA) published a review of foundation models to understand the opportunities and risks for competition and consumer protection.
      • Data Protection: The Information Commissioner’s Office (ICO) updated its guidance on how data protection laws apply to AI systems to include fairness. Further, the UK’s data protection framework, which is being reformed through the Data Protection and Digital Information Bill (DPDI), will complement the pro-innovation, proportionate, and context-based approach to regulating AI.
      • Anti-Discrimination: The ICO has updated its guidance on how the UK’s data protection laws apply to AI systems that process personal data to include fairness, and has continued to hold organisations to account, for example through the issuing of enforcement notices.
      • IPR: The Department for Culture, Media and Sport (DCMS) is working closely with key stakeholders, including publishers, the music industry, and other creative businesses, to understand the impact of AI on these sectors, with a view to mitigating risks and capitalising on opportunities. The Intellectual Property Office (IPO) has also convened a working group on the interaction between copyright and AI. 
      • Miscellaneous: On AI safety, regulators such as the Office of Gas and Electricity Markets (Ofgem) and the Civil Aviation Authority (CAA) are developing AI strategies. The Medicines and Healthcare products Regulatory Agency (MHRA) launched the Software and AI as a Medical Device Change Programme in 2021, setting out requirements for software and AI used in medical devices. 

3. Regulatory Approach

  • Risk-Based vs. Prohibition
    • Approach: Risk-based 
    • Details: The Pro-innovation Approach to AI Regulation: Government Response outlines a risk-based approach to AI governance. 
  • Horizontal vs. Vertical Regulation
    • Type: Mix 
    • Details: The UK’s pro-innovation document endorses cross-sectoral principles (outlined in the AI Regulation White Paper, March 2023) applied through a context-specific framework. The Department for Science, Innovation and Technology (DSIT) released an Introduction to AI Assurance to provide an accessible introduction to both assurance mechanisms and global technical standards, helping industry and regulators better understand how to build and deploy responsible AI systems. The document will be revised periodically. 

4. Application in Key Sectors

  • Public Procurement and Usage by Public Authorities 
    • Status: Yes 
    • Details: Cabinet Office (CO) is leading on establishing the necessary underpinnings to drive AI adoption across the public sector, by improving digital infrastructure and access to data sets, and developing centralised standards. The Central Digital and Data Office (CDDO) has published guidance on the procurement and use of generative AI for the UK government. Other relevant bodies are: National Cyber Security Centre (NCSC) and Department for Science, Innovation and Technology (DSIT). 
  •  Public Administration/ Law Enforcement
    • Usage: Yes
    • Regulations: The ICO has updated its guidance on how the UK’s data protection laws apply to AI systems that process personal data to include fairness and has continued to hold organisations to account, for example through the issuing of enforcement notices. The Office of the Police Chief Scientific Adviser published a Covenant for Using AI in Policing, which has been endorsed by the National Police Chiefs’ Council and should be given due regard by all developers and users of the technology in the sector. The CDDO has published the Algorithmic Transparency Recording Standard to help public sector bodies provide clear information about the algorithmic tools they use to support decisions. 

5. Compliance and Enforcement

  • Licensing Requirements
    • Requirement: N/A
    • Details: N/A 
  • Monitoring and Reporting
    • Mechanisms: N/A 

6. International Alignment

  • Multilateral Forums: The United Kingdom actively participates in international AI initiatives. As a member of the Global Partnership on AI (GPAI), the UK collaborates with other nations to guide responsible AI practices and policies. In forums like the G7, G20, OECD, United Nations, Five Eyes Alliance, and Council of Europe, the UK engages in high-level dialogues and policy-making processes, aiming to promote AI innovation that aligns with democratic values, human rights, and public good. 

 
