
A Practical Guide to the Extraterritorial Reach of the AI Act

The European Union’s Artificial Intelligence Act (AI Act) is poised to become a landmark piece of legislation in AI regulation, with its extraterritorial scope being one of its most significant and far-reaching aspects.

This article examines the extraterritorial provisions of the AI Act and their implications for global AI governance, focusing on the obligations of providers of high-risk AI systems and general-purpose AI (GPAI) models and the crucial role of authorised representatives.

Scope and Extraterritorial Reach

The scope of application of the AI Act is set out in Article 2, encompassing actors both within and outside the EU. Under Article 2(1)(a), the AI Act applies to providers placing on the market or putting into service AI systems, or placing on the market GPAI models, in the EU, irrespective of whether those providers are established or located within the EU or in a third country.

The extraterritorial nature of the AI Act is particularly evident in its application to providers and deployers of AI systems established in third countries. Article 2(1)(c) specifically addresses this extraterritorial reach by including providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the system is used in the EU. This output-based jurisdiction is a key aspect of the AI Act’s extraterritorial scope.

Role of Authorised Representatives for Providers Established Outside the EU

A key mechanism for enforcing the extraterritorial reach of the AI Act is the concept of the “authorised representative.” Article 3(5) of the AI Act defines an ‘authorised representative’ as a natural or legal person located or established in the EU who has received and accepted a written mandate from a provider of an AI system or a GPAI model established in a third country to perform and carry out on that provider’s behalf the obligations and procedures established by the AI Act.

The AI Act distinguishes between authorised representatives for high-risk AI systems (Article 22) and those for GPAI models (Article 54). However, the underlying principle remains the same: ensuring accountability within the EU’s jurisdiction.

High-Risk AI Systems

Article 22 of the AI Act mandates that providers of high-risk AI systems established in third countries must appoint, by written mandate, an authorised representative established in the EU before making their systems available on the EU market. The authorised representative’s responsibilities include:

  • verifying the EU declaration of conformity and technical documentation;
  • keeping these documents and other relevant information at the disposal of competent authorities for ten years after the high-risk AI system has been placed on the market or put into service;
  • providing information and documentation to competent authorities upon request;
  • cooperating with authorities on risk mitigation actions; and
  • complying with registration obligations where applicable.

Importantly, Article 22(4) of the AI Act requires the authorised representative to terminate the mandate if it considers or has reason to consider that the provider is acting contrary to its obligations under the AI Act, and to immediately inform the relevant market surveillance authority and, where applicable, the relevant notified body of that termination and the reasons for it.

General-Purpose AI Models

Similarly, Article 54 of the AI Act requires providers of GPAI models established in third countries to appoint an authorised representative in the EU. Their responsibilities include verifying technical documentation, providing information to demonstrate compliance, and cooperating with authorities on actions related to the GPAI model.

These provisions ensure that there is always an entity within the EU’s jurisdiction that can be held accountable for compliance with the AI Act, regardless of where the AI system or model originates.

Obligations for Providers

Article 53 of the AI Act sets out specific obligations for providers of GPAI models, which apply regardless of the provider’s location. These include drawing up and maintaining technical documentation, providing information to AI system providers who intend to integrate the model into their systems, putting in place a policy to comply with EU copyright law, and publishing a sufficiently detailed summary of the content used to train the model.

Context and Justification

Recital 22 of the AI Act provides crucial context for understanding the extent of the AI Act’s extraterritorial application, stating that certain AI systems should fall within the scope of the AI Act even when they are not placed on the market, put into service, or used in the EU. This includes scenarios where an EU-based operator contracts services to a third-country operator involving an AI system that would qualify as high-risk, and the output produced by that system is used in the EU.

Recital 106 of the AI Act further extends the extraterritorial reach in the context of copyright compliance, stipulating that providers of GPAI models must comply with EU copyright law, regardless of where the training of these models takes place. This ensures a level playing field among providers, preventing competitive advantages based on less onerous legal obligations applying outside the EU.

Implications for Global AI Governance

The extraterritorial scope of the AI Act has significant implications for global AI governance. The AI Act’s broad reach, along with the requirements for authorised representatives and specific obligations for providers, may encourage non-EU companies and countries to align their AI development and deployment practices with EU standards to maintain access to the EU market. This could potentially lead to the AI Act becoming a de facto global standard for AI regulation.

However, these provisions may also create challenges for companies operating across multiple jurisdictions. Non-EU entities will need to carefully assess their AI systems, ensure compliance with the AI Act, establish a presence in the EU through an authorised representative, and meet the various obligations if they intend to serve EU customers or if the output their systems produce is used in the EU, even if the systems themselves are not directly deployed within the EU.

Conclusion

The extraterritorial scope of the AI Act represents a bold and comprehensive step in regulating AI on a global scale. By extending its reach beyond EU borders, requiring representation for third-country providers, and imposing specific obligations on providers of both high-risk AI systems and GPAI models, the AI Act aims to ensure comprehensive protection for EU citizens and maintain a level playing field for AI providers serving the EU market.

As the AI Act moves towards implementation, its extraterritorial provisions will likely spark further debate and potentially influence the development of AI governance frameworks worldwide. Stakeholders across the global AI ecosystem must closely monitor these developments and adapt their strategies to navigate this evolving regulatory landscape.

For further guidance and support on AI compliance, please contact Barry Scannell, Leo Moore, Rachel Hayes or any member of the William Fry Technology Department.