In a pivotal move for AI governance, the European AI Office has released the first draft of its General-Purpose AI Code of Practice (Draft Code).
The Draft Code represents a cornerstone of the EU’s strategy to shape a robust regulatory framework for artificial intelligence, guiding providers toward compliance, accountability, and societal benefit. It will undergo review with nearly 1,000 stakeholders and evolve through several rounds of consultation, with the final version set for publication in May 2025.
Codes of Practice and the AI Act
The AI Act’s codes of practice, outlined in Article 56, are not legally binding. Adherence, however, provides a ‘presumption of conformity’ with the Act’s obligations for providers of general-purpose AI models until harmonised standards are issued: while compliance is voluntary, following a code can serve as evidence of meeting regulatory requirements during the interim period before official standards are adopted.
Article 56 enables the AI Office to facilitate EU-level codes of practice covering these obligations, developed collaboratively with relevant stakeholders. The codes must be detailed, regularly monitored, and adaptable to technological change, ultimately ensuring a high standard of compliance across the EU.
Objectives of the Code
The Draft Code delineates four foundational objectives, each targeting a critical aspect of AI regulation in alignment with the broader mandates of the EU AI Act. First, the Draft Code provides clear compliance pathways for providers by laying out explicit guidelines on how to document and validate adherence to the AI Act. This includes facilitating transparency and visibility for the European AI Office, enabling a thorough assessment of compliance, especially for providers of advanced general-purpose AI models.
Second, the Draft Code aims to enable understanding across the AI value chain. Emphasising the importance of transparency, it establishes a common understanding of model functionalities and limitations, ensuring downstream developers gain the knowledge required for responsible integration and adherence to regulatory obligations. Third, upholding copyright and related rights is a significant focus: a substantial portion of the Draft Code is devoted to compliance with Union copyright law, including measures ensuring that models respect the rights of content creators while balancing innovation in AI with intellectual-property protection.
Lastly, the Draft Code mandates that providers continuously monitor and assess models with systemic risks. A structured framework ensures these risks are addressed at every stage, from development through deployment.
Responsibilities for Providers
The Draft Code outlines specific responsibilities for providers, recognising the unique role that general-purpose AI models play in underpinning downstream AI systems. It introduces robust rules on documentation and transparency, acceptable use policies, intellectual property compliance, and risk proportionality. Providers are required to maintain technical documentation covering training data, model architecture, testing procedures, and results. This documentation will be available to the AI Office and, selectively, to the public, enabling both regulatory oversight and societal transparency. Acceptable use policies must also be established, defining permitted and prohibited uses of the models to prevent misuse.
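For illustration only, the sketch below captures these documentation duties as a simple Python data structure, with a full view for the AI Office and a reduced view for selective public disclosure. The field names and the split between the two views are hypothetical; the Draft Code does not prescribe a schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelDocumentation:
    """Hypothetical record of the documentation the Draft Code describes.

    Field names are illustrative, not taken from the Code's templates.
    """
    model_name: str
    training_data_summary: str     # provenance and curation of training data
    architecture_description: str  # high-level description of the model architecture
    testing_procedures: list[str] = field(default_factory=list)
    test_results: dict[str, float] = field(default_factory=dict)
    permitted_uses: list[str] = field(default_factory=list)
    prohibited_uses: list[str] = field(default_factory=list)

    def regulator_view(self) -> dict:
        """Full record, as might be shared with the AI Office."""
        return self.__dict__.copy()

    def public_view(self) -> dict:
        """Reduced record for selective public disclosure."""
        return {
            "model_name": self.model_name,
            "permitted_uses": self.permitted_uses,
            "prohibited_uses": self.prohibited_uses,
        }
```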
Intellectual property compliance is also addressed through specific measures aligned with the text and data mining exception under Directive (EU) 2019/790, which requires due diligence to ensure that rights reservations are respected. To avoid infringing protected content during model training, providers must honour machine-readable rights-reservation protocols such as robots.txt.
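As a concrete illustration of honouring such machine-readable opt-outs, the sketch below uses Python’s standard-library robots.txt parser to check whether a training-data crawler may fetch a page. The crawler name is a placeholder, and a real text-and-data-mining compliance check would also need to handle other reservation signals (for example, metadata-based reservations).

```python
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def may_fetch_for_training(url: str, user_agent: str = "ExampleTrainingBot") -> bool:
    """Check a site's robots.txt before collecting a page for model training.

    Minimal sketch: robots.txt is only one machine-readable signal, and
    "ExampleTrainingBot" is a placeholder, not a real crawler name.
    """
    parsed = urlparse(url)
    robots_url = f"{parsed.scheme}://{parsed.netloc}/robots.txt"

    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetch and parse the site's robots.txt
    return parser.can_fetch(user_agent, url)

# Pages whose owners have reserved rights via robots.txt are excluded:
# if not may_fetch_for_training("https://example.com/article"):
#     ...  # skip this document when building the training corpus
```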
The Code also mandates that obligations be scaled to the size of each provider, with small and medium enterprises subject to proportionate compliance requirements, balancing accountability with innovation across the ecosystem.
Systemic Risk Taxonomy
A key element of the Draft Code is the systemic risk taxonomy, which categorises potential threats posed by advanced AI. This taxonomy establishes a structured basis for risk assessment and serves as a tool for providers of high-risk AI models to identify, assess, and mitigate various types of systemic risk. The Draft Code defines several risk categories, including cybersecurity threats, manipulation and misinformation, environmental and societal impacts, and loss of control.
Cybersecurity risks focus on models that could be exploited for cyber-offensive purposes, such as hacking or vulnerability exploitation, necessitating rigorous security protocols. Manipulation and misinformation concerns address models with the potential to spread large-scale disinformation, especially in areas affecting democratic integrity or public health. Risks to societal and environmental well-being are also considered, with measures targeting models that could harm social stability or environmental sustainability. AI models that risk autonomous behaviour beyond intended applications are categorised under loss of control and are subject to stringent oversight to mitigate potential threats to human safety and autonomy.
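To make the taxonomy concrete, here is a minimal sketch representing the Draft Code’s risk categories as a Python enumeration. The identifiers and descriptions paraphrase the text above and are not the Code’s official labels.

```python
from enum import Enum

class SystemicRisk(Enum):
    """Categories paraphrased from the Draft Code's systemic risk taxonomy."""
    CYBER_OFFENCE = "cyber-offensive use, e.g. hacking or vulnerability exploitation"
    MANIPULATION = "large-scale disinformation affecting democratic integrity or public health"
    SOCIETAL_ENVIRONMENTAL = "harm to social stability or environmental sustainability"
    LOSS_OF_CONTROL = "autonomous behaviour beyond intended applications"

# A provider's risk register might tag each identified hazard with a category:
risk_register = {
    "model aids exploit development": SystemicRisk.CYBER_OFFENCE,
    "synthetic propaganda at scale": SystemicRisk.MANIPULATION,
}
```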
The Code mandates that this taxonomy be continually updated to capture emerging risks, ensuring adaptability in risk management as AI capabilities evolve.
Obligations for Providers of High-Risk Models
For providers managing high-risk AI models, the Draft Code imposes elevated obligations to prevent systemic harms. A Safety and Security Framework mandates continuous risk assessment and mitigation throughout a model’s lifecycle, including protocols for systemic risk assessment, safety and security reports, governance structures for accountability, and incident reporting measures.
Rigorous systemic risk assessments are required, encompassing probability and severity analysis, adversarial testing, scenario planning, and simulations, tailored to each model’s risk profile. Safety and Security Reports, compiled at least biannually or whenever significant changes occur, document each model’s systemic risks, mitigation strategies, and outcomes of safety evaluations. Governance structures extend accountability to executive and board levels, ensuring that AI risk management aligns with broader organisational oversight.
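The Draft Code does not prescribe a scoring formula, but probability-and-severity analysis is commonly operationalised as a risk matrix. The sketch below shows one hypothetical way a provider might rank identified risks so that assessment effort stays proportionate to each model’s risk profile; the scales and scores are illustrative, not drawn from the Code.

```python
from dataclasses import dataclass

# Hypothetical ordinal scales; the Draft Code does not define these values.
PROBABILITY = {"rare": 1, "possible": 2, "likely": 3, "frequent": 4}
SEVERITY = {"minor": 1, "moderate": 2, "major": 3, "critical": 4}

@dataclass
class RiskItem:
    name: str
    probability: str  # key into PROBABILITY
    severity: str     # key into SEVERITY

    @property
    def score(self) -> int:
        # Classic risk-matrix product: higher scores demand stronger mitigations.
        return PROBABILITY[self.probability] * SEVERITY[self.severity]

risks = [
    RiskItem("model-assisted vulnerability exploitation", "possible", "critical"),
    RiskItem("large-scale disinformation", "likely", "major"),
]

# Rank risks from highest to lowest combined score.
for item in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{item.name}: score {item.score}")
```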
The Draft Code mandates secure channels through which whistleblowers can report risks or violations without retaliation, while serious-incident reporting mechanisms alert the AI Office to such incidents and set out clear post-incident mitigation protocols.
Technical and Governance-Based Safeguards
Providers of systemic-risk AI models are required to implement detailed technical and governance-based safeguards, scaled to the severity of identified risks. Specific measures include safety mitigations matched to each model’s risk tier, such as containment measures to prevent misuse, and security controls that protect model data, securing model weights and other proprietary assets from unauthorised access or tampering.
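Security controls of this kind can take many forms. As one minimal, hypothetical example, a provider might verify the integrity of released model weights against a checksum recorded at release time, so that tampering can be detected before the weights are loaded; the function names below are illustrative.

```python
import hashlib
from pathlib import Path

def sha256_digest(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large weight files need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_weights(path: Path, expected_digest: str) -> bool:
    """Compare against a digest recorded at release time and stored out of band."""
    return sha256_digest(path) == expected_digest
```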
Governance-level accountability ensures that systemic risk ownership extends to executive and board oversight, with dedicated resources allocated to address risk management.
Lifecycle-Based Evidence Collection and Risk Assessment
The Code mandates continuous evidence collection and evaluation, covering all systemic risks associated with general-purpose AI models. Providers must undertake proactive assessments, documenting capabilities, limitations, and potential harms through a range of methods including literature reviews, competitive analyses, and independent evaluations. This evidence collection is integrated with a lifecycle-based risk assessment framework that requires evaluations at key stages such as pre-training, mid-training, and post-deployment.
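One way to picture this lifecycle requirement is as a set of evaluation hooks tied to named stages. The stages below mirror the Code’s pre-training, mid-training, and post-deployment checkpoints, while the registration mechanism and the example checks are purely illustrative.

```python
from enum import Enum
from typing import Callable

class Stage(Enum):
    PRE_TRAINING = "pre-training"
    MID_TRAINING = "mid-training"
    POST_DEPLOYMENT = "post-deployment"

# Registry of evidence-collection steps per lifecycle stage (illustrative only).
evaluations: dict[Stage, list[Callable[[], dict]]] = {s: [] for s in Stage}

def evaluate_at(stage: Stage):
    """Register an evidence-collection step for a given lifecycle stage."""
    def register(fn: Callable[[], dict]) -> Callable[[], dict]:
        evaluations[stage].append(fn)
        return fn
    return register

@evaluate_at(Stage.PRE_TRAINING)
def review_training_data() -> dict:
    return {"check": "training-data provenance reviewed", "passed": True}

@evaluate_at(Stage.POST_DEPLOYMENT)
def monitor_incidents() -> dict:
    return {"check": "incident reports triaged", "passed": True}

def run_stage(stage: Stage) -> list[dict]:
    """Run all evaluations registered at a stage and collect the evidence."""
    return [fn() for fn in evaluations[stage]]

print(run_stage(Stage.PRE_TRAINING))
```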
This lifecycle approach reflects the Code’s commitment to dynamic risk management, recognising that AI risks evolve as models are refined and deployed.
Public Transparency and Documentation Standards
Public transparency is a core component of the Draft Code, which mandates that providers publish summaries of their safety documentation, including the Safety and Security Frameworks and Safety and Security Reports, making these materials accessible to relevant stakeholders.
Compliance with copyright obligations must also be publicly transparent, including information on text and data mining compliance and any model interactions with protected content. To mitigate compliance burdens, especially for SMEs, the Draft Code advocates for standardised templates for documentation, ensuring the process is streamlined without sacrificing rigour.
Conclusion
The release of the Draft Code signals the EU’s determination to lead in global AI regulation. With measures encompassing transparency, copyright compliance, risk management, and systemic accountability, the Draft Code sets a high standard for responsible AI development. As the consultation process unfolds, this document will undoubtedly evolve, but its foundational principles underscore the EU’s commitment to safe, ethical, and transparent AI deployment.
The next round of consultations, scheduled for later this month, will gather feedback from stakeholders across sectors.