
EU AI Code of Practice Draft: What Boards Should Know

The EU AI Office has released a draft General-Purpose AI Code of Practice. Boards of General-Purpose AI providers should be aware of the implications for their organizations.

tl;dr

The EU AI Office published a draft General-Purpose AI Code of Practice. Boards of General-Purpose AI providers should focus on:

  • systemic risk oversight
  • strategic responsibility
  • documentation requirements
  • incident response
  • risk assessment
  • whistleblower protection
  • external communication

The EU AI Office, a regulatory body established by the EU AI Act, has released a draft General-Purpose AI Code of Practice. The AI Act empowered the AI Office to create one or more codes of practice to provide guidance on how AI providers (and downstream providers) can comply with the requirements of the AI Act. The First Draft General-Purpose AI Code of Practice is open for public consultation and feedback until November 28, 2024, and the final version is required to be ready by May 2, 2025.

While the Code of Practice is not legally binding, it is expected to be influential in shaping the AI landscape in the EU. Boards of General-Purpose AI providers should be aware of the implications of the Code of Practice for their organizations. The Code of Practice covers a wide range of topics, including:

  • Transparency
  • Rules related to copyright
  • Taxonomy of systemic risks
  • Safety and security framework
  • Risk assessment
  • Technical risk mitigation
  • Governance risk mitigation

Who is a General-Purpose AI Provider?

The term “general-purpose AI model” is defined in the EU AI Act as an “AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market.”1

You’re probably thinking “thanks for answering my question with a large block of regulatory text. Very helpful!” So, in plain English, a general-purpose AI provider is a company that provides AI models that can be used in a wide range of applications, as opposed to a company that sells AI models for a specific purpose, such as an image classification model for manufacturing.

Note that I said a company that provides AI, not sells it - this is intentional because, unless an exemption applies, the AI Act covers AI models that are provided for free - such as open-source models - as well as those that are sold.

Where Should Boards Focus?

With the draft Code of Practice clocking in at 36 pages, where should Boards focus their attention?

  • Systemic risk oversight
  • Strategic responsibility
  • Documentation requirements
  • Incident response
  • Risk assessment
  • Whistleblower protection
  • External communication

Let’s look at each of these elements in more detail below.

Systemic Risk Oversight

Sub-Measure 15.2 of the draft Code of Practice explicitly calls on boards to establish oversight of systemic risks from general-purpose AI models, including through the creation of dedicated risk committees. This particular sub-measure applies only to general-purpose AI models with systemic risk, but it's a good idea for boards to consider how they will address systemic risks across all AI models, whether or not doing so is mandated by regulation.

Board Consideration:
  • Expertise: Does the current board have sufficient expertise to meet this requirement? If not, how will we address this?

Strategic Responsibility

In a similar vein, the Code establishes board-level responsibility for allocating adequate resources to oversee systemic risks from AI models within the organization. This includes assigning oversight responsibility at the executive level and ensuring that those executives have sufficient budgets to carry out these responsibilities. Executives, in turn, are responsible for shaping their teams so that expertise aligns with the risks the organization faces.

Board Consideration:
  • Kill two birds with one stone: Can we address any expertise gaps in a way that also aligns with our strategic goals and development roadmap?

Documentation Requirements

Under Measure 21, organizations are required to document their adherence to the Code of Practice and all applicable provisions of the AI Act relating to general-purpose AI models with systemic risk. Boards should oversee this documentation and ensure that it is accurate and up-to-date, as the organization will be required to share it with the AI Office upon request.

The documentation should include:

  • Technical documentation of the AI model, including general descriptions, detailed design specifications, and training data (per Annex XI of the AI Act). Additional technical documentation is required for AI models with systemic risk.
  • Criteria relevant to the classification of the model as a general-purpose AI model with systemic risk (per Annex XIII of the AI Act)
  • Safety and Security Framework documentation
  • Safety and Security Reports
  • Evidence collected during risk assessments

Board Consideration:
  • Streamlining: Strive to comply with documentation requirements in a way that reduces redundancy and increases efficiency. Where possible, leverage existing documentation and processes, supplementing them as needed.

Incident Response

Hopefully boards have already developed an incident response plan for AI models; in any case, the Code of Practice requires organizations to have a documented incident response plan that enables them to identify and track serious incidents arising from general-purpose AI models with systemic risk. Organizations should document and report incidents to the AI Office (and to national competent authorities, as needed), and the board should ensure that the organization has the necessary resources to carry out this requirement.

What, pray tell, is a “serious incident”? Great question! I have no answer for you. The current draft leaves this as an open question, but the drafters are considering adopting the AI Act’s definition in Article 3(49): “‘serious incident’ means an incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:

(a) the death of a person, or serious harm to a person’s health;

(b) a serious and irreversible disruption of the management or operation of critical infrastructure;

(c) the infringement of obligations under Union law intended to protect fundamental rights;

(d) serious harm to property or the environment”

Risk Assessment

Risk assessment is a key focus of the Code of Practice, with four Measures relating specifically to how providers of general-purpose AI models with systemic risk should assess such risks. The good news is that organizations should already be engaging in these types of risk assessments…right?

Risk identification and analysis of AI models are a core part of the overall AI risk management process. For general-purpose AI models with systemic risk, the Code of Practice requires that organizations engage in a continuous risk assessment process, including before training, during training, during deployment, and post-deployment.

Measure 16 requires periodic assessment of the organization’s adherence to, as well as the sufficiency of, its Safety and Security Framework. This is a key area where the board should be involved, as the Safety and Security Framework details the organization’s risk management policies in the context of general-purpose AI models with systemic risk.

Board Consideration:
  • AI Lifecycle: Measure 11 requires continuous assessment through the full development and deployment lifecycle. Does our risk assessment process cover the entire AI lifecycle?

Whistleblower Protection

Boards play an important role in establishing and overseeing whistleblower policies and procedures. The EU Whistleblower Directive2 applies to AI Act infringements, and under the Code of Practice, organizations are required to implement whistleblowing channels and to protect those who use them.

Board Consideration:
  • Information on Whistleblowing: Are our organization’s employees aware of how they can submit whistleblower complaints?

External Communication

The Code of Practice requires organizations to provide clear and relevant information to the AI Office before training runs, ahead of decisions relating to security and safety, before placing a new model with systemic risk on the market, and whenever there is reason to believe that a substantial systemic risk could materialize. Boards should ensure that the organization has appropriate processes in place to provide such documentation to the AI Office in a compliant manner.

The Sub-Measures relating to these external communications interact with other measures that are relevant to the board, such as the documentation requirements related to internal development of Safety and Security Frameworks and Safety and Security Reports.

Next Steps for Boards

Boards should keep an eye on the development of the Code of Practice over the next few months. While it’s possible the Code will change significantly following the public comment period, many of the practices outlined are in line with best practices for AI governance (as I’ve discussed previously, AI requirements come from multiple sources, including regulatory requirements). If organizations have not already considered these matters, now would be a good time to do so. If your board does not have sufficient expertise to address AI governance, consider bringing in an additional board member or engaging in board-level training specific to AI risks.
