Risk Management for AI: a Board Director's Guide
Board directors must lead AI risk management through:
- Setting clear risk appetite and aligning AI with strategy
- Assessing specific risks like data breaches, hallucinations, and unauthorized data access
- Choosing appropriate treatments
- Ensuring comprehensive documentation and incident response plans
- Maintaining stakeholder communication and employee training
- Continuously monitoring as AI technology and regulations evolve
This oversight helps organizations use AI effectively while protecting against both traditional cybersecurity threats and AI-specific risks that could impact operations, compliance, and stakeholder trust.
Does your risk management system address AI?
In 2023 there were over 3,000 separate data breaches[1]; with the average cost of a data breach reaching nearly $5 million in 2024 (and it’s even worse in the US, where the average cost is closer to $10M[2]), the risk of a material financial impact from a data breach is very serious. Given the severity of the outcome of a data breach, it’s no surprise that mature organizations have established risk management processes to manage information security risks. But what happens when ✨AI✨ enters the arena?
AI Changes Everything…and Nothing
The widespread proliferation of AI solutions and tools has made overall risk management significantly harder in a number of ways, chief among them the ease with which bad actors can now engage in their misdeeds. Popular large language models, which have historically been easy to jailbreak, enable nearly anyone to become a hacker through traditional means. AI solutions also present new risk vectors of their own, such as data poisoning and other adversarial attacks.
Conversely, a number of AI-powered tools and products have come to market to help organizations execute their risk management strategies more efficiently and effectively. One notable area that has been improved by AI is the identification of the “unknown unknowns” in a risk matrix: algorithms are much better at identifying tail events than humans are.
For the most part, however, the risk management process around AI should be pretty similar to other technology risk management. The trick is to actually follow it. :)
While there are numerous frameworks for risk management, some of the more popular ones include ISO 31000, the Committee of Sponsoring Organizations of the Treadway Commission enterprise risk management (COSO ERM) framework, and the National Institute of Standards and Technology (NIST) Cybersecurity Framework. Regardless of the specific framework used, most risk management includes the following elements:
- Establish Context + Objectives
- Risk Assessment
- Risk Treatment
- Recording + Reporting
- (Communication + Consultation)
- (Monitoring + Review)
Next, let’s walk through how boards can manage AI risk using these elements.
1. Establish Context + Objectives
Risk cannot be effectively managed in a vacuum; instead, it must be contextualized based on an organization’s unique objectives and operating environment. Consequently, the first step in the risk management process is to understand the environment (both internal and external) in which your entity operates. As an example, your organization’s internal AI environment might include a directive to future-proof your AI strategies, and your external environment might include the applicability of the EU AI Act and shareholder pressure to maintain your competitive edge.
Boards frequently consider risk management categorically in terms of strategic, operations, reporting, and compliance objectives.
As a board director, your role is to set the tone for risk management by defining the firm’s risk appetite and ensuring that initiatives align with overall strategy. This step lays the foundation for all subsequent risk management activities.
2. Risk Assessment
Identification
During the risk identification stage, organizations identify potential risks that could impact their objectives. When specifically addressing AI, risk identification should consider both risks specific to a particular tool or use case (e.g., hallucinations in the output of a generative AI product) and risks that apply more broadly (e.g., processing of confidential information). Organizations should consider risks across a range of categories, including ethical risks, data quality, model explainability and transparency, legal and regulatory compliance, and financial risks.
I regularly post about AI “risk spotting”, which can help your board identify overlooked and emerging risks - you can see a full archive of those posts here.
As I mentioned earlier, the use of AI can itself help identify “unknown unknown” risks. Machine learning techniques such as natural language processing, for example, can process vast quantities of unstructured data that a human team would be unable to dedicate sufficient resources to reviewing, resulting in better identification of risks that may exist in (or be alluded to in) such data.
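For illustration, here is a minimal Python sketch of that “risk spotting” idea: scanning a set of documents for risk-relevant language. The risk terms, categories, and documents are hypothetical stand-ins, and simple keyword matching stands in for the far richer NLP techniques (embeddings, topic models, entity recognition) a real system would use.

```python
# Minimal sketch: flagging risk-relevant passages in unstructured text.
# The risk terms and documents below are illustrative, not from any real dataset.

RISK_TERMS = {
    "data breach": "information security",
    "personal data": "privacy / data protection",
    "hallucination": "model output quality",
    "third-party model": "vendor / supply chain",
}

documents = {
    "vendor_contract.txt": "The vendor may process personal data using a third-party model.",
    "meeting_notes.txt": "Team raised concerns about hallucination rates in the pilot.",
    "style_guide.txt": "Headings should use sentence case.",
}

def spot_risks(docs: dict[str, str]) -> list[tuple[str, str, str]]:
    """Return (document, matched term, risk category) for every hit."""
    hits = []
    for name, text in docs.items():
        lowered = text.lower()
        for term, category in RISK_TERMS.items():
            if term in lowered:
                hits.append((name, term, category))
    return hits

for doc, term, category in spot_risks(documents):
    print(f"{doc}: '{term}' -> review under {category}")
```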
Data Flow Mapping
The process of mapping the flow of data is an essential step in risk identification; in this step, organizations identify and document the movement of data through their systems, from its origin to its final destination. This process is traditionally used to help identify where sensitive data is being collected, stored, or transmitted, to ensure that it’s not processed in an insecure manner or subject to unauthorized access. Many AI tools and solutions, however, are black boxes that offer little visibility into what happens to data between input and output, which makes this mapping both harder and more important.
Privacy and data protection laws such as the General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA) often indirectly require data flow mapping as a means by which an organization can demonstrate that it is managing personal data appropriately.
The mapping process also allows organizations to see what jurisdictions are involved in the flow of data, giving them the ability to better understand which regulatory requirements may apply and to consider the related risks.
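As a rough illustration, a data flow map can be as simple as a structured inventory of flows that is then queried for risky patterns. The systems, data types, and jurisdictions below are hypothetical, and real inventories typically live in dedicated privacy or GRC tooling; this sketch just shows the shape of the record.

```python
# Sketch of a data flow inventory with illustrative systems and jurisdictions.
from dataclasses import dataclass

@dataclass
class DataFlow:
    source: str            # where the data originates
    destination: str       # where it ends up (including AI vendors)
    data_types: list[str]  # e.g., "personal data", "trade secrets"
    source_jurisdiction: str
    destination_jurisdiction: str

flows = [
    DataFlow("crm", "llm_vendor_api", ["personal data"], "EU", "US"),
    DataFlow("hr_portal", "internal_warehouse", ["personal data"], "EU", "EU"),
]

# Cross-border transfers of personal data often trigger extra obligations
# (e.g., GDPR transfer mechanisms), so surface those flows for review.
for f in flows:
    if "personal data" in f.data_types and f.source_jurisdiction != f.destination_jurisdiction:
        print(f"Review: {f.source} -> {f.destination} "
              f"({f.source_jurisdiction} -> {f.destination_jurisdiction})")
```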
Analysis
Once risks have been identified, the next step is to assess the likelihood of their occurrence and the potential resulting impact on the organization. This can be done through risk modeling and analysis, both of which can be supported by AI technologies.
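One common convention, sketched below, scores each risk as likelihood × impact on simple 1–5 scales; the risks, scores, and rating thresholds are illustrative placeholders, not benchmarks.

```python
# Illustrative likelihood x impact scoring on 1-5 scales.

risks = {
    "prompt data leakage": (4, 4),                   # (likelihood, impact)
    "model hallucination in client output": (3, 5),
    "training data poisoning": (2, 5),
}

def rate(score: int) -> str:
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

for name, (likelihood, impact) in sorted(
        risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True):
    score = likelihood * impact
    print(f"{name}: {likelihood} x {impact} = {score} ({rate(score)})")
```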
Evaluation
Based on the analysis just performed, the organization prioritizes its response to the risks; the board’s perspective is key here, as directors help ensure that the response aligns with both the organization’s overall risk tolerance and its strategic objectives.
The inherent risks (i.e., the risks present when no controls are implemented) were analyzed in the previous step; now the organization needs to perform a similar assessment of the residual risks that would remain after the implementation of controls. The board will likely prioritize identified risks based on how effectively the organization can reduce them.
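One way to make this concrete, under the simplifying (and purely illustrative) assumption that residual risk can be approximated as inherent risk scaled down by control effectiveness:

```python
# Sketch of the evaluation step: estimate residual risk as inherent risk
# scaled by control effectiveness, then prioritize by achievable reduction.
# The formula and figures are illustrative conventions, not a standard.

assessed = [
    # (risk, inherent score, estimated control effectiveness 0.0-1.0)
    ("prompt data leakage", 16, 0.75),
    ("model hallucination in client output", 15, 0.40),
    ("training data poisoning", 10, 0.50),
]

# reduction = inherent * effectiveness, so sort by that product.
for name, inherent, effectiveness in sorted(
        assessed, key=lambda r: r[1] * r[2], reverse=True):
    residual = inherent * (1 - effectiveness)
    print(f"{name}: inherent {inherent} -> residual {residual:.1f} "
          f"(reduction {inherent - residual:.1f})")
```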
3. Risk Treatment
After assessing the risks related to AI solutions, organizations must determine what type of treatment is appropriate. Risks can generally be treated in one of four ways:
- Avoidance
- Mitigation
- Transfer
- Acceptance
Organizations’ overall risk appetite, existing control environment, and the results of their risk assessment are just some of the factors that contribute to the way(s) in which they choose to address a particular risk or set of risks.
Avoidance
Some organizations have decided to address AI risks by avoiding the use of AI entirely. This can even be a valuable marketing tactic: Dove was the “first beauty brand to commit to never using AI to represent real people in its advertising,” an approach that aligns well with its historical messaging and demonstrates how avoidance can be a sound strategic decision.
While avoidance certainly reduces risks arising from the use of AI, this approach can introduce an inverse set of risks, such as shareholder dissatisfaction, loss of market share, increased costs relative to competitors, and human error.
Organizations might instead take a partial avoidance approach, limiting the use of AI solutions to certain use cases or data types. For example, a medtech company may opt to use AI solutions with public health data but prohibit use of the product with any patient data or internal company documents. This approach also overlaps with the “mitigation” treatment to the extent that it involves establishing internal policies for the tools or developing compensating controls.
Mitigation
The most common way to treat risks is by mitigating them; this enables organizations to reduce the severity and/or likelihood of risks to an acceptable level. Implementation of mitigation measures often includes both technical controls (such as a firewall) and governance controls (such as policies and procedures or board oversight).
Many organizations recognize the importance of developing internal guidance for the use of AI tools. Policies relating to appropriate use of AI may fall anywhere on the “allowance” spectrum, from strict prohibition to broadly permissive use. Bright-line rules might be used, such as rules about the types of tasks that can be performed with AI enablement. For example, a fintech company may decide that a chatbot can be used for internal purposes (such as answering employee FAQs) but not for public-facing matters. Alternatively, organizations may limit the use of AI-enabled systems to a subset of firm personnel, such as trained users or specific groups (e.g., developers), to mitigate related risks.
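For illustration, such bright-line rules can be encoded as a simple, checkable policy; the task and role names below are hypothetical, and a real deployment would wire this into identity and access management rather than a standalone script.

```python
# Hypothetical bright-line AI usage rules encoded as a checkable policy.

AI_USE_POLICY = {
    # task -> roles permitted to use AI assistance for it
    "internal_faq_chatbot": {"all_employees"},
    "code_generation": {"developers"},
    "public_facing_content": set(),  # prohibited for everyone
}

def is_permitted(task: str, role: str) -> bool:
    allowed = AI_USE_POLICY.get(task, set())
    return "all_employees" in allowed or role in allowed

print(is_permitted("internal_faq_chatbot", "analyst"))     # True
print(is_permitted("public_facing_content", "developer"))  # False
```

Encoding the policy this way has a side benefit: the same rules can drive both the written guidance and the technical enforcement discussed next.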
People are often the weakest link when it comes to security, and sole reliance on policies is insufficient. Therefore, implementing technical guardrails can help to ensure that the desired outcome is achieved. Limiting the use of AI solutions to those that operate entirely within the organization’s control environment can significantly mitigate risks related to information security, as the storage, transmission, and processing of client or company data occurs within the organization’s existing technical infrastructure. A main focus in our development of the Kelvin Legal Data OS at 273 Ventures (one of my companies) was to avoid introducing additional security and data protection risks by designing it to be fully functional on a firm’s own infrastructure (or their approved cloud infrastructure in their own environments).
Security layers can be added to third-party AI systems and tools to mitigate the risk of data exfiltration by preventing the transfer of data that meets certain rules (e.g., contains SSNs, PHI, or even company-defined trade secrets).
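A minimal sketch of such a rule-based guardrail, assuming an illustrative SSN pattern and a hypothetical trade-secret keyword list (real data loss prevention products use far richer detection):

```python
# Block outbound prompts that appear to contain sensitive identifiers.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
BLOCKED_KEYWORDS = {"project kelvin roadmap"}  # hypothetical trade-secret terms

def allow_outbound(text: str) -> bool:
    """Return False if the text should not leave the organization."""
    if SSN_PATTERN.search(text):
        return False
    lowered = text.lower()
    return not any(keyword in lowered for keyword in BLOCKED_KEYWORDS)

print(allow_outbound("Summarize this memo for me."))             # True
print(allow_outbound("Employee SSN 123-45-6789 needs review."))  # False
```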
Transfer
Transfer of risk most often occurs through the use of insurance or contractual obligations, such as indemnification. Indemnification and limitations of liability vary by AI tool, ranging from no indemnification all the way up to full indemnification with no liability cap, with many providers falling somewhere in between.
Furthermore, many of these protections become extremely limited in scope once a company starts customizing the AI tool or system to make it more relevant; most AI companies exclude indemnification protection for any event arising from the customer’s modifications to their product or service. If your company decides to fine-tune one of Anthropic’s or OpenAI’s models, your outputs likely aren’t covered by the indemnification protection anymore. Given the variability between vendors and the numerous exclusions, it’s important to understand whether transfer of risk related to the use of a specific technology is a viable treatment option.
Boards are painfully aware that cyber insurance coverage has become a more difficult way to actually achieve the transfer of certain risks. Coverage limitations and exclusions are on the rise, and following a cyberattack, many companies have found that their policy doesn’t actually cover the event. Given the uncertainty surrounding many legal aspects of AI (including copyright, data protection, and explainability and transparency), it may be difficult for organizations to find insurers who are willing to offer policies that transfer this risk. Munich Re, a global reinsurance group, was one of the first to offer AI-specific policies back in 2018, and it’s likely that in the coming years more insurance companies will opt to do so.
Acceptance
In some cases, the cost of mitigation or transfer of risk is too high; in others, the potential impact or likelihood of a risk is too low to justify the efforts to mitigate it. Regardless of the exact reason, organizations will sometimes choose to accept the risk and its consequences.
4. Recording + Reporting
The risk management process should be carefully documented to support transparency and accountability. By creating and preserving records of the organization’s risk management activities, it can demonstrate compliance with regulatory obligations and industry standards.
Bottom-up reporting is critical to the board’s and management’s ability to make informed strategic decisions. It enables benchmarking and measurement of risk treatment performance, which in turn enables continuous improvement during monitoring and review.
Response Plan
Despite implementing mitigation or other treatment strategies, risks may still be realized. In these situations, having a well-crafted response and contingency plan in place ahead of time helps minimize the impact on the organization by enabling more effective incident management.
Using the information from the data mapping exercise during the risk assessment stage allows organizations to more quickly identify exactly what information may have been implicated in an incident; this is not only helpful in reducing reputational harm, but is often necessary to satisfy data breach reporting obligations.
The response plan should include a communication strategy, both within the organization and externally.
5. Communication + Consultation
Consultation
Risk can be managed most effectively when stakeholders throughout the organization feel empowered. Consultation throughout the risk management process with personnel in different roles, departments, and levels of the organization is essential to ensuring that risks are sufficiently identified and addressed realistically. With AI, particularly with easily accessible consumer chat platforms like those offered by Anthropic or OpenAI, employees often use the tools even when their employers ban them. It’s essential for organizations to develop policies, procedures, and controls that are realistic, lest they be ignored or circumvented.
Training
Regular personnel training is a staple of successful risk management programs. With AI, this is even more important given how quickly the technology changes and new risks and opportunities emerge. Strategic AI training programs will empower personnel to capture the value of these tools while understanding the risks associated with them and how to manage them.
Training should be both specific and general. It should:
- Cover risks associated with specific tools/products/systems used by the organization (as well as ways to mitigate or treat these risks)
- Highlight limitations of the tool and its output
- Provide best practices that are in line with the strategic objectives of the organization
- Discuss all relevant contractual, ethical, and legal/regulatory obligations
- Include a review of the relevant policies and procedures
External Communication
Communication is not just an internal matter; it’s also crucial for managing external stakeholders and public perception. As mentioned in the response plan section, having a clear external communication strategy is vital when things go wrong. However, a comprehensive communication strategy about your AI risk management should address:
- Proactive Transparency: Regularly communicating your AI risk management practices to stakeholders builds trust and demonstrates your organization’s commitment to responsible AI use.
- Incident Response Communication: As previously noted, in the case of an AI-related incident, having a clear plan for timely and accurate communication with affected parties, regulators, and the public is essential.
- Board Engagement: The board benefits from regular engagement with industry peers, regulators, and AI experts to stay informed about best practices and emerging risks.
- Public Education: Consider initiatives to educate the public about your AI use and risk management practices; this can help manage expectations and build trust.
- Investor Relations: For public companies, regular disclosures may be required; updates help keep investors informed about the organization’s AI initiatives, risks, and mitigation strategies.
By maintaining open and transparent communication about AI risks and management strategies, organizations can build trust, manage reputational risks, and position themselves as responsible leaders in AI adoption.
6. Monitoring + Review
Effective risk management is continuous, not a one-and-done drill. As I mentioned previously, given the speed at which AI and the related legal obligations and risks develop, boards must be prepared to regularly evaluate their AI risks and opportunities. They should assess whether the existing risk management program still meets the organization’s needs; continuous improvement involves the board identifying new risks, areas for improvement, redundant or inefficient risk treatment, and the implementation of changes that improve the program’s effectiveness.
What’s Next?
As AI continues to evolve and integrate into various aspects of business operations, the role of board directors in overseeing AI risk management becomes increasingly crucial. By following the framework outlined in this post, organizations can develop a robust approach to managing AI-related risks while capitalizing on the opportunities this transformative technology presents.
Our goal for effective AI risk management is to create a secure and ethical foundation for AI adoption; we should encourage practices that allow our organization to achieve and protect its strategic objectives. As board directors, we must stay informed, engage in ongoing learning, and foster a culture of responsible AI use.
Note: You can read the legal industry-specific version of this post that I wrote for my legal AI company here.