Online Exclusive

The Board’s Playbook for Navigating AI Policy

By James Turgal

10/24/2025

Partner Content Provided by Optiv Security Inc.
Artificial Intelligence Risk Oversight

As companies adopt new tools to drive growth, boards should ensure AI policy aligns with organizational goals and values.

You would be hard-pressed today to find a business that isn’t either already implementing artificial intelligence in its operations or strongly considering it. Whether the goal is efficiency gains, extracting more value from data, or reducing risk, the benefits are undeniable.

However, as AI technologies become more sophisticated and embedded in business processes, directors face new responsibilities and challenges. AI policy is now a board-level issue, requiring strategic oversight, risk management, and a commitment to ethical leadership.

The Board’s Role in AI Oversight

Boards have always been responsible for setting an organization’s mission and vision and approving its strategy. Today, that responsibility extends to AI initiatives. Whether it is endorsing investments in AI-powered analytics or reviewing the company’s use of generative AI tools, boards must ensure that AI aligns with organizational goals and values.

Many companies have responded by assigning AI oversight to risk or audit committees—and some have even established a dedicated AI committee. The goal is to provide informed, proactive governance that balances innovation with accountability. Boards should assess factors such as the complexity and scale of their organization’s AI initiatives, the level of in-house AI expertise, and existing committee structures.

Organizations with extensive or high-risk AI applications may benefit from a dedicated AI committee to ensure focused oversight and cross-functional expertise, while those with more limited AI use might integrate oversight within risk or audit committees, supplementing with targeted education and resources.

Benefits of AI for Governance

AI isn’t just a risk to be managed; it is also a powerful tool for improving board effectiveness. AI can automate routine tasks such as meeting scheduling and data analysis, freeing up directors’ time for higher-level strategic discussions. AI-powered dashboards can also surface key performance indicators, flag emerging risks, and even suggest agenda items based on organizational priorities.

By embracing these tools, boards can make better, faster decisions and focus on steering the organizations they serve toward long-term success.

Key Risks and Challenges

Despite its promise, AI brings significant risks that boards must address. One of the foremost concerns is data privacy and security. Because AI systems process vast amounts of sensitive information, they increase the risk of data breaches and potential regulatory penalties. Boards should ensure that robust data governance and cybersecurity measures are firmly in place.

To accomplish this, boards should work closely with executive leadership teams, including the chief information officer, chief risk officer, and legal counsel, to establish clear policies for data access, encryption, and compliance with regulations such as the European Union’s General Data Protection Regulation and the California Consumer Privacy Act.

Implementing continuous monitoring and conducting regular audits of AI systems can help detect vulnerabilities early. Additionally, establishing a culture of data responsibility across the organization ensures all employees understand their role in protecting data integrity and privacy.

Another critical issue is bias and transparency. AI can perpetuate or even amplify existing biases, leading to unfair outcomes and reputational harm. The “black box” nature of some AI models, whose internal decision-making processes are not visible or understandable to users, further complicates matters. This opacity makes it difficult for boards to fully grasp how AI arrives at specific outputs or decisions, increasing the risk of unintentional biases or errors going unnoticed.

Additionally, AI-generated content can inadvertently infringe on intellectual property or spread misinformation, exposing organizations to both legal and reputational risks. Flawed algorithms or the misuse of AI can erode stakeholder trust, invite legal scrutiny, and undermine a company’s competitive advantage.

Given these challenges, it is essential for boards to understand the full spectrum of risks associated with AI and to implement comprehensive policies that mitigate them before they escalate into crises. Ongoing education is key to this effort. Boards should receive regular training and updates on emerging AI risks, ethical considerations, regulatory changes, and governance best practices.

Such training is best delivered through independent third-party experts, academic institutions, or recognized AI governance organizations that specialize in board-level education. Programs that combine practical frameworks, case studies, and scenario-based learning empower directors to engage knowledgeably in AI oversight and strategic decision-making.

Developing an Effective AI Policy

When it comes to developing an effective AI policy, boards play a critical role in setting the tone and direction for responsible AI use within the organizations they serve. The process begins with management conducting a thorough assessment of current and potential AI use cases across the enterprise, ensuring a clear understanding of where AI is being deployed and the specific risks involved.

After management defines what constitutes acceptable and prohibited uses of AI, the board should review and approve those measures to ensure the guidelines prioritize ethics and align with the organization’s core values. Setting robust data governance standards, with policies that address data privacy, security, and the necessity of human oversight for AI systems, is also essential, both internally and with all third-party vendors. Understanding how the company’s vendors use AI is critical to ensuring their AI tools are not used against the organization.

Finally, boards should work with management to implement processes that continuously monitor AI performance, covering the system’s efficiency, accuracy, and alignment with organizational values, and detecting issues such as bias or hallucinations. Tracking key performance indicators, including fairness metrics, explainability, error rates, and operational responsiveness, enables proactive governance. Regular audits of outcomes and policy updates ensure that the organization remains agile and compliant as technology and regulations evolve.
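
As an illustration of what such monitoring can look like in practice, the sketch below computes two of the indicators named above, an error rate and a simple fairness gap, from a hypothetical log of model decisions. The field names, groups, and escalation threshold are assumptions made for illustration, not a prescribed implementation.

```python
# Minimal monitoring sketch; the Decision fields and the 0.10 threshold
# are hypothetical assumptions, not a prescribed standard.
from dataclasses import dataclass

@dataclass
class Decision:
    prediction: int  # model output (1 = approve, 0 = deny)
    actual: int      # eventual ground-truth outcome
    group: str       # attribute tracked only for fairness auditing

def error_rate(log: list[Decision]) -> float:
    """Share of decisions where the model disagreed with the outcome."""
    return sum(d.prediction != d.actual for d in log) / len(log)

def parity_gap(log: list[Decision]) -> float:
    """Largest difference in approval rates across groups (0.0 = parity).
    One simple, commonly cited fairness indicator among many."""
    rates = []
    for g in {d.group for d in log}:
        members = [d for d in log if d.group == g]
        rates.append(sum(d.prediction for d in members) / len(members))
    return max(rates) - min(rates)

log = [Decision(1, 1, "A"), Decision(1, 0, "A"),
       Decision(0, 1, "B"), Decision(1, 1, "B")]
print(f"error rate: {error_rate(log):.2f}")  # 0.50
print(f"parity gap: {parity_gap(log):.2f}")  # 0.50
if parity_gap(log) > 0.10:  # escalation threshold agreed with the board
    print("ALERT: fairness gap exceeds agreed threshold")
```

Trended over time in regular board reporting, numbers like these turn the key performance indicators above into auditable evidence rather than abstractions.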

Stakeholder Engagement and Communication

AI policy is more than just an internal matter. Boards should actively consider the interests of key stakeholders, including customers, employees, regulators, and the wider community. Transparent communication about how AI is used and governed within the organization builds trust and demonstrates a commitment to responsible innovation.

Boards should ensure that management clearly communicates AI policies both internally and externally and that stakeholder feedback is incorporated into policy development. Boards typically receive stakeholder feedback through management, which can implement formal mechanisms such as surveys, focus groups, advisory councils, and public forums to collect input from customers, employees, regulators, and community representatives.

Some organizations also adopt structured frameworks, such as multistakeholder engagement models, or integrate feedback channels into their AI governance processes to ensure continuous, meaningful dialogue. Management should regularly report aggregated insights and any critical issues to the board, enabling directors to weigh diverse perspectives in policy development and oversight.

Actionable AI Board Tips 

Boards can take the following actions when modeling their company’s AI usage and governance:

  1. Discover where AI is actually being used in the enterprise. Shadow information technology and shadow AI are real, and they often hide in plain sight. The board can’t govern or protect what it can’t see. 
  2. Ask management to conduct threat modeling of AI usage within the enterprise, using data flow diagrams at both the macro (enterprise) and micro (business unit) levels; a minimal sketch of this approach follows this list. 
  3. Understand how data, privacy, and trust boundaries work within the enterprise, because AI changes how data flows and can shift trust boundaries between internal and external teams. 
  4. Request that management develop both AI product and vendor risk management questionnaires to use during the audit process. 
  5. Build an AI champion network across the business and IT teams to understand and define what “normal” is so that the board will know when deviations or anomalies occur.  
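
To make items 2, 3, and 5 more concrete, here is a minimal sketch of a machine-readable data flow inventory of the kind a threat-modeling exercise might produce, with simple rules that flag flows crossing the enterprise trust boundary. Every system name, field, and rule in it is a hypothetical placeholder, not a description of any particular tool.

```python
# Minimal data flow inventory for AI threat modeling; all system names,
# fields, and rules here are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class DataFlow:
    source: str
    destination: str
    data_class: str         # e.g., "public", "internal", "pii"
    encrypted: bool
    crosses_boundary: bool  # leaves the enterprise trust boundary?

FLOWS = [
    DataFlow("crm", "internal-llm", "pii", True, False),
    DataFlow("hr-system", "vendor-ai-api", "pii", True, True),
    DataFlow("wiki", "vendor-ai-api", "internal", False, True),
]

def audit(flows: list[DataFlow]) -> list[str]:
    """Apply simple policy rules; return findings for review."""
    findings = []
    for f in flows:
        if f.crosses_boundary and not f.encrypted:
            findings.append(
                f"UNENCRYPTED EXTERNAL FLOW: {f.source} -> {f.destination}")
        if f.data_class == "pii" and f.crosses_boundary:
            findings.append(
                f"PII CROSSES TRUST BOUNDARY: {f.source} -> {f.destination} "
                "(check vendor questionnaire coverage)")
    return findings

for finding in audit(FLOWS):
    print(finding)
```

Re-running such an audit on a schedule is one way to define “normal” (item 5): newly discovered or changed flows surface as deviations for the AI champion network to investigate.
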
Charting a Responsible Path Forward

As AI rapidly reshapes the business landscape, boards can no longer afford to take a hands-off approach. The organizations that will thrive are those whose directors are willing to ask tough questions, challenge assumptions, and set clear guardrails for responsible AI use. Effective oversight is about earning trust, protecting the business’s brand, and unlocking AI’s full potential as a driver of innovation and growth.

The future of AI is being written today in boardrooms around the world. Directors should ensure their organization’s story is one of foresight, integrity, and leadership.

The views expressed in this article are the author's own and do not represent the perspective of NACD.

Optiv is an NACD sponsor, providing directors with critical and timely information and perspectives. Optiv is a financial supporter of the NACD.

James Turgal

James Turgal is vice president of global cyber advisory, risk, and board relations at Optiv.