Director Essentials

Implementing AI Governance

By Dylan Sandlin, Data & Trust Alliance Staff, Helmuth Ludwig, and Benjamin van Giffen

09/02/2025

Artificial Intelligence Data Governance Technology Oversight

Introduction

The AI landscape has rapidly evolved, presenting businesses and their boards with a combination of urgency and uncertainty. Firms are unclear about whether, how, and when AI will deliver return on investment (ROI), and about whether AI chiefly offers them opportunity or risk.

AI introduces competitive threats, business model disruption opportunities, capital-intensive investment requirements, novel risks, and the need to develop competence in areas that affect the board’s oversight responsibilities. NACD survey data show that more than 62 percent of director respondents are setting aside agenda time to discuss AI. (See AI oversight activities performed by boards.) But with so many unanswered questions–from where to allocate capital, to new cybersecurity threats, to potential workforce impacts–boards are left asking how they can ensure this time and additional focus are effectively structured and appropriately used. How can they guide the company’s engagement with AI in a way that furthers the company’s strategy, produces real value, and does not introduce unmitigated risk?

AI oversight activities performed by boards (Respondents could select all that apply.) 


Q: Which of the following activities relating to AI oversight has your board performed? 
2025 NACD Public Company Board Practices and Oversight Survey, n = 211

This Director Essentials report outlines a framework for boards to guide their AI oversight and includes practices the board can implement and deployment scenarios boards are likely to face. While specific practices and answers to questions will differ among organizations, this guide describes steps to help boards create an appropriate agenda, obtain the right expertise, and focus board attention on the critical topics of AI governance.

AI Governance: A 4-Pillar Framework for Oversight

The following sections apply and extend a research-based framework that aligns with the board’s core fiduciary responsibilities and can help shape the board’s oversight and attention. (See 4-Pillar Framework for AI Oversight.) This holistic framework helps avoid gaps or overlaps in the board’s AI governance, while ensuring the flexibility to perform specific oversight tasks based on the company’s AI strategy and long-term objectives. It is also a helpful reference for boards that are beginning their AI oversight journey.

A board’s initial conversations around AI are typically grounded in the technology’s impact on the organization’s strategy and its potential to provide additional value, cost savings, or competitive advantage.

To support the execution of an AI strategy, the board will need to approve and scrutinize proposed budgets and capital allocation. As companies undertake new AI projects and incorporate new systems, new risks will be introduced, requiring the board’s oversight to protect and enable the company’s goals and strategy. The board will likewise need to adapt its own skill sets and processes to ensure that it, along with the management team and workforce, has the right expertise, education, structures, and practices. This will allow the board to be a strategic asset, supporting management in responsibly deploying and scaling AI in a way that generates value over the long term.

4-Pillar Framework for AI Oversight 

[Figure: the four pillars are Strategic Oversight, Capital Allocation, AI Risk, and Technology Competence.]

AI Strategy 

Many directors note the potential for “AI disruption” of their company’s current strategy and long-term viability. Yet NACD survey data show that only 23 percent of boards have assessed how this disruption might happen or where it may come from. (See AI oversight activities performed by boards.) Achieving strategic alignment on AI, receiving regular strategic briefings, and incorporating AI into the board’s strategy retreat are practices boards can implement to help improve their strategic oversight of this technology.

Develop shared understanding among the full board of AI’s strategic relevance and importance.

As a first step, boards should align on AI’s strategic impact and opportunities for the company. Beginning this conversation can be difficult, and many directors likely find themselves in a “quiet middle” position between those with a limited desire for engagement on AI and directors with experience or active engagement with the technology. The following questions can help guide this conversation toward a shared understanding:

 

Question: Why is AI strategically relevant to our company?
Outcomes:
  • Shared understanding about AI’s impact and strategic importance
  • Agreement on the need for board time and attention

Question: What are the roles of the board of directors in the context of AI?
Outcomes:
  • Clear board and committee roles and responsibilities, discussed and established
  • Board focus areas identified across strategy, capital allocation, risk, and board and committee roles and operations

Question: How can the board perform those roles?
Outcomes:
  • Alignment of directors’ skills, expertise, and education to their oversight duties
  • Creation of structures and processes to oversee AI

 

Ultimately, the discussion should correct the misconception that AI is merely a technology issue, helping directors see it as a tool for creating business value for the company.

The following are examples of practices that were successful in driving greater board engagement and more strategic discussions in subsequent board meetings: 

 

Practice: Digital Audit
Example: A technology company board questioned its audit firm about how it is balancing productivity gains and cost savings from AI integration in financial audits while ensuring data protection. This included explicitly addressing concerns about confidential information security and cross-client data usage. The situation highlighted a broader strategic challenge: managing supplier relationships to prevent emerging AI-related corporate risks.

Practice: Legal Risk Scenario Walk-Throughs
Example: An insurance company board received presentations from in-house counsel and external AI legal experts about AI-related lawsuits, focusing on data protection, liability issues, data leakage, and AI bias. The case studies of legal claims effectively heightened board awareness of and engagement with AI governance risks.

Practice: Board Training with AI Scenarios
Example: As part of mandatory director training, one company used immersive AI scenarios led by independent experts to create transformative educational moments for non-technical directors. One successful training simulated an AI discrimination lawsuit crisis that required directors to navigate potential risks and determine oversight responses. The training included case studies of company responses. This practical method revealed process gaps and identified how and where the board could strengthen oversight.

Practice: AI Application Presentations & Demos
Example: A board increased AI engagement through industry-specific presentations of three AI applications: their company’s, a competitor’s, and an adjacent industry’s. Each demonstrated the inputs required, the output results, and quantifiable benefits. This practical approach created urgency when directors saw competitors already deploying solutions. Outcomes included planning full-day AI workshops and incorporating AI expert briefings into future board meetings.

 

Establish a cadence for AI discussions and updates.

The board should regularly allocate agenda time for updates on the company’s AI initiatives and related issues: for example, evolving regulatory and compliance requirements, updates on emerging competitive threats, or deep dives with outside experts on relevant AI topics. These discussions should illuminate AI’s potential impact on the company’s strategy, the uses of AI needed to avoid falling behind the competition, and opportunities for the technology to provide a competitive advantage.

Because AI technologies develop and advance quickly–both through new releases and through the emergent properties of learning systems–continuous oversight is required. One outcome of AI discussions should be for the board to identify the topics for future AI board briefings. The board can work with the CEO to establish a regular briefing cadence, but given AI’s pace of change, maintaining proper awareness can be difficult with a traditional quarterly-meeting cadence. Thus, it may be beneficial for boards to request more frequent information sharing and reporting from management about progress and developments with AI initiatives, at a monthly or, where deemed necessary, even a biweekly cadence.

Ultimately, reporting on AI must avoid a “circular governance” scenario where the oversight terms are dictated by oversight subjects. As with any area of corporate governance, the board must exercise and maintain independent judgment and oversight. 

Alongside these more structured briefings and reports, the board should conduct future- and innovation-focused conversations about AI and its impact on long-term strategy. The goal is to create the space necessary to envision what a potential AI future could be, to question the viability of this imagined future, and then, ultimately, decide if the idea could and should be incorporated into the company’s long-term strategic plans. These conversations can be more informal (e.g., at a board pre-meeting dinner or working lunch) or more structured (e.g., during a strategic off-site retreat), and they can be enriched by inviting outside experts or guest speakers. 

Incorporate AI as a topic in the board’s annual strategy retreat. 

The board’s annual strategy retreat offers the opportunity for directors to receive a full view of the AI landscape and its impact on the company’s strategy. The agenda can include briefings about the competitive AI landscape; reviews of assessments and updates from management on the internal use of AI; and technical demonstrations, deep-dive discussions, or strategy workshops.

Deployment Scenario: Evaluating AI pilots and use cases

A likely topic of boardroom conversation will be the evaluation of the AI pilots and use cases being pursued by the organization. Many AI pilots have been launched across industries and functions, but far fewer have made it into production, with many organizations stuck in “pilot hell.” NACD survey data reveal that 32 percent of directors identify uncertainty around AI’s ROI as the number one roadblock to AI adoption within their organization. (See Biggest barriers to AI adoption/implementation/deployment.)

Not all use cases for the organization are created equal, and they should be prioritized according to strategic relevance, organizational readiness, and risk tolerance. Quick wins can build momentum and trust and attract more investment for further innovation.

Biggest barriers to AI adoption/implementation/deployment


Q: Which of the following do you believe is the biggest barrier to AI adoption/implementation/deployment at your organization? 
2025 NACD Board Practices and Oversight Survey, n = 237; percentages may not total 100 due to rounding.

 

AI Strategy: What Boards Should Look For

Green Flags:

  • Clear alignment of AI use cases with core business strategy and competitive differentiators 
    • Trend showing improved efficiency or cost reductions in key cost centers 
    • Enablement of new, differentiated offerings 
    • Significant improvement of existing core offerings 
  • Strong pipeline from pilots to production 
    • Pilots have strong feedback loops, capturing business results (revenue impact, cost savings, customer satisfaction, process efficiency gains) as well as technological performance (model accuracy, system uptime, processing speed, user adoption rates) 
    • Realistic threshold for pilot-to-production conversion rate (e.g., 15–25%), with clear scaling criteria 
  • Executive team can identify top barriers to AI scaling with specific mitigation plans 
    • Regular pilot portfolio reviews are being conducted, with explicit go/no-go decisions 
    • Understanding of the impediments to scaling is developed together with domain specialists (e.g., factory or business-unit leaders) and the AI technology leader 
  • Business processes are being fundamentally redesigned (not just automated) using AI to ensure maximum value creation. Examples could include: 
    • Automation: AI chatbot answers common customer service questions instead of human agents. 
    • Process Redesign: AI analyzes customer sentiment, purchase history, and interaction patterns to proactively identify at-risk customers and automatically triggers individualized retention campaigns, while routing high-value customers to specialized teams before they even need to contact support.
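To make the automation-versus-redesign distinction above concrete, here is a minimal sketch of the process-redesign pattern just described. It is illustrative only: the risk signals, weights, thresholds, and dollar figures are invented assumptions, not a recommended model.

```python
# Hypothetical sketch of the "process redesign" pattern: score churn risk from several
# signals, then trigger retention actions and route high-value customers proactively.
# All field names, weights, and thresholds are invented for illustration.

def churn_risk(customer: dict) -> float:
    """Combine sentiment, purchase recency, and support friction into a 0-1 risk score."""
    sentiment_risk = 1.0 - customer["avg_sentiment"]            # low sentiment -> high risk
    recency_risk = min(customer["days_since_purchase"] / 180, 1.0)
    friction_risk = min(customer["support_escalations"] / 5, 1.0)
    return 0.4 * sentiment_risk + 0.4 * recency_risk + 0.2 * friction_risk

def handle(customer: dict) -> str:
    risk = churn_risk(customer)
    if risk < 0.5:
        return "no action"
    if customer["annual_value"] > 50_000:
        return "route to specialist retention team"   # proactive, before the customer calls
    return "trigger individualized retention campaign"

example = {"avg_sentiment": 0.2, "days_since_purchase": 120,
           "support_escalations": 3, "annual_value": 80_000}
print(handle(example))  # -> route to specialist retention team
```

The point is the shape of the redesigned process (scoring, proactive triggers, and value-based routing) rather than any particular scoring rule.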

Red Flags: 

  • AI initiatives are described in technical terms, without clear business outcomes. 
  • The pilot-to-production rate is below threshold, with no understanding of why initiatives fail. 
  • Multiple pilots are running without predetermined success thresholds. 
  • Process improvements remain within departmental silos. 
  • Leadership is reluctant to terminate underperforming pilots. 

 

Key Performance Indicators: 

  • Pilot-to-production conversion rate 
  • Revenue/cost savings attributed to AI initiatives 
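As a minimal illustration of how management might report these two KPIs, the sketch below computes them from a hypothetical pilot-portfolio summary; the pilot names, statuses, and savings figures are invented.

```python
# Minimal sketch (invented figures): the two KPIs above, computed from a
# pilot-portfolio summary that management might report to the board.

pilots = [
    {"name": "Invoice triage",     "status": "production", "annual_savings": 450_000},
    {"name": "Churn prediction",   "status": "terminated", "annual_savings": 0},
    {"name": "HR chatbot",         "status": "terminated", "annual_savings": 0},
    {"name": "Demand forecasting", "status": "terminated", "annual_savings": 0},
    {"name": "Contract review",    "status": "terminated", "annual_savings": 0},
    {"name": "Code assistant",     "status": "pilot",      "annual_savings": 0},
]

# Only pilots with an explicit go/no-go decision count toward the conversion rate.
concluded = [p for p in pilots if p["status"] in ("production", "terminated")]
in_production = [p for p in concluded if p["status"] == "production"]

conversion_rate = len(in_production) / len(concluded)         # 1 / 5 = 20%
ai_savings = sum(p["annual_savings"] for p in in_production)  # $450,000

print(f"Pilot-to-production conversion rate: {conversion_rate:.0%}")
print(f"Annual savings attributed to AI initiatives: ${ai_savings:,}")
```

In this invented portfolio, one of five concluded pilots reached production, a 20 percent conversion rate that falls within the illustrative 15–25 percent threshold noted above.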

Capital Allocation

Capital allocation is an important area of strategic oversight for boards, and AI investments should be evaluated with the same rigor as other capital investments the organization undertakes. More than one in five boards identify proper allocation of capital resources as a challenge faced by their organizations in adopting AI technologies. In fact, barely more than 11 percent of boards have approved an annual budget for AI projects. (See AI oversight activities performed by boards.) Because AI has such broad implications for business strategy and operations, its use by the firm needs to be overseen at a whole-enterprise level, rather than department by department or use case by use case. Significant investments in tailored platforms, tools, and processes are likely to be required in order to realize AI’s potential for driving both efficiency and innovation.

AI expenditures can include both large sums on foundational AI capabilities like specific data platforms, technology infrastructure, and (increasingly) talent, as well as smaller expenses tied to experiments and AI pilots that may not receive board attention. For companies in a position to acquire AI capabilities, deploying capital resources in a partnership, merger, or acquisition opportunity requires similar focus and support at the board level. It is important for the board to both scrutinize large capital expenditures to ensure strategic alignment while also protecting early investments in potentially promising AI use cases in uncertain or tight economic environments.  

Include AI expenses in annual budgeting and approval.

Ensuring that AI investments are included as part of the annual budget approval process allows the board to evaluate these current investments and to determine if more resources may be required to meet the organization’s strategic objectives. Boards can improve the effectiveness of their AI capital allocation by providing support for AI pilots and experiments and by securing investments in the AI capabilities necessary to scale AI beyond initial experimentation.

Organizations may be running several concurrent AI pilots that individually do not represent significant AI expenses. To protect AI investments across multiple horizons, the board should encourage and support AI experimentation within the company and request updates on promising AI use cases or pilots. This can help increase the visibility of AI expenses and pilots and guard these investments from premature funding cuts.  

Similarly, when organizations graduate beyond initial experiments, it is important for the board to secure investments in the capabilities necessary for scaling AI deployments. This may require investments in Information Technology (IT) infrastructure, data platforms, talent, and workforce retraining. These investments only make sense if they deliver competitive advantage. By regularly asking questions in dialogue with management and measuring AI investments against the intended capabilities, the board can support management in unearthing opportunities for strategic deployment of capital resources aligned with the organization’s business goals.

Regularly review the viability and opportunities for M&A and partnerships to acquire AI capabilities. 

Compared with organic, in-house development, M&A or partnerships are viable options for companies to acquire specific AI capabilities, services, products, or talent. There are important considerations for boards to raise with management, such as partnership lock-in and unclear, potentially inflated target acquisition valuations. Boards can also work with management to establish clear partnership or M&A evaluation criteria to help assess opportunities.

Deployment Scenario: Strengthening the organization’s deployment, governance, and stewardship of data 

AI’s business value depends on the quality of the data that feeds it. Realizing that value also requires significant investments in foundational data and AI capabilities, platforms, and tools to develop trustworthy AI with outputs that are accurate, reliable, and safe. This is only possible with strong, strategic, and dynamic data governance and stewardship. Without this, AI deployments can stall due to data access or quality issues, siloed insights, or inconsistencies across the organization. Good management of data is also key to mitigating risks in privacy, data security, appropriate use, IP infringement, and consumer protection.

 

Capital Allocation: What Boards Should Look For

Green Flags:

  • Investments are made in architecture and infrastructure that will scale with the organization’s AI ambition (e.g., enabling metadata, lineage, tracking, interoperability).
    • Regular governance framework updates are aligned with AI system expansion. 
    • Ongoing evaluation of the organization’s governance ensures it is evolving dynamically to manage both risk and opportunity.
  • The executive team provides a road map demonstrating competitive advantage from newly purchased and implemented AI and data systems and platforms.
    • ROI is demonstrated from projects based on data infrastructure, where possible. 
  • Clear data sharing protocols exist among departments, with measurable collaboration outcomes. 
  • There is a cross-functional data governance committee with representation from all business units. 
  • Data architecture supports self-service analytics across departments. 
  • Programs and policies are in place to understand the provenance of data assets: where the dataset comes from, how it was created, and whether it may legally be used. 

Red Flags:

  • Data governance is positioned as a cost center focused only on compliance and risk mitigation. 
  • Departments create isolated data lakes or purchase separate analytics tools. 
  • Data governance decisions are made without input from business stakeholders. 
  • Legacy data infrastructure limits AI initiatives or requires expensive workarounds. 
  • Governance policies are static despite rapid AI system deployment.
  • Data access requests create bottlenecks that slow business decision-making. 

Key Performance Indicators: 

  • Projects leveraging new data, software, and AI systems 
  • Cross-departmental data projects (number of active projects using shared data assets) 
  • Data access request fulfillment time 
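As one illustration of tracking the last of these KPIs, the sketch below computes the median fulfillment time from a hypothetical data-access request log; the dates and record layout are invented assumptions.

```python
# Hypothetical sketch: median "data access request fulfillment time" from a request log.
from datetime import date
from statistics import median

requests = [
    {"opened": date(2025, 3, 3),  "fulfilled": date(2025, 3, 7)},   # 4 days
    {"opened": date(2025, 3, 10), "fulfilled": date(2025, 3, 28)},  # 18 days
    {"opened": date(2025, 4, 1),  "fulfilled": date(2025, 4, 4)},   # 3 days
    {"opened": date(2025, 4, 15), "fulfilled": date(2025, 4, 22)},  # 7 days
]

days = [(r["fulfilled"] - r["opened"]).days for r in requests]
print(f"Median fulfillment time: {median(days)} days")  # 5.5 days in this invented log
```

Tracked over time, a rising median would surface the bottleneck red flag listed above.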
 
Case Study: Achieving Value with Data 

AI Risks 

Risk oversight is not only a core fiduciary responsibility of the board but also central to the responsible use of AI systems and to maintaining trust among key stakeholders. As AI becomes critical to more of a company’s value creation or destruction potential, AI risk factors will become increasingly salient for shareholders and key stakeholders. By maintaining proper oversight of AI risks, boards can address these risks and ensure they are communicated effectively to key constituencies, while also securing the organization’s long-term creation of value. This oversight includes integrating AI into the organization’s enterprise risk management (ERM) program and incorporating regular briefings from AI risk experts on the board’s agenda.

Integrate AI into Enterprise Risk Management. 

Many boards currently oversee ERM in their audit or risk committees, and they can request that management teams include an AI category–or at least incorporate AI-specific risks–in their ERM framework. Delegating authority to these committees can ensure the attention of the board’s risk-oversight experts.

However, it is important that the full board receives updates and regularly monitors risks associated with AI, such as through an annual, full ERM report from management. Further, management teams can and should leverage existing risk assessment frameworks to evaluate AI risk in economic terms and better identify the most effective risk-mitigation actions and controls.

As part of this exercise, requesting a map from management of the AI tools in use, their data access, and their governance can create the necessary transparency for board members to focus on the high-impact risks. Only roughly one in five boards (21 percent) state that they have performed an audit to determine where AI is currently in use within their companies, highlighting room to improve board visibility into the organization’s use of and engagement with AI. (See AI oversight activities performed by boards.)
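What such a map might contain can be sketched as a simple inventory record covering each tool, its data access, and its governance controls. The schema below is a hypothetical illustration, not a standard; every field name and example value is an assumption.

```python
# Hypothetical schema for one entry in an AI-tool inventory ("map") requested from
# management: the tool, its data access, and its governance controls. Invented example.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AIToolRecord:
    name: str              # tool or AI feature, incl. features activated on existing platforms
    owner: str             # accountable business-unit leader
    vendor: Optional[str]  # None for systems built in-house
    data_accessed: list    # datasets or data classifications the tool can read
    risk_tier: str         # ERM rating, e.g., "low" / "medium" / "high"
    controls: list = field(default_factory=list)  # e.g., human review, output logging

inventory = [
    AIToolRecord(
        name="Customer-support chatbot",
        owner="VP, Customer Service",
        vendor="ExampleVendor (hypothetical)",
        data_accessed=["customer PII", "case history"],
        risk_tier="high",
        controls=["human escalation path", "output logging", "quarterly audit"],
    ),
]

# Board reporting can then focus attention on the highest-impact risks:
high_risk = [t.name for t in inventory if t.risk_tier == "high"]
print(high_risk)  # ['Customer-support chatbot']
```

Even a lightweight schema like this lets management roll the inventory up into the risk-tiered summaries the board needs.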

Receive regular briefings from AI risk experts.

The board should receive briefings from both internal and external AI risk experts or incorporate this as a regular item in their recurring AI briefings. Care should be taken when inviting experts and setting expert-briefing agendas to make sure they can provide valuable information to the board as a governing, non-technical body. 

Deployment Scenario: Building with third-party AI tools 

The most common current AI use cases (e.g., customer support, software creation, and marketing) are primarily powered by third-party AI products. The rapid proliferation of vendors has left organizations struggling to efficiently parse vendors’ ability to mitigate risk and add value, and to sort valuable tools from marketing hype. Nearly half (47 percent) of directors in a 2025 NACD survey indicated that selecting the AI tools that will deliver the most return on investment is a current challenge with respect to AI adoption.

Third-party AI introduces new risks not seen in traditional software, which may not be appropriately accounted for by existing procurement systems. New risks include these: 

  • Unpredictable Performance Over Time: Unlike traditional software with deterministic outputs, AI models can produce inconsistent or unexpected results even with identical inputs. Problems may only emerge in deployment at scale or with specific data patterns that were not present in demos or testing. Problems may also emerge or worsen over time, as real-world data diverges from training data. 
  • Opacity and Explainability Gap: Many AI systems operate as "black boxes" where decision-making processes are opaque. Standard procurement may accept vendor claims about accuracy without ensuring the organization can audit, explain, or debug AI decisions—critical for regulatory compliance and accountability. 
  • Training Data Contamination: AI models inherit biases, errors, and potentially sensitive information from their training data. Traditional procurement rarely examines the provenance, quality, or composition of training datasets. This creates risks around discriminatory outputs, privacy violations, or models that perform poorly on underrepresented groups—issues that won't surface in typical vendor presentations. 
  • Intellectual Property Ambiguity: AI training may inadvertently incorporate copyrighted material, creating unclear IP ownership. Traditional procurement IP clauses don't address whether AI-generated content infringes on training data rights or who owns outputs created by the AI system. 
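The first risk above (unpredictable performance over time) can be made operationally checkable. One minimal, hypothetical approach is to replay a fixed "golden" prompt set against the vendor's model on a schedule and alert when agreement with baseline answers falls; the golden set, the `call_model` stub, and the 90 percent threshold below are all invented assumptions.

```python
# Hypothetical sketch: monitoring a third-party model for drift by replaying a fixed
# "golden" prompt set and tracking the agreement rate against baseline answers.

golden_set = [
    {"prompt": "Is invoice 1432 overdue?", "expected": "yes"},
    {"prompt": "Customer tier for ACME?",  "expected": "gold"},
]

def call_model(prompt: str) -> str:
    """Stub standing in for the vendor API; replace with the real client call."""
    return "yes" if "overdue" in prompt else "gold"

def agreement_rate() -> float:
    hits = sum(call_model(c["prompt"]).strip().lower() == c["expected"]
               for c in golden_set)
    return hits / len(golden_set)

rate = agreement_rate()
if rate < 0.90:  # alert threshold is an assumption, tuned per use case
    print(f"ALERT: agreement fell to {rate:.0%}; review vendor model for drift")
else:
    print(f"OK: agreement {rate:.0%}")
```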

With the intense urgency to advance AI adoption, procurement processes must move fast while appropriately triaging and mitigating higher-risk use cases. AI tools that are quietly added by individual employees to existing platforms can create a “shadow AI” layer that bypasses traditional procurement altogether. Frameworks that pair internal and vendor assessments with guidance for evaluating vendors can help companies adapt their procurement programs for AI tools.

As the ecosystem expands, organizations must find flexible and adaptive ways to integrate and govern third-party tools, products, and data. Long-term business strategy must guide investment decisions on which tools are embedded into infrastructure and governance models, even as these tools continuously improve and evolve. Good foundations must be laid to enable future scale without ballooning costs, unsustainable demand for computing power, or operational complexity, and to align with the organization’s mission and values.

 

AI Risks: What Boards Should Look For

Green Flags:

  • Strong AI procurement process that triages appropriately by risk and aligns with the organization's values
  • Comprehensive AI inventory maintained, with data access mapping and governance controls, including AI features activated on existing platforms
  • Holistic AI integration architecture that connects systems beyond individual app deployments
  • Periodic reviews of vendor road map alignment and vendor dependency against long-term organizational strategy

Red Flags:

  • AI tools procured without risk assessment or values alignment review
  • Lack of focus on reducing "shadow AI" tools and applications within the organization
  • Unknown or shadow AI deployments discovered across the organization
  • AI features activated on platforms without governance oversight or data protection review
  • Fragmented AI implementations creating data silos and integration challenges
  • Vendor lock-in situations with limited ability to migrate or switch providers

Key Performance Indicators:

  • Number of third-party AI tools in use
  • Unsanctioned AI tools identified during audits
  • Total third-party AI spend
  • AI spend concentration (dollar amount with top-three vendors as a percentage of total AI budget) 
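The last KPI lends itself to a simple worked example. The sketch below computes AI spend concentration exactly as defined above; vendor names and dollar amounts are invented.

```python
# Hypothetical sketch: AI spend concentration = spend with the top-three vendors
# as a percentage of the total AI budget. All figures are invented.

vendor_spend = {
    "Vendor A": 1_200_000,
    "Vendor B": 800_000,
    "Vendor C": 500_000,
    "Vendor D": 300_000,
    "Vendor E": 200_000,
}

total_spend = sum(vendor_spend.values())                          # $3,000,000
top_three = sum(sorted(vendor_spend.values(), reverse=True)[:3])  # $2,500,000
concentration = top_three / total_spend

print(f"AI spend concentration (top three vendors): {concentration:.0%}")  # 83%
```

A high or rising concentration figure can surface the vendor lock-in risk flagged above.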

 

Technology Competency

Effective technology governance, including AI and data oversight, requires full-board engagement, with all directors maintaining at least a foundational knowledge of AI. Boards cannot simply rely on the AI expertise and competency of a single director or committee. Similarly, the whole board’s AI education and fluency must be aligned to the organization’s needs, and the board must also ensure the CEO, management team, and workforce have the technology competency to execute the company’s AI agenda. Without this focus on board, management, and workforce competency, AI initiatives are likely to stall and fail to deliver anticipated value.

Maintain board-level AI and technology proficiency aligned to corporate strategy and governance needs. 

The board, often through the nominating and governance committee, must ensure it maintains the necessary AI proficiency and governance structures and processes to provide effective oversight. 

Board Structure: As part of the annual committee evaluation process, the nominating and governance committee can oversee the incorporation of AI responsibilities across the board’s and committees’ charters. This exercise may identify gaps and prompt discussion about additional changes to the board’s structures, such as the establishment of a dedicated technology/strategy committee to support AI and technology oversight.

Processes: By integrating AI questions into the full board, committee, and director evaluations, the nominating and governance committee can uncover gaps that warrant further discussion about additional changes to the board’s practices (e.g., incorporating AI-focused sessions during the annual strategy offsite meeting/workshop). 

Education & Expertise: The nominating and governance committee should identify director AI-education opportunities and incorporate AI proficiency into the board’s skills matrix. Further, the committee should incorporate a mechanism to verify and validate a director’s AI credentials and experience. Findings from these practices should improve the board’s AI competence and inform the board’s succession planning and director recruitment. Recruiting directors with more AI or emerging technology expertise may be necessary to maintain alignment between the board’s AI competence and the company’s strategic goals.

Establish authority and responsibility for AI within the organization. 

The board should ensure that the organization has clearly designated leaders for strategic AI implementation who appropriately balance opportunity and risk. The board should also ensure a consistent and coherent decision-making model, both across functions and at the business-unit level, with processes for escalation. Accountability for AI initiatives can remain with the functional, business-unit, or cost-center leader, supported by a centralized AI technology and governance leader who maintains process and governance responsibility for the technology across the enterprise. Above all, the board should establish clear leadership accountability within the management team for the organization’s AI strategy and initiatives.

Ensure management and workforce readiness for AI transformation. 

AI increases the risk of a broad-based erosion of demand for labor in the economy, including the potential weakening of future talent pipelines as entry-level jobs are automated. Boards must therefore consider how the organization’s talent and human-capital strategies are accommodating this transition. Often through the compensation and human capital committee, the board can work with management to ensure that pipelines for necessary talent and skills (e.g., strong development and workforce education and retraining programs) are in place as AI is further deployed throughout the company. The committee can likewise evaluate management’s plan to restructure incentives in a way that links AI strategy with career progression and aligns with the organization’s pay philosophy and goals. To combat fear and mistrust among employees about potential job displacement, the board and committee should also ensure that communications about AI are harmonized internally across all levels of the company and externally to the marketplace.

Incorporate AI oversight roles and responsibilities into board committee charters. 

Roughly one-quarter (25 percent) of boards have incorporated AI oversight responsibilities into board committee charters. (See AI oversight activities performed by boards.) Effective AI governance will require board committees to expand their responsibilities to incorporate AI-governance practices. By performing this exercise, boards can limit gaps and potential overlaps in their AI oversight and more efficiently utilize their committees’ focus, expertise, and agenda time. Below is guidance on how each standing committee can contribute to and support the board’s AI oversight.

 

Committee-Level AI Practices 
Compensation & Human Capital Committee
  • Compensation Philosophy Review: Assess how AI impacts current incentive structures, targets, and compensation design.
  • Metrics and Targets: Establish AI-related performance metrics and KPIs, and evaluate existing targets in light of AI goals.
  • AI Talent Benchmarking: Request specialized compensation data from consultants for AI roles and determine potential pay equity or financial impacts.
  • Workforce Strategy: Assess impacts of AI on human capital strategy and make adjustments as necessary, such as increasing pipelines for more digital talent, ensuring strong talent-management pipelines, and investing in workforce education and retraining.

Audit Committee
  • Disclosure Materials: Ensure accurate, material AI risk disclosures while avoiding exaggeration of capabilities, or “AI washing.”
  • ERM Oversight: Integrate AI risks into the overall risk-management framework.
  • Data Protection and Controls: Review cybersecurity, privacy, and data-protection measures for AI systems to maintain proper security of AI data.
  • Financial Oversight: Monitor capital allocation for AI investments and assess financial impacts.
  • Ethics and Compliance: Oversee AI ethics frameworks and regulatory compliance programs to ensure AI use is compliant with regulations and the corporate AI acceptable-use policy.

Nominating & Governance Committee
  • Skills Assessment: Maintain and update board AI competence through skills-matrix evaluation, and verify directors’ AI expertise and credentials.
  • Director Education: Identify opportunities or events for board and director education aligned to AI’s impact on corporate strategy, and provide ongoing AI education and training programs for board members.
  • Charter Updates: Incorporate AI oversight responsibilities into all committee charters.
  • CEO Succession Planning: Evaluate AI leadership capabilities in succession planning processes.
  • Board Succession: Integrate AI expertise requirements into board succession planning.
  • Director and Board Evaluations: Conduct AI competence assessments as part of regularly performed director evaluations. Board evaluations may also identify the need to create a dedicated AI or technology committee to support AI oversight.

 

Deployment Scenario: Board support of workforce transformation and change management 

The best AI use cases and data pipelines will fail in deployment if workforces do not possess adequate AI competency or are not correctly trained, supported, and incentivized. Boards can support the deployment of AI within their organizations by exploring how management is aligning necessary organizational structures, processes, and culture. Failure to offer support and address the following concerns can slow adoption and erode employee trust. 

  • Training: Employees must be trained on how to use AI tools effectively and securely–e.g., maximizing their output with effective prompt engineering while protecting IP. Targeted upskilling equips employees for specific AI-driven roles and vendor models. Adequate training sets a baseline for AI use and can strengthen company culture around AI, but training may be siloed, inaccessible, or lack specificity. A strong training program becomes a two-way education, as the organization learns from how employees use a tool in deployment. 
  • Support: Maintaining employee trust in AI requires addressing concerns in critical areas such as these: 
    • Job impact, including fear of role dilution or job loss 
    • IP (first- or third-party) appearing in output 
    • IP being disclosed/leaked to third parties via prompts 
    • Inaccurate output contaminating workflows, processes, and products 
    • Additional work created in testing, troubleshooting, and revising outputs
  • Incentives: Workforce incentives must be appropriately aligned to strategic uses of AI. KPIs and rewards should all signal in the same direction, with management and the board championing a growth mindset where AI usage is linked to career progression and messaged as a learning and employee value-enhancing opportunity. Successful workforce transformation means not only using AI tools to reduce costs, but also building new AI-enabled capability. This extends to the talent pipeline, as AI may reduce entry-level white-collar jobs and thereby create future talent and leadership shortages. 

 

Technology Oversight: What Boards Should Look For

Green Flags:

  • Significant investment in employee support, training, and trust accompanies all investments in technology.
    • Employee trust metrics (e.g., during annual employee engagement evaluation) improve as AI deployment increases.
    • AI training programs consistently achieve high employee satisfaction and skill retention scores.
    • Training emphasizes enhancing skills that elevate employee value and personal growth.
  • There is an approved technology adoption and usage policy that outlines how employees can use AI technologies in their work.
  • Incentive structures at all levels of the organization promote the use of AI in service of the organization's strategic goals and in alignment with its values.
    • AI use and success are linked to career progression for employees.
  • AI's impacts on the workforce are being planned for across the organization, with transparency and strategic alignment.
    • There is clear, delegated accountability for how AI strategy is integrated into workforce planning, and it is not left to the tech team or HR alone.
    • There is regular, transparent reporting on AI's impact on the organization's talent pipeline and leadership development, especially for entry-level and historically underrepresented roles.

Red Flags:

  • AI training is treated as a onetime event rather than an ongoing development investment.
  • Outdated incentive structures create barriers to AI adoption.
  • AI workforce planning is delegated solely to IT or HR without cross-functional leadership.
  • AI initiatives are presented primarily as headcount reduction or cost-cutting measures.
  • Employee survey results show declining trust or engagement related to AI implementation.
  • Lack of harmonization in communications both internally to employees and externally about AI impact on employees.

Key Performance Indicators:

  • Employee engagement scores (add AI-relevant questions)
  • AI training and support budget

 

Case Study: Driving AI-Enabled Productivity

Conclusion

AI is evolving at a rapid pace and is already reshaping how businesses operate and compete. Boards that wait on the sidelines risk seeing their organizations left behind. The urgency is real, but so is the opportunity for directors who take proactive steps now.

This Director Essentials report equips directors with practical steps to address AI oversight across four critical areas—AI strategy, capital allocation, AI risks, and technology competency—all of which tie directly to core fiduciary responsibilities. The practices outlined provide boards with concrete actions they can implement now—from establishing regular AI briefings to integrating oversight responsibilities into committee charters.

Directors who embrace the proposed governance practices today will be better positioned to guide their organizations through AI’s transformative impact, turning today’s uncertainty into tomorrow’s competitive advantage. However, with AI’s tremendous potential comes significant responsibility. Effective oversight is not just about capturing opportunity, but also about protecting stakeholder trust and ensuring long-term value creation while navigating unprecedented risks that are already reshaping entire industries.

 

Contributors

NACD recognizes the following individuals for their valuable contributions to this report.

Rima Qureshi; Board Director: Mastercard, British Telecom, Loblaw Companies Ltd.

Samantha Kappagoda; Board Member, Credit Suisse Funds and Member of the Business Board of the Governing Council at The University of Toronto; Chief Data Scientist, Numerati Partners; Visiting Scholar, Courant Institute of Mathematical Sciences NYU

 

About the Authors

Dylan Sandlin is the Program Manager for Digital and Cybersecurity Governance content at NACD.

Dr. Helmuth Ludwig, NACD.DC, is Professor of Practice for Strategy and Entrepreneurship and the Cox Executive Director of the Hart Institute for Technology Innovation and Entrepreneurship at the Edwin L. Cox School of Business at Southern Methodist University; Board Member at Hitachi Ltd., Tokyo, and Chair of Humanetics Group, Farmington Hills; and Senior Advisor at Bridgepoint LLC.

Dr. Benjamin van Giffen is Associate Professor of Information Systems & Digital Innovation at the University of Liechtenstein, specializing in board-level AI governance. He is a trusted advisor to boards, senior executives, and European regulatory agencies on responsible AI strategies and oversight.

Data & Trust Alliance (D&TA) is a CEO-led, nonprofit consortium that brings together industry leaders to accelerate the deployment of AI—by working on standardized practices that deliver business value and trust.

D&TA’s practitioner-built tools include Algorithmic Bias Safeguards for workforce decisions, Responsible Data & AI Diligence for M&A, cross-industry Data Provenance Standards, and an AI Vendor Assessment Framework (launching in 2025) to help enterprises evaluate third-party AI products. Learn more at dataandtrustalliance.org.

Introduction

The AI landscape has rapidly evolved, presenting businesses and their boards with a combination of urgency and uncertainty. Firms are unclear about whether, how, and when AI will deliver return on investment (ROI), and it is unclear whether AI chiefly offers them opportunity or risk.

AI introduces competitive threats, business model disruption opportunities, capital intensive investment requirements, novel risks, and the need to develop competence in areas that impact the board’s oversight responsibilities. NACD survey data show that more than 62 percent of director respondents are setting aside agenda time to discuss AI. (See AI oversight activities performed by boards.) But with so many unanswered questions–from where to allocate capital to new cybersecurity threats to potential workforce impacts–boards are left asking how they can ensure this time and additional focus is effectively structured and used appropriately. How can they guide the company’s engagement with AI in a way that furthers the company’s strategy, produces real value, and does not introduce unmitigated risk?  

AI oversight activities performed by boards (Respondents could select all that apply.) 

Q: Which of the following activities relating to AI oversight has your board performed? 
2025 NACD Public Company Board Practices and Oversight Survey, n= 211 

This Directors Essentials report outlines a framework for boards to guide their AI oversight and includes practices the board can implement and deployment scenarios boards are likely to face. While specific practices and answers to questions will differ among organizations, this guide describes steps to help boards create an appropriate agenda, obtain the right expertise, and focus board attention on the critical topics of AI governance.

AI Governance: A 4-Pillar Framework for Oversight

The following sections apply and extend a research-based framework that aligns the board’s core fiduciary responsibilities and that can help shape the board’s oversight and attention. (See 4-Pillar Framework for AI Oversight.) This holistic framework avoids gaps or overlaps in the board’s AI governance, while ensuring the flexibility to perform specific oversight tasks based on the company’s AI strategy and long-term objectives. It is also a helpful reference for many boards that are beginning their AI oversight journey.

A board’s initial conversations around AI are typically grounded in the technology’s impact on the organization’s strategy and its potential to provide additional value, cost savings, or competitive advantage.

To support the execution of an AI strategy, the board will need to approve and scrutinize proposed budgets and capital allocation. As companies undertake new AI projects and incorporate new systems, new risks will be introduced, requiring the board’s oversight to protect and enable the company’s goals and strategy. The board will likewise need to adapt their own skill sets and processes to ensure that they, along with the management team and workforce, have the right expertise, education, structures, and practices. This will allow the board to be a strategic asset, supporting management in responsibly deploying and scaling AI in a way that generates value over the long term. 

4-Pillar Framework for AI Oversight 

AI Strategy 

Many directors note the potential for “AI disruption” of their company’s current strategy and long-term viability. Yet NACD survey data show that only 23 percent of boards have assessed how this disruption might happen or where it may come from. (See AI oversight activities performed by boards.) Achieving strategic alignment on AI, receiving regular strategic briefings, and incorporating AI into the board’s strategy retreat are practices boards can implement to help improve their strategic oversight of this technology.

Develop shared understanding among the full board of AI’s strategic relevance and importance.

As a first step, boards should align on AI’s strategic impact and opportunities for the company. Beginning this conversation can be difficult, and many directors likely find themselves in a “quiet middle” position between those with a limited desire for engagement on AI and directors with experience or active engagement with the technology. The following questions can help guide this conversation toward a shared understanding:

 

Question: Outcome:
Why is AI strategically relevant to our company? 
  • Shared understanding about AI’s impact and strategic importance 
  • Agreement on need for board time and attention   
What are the roles of the board of directors in the context of AI?
  • Discuss and establish clear board and committee roles and responsibilities.
  • Identify board focus areas on strategy, capital allocation, risk, and board and committee roles and operations.
How can the board perform those roles?  
  • Alignment of directors' skills, expertise, and education to their oversight duties 
  • Creation of structures and processes to oversee AI

 

Ultimately, the discussion should correct misconceptions that AI is merely a technology issue, helping directors see it as a tool for creating business value for the company.  

The following are examples of practices that were successful in driving greater board engagement and more strategic discussions in subsequent board meetings: 

 

Practice: Example:
Digital Audit  A technology company board questioned their audit firm about how they are balancing productivity gains and cost savings in financial audits through AI integration while ensuring data protection. This included explicitly addressing concerns about confidential information security and cross-client data usage. This situation highlighted a broader strategic challenge: managing supplier relationships to prevent emerging AI-related corporate risks. 
Legal Risk Scenario Walk-Throughs  An insurance company board received presentations from in-house counsel and external AI legal experts about AI-related lawsuits, focusing on data protection, liability issues, data leakage, and AI bias. The case studies of legal claims effectively heightened board awareness and engagement with AI governance risks. 
Board Training with AI Scenarios  As part of mandatory director training, one company used immersive AI scenarios led by independent experts to create transformative educational moments for non-technical directors. One successful training simulated an AI discrimination lawsuit crisis, that required directors to navigate potential risks and determine oversight responses. The training included case studies of company responses. This practical method revealed process gaps and identified how and where the board could strengthen oversight. 
AI Application Presentation & Demos   A board increased AI engagement through industry-specific presentations of three AI applications: their company's, a competitor's, and an adjacent industry's. Each demonstrated inputs required, output results, and quantifiable benefits. This practical approach created urgency when directors saw competitors already deploying solutions. Outcomes included planning full-day AI workshops and incorporating AI expert briefings into future board meetings. 

 

Establish a cadence for AI discussions and updates.

The board should regularly allocate agenda time for updates on the company’s AI initiatives and related issues: for example, evolving regulatory and compliance requirements, updates on emerging competitive threats, or deep dives with outside experts on relevant AI topics. These discussions should illuminate AI’s potential impact on the company’s strategy and uses of AI to avoid falling behind competition, as well as opportunities for the technology to provide a competitive advantage. 

Because AI technologies develop and advance quickly–both through new releases and through the emergent properties of learning systems–continuous oversight is required. One outcome of AI discussions should be for the board to identify the topics for future AI board briefings. The board can work with the CEO to establish a regular briefings' cadence, but given AI’s pace of change, maintaining proper awareness can be difficult with a traditional, quarterly-meeting cadence. Thus, it may be beneficial for boards to request more frequent information sharing and reporting from management about progress and developments with AI initiatives, at a monthly, or in some cases even bi-weekly cadence if that is deemed necessary.  

Ultimately, reporting on AI must avoid a “circular governance” scenario where the oversight terms are dictated by oversight subjects. As with any area of corporate governance, the board must exercise and maintain independent judgment and oversight. 

Alongside these more structured briefings and reports, the board should conduct future- and innovation-focused conversations about AI and its impact on long-term strategy. The goal is to create the space necessary to envision what a potential AI future could be, to question the viability of this imagined future, and then, ultimately, decide if the idea could and should be incorporated into the company’s long-term strategic plans. These conversations can be more informal (e.g., at a board pre-meeting dinner or working lunch) or more structured (e.g., during a strategic off-site retreat), and they can be enriched by inviting outside experts or guest speakers. 

Incorporate AI as a topic in the board’s annual strategy retreat. 

The board’s annual strategy retreat offers the opportunity for directors to receive a fulsome view of the AI landscape and its impact on the company’s strategy. The agenda can include briefings about the competitive AI landscape, review of assessments and updates on the internal use of AI from management, and technical demonstrations, deep dive discussions, or strategy workshops. 

Deployment Scenario: Evaluating AI pilots and use cases

A likely topic of boardroom conversation will be the evaluation of the AI pilots and use cases being pursued by the organization. Several AI pilots have launched across industries and functions, but far fewer have made it into production, with many organizations stuck in “pilot hell.” NACD survey data reveal that 32 percent of directors identify uncertainty around AI’s ROI as the number one roadblock to AI adoption within their organization. (see Biggest barriers to AI adoption/implementation/deployment.)  

Not all use cases for the organization are created equal, and they should be prioritized according to strategic relevance, organizational readiness, and risk tolerance. Quick wins can build momentum and trust and attract more investment for further innovation.

Biggest barriers to AI adoption/implementation/deployment

Q: Which of the following do you believe is the biggest barrier to AI adoption/implementation/deployment at your organization? 
2025 NACD  Board Practices and Oversight Survey, n = 237; +/- 100 due to rounding.

 

AI Strategy: What boards should look for

Green Flags:

  • Clear alignment of AI use cases with core business strategy and competitive differentiators 
    • Trend showing improved efficiency or cost reductions in key cost centers 
    • Enablement ​of ​new, differentiated offerings 
    • Significant improvement of existing core offerings 
  • Strong pipeline from pilots to production 
    • Pilots have strong feedback loops, capturing business results (revenue impact, cost savings, customer satisfaction, process efficiency gains) as well as technological performance (model accuracy, system uptime, processing speed, user adoption rates) 
    • Realistic threshold for pilot-to-production conversion rate (e.g., 15–25%), with clear scaling criteria 
  • Executive team can identify top barriers to AI scaling with specific mitigation plans 
    • Regular pilot portfolio reviews are being conducted, with explicit go/no-go decisions 
    • Understanding of the impediments to scaling is developed together with domain specialists (e.g., factory or business-unit leaders) and AI technology leader 
  • Business processes are being fundamentally redesigned (not just automated) using AI to ensure maximum value creation. Examples could include: 
    • Automation: AI chatbot answers common customer service questions instead of human agents. 
    • Process Redesign: AI analyzes customer sentiment, purchase history, and interaction patterns to proactively identify at-risk customers and automatically triggers individualized retention campaigns, while routing high-value customers to specialized teams before they even need to contact support.

Red Flags: 

  • AI initiatives are described in technical terms, without clear business outcomes. 
  • Pilots' scale rate is below threshold, with no understanding of why initiatives fail. 
  • Multiple pilots are running without predetermined success thresholds. 
  • Process improvements remain within departmental silos. 
  • Leadership is reluctant to terminate underperforming pilots. 

 

Key Performance Indicators: 

  • Pilot-to-production conversion rate 
  • Revenue/cost savings attributed to AI initiatives 

Capital Allocation

Capital allocation is an important area of strategic oversight for boards, and AI investments should be evaluated with the same rigor as other capital investments the organization undertakes. More than one-in-five boards identify proper allocation of capital resources as a challenge they faced by their organizations in adopting AI technologies. In fact, barely more than 11 percent of boards have approved an annual budget for AI projects. (See AI oversight activities performed by boards.) Because AI has such broad implications for business strategy and operations, its use by the firm needs to be overseen at a whole-enterprise level, rather than department by department or use case by use case. Significant investments in tailored platforms, tools, and processes are likely to be required in order to realize AI’s potential for driving both efficiency and innovation. 

AI expenditures can include both large sums on foundational AI capabilities like specific data platforms, technology infrastructure, and (increasingly) talent, as well as smaller expenses tied to experiments and AI pilots that may not receive board attention. For companies in a position to acquire AI capabilities, deploying capital resources in a partnership, merger, or acquisition opportunity requires similar focus and support at the board level. It is important for the board to both scrutinize large capital expenditures to ensure strategic alignment while also protecting early investments in potentially promising AI use cases in uncertain or tight economic environments.  

Include AI expenses in annual budgeting and approval.

Ensuring that AI investments are included as part of the annual budget approval process allows the board to evaluate these current investments and to determine if more resources may be required to meet the organization’s strategic objectives. Boards can improve the effectiveness of their AI capital allocation by providing support for AI pilots and experiments and by securing investments in the AI capabilities necessary to scale it from initial experimentation.  

Organizations may be running several concurrent AI pilots that individually do not represent significant AI expenses. To protect AI investments across multiple horizons, the board should encourage and support AI experimentation within the company and request updates on promising AI use cases or pilots. This can help increase the visibility of AI expenses and pilots and guard these investments from premature funding cuts.  

Similarly, when organizations graduate beyond initial experiments, it is important for the board to secure investments in the capabilities necessary to scale AI deployment. This may require investments in information technology (IT) infrastructure, data platforms, talent, and workforce retraining. These investments only make sense if they deliver competitive advantage. By regularly asking questions in dialogue with management and measuring AI investments against the intended capabilities, the board can support management in unearthing opportunities for strategic deployment of capital resources aligned with the organization’s business goals.  

Regularly review the viability and opportunities for M&A and partnerships to acquire AI capabilities. 

M&A and partnerships are viable alternatives to organic, in-house development for acquiring specific AI capabilities, services, products, or talent. There are important considerations for boards to raise with management, such as partnership lock-in and unclear, potentially inflated valuations of acquisition targets. Boards can also work with management to establish clear partnership or M&A evaluation criteria to help assess opportunities.  

Deployment Scenario: Strengthening the organization’s deployment, governance, and stewardship of data 

AI’s business value depends on the quality of the data that feeds it. It also requires significant investments in foundational data and AI capabilities, platforms, and tools to develop trustworthy AI with outputs that are accurate, reliable, and safe. This is only possible with strong, strategic, and dynamic data governance and stewardship. Without this, AI deployments can stall due to data access or quality issues, siloed insights, or inconsistencies across the organization. Good management of data is also key to mitigating risks in privacy, data security, appropriate use, IP infringement, and consumer protection. 

 

Capital Allocation: What Boards Should Look For

Green Flags:

  • Investments are made in architecture and infrastructure that will scale with the organization’s AI ambition (e.g., enabling metadata, lineage tracking, and interoperability).
    • Regular governance framework updates are aligned with AI system expansion. 
    • Ongoing evaluation of the organization’s governance ensures it is evolving dynamically to manage both risk and opportunity.
  • The executive team provides a road map demonstrating competitive advantage from newly purchased and implemented AI and data systems and platforms.
    • ROI is demonstrated from projects based on data infrastructure, where possible. 
  • Clear data sharing protocols exist among departments, with measurable collaboration outcomes. 
  • There is a cross-functional data governance committee with representation from all business units. 
  • Data architecture supports self-service analytics across departments. 
  • Programs and policies are in place to understand the provenance of data assets: where the dataset comes from, how it was created, and whether it may legally be used. 

Red Flags:

  • Data governance is positioned as a cost center focused only on compliance and risk mitigation. 
  • Departments create isolated data lakes or purchase separate analytics tools. 
  • Data governance decisions are made without input from business stakeholders. 
  • Legacy data infrastructure limits AI initiatives or requires expensive workarounds. 
  • Governance policies are static despite rapid AI system deployment.
  • Data access requests create bottlenecks that slow business decision-making. 

Key Performance Indicators: 

  • Number of projects leveraging new data, software, and AI systems 
  • Cross-departmental data projects (number of active projects using shared data assets) 
  • Data access request fulfillment time 
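
The fulfillment-time KPI, for example, can be computed directly from a data access request log. Below is a minimal sketch, assuming a hypothetical log in which each request records when it was opened and when it was fulfilled.

    # Minimal sketch (hypothetical log): median data access request fulfillment time.
    from datetime import date
    from statistics import median

    requests = [
        {"opened": date(2025, 6, 2), "fulfilled": date(2025, 6, 4)},
        {"opened": date(2025, 6, 3), "fulfilled": date(2025, 6, 12)},
        {"opened": date(2025, 6, 9), "fulfilled": date(2025, 6, 10)},
    ]

    days = [(r["fulfilled"] - r["opened"]).days for r in requests]
    print(f"Median fulfillment time: {median(days)} days")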
 
Case Study: Achieving Value with Data 
AI Risks 

Risk oversight is not only a board’s core fiduciary responsibility; it is also central to the responsible use of AI systems and to maintaining trust among key stakeholders. As AI accounts for more of a company’s value creation or destruction potential, AI risk factors will become increasingly salient for shareholders and other key stakeholders. By maintaining proper oversight of AI risks, boards can address these risks and ensure they are communicated effectively to key constituencies, while also securing the organization’s long-term value creation. This oversight includes integrating AI into the organization’s enterprise risk management (ERM) program and incorporating regular briefings from AI risk experts into the board’s agenda. 

Integrate AI into Enterprise Risk Management. 

Many boards currently oversee ERM in their audit or risk committees, and they can request that management teams include an AI category, or at least incorporate AI-specific risks, in their ERM framework. Delegating authority to these committees can ensure the attention of the board’s risk-oversight experts.  

However, it is important that the full board receives updates and regularly monitors risks associated with AI, such as through an annual, full ERM report from management. Further, management teams can and should leverage existing risk assessment frameworks to evaluate AI risk in economic terms, better enabling them to identify the most effective risk-mitigation actions and controls. 
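
One common way to express an AI risk in economic terms is an expected-loss estimate (annual likelihood multiplied by estimated impact). The sketch below uses hypothetical figures and is an illustration only, not a substitute for the organization’s own ERM methodology.

    # Minimal sketch (hypothetical figures): an AI risk expressed in economic terms.
    risk = {
        "name": "chatbot discloses customer PII",
        "annual_likelihood": 0.10,  # estimated probability of occurrence per year
        "impact_usd": 5_000_000,    # estimated loss if the event occurs
    }

    expected_annual_loss = risk["annual_likelihood"] * risk["impact_usd"]
    print(f"{risk['name']}: expected annual loss ${expected_annual_loss:,.0f}")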

As part of this exercise, requesting a map from management of the AI tools in use, their data access, and their governance can create the transparency board members need to focus on the highest-impact risks. Only about one in five boards (21 percent) state that they have performed an audit to determine where AI is currently in use within their companies, highlighting room to improve boards’ visibility into their organizations’ use of and engagement with AI. (See AI oversight activities performed by boards.) 
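
Such a map need not be elaborate to be useful. The minimal sketch below, with hypothetical tools and fields, shows the kind of inventory management might maintain and how unowned or unreviewed entries can be surfaced for priority attention.

    # Minimal sketch (hypothetical inventory): AI tools, their data access, and governance.
    ai_inventory = [
        {"tool": "vendor-chatbot", "data_access": ["customer PII"],
         "owner": "Support", "risk_reviewed": True},
        {"tool": "code-assistant", "data_access": ["source code"],
         "owner": "Engineering", "risk_reviewed": True},
        {"tool": "meeting-summarizer", "data_access": ["internal communications"],
         "owner": None, "risk_reviewed": False},  # no designated owner or review
    ]

    # Surface entries that warrant priority attention: unowned or unreviewed tools.
    for entry in ai_inventory:
        if entry["owner"] is None or not entry["risk_reviewed"]:
            print(f"Priority review: {entry['tool']} (data access: {entry['data_access']})")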

Receive regular briefings from AI risk experts.

The board should receive briefings from both internal and external AI risk experts or incorporate this as a regular item in their recurring AI briefings. Care should be taken when inviting experts and setting expert-briefing agendas to make sure they can provide valuable information to the board as a governing, non-technical body. 

Deployment Scenario: Building with third-party AI tools 

The most common current AI use cases (e.g., customer support, software creation, and marketing) are primarily powered by third-party AI products. The rapid proliferation of vendors has left organizations struggling to efficiently gauge vendors’ ability to mitigate risk and add value, and to separate valuable tools from marketing hype. Nearly half (47 percent) of directors in a 2025 NACD survey indicated that selecting the AI tools that will deliver the most return on investment is a current challenge with respect to AI adoption. 

Third-party AI introduces new risks not seen in traditional software, which may not be appropriately accounted for by existing procurement systems. New risks include these:

  • Unpredictable Performance Over Time: Unlike traditional software with deterministic outputs, AI models can produce inconsistent or unexpected results even with identical inputs. Problems may only emerge in deployment at scale or with specific data patterns that were not present in demos or testing. Problems may also emerge or worsen over time, as real-world data diverges from training data. 
  • Opacity and Explainability Gap: Many AI systems operate as "black boxes" where decision-making processes are opaque. Standard procurement may accept vendor claims about accuracy without ensuring the organization can audit, explain, or debug AI decisions—critical for regulatory compliance and accountability. 
  • Training Data Contamination: AI models inherit biases, errors, and potentially sensitive information from their training data. Traditional procurement rarely examines the provenance, quality, or composition of training datasets. This creates risks around discriminatory outputs, privacy violations, or models that perform poorly on underrepresented groups—issues that won't surface in typical vendor presentations. 
  • Intellectual Property Ambiguity: AI training may inadvertently incorporate copyrighted material, creating unclear IP ownership. Traditional procurement IP clauses don't address whether AI-generated content infringes on training data rights or who owns outputs created by the AI system. 

With the intense urgency to advance AI adoption, procurement processes must move fast while appropriately triaging and mitigating higher-risk use cases. AI tools that are quietly added by individual employees to existing platforms can create a “shadow AI” layer that bypasses traditional procurement altogether. Frameworks that leverage internal and vendor assessments alongside guidance to evaluate vendors can help companies adapt their procurement programs for AI tool procurement. 
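
As a purely illustrative example of such triage, a procurement process might route tool requests using a handful of risk criteria; the criteria, field names, and routing rule below are hypothetical, not a recommended standard.

    # Minimal sketch (hypothetical criteria): routing AI tool requests by risk.
    def triage(tool: dict) -> str:
        """Return a procurement track based on simple, illustrative criteria."""
        high_risk = (
            tool["handles_personal_data"]
            or tool["makes_consequential_decisions"]  # e.g., hiring, credit, pricing
            or not tool["vendor_assessment_complete"]
        )
        return "full review" if high_risk else "fast track"

    request = {
        "name": "marketing-copy-assistant",
        "handles_personal_data": False,
        "makes_consequential_decisions": False,
        "vendor_assessment_complete": True,
    }
    print(triage(request))  # -> fast track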

As the ecosystem expands, organizations must find flexible and adaptive ways to integrate and govern third-party tools, products, and data. Long-term business strategy must guide investment decisions on which tools are embedded into infrastructure and governance models, even as these tools continuously improve and evolve. Good foundations must be laid to enable future scale without ballooning costs, unsustainable demand for computing power, or operational complexity, and to keep deployments aligned with the organization’s mission and values.

 

AI Risks: What Boards Should Look For

Green Flags:

  • Strong AI procurement process that triages appropriately by risk and aligns with the organization's values
  • Comprehensive AI inventory maintained, with data access mapping and governance controls, including AI features activated on existing platforms
  • Holistic AI integration architecture that connects systems beyond individual app deployments
  • Periodic reviews of vendor road map alignment and vendor dependency against long-term organizational strategy

Red Flags:

  • AI tools procured without risk assessment or values alignment review
  • Lack of focus on reducing "shadow AI" tools and applications within the organization
  • Unknown or shadow AI deployments discovered across the organization
  • AI features activated on platforms without governance oversight or data protection review
  • Fragmented AI implementations creating data silos and integration challenges
  • Vendor lock-in situations with limited ability to migrate or switch providers

Key Performance Indicators:

  • Number of third-party AI tools in use
  • Unsanctioned AI tools identified during audits
  • Total third-party AI spend
  • AI spend concentration (dollar amount with top-three vendors as a percentage of total AI budget) 
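
The concentration KPI in particular is simple arithmetic: spend with the top-three vendors divided by total AI spend. A minimal sketch with hypothetical vendor figures:

    # Minimal sketch (hypothetical spend): AI spend concentration, top-three vendors.
    vendor_spend = {"VendorA": 2_400_000, "VendorB": 1_100_000,
                    "VendorC": 600_000, "VendorD": 250_000, "VendorE": 150_000}

    total = sum(vendor_spend.values())
    top_three = sum(sorted(vendor_spend.values(), reverse=True)[:3])
    print(f"Top-three vendor concentration: {top_three / total:.0%} of AI spend")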

 

Technology Competency

Effective technology governance, including AI and data oversight, requires full-board engagement, with all directors maintaining at least a foundational knowledge of AI. Boards cannot simply rely on the AI expertise and competency of a single director or committee. Similarly, the whole board’s AI education and fluency must be aligned to the organization’s needs, and the board must also ensure the CEO, management team, and workforce have the technology competency to execute the company’s AI agenda. Without this focus on board, management, and workforce competency, AI initiatives are likely to stall and fail to deliver anticipated value.  

Maintain board-level AI and technology proficiency aligned to corporate strategy and governance needs. 

The board, often through the nominating and governance committee, must ensure it maintains the necessary AI proficiency and governance structures and processes to provide effective oversight. 

Board Structure: As part of the annual committee evaluation process, the nominating and governance committee can oversee the incorporation of AI responsibilities across the board’s and committees’ charters. This exercise may identify gaps and prompt discussion about additional changes to the board’s structures, such as the establishment of a dedicated technology/strategy committee to support AI and technology oversight.  

Processes: By integrating AI questions into the full board, committee, and director evaluations, the nominating and governance committee can uncover gaps that warrant further discussion about additional changes to the board’s practices (e.g., incorporating AI-focused sessions during the annual strategy offsite meeting/workshop). 

Education & Expertise: The nominating and governance committee should identify director AI-education opportunities and incorporate AI proficiency into the board’s skills matrix. Further, the committee should incorporate a mechanism to verify and validate a director’s AI credentials and experience. Findings from these practices should improve the board’s AI competence and inform the board’s succession planning and director recruitment. Recruiting directors with more AI or emerging technology expertise may be necessary to maintain alignment between the board’s AI competence and the company’s strategic goals. 
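
As a simple illustration of the skills-matrix practice, AI proficiency can be scored per director and gaps flagged for succession planning; the directors, scale, and threshold below are hypothetical.

    # Minimal sketch (hypothetical matrix): flagging AI-proficiency gaps on the board.
    skills_matrix = {
        "Director A": {"finance": 3, "ai": 1},  # proficiency scored 0 (none) to 3 (advanced)
        "Director B": {"finance": 2, "ai": 0},
        "Director C": {"finance": 1, "ai": 2},
    }

    # Flag an education or recruitment gap if no director is highly proficient in AI.
    if max(d["ai"] for d in skills_matrix.values()) < 3:
        print("Gap: no director with advanced AI proficiency; consider education or recruitment.")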

Establish authority and responsibility for AI within the organization. 

The board should ensure that the organization has clearly designated leaders for strategic AI implementation who appropriately balance opportunity and risk, with clear leadership accountability for AI established within the management team. The board should also ensure a consistent and coherent decision-making model, both across functions and at the business unit level, with processes for escalation. Accountability for AI initiatives can remain with the functional, business-unit, or cost-center leader, supported by a centralized AI technology and governance leader who maintains process and governance responsibility for the technology across the enterprise.

Ensure management and workforce readiness for AI transformation. 

AI increases the risk of a broad-based erosion of demand for labor in the economy, including the potential weakening of future talent pipelines as entry-level jobs are automated. Boards must therefore consider how the organization’s talent and human-capital strategies are accommodating this transition. Often through the compensation and human capital committee, the board can work with management to ensure that pipelines for necessary talent and skills (e.g., strong development, workforce education, and retraining programs) are in place as AI is deployed more widely throughout the company. The committee can likewise evaluate management’s plan to restructure incentives in a way that links AI strategy with career progression and aligns with the organization’s pay philosophy and goals. To combat fear and mistrust among employees about potential job displacement, the board and committee should also ensure that communications about AI are harmonized internally across all levels of the company and externally to the marketplace.

Incorporate AI oversight roles and responsibilities into board committee charters. 

Roughly one-quarter (25 percent) of boards have incorporated AI oversight responsibilities into board committee charters. (See AI oversight activities performed by boards.) Effective AI governance will require board committees to expand their responsibilities to incorporate AI-governance practices. By performing this exercise, boards can limit gaps and potential overlaps in their AI oversight and more efficiently utilize their committees’ focus, expertise, and agenda time. Below is guidance on how each standing committee can contribute to and support the board’s AI oversight.

 

Committee-Level AI Practices 

Compensation & Human Capital Committee
  • Compensation Philosophy Review: Assess how AI impacts current incentive structures, targets, and compensation design. 
  • Metrics and Targets: Establish AI-related performance metrics and KPIs, and evaluate existing targets in light of AI goals. 
  • AI Talent Benchmarking: Request specialized compensation data from consultants for AI roles and determine potential pay equity or financial impacts.
  • Workforce Strategy: Assess impacts of AI on human capital strategy and make adjustments as necessary, such as increasing pipelines for more digital talent, ensuring strong talent-management pipelines, and investing in workforce education and retraining.  
Audit Committee
  • Disclosure Materials: Ensure accurate, material AI risk disclosures while avoiding exaggeration of capabilities, or “AI washing.” 
  • ERM Oversight: Integrate AI risks into the overall risk-management framework. 
  • Data Protection and Controls: Review cybersecurity, privacy, and data-protection measures for AI systems to maintain proper security of AI data. 
  • Financial Oversight: Monitor capital allocation for AI investments and assess financial impacts. 
  • Ethics and Compliance: Oversee AI ethics frameworks and regulatory compliance programs to ensure AI use complies with regulations and the corporate AI acceptable-use policy. 
Nominating & Governance Committee
  • Skills Assessment: Maintain and update board AI competence through skills-matrix evaluation, and verify directors’ AI expertise and credentials.  
  • Director Education: Identify opportunities or events for board and director education aligned to AI’s impact on corporate strategy, and provide ongoing AI education and training programs for board members. 
  • Charter Updates: Incorporate AI oversight responsibilities into all committee charters. 
  • CEO Succession Planning: Evaluate AI leadership capabilities in succession planning processes. 
  • Board Succession: Integrate AI expertise requirements into board succession planning.
  • Director and Board Evaluations: Conduct AI competence assessments as part of regularly performed director evaluations. Board evaluations may also identify the need to create a dedicated AI or technology committee to support AI oversight.

 

Deployment Scenario: Board support of workforce transformation and change management 

The best AI use cases and data pipelines will fail in deployment if workforces do not possess adequate AI competency or are not correctly trained, supported, and incentivized. Boards can support the deployment of AI within their organizations by exploring how management is aligning necessary organizational structures, processes, and culture. Failure to offer support and address the following concerns can slow adoption and erode employee trust.

  • Training: Employees must be trained on how to use AI tools effectively and securely–e.g., maximizing their output with effective prompt engineering while protecting IP. Targeted upskilling equips employees for specific AI-driven roles and vendor models. Adequate training sets a baseline for AI use and can strengthen company culture around AI, but training may be siloed, inaccessible, or lack specificity. A strong training program becomes a two-way education, as the organization learns from how employees use a tool in deployment. 
  • Support: Maintaining employee trust in AI requires addressing concerns in critical areas such as these: 
    • Job impact, including fear of role dilution or job loss 
    • IP (first- or third-party) appearing in output 
    • IP being disclosed/leaked to third parties via prompts 
    • Inaccurate output contaminating workflows, processes, and products 
    • Additional work created in testing, troubleshooting, and revising outputs 
  • Incentives: Workforce incentives must be appropriately aligned to strategic uses of AI. KPIs and rewards should all signal the same direction, with management and the board championing a growth mindset in which AI usage is linked to career progression and messaged as a learning and employee value-enhancing opportunity. Successful workforce transformation means not only using AI tools to reduce costs, but also building new AI-enabled capabilities. This extends to the talent pipeline: automation of entry-level white-collar jobs may create future talent and leadership shortages. 

 

Technology Oversight: What Boards Should Look For

Green Flags:

  • Significant investments in employee support, training, and trust accompany all investments in technology.
    • Employee trust metrics (e.g., during annual employee engagement evaluation) improve as AI deployment increases.
    • AI training programs consistently achieve high employee satisfaction and skill retention scores.
    • Training emphasizes enhancing skills that elevate employee value and personal growth.
  • There is an approved technology adoption and usage policy that outlines how employees can use AI technologies in their work.
  • Incentive structures at all levels of the organization promote the use of AI in service of the organization's strategic goals and in alignment with its values.
    • AI use and success is linked to career progression for employees.
  • AI's impacts on the workforce are being planned for across the organization, with transparency and strategic alignment.
    • There is clear, delegated accountability for how AI strategy is integrated into workforce planning, and it is not left to the tech team or HR alone.
    • There is regular, transparent reporting on AI's impact on the organization's talent pipeline and leadership development, especially for entry-level and historically underrepresented roles.

Red Flags:

  • AI training is treated as a onetime event rather than an ongoing development investment.
  • Outdated incentive structures create barriers to AI adoption.
  • AI workforce planning is delegated solely to IT or HR without cross-functional leadership.
  • AI initiatives are presented primarily as headcount reduction or cost-cutting measures.
  • Employee survey results show declining trust or engagement related to AI implementation.
  • Lack of harmonization in internal and external communications about AI’s impact on employees.

Key Performance Indicators:

  • Employee engagement scores (with AI-relevant questions added)
  • AI training and support budget

 

Case Study: Driving AI-Enabled Productivity

Conclusion

AI is evolving at a rapid pace and is already reshaping how businesses operate and compete. Boards that wait on the sidelines risk seeing their organizations left behind. The urgency is real, but so is the opportunity for directors who take proactive steps now.

This Director Essentials report equips board directors with practical steps to address AI oversight across four critical areas (AI strategy, capital allocation, AI risks, and technology competency), all of which tie directly to core fiduciary responsibilities. The practices outlined provide boards with concrete actions they can implement now, from establishing regular AI briefings to integrating oversight responsibilities into committee charters.

Directors who embrace the proposed governance practices today will be better positioned to guide their organizations through AI’s transformative impact, turning today’s uncertainty into tomorrow’s competitive advantage. However, with AI’s tremendous potential comes significant responsibility. Effective oversight is not just about capturing opportunity; it is also about protecting stakeholder trust and ensuring long-term value creation while navigating unprecedented risks that are already reshaping entire industries.

 

Contributors

NACD recognizes the following individuals for their valuable contributions to this report.

Rima Qureshi; Board Director: Mastercard, British Telecom, Loblaw Companies Ltd.

Samantha Kappagoda; Board Member, Credit Suisse Funds and Member of the Business Board of the Governing Council at The University of Toronto; Chief Data Scientist, Numerati Partners; Visiting Scholar, Courant Institute of Mathematical Sciences, NYU

 

About the Authors

Dylan Sandlin is the Program Manager for Digital and Cybersecurity Governance content at NACD.

Dr. Helmuth Ludwig, NACD.DC, is Professor of Practice for Strategy and Entrepreneurship and the Cox Executive Director of the Hart Institute for Technology Innovation and Entrepreneurship at the Edwin L. Cox School of Business at Southern Methodist University; Board Member at Hitachi Ltd., Tokyo; Chair of Humanetics Group, Farmington Hills; and Senior Advisor at Bridgepoint LLC.

Dr. Benjamin van Giffen is Associate Professor of Information Systems & Digital Innovation at the University of Liechtenstein, specializing in board-level AI governance. He is a trusted advisor to boards, senior executives, and European regulatory agencies on responsible AI strategies and oversight.

Data & Trust Alliance (D&TA) is a CEO-led, nonprofit consortium that brings together industry leaders to accelerate the deployment of AI by working on standardized practices that deliver business value and trust.

D&TA’s practitioner-built tools include Algorithmic Bias Safeguards for workforce decisions, Responsible Data & AI Diligence for M&A, cross-industry Data Provenance Standards, and an AI Vendor Assessment Framework (launching in 2025) to help enterprises evaluate third-party AI products. Learn more at dataandtrustalliance.org.
