The Board’s AI Risk Moment: Who Is Accountable?
NACD Northern California
Contact Us
Lisa Spivey,
Executive Director
Kate Azima,
Director of Partnerships & Marketing
programs@northerncalifornia.nacdonline.org
Find a Chapter
About The Event
As AI moves from experimentation to enterprise infrastructure, board accountability is rising fast. This virtual session brought together legal, insurance, and governance perspectives to examine board liability, risks, and available frameworks such as AIUC-1 to ensure effective oversight of AI.
VIEW THE RECORDING
KEY TAKEAWAYS
AI Governance: Speed, Safety, and Strategic Trade-offs
- Effective AI governance must enable both speed and safety—like brakes and a seatbelt that allow a car to go faster. Over-protecting can be as risky as under-governing.
- Boards often focus on policies and tools, but success depends on a robust, fast-moving governance process with clearly defined ROI, rather than adopting AI simply because of hype.
- A common false comfort is believing that a comprehensive process equals good governance; overly slow or rigid processes may cause companies to miss high-value use cases that never reach the board.
- Directors should explicitly examine the speed-versus-protection trade-off: what risks are being mitigated, and what opportunities may be lost as a result.
Enforcement, Legal Exposure, and Oversight Expectations
- Current enforcement theories around AI include consumer deception, privacy, discrimination, safety, IP, and unfair practices, often driven at the state level.
- Risk exposure depends on how AI fails in the marketplace; while egregious conduct triggers scrutiny, boards are expected to actively probe for governance gaps before harm occurs.
- Litigation risk is emerging in two areas: AI-washing in public disclosures and director oversight obligations (e.g., Caremark duties).
- Regulators' first step in an AI-related incident is reconstructing the company's "story," including what governance was in place, what was known, and how oversight functioned. The stronger the board's governance and documentation processes, the better positioned the company is to respond.
AI-Washing and Disclosure Risk
- Overstating AI capabilities, maturity, or revenue impact can expose companies to securities fraud and disclosure risk, particularly where valuation or capital raising is involved.
- Regulatory focus on AI-washing is increasing, drawing parallels to earlier ESG-washing enforcement.
- Boards should ensure public statements about AI align with operational reality and internal controls.
Insurance, Risk Transfer, and Coverage Gaps
- The insurance market is fragmented: some carriers exclude AI risks entirely, while others underwrite them with highly specific requirements.
- AI exclusions may emerge at renewal across Cyber, D&O, General Liability, or tech-related policies, requiring careful board-level attention.
- Underwriters focused specifically on AI insurance now assess AI-specific factors such as model behavior, use case, data sources, metrics, and material changes, which go beyond traditional policy frameworks.
- Companies must assess whether AI is autonomous or human-in-the-loop, and whether employee training, validation, and guardrails are sufficient prior to customer exposure.
Technical Standards, Audits, and the AIUC-1 Model
- AIUC-1 was presented as a technically grounded standard for AI agents, comparable to a SOC 2–type report for AI, with added requirements for ongoing testing.
- The framework includes agreed security standards, third-party testing designed to surface vulnerabilities, and transparent, ongoing audit reporting.
- Because AI is probabilistic, risk cannot be eliminated—only mitigated and, in some cases, insured.
- AIUC-1 is positioned as complementary to high-level standards such as NIST and ISO, addressing technical gaps they do not cover.
Supply Chain and Third-Party AI Risk
- AI risk increasingly resides in vendors and partners.
- Boards should focus on material vendors with AI components and ensure appropriate contractual risk allocation, audit rights, and standards alignment.
- Certification of vendor AI agents is emerging as a mechanism for shared industry assurance.
Board Accountability and Governance Structure
- AI governance is a full-board responsibility, regardless of committee structure. While not all directors need deep technical expertise, the board must be capable of asking informed questions, with a small number of directors able to probe second- and third-order technical implications.
- Boards should anchor AI oversight on three core questions: what measurable ROI is expected, what customer or external harm could arise, and what real-world outcomes—positive or negative—are being produced for customers and the organization.
- Foundational enablers require immediate attention, particularly data governance and legacy technology debt, which materially limit both AI effectiveness and risk management.
- Governance structures should be formalized and updated: refresh charters, agendas, and oversight models to give technology governance coverage comparable to financial oversight, with clear committee roles (e.g., Nominating/Governance for ongoing education, Audit for data and control rigor, Compensation for workforce and skills impacts).
- AI oversight should move from episodic to embedded, beginning with a dedicated board deep dive and evolving into a standing agenda item supported by continuous education and external expertise.
- Reframing AI governance from compliance to competitive advantage enables boards to manage risk while positioning the company to capture value and win in the marketplace.
Questions for the Boardroom: Management Oversight on AI
- How are we adopting AI securely, and which governance or risk frameworks are we using to guide deployment and oversight? Why were these selected?
- How does this AI initiative tie directly to our business strategy, and what alternative solutions were considered? Do we truly need AI here, or are we at risk of AI-washing?
- What are our material vendor and business relationships that include AI components, and what safeguards are in place to monitor data use, access, and downstream risk across those partners?
- Have our compliance processes changed to reflect where AI agents are operating within the organization, and what protocols are in place to govern their use, even if they are not autonomous?
RESOURCES
Effective AI Oversight for Directors Certificate Program – NACD
2024 Blue Ribbon Commission Technology Leadership in the Boardroom: Driving Trust and Value – NACD
Quantum Computing for Board Directors: Navigating the Future of Innovation – NACD Northern California
SPEAKERS
MODERATOR
Thank you to our partners.
By registering for an NACD or NACD Chapter Network event, you agree to the following Code of Conduct.
NACD and the NACD Chapter Network organizations (NACD) are non-partisan, nonprofit organizations dedicated to providing directors with the opportunity to discuss timely governance oversight practices. The views of the speakers and audience are their own and do not necessarily reflect the views of NACD.



