The AI Crossroads:
Governing Risk While Reimagining the Business Model
NACD Northern California
Contact Us
Lisa Spivey,
Executive Director
Kate Azima,
Director of Partnerships & Marketing
programs@northerncalifornia.nacdonline.org
About The Event
AI isn’t just changing operations—it’s redefining how companies create value, defend their market position, and structure work itself.
In an off-the-record conversation guided by Logan McDougal and Wendi X. Zhang of Egon Zehnder, and Joseph Talmadge and Tim Berthold of Heffernan Insurance Brokers, directors explored what the rapid implementation of AI now means for boards, specifically around accountability, business-model risk, and long-term strategic advantage.
KEY TAKEAWAYS
Strategy & Value Creation
- Every company must decide where artificial intelligence matters most, whether in product differentiation, service delivery, operating efficiency, or ecosystem positioning. Boards should ask: Where does AI create differentiated value for our business, not just incremental improvement?
- Rational experimentation is necessary but often siloed by function. Boards should press management on which AI pilots are moving into production, why those were selected, and what is being discontinued.
- AI adoption remains uneven; many companies market AI aggressively while internal deployment is limited, raising the question of whether investment is aligned with real value creation or hype.
- Customer acquisition and engagement models are shifting as younger demographics increasingly default to AI-mediated discovery. Boards should ask: How could AI fundamentally change how customers find, choose, and interact with us?
- AI is reshaping competitive dynamics, enabling faster, more nimble players to leapfrog incumbents. A best practice is to examine startups and AI-native competitors to understand where the market is heading.
- Boards should demand more than anecdotes and ask how management is measuring AI ROI in a way that rolls up across the enterprise and over time.
Governance, Risk & Accountability
- Data governance is now inseparable from AI governance. Boards should understand what data is being consumed by AI systems and whether contractual, regulatory, or compliance risks are being introduced.
- Ownership questions are becoming central: Who owns customer data, training data, prompts, and AI outputs—and are those outputs discoverable or disclosable?
- Access controls designed for SaaS models are breaking down in an environment driven by large language models, requiring boards to ensure AI policies, access management, and cybersecurity controls are integrated.
- AI does not create entirely new risks—it amplifies existing ones at speed and scale. While large-scale failures have been limited to date, when they occur, liability and reputational impact will move quickly. Boards should be prepared for the pace and magnitude of AI-related incidents.
- Insurance coverage for AI risk remains inconsistent and is evolving with the market; it will take time to mature. Boards should identify which AI-related risks are affirmatively covered today and where material gaps remain.
- As a best practice, some boards are conducting pre-mortems, postmortems (including peer incidents), red-team exercises, and AI-specific stress tests to pressure-test resilience.
- The AIUC-1 framework, from the Artificial Intelligence Underwriting Co., was mentioned. NACD Northern California’s virtual program, “The Board’s AI Risk Moment: Who Is Accountable?,” features a member of this organization.
Talent, Workforce & Leadership
- AI literacy is becoming a baseline expectation, not a niche skill. Boards should take a forward-looking view of AI fluency across the board of directors, executive leadership, and the broader workforce.
- Strategic decisions about the future workforce model need to be made, including how judgment is developed and how mentorship is preserved. Companies are increasingly split between:
  - reducing entry-level roles and relying on managers to oversee AI agents, or
  - retaining entry-level roles while using AI to accelerate learning and output.
- Boards should ask whether the chief human resources officer is positioned to lead workforce transformation and not just manage headcount.
- A best practice is using AI (e.g., summaries, curated learning, internal podcasts) to accelerate onboarding and upskilling while preserving human context and reducing demands on senior leadership time.
Board Operations & Oversight
- Many boards still need to define where AI sits within their governance model, rather than defaulting oversight to the audit committee.
- Effective boards are experimenting with AI tools themselves and sharing learnings to build collective fluency.
- Annual strategy cycles are giving way to more frequent future-casting and ecosystem analysis, with boards asking what their industry could look like in three to five years under different AI adoption scenarios and revisiting those assumptions over time. Boards should ask: If we looked back five years from now, what AI decisions would we regret most?
- Boards increasingly benefit from outside, AI-native perspectives to challenge assumptions and surface new strategic opportunities.

Thank you to our partners for making this event possible.
NACD and the NACD Chapter Network organizations (NACD) are non-partisan, nonprofit organizations dedicated to providing directors with the opportunity to discuss timely governance oversight practices. The views of the speakers and audience are their own and do not necessarily reflect the views of NACD.