Who Owns the Intelligence?
AI, Workforce Mobility, and the Next Wave of Trade-Secret Risk
NACD Northern California
Contact Us
Lisa Spivey,
Executive Director
Kate Azima,
Director of Partnerships & Marketing
programs@northerncalifornia.nacdonline.org
About The Event
As AI becomes more embedded in organizations, traditional assumptions around talent, IP, and competitive advantage are beginning to shift. Companies are moving faster, while boards are being asked to deepen oversight in areas that are evolving in real time.
Over an evening discussion led by Logan McDougal and Derek Bennett from Egon Zehnder, together with Quyen Ta, Emily Haffner, and Bijal Vakil from Skadden, directors highlighted the increasing complexity around ownership of data, AI-generated output, and institutional knowledge that may no longer fit neatly within the organization. There is no fully formed playbook; it is still taking shape as the technology shifts not month by month but day by day.
KEY TAKEAWAYS
Board Oversight in the Age of AI
- Directors face heightened expectations to oversee AI amid rapid technological change, with shorter planning cycles and increased pressure to balance speed with risk management.
- Oversight must evolve from periodic review to continuous engagement, including using AI tools to interrogate management narratives, surface gaps, and assess shifting external expectations.
- Boards must define acceptable AI use in governance processes (e.g., board materials, decision support) while maintaining confidentiality and fiduciary standards.
- Judgment remains central: AI augments but does not replace director accountability, particularly in ambiguous or fast-moving scenarios.
Policies, Norms, and Boardroom Practices
- Companies are struggling to define durable AI policies; principles-based approaches are favored over tool-specific rules due to rapid change.
- Clear guidance is required on acceptable use, particularly around confidentiality, recording meetings, and handling sensitive information.
- Board education is critical to close generational and fluency gaps, ensuring a consistent understanding of risks and capabilities.
- Companies should standardize secure communication channels (e.g., company email) to reduce discovery and cybersecurity risks.
Legal, Regulatory, and Litigation Exposure
- Trade secret litigation reached a record high in 2025, with over 1,500 cases reported. This surge is being driven by the rapid adoption of AI, heightened employee mobility, and growing geopolitical tensions.
- Enforcement activity is increasing, including greater involvement from the DOJ in areas such as wire fraud, trade secrets, and espionage, particularly with geopolitical implications.
- Plaintiffs are leveraging AI to identify litigation opportunities (e.g., benchmarking products, sourcing insider claims), increasing exposure for companies.
- Companies face growing discovery risks tied to AI use, including prompts, outputs, and data-handling practices.
- Regulatory fragmentation across jurisdictions (e.g., U.S. states vs. international regimes) complicates compliance and requires proactive monitoring.
Data Governance, IP, and Privilege
- Ownership of AI-generated IP remains unclear; outcomes depend heavily on licensing structures and the tools used.
- Use of public AI tools can compromise privilege and expose sensitive data; enterprise-controlled environments are increasingly essential, though AI embedded in common software tools may be difficult to avoid.
- Trade secret protection standards are shifting; failure to implement “reasonable measures” (e.g., restricting AI inputs) may weaken legal defenses.
- Boards must ensure clear policies on data access, usage, and protection across employees and non-human agents.
Risk, Liability, and Accountability
- Liability frameworks for AI-driven outcomes (e.g., autonomous systems) remain underdeveloped, creating uncertainty over responsibility allocation.
- Risk includes both action and inaction: failing to adopt AI may be as consequential as adopting it without sufficient safeguards.
- Emerging risks include hallucinations, reliance on inaccurate outputs, and potential malpractice or negligence claims tied to AI-assisted decisions.
- Insurance coverage is tightening, with governance practices around AI increasingly scrutinized in underwriting decisions; boards should seek clarity on potential exclusions.
Talent, Workforce, and Operating Model Shifts
- Talent constraints are intensifying, particularly for advanced AI expertise, with large firms dominating access due to cost.
- Workforce models are shifting toward human–AI collaboration, potentially reducing the number of entry-level and mid-management roles.
- Pairing junior (AI-native) and senior (experience-driven) employees is an effective approach to closing knowledge gaps.
- Organizations must redesign roles around managing humans, agents, and automated systems.
- Boards should monitor workforce fluency in AI as a measurable capability and ensure reskilling keeps pace with adoption.
Governance of Non-Human Agents
- Rapid expansion of autonomous agents introduces new governance challenges, including identity management, authorization, and accountability.
- Boards must oversee frameworks for managing a non-human workforce, including security controls and operational boundaries.
- Questions of taxation, economic contribution, and regulatory treatment of agents are emerging but remain unresolved.
Strategic Implications and Competitive Dynamics
- Companies that move with speed and agility are likely to outperform; overly restrictive guardrails may hinder competitiveness.
- AI lowers barriers to entry, enabling a surge in entrepreneurship and increasing competitive pressure on incumbents.
- Competitive advantage is shifting toward proximity to customers and the ability to rapidly deploy AI-enabled solutions.
- Boards must guide management on where AI drives true transformation versus incremental efficiency gains.
Questions for Board Consideration
- Who owns AI-generated IP, and how should licensing and enterprise tools be structured to protect ownership and privilege?
- What constitutes “reasonable measures” for protecting trade secrets in an AI-enabled environment?
- How should liability be allocated for AI-driven decisions and autonomous system failures?
- What is the appropriate balance between speed of adoption and governance safeguards?
- How should the board measure and oversee AI fluency, workforce transformation, and agent governance?
Thank you to our partners for making this event possible.
By registering for an NACD or NACD Chapter Network event, you agree to the following Code of Conduct.
NACD and the NACD Chapter Network organizations (NACD) are non-partisan, nonprofit organizations dedicated to providing directors with the opportunity to discuss timely governance oversight practices. The views of the speakers and audience are their own and do not necessarily reflect the views of NACD.



