Is Your Board Prepared for the Next Wave of AI-Powered Cyber Threats?
About The Event
Board leaders from multiple industries gathered for an off-the-record dinner conversation on the escalating intersection of AI, cybersecurity, and board oversight. Led by Simona Agnolucci and Tracy Rubin, along with Rob Sloan, and special guest and author Helmuth Ludwig, directors discussed how AI is transforming the cybersecurity landscape, from nation-state attackers automating strikes and zero-click exploits to the rising risks of always-on AI tools, supply-chain vulnerabilities, and voice-replication fraud, while also examining the human element in oversight and more.
KEY TAKEAWAYS
Note: These takeaways are a summary of the discussion and not legal advice. They reflect personal views and anecdotal experiences shared during the session, and are not necessarily evidence-based.
AI-Enhanced Cyber Threat Landscape
- AI is transforming the threat environment: nation-state attackers are using AI tools to automate offensive campaigns at a massive scale, ransomware negotiations are now fully automated, and zero-click attacks hide instructions in emails that copilots execute without user interaction; even something as simple as accepting an automatic Microsoft calendar invite can give an attacker system access.
- Always-on AI tools (such as auto-transcription and note-taking) create quiet, persistent risk because users forget they’re active.
- Deepfake audio and video enable convincing impersonations, making it harder to distinguish fraud from legitimate requests.
- Supply-chain attacks remain some of the most aggressive, with malware injected into code during development or delivery; organizations with large software footprints must prioritize source verification and code integrity checks.
- North Korean operatives have infiltrated companies via HR hiring pipelines, underscoring the need for strong identity and employment verification (e.g., in-person interviews or live ID checks).
- Many organizations are reconsidering whether major financial transactions require in-person oversight; others rely on two-step verification, call-backs, and code-word protocols (a simple sketch of such a check appears after this list).
- Overall, companies must constantly determine what AI systems can safely automate vs. what still requires human judgment.
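As a purely illustrative sketch of the call-back and code-word protocols mentioned above (the threshold, helper names, and approval flow are assumptions for the example, not something described at the event):

```python
# Illustrative sketch only: a call-back / code-word gate for large transfers.
# The threshold and helper names are assumptions for the example.

CALLBACK_THRESHOLD = 50_000  # transfers above this amount need out-of-band checks

def code_word_matches(supplied: str, expected: str) -> bool:
    """Compare the code word given on a call-back with the one agreed offline."""
    return supplied.strip().lower() == expected.strip().lower()

def approve_transfer(amount: float, callback_confirmed: bool, code_word_ok: bool) -> bool:
    """Small transfers follow the normal path; large ones need BOTH controls."""
    if amount < CALLBACK_THRESHOLD:
        return True
    return callback_confirmed and code_word_ok

# A deepfaked voice request fails here unless the call-back to the number on
# file succeeds AND the caller knows the pre-agreed code word.
print(approve_transfer(250_000, callback_confirmed=True, code_word_ok=False))  # False
```

The point is not the code itself but the shape of the control: a convincing deepfaked voice still fails unless both out-of-band checks pass.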
Employee Use, Data Exposure & Internal Governance
- Employee use of AI tools is accelerating rapidly, with many tools running continuously and capturing sensitive information by default.
- Blocking AI outright is ineffective; employees simply turn to personal accounts. Organizations instead need guardrails that monitor what data flows into and out of these tools and that provide guidance on acceptable and unacceptable uses (a simple outbound filter is sketched after this list).
- Despite high usage (90% of employees use AI), only ~40% of companies have enterprise-level licensing (source: MIT), creating inconsistencies in security and oversight.
- Consolidating corporate data into a single enterprise LLM environment is safer, but also concentrates risk in one place should a breach or misuse occur.
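One way to make the "monitor what flows out" guardrail concrete is an outbound redaction filter. The sketch below is a minimal illustration under assumed patterns and placeholder conventions; real data loss prevention tools are far more sophisticated:

```python
# Minimal sketch of an outbound guardrail that redacts obvious sensitive patterns
# before a prompt leaves the organization for an external AI tool.
# The patterns and placeholder format are illustrative, not exhaustive.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace matches with placeholders and report which categories were found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text, findings

prompt, hits = redact("Summarize this: John Doe, SSN 123-45-6789, jdoe@example.com")
if hits:
    print("Redacted categories:", hits)  # log for the security team
print(prompt)                            # sanitized version forwarded to the AI tool
```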
AI Agents, Autonomy & Systemic Risk
- AI agents can behave unpredictably, take unintended actions, or amplify risks through autonomous decision-making.
- As systems move toward agent-to-agent communication, vulnerabilities multiply, especially when combined with third-party plug-ins and integrations that lack strong guardrails.
- New startups are emerging to control and monitor autonomous agents, but oversight remains inconsistent across the ecosystem.
- As AI becomes the new user interface, driven by plug-ins and autonomous agents, boards must ensure strong safety controls, clear checkpoints, and human-in-the-loop processes (sketched below) so these systems can act on users' behalf while limiting new pathways for exploitation.
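A minimal sketch of one such human-in-the-loop checkpoint; the action names, risk tiers, and approval mechanism are hypothetical illustrations rather than anything discussed at the event:

```python
# Illustrative only: route high-risk agent actions to a person before execution.
from enum import Enum

class Risk(Enum):
    LOW = 1      # e.g. read-only lookups
    MEDIUM = 2   # e.g. drafting outbound messages
    HIGH = 3     # e.g. payments, code deployment, data deletion

ACTION_RISK = {
    "search_knowledge_base": Risk.LOW,
    "draft_customer_email": Risk.MEDIUM,
    "issue_refund": Risk.HIGH,
}

def execute(action: str, perform, request_human_approval) -> str:
    """Run low-risk actions directly; pause and ask a person for anything risky."""
    risk = ACTION_RISK.get(action, Risk.HIGH)  # unknown actions default to HIGH
    if risk is Risk.HIGH and not request_human_approval(action):
        return f"'{action}' blocked pending human review"
    return perform(action)

# Approvals could come from a ticketing queue or a chat prompt; here they are stubbed.
result = execute("issue_refund", perform=lambda a: f"{a} done",
                 request_human_approval=lambda a: False)
print(result)  # 'issue_refund' blocked pending human review
```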
Global Positioning & Regulatory Context
- The U.S. remains behind China in developing uniform AI guardrails and governance structures, creating uncertainty in oversight expectations.
- The speed of technological change makes long-term governance challenging; disciplined processes and checklists will matter more in the coming decade.
- Interest in insurance products tied to AI accountability is rising, and expectations for board involvement and liability are increasing in parallel.
Board Knowledge, Judgment & Oversight
- Cybersecurity is now inseparable from core board responsibility, yet only 2% of directors have cyber expertise (source: WSJ).
- Business judgment protections are narrowing as regulators expect boards to understand and oversee AI-related risks.
- Boards may use resources such as outside experts, structured education, and targeted delegation to stay informed in the rapidly evolving landscape.
Committee Structure & Third-Party Assessments
- Some boards are adding strategy or technology committees to handle AI and cyber oversight, while others continue using traditional structures with expanded mandates.
- Third-party cyber assessments can help boards see gaps clearly and provide CISOs with actionable roadmaps.
- Rotating external assessors every two years brings fresh perspective and reduces blind spots.
Data Sharing, Vendor Risk & Ecosystem Practices
- Vendors and auditors may train models on client data, making it essential for companies to ask the right questions, understand what is being shared and how PII is treated, and decide which categories of data should never be used (a simple policy check is sketched after this list).
- Sharing non-sensitive information (e.g., threat intelligence) can strengthen the ecosystem, but must be done carefully.
- Contractual limits on data use are a good way to define what data can be used for what purpose. They do not guarantee that all errors or misuse will be prevented, but they evidence a common understanding and provide a path to recourse in the event of a breach.
- Customer service bots and analytics tools can also unintentionally transmit PII; companies must monitor this closely.
- Boards should also track the ratio of security spend to operational outcomes and evaluate whether protections meaningfully improve over time.
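As a simple illustration of deciding which categories of data may ever be shared with a given vendor, here is a minimal sketch; the category names and vendor allow-lists are invented for the example:

```python
# Illustrative only: a deny-list of categories that may never leave the company,
# plus per-vendor allow-lists negotiated in contracts.
NEVER_SHARE = {"customer_pii", "source_code", "board_materials"}

VENDOR_ALLOWED = {
    "analytics_vendor": {"aggregated_usage", "threat_intel"},
    "audit_firm": {"financial_reports", "threat_intel"},
}

def can_share(vendor: str, category: str) -> bool:
    """The deny-list wins over any contract; otherwise check the vendor's allow-list."""
    if category in NEVER_SHARE:
        return False
    return category in VENDOR_ALLOWED.get(vendor, set())

assert can_share("audit_firm", "threat_intel") is True
assert can_share("analytics_vendor", "customer_pii") is False
```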
Legal Risks & Liability
- AI tools that read emails or chats can trigger liability under outdated wiretapping laws (~$5k per transmission), with many cases settling due to unclear definitions of “receiving” data.
- Some data scrapers illegally repurpose and sell training data, increasing exposure for companies whose content was taken.
- New laws, such as California’s companion chatbot law (SB 243), preview how AI safety regulation will broaden.
- Early case patterns show liability spreading across multiple parties, often ending in settlements.
- AI models produce probabilistic outputs, which can sound authoritative but may be incorrect or misleading. People relying on these outputs need to understand this and treat it as a risk to the organization.
- As LLMs evolve into predictive systems, guardrails and oversight become more important precisely because boards and regulators struggle to keep pace.
Trust, Monetization & Behavioral Trends
- Public trust is declining as monetization of personal data becomes more pervasive, even though Gen Z tends to welcome targeted, personalized digital experiences.
- App ecosystems can easily over-collect data unless monitored; companies must treat abnormal data behavior as a security anomaly and flag it accordingly.
Safety, Guardrails & Red-Teaming
- Organizations must use AI to enhance human decision-making—not replace it. Over-reliance creates complacency and missed risks.
- Effective guardrails include deterministic boundaries, trigger-word and self-harm checks, red-teaming, layered safety systems, and data loss prevention tools (a simple layered check is sketched after this list).
- Some LLMs behave safely initially but produce unsafe outputs under persistent prompting, showing why safeguards must be continuous.
- Companies benefit when they hold each other accountable through transparent investigations and shared remediation.
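A minimal sketch of layered, deterministic output checks of the kind mentioned above; the length cap and trigger phrases are illustrative only, and real deployments would layer model-based classifiers and DLP tooling on top:

```python
# Illustrative only: a hard deterministic boundary followed by a trigger-phrase screen.
TRIGGER_PHRASES = {"wire transfer", "password", "self-harm"}
MAX_OUTPUT_CHARS = 2000  # deterministic boundary: hard cap regardless of model output

def check_output(model_output: str) -> tuple[bool, str]:
    """Return (allowed, reason); each layer can independently block the response."""
    if len(model_output) > MAX_OUTPUT_CHARS:
        return False, "output exceeds deterministic length boundary"
    lowered = model_output.lower()
    for phrase in TRIGGER_PHRASES:
        if phrase in lowered:
            return False, f"trigger phrase detected: {phrase!r}"
    return True, "ok"

allowed, reason = check_output("Please confirm the wire transfer details.")
print(allowed, reason)  # False trigger phrase detected: 'wire transfer'
```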
Strategic Considerations for Boards
- As companies integrate AI into products and operations, boards must help determine where the true risks and opportunities lie.
- Pressure to adopt AI is high, but not all investments will deliver value; clarity on cost, outcomes, and success metrics is essential.
- Tracking employee AI usage provides visibility into risk.
- Continuous education on AI and cybersecurity is now mandatory for directors, not optional.

Thank you to our partners for making this event possible.
NACD Northern California
Contact Us
Lisa Spivey,
Executive Director
Kate Azima,
Director of Partnerships & Marketing
programs@northerncalifornia.nacdonline.org
NACD and the NACD Chapter Network organizations (NACD) are non-partisan, nonprofit organizations dedicated to providing directors with the opportunity to discuss timely governance oversight practices. The views of the speakers and audience are their own and do not necessarily reflect the views of NACD.


