AI is moving fast, but your risk framework may not be keeping up. Organizations are adopting AI tools without fully understanding the implications for privacy, legal liabilities, and security. The consequences can be severe: loss of attorney-client privilege, data leaks, regulatory fines, contractual exposure, and reputational damage that is hard to undo. In this webinar, we will cut through the hype and get practical about what AI risk looks like in 2026, and what you can do about it today.
Learning Objectives
- Legal exposure: What your AI vendor contracts probably don't protect you from, and where liability lands when things go wrong
- Privacy considerations: How AI tools interact with sensitive data, and what your obligations are under GDPR, CCPA, and emerging AI-specific regulations
- Security risks: How AI expands your attack surface, enables new threat vectors, and creates vulnerabilities you may not even know exist
- Practical frameworks: How to evaluate AI tools before you deploy them and build a risk posture that scales
Who Should Attend
- NACD members and nonmembers
- Corporate board directors with oversight of audit, compliance, cybersecurity, or legal functions
- Directors with questions or concerns about how AI is impacting organizational risk
- Executives and governance professionals responsible for AI adoption decisions
Reduce Your Risk Exposure
Gain confidence that your AI adoption decisions are made with appropriate due diligence, reducing exposure to data breaches, regulatory penalties, and litigation.
Strengthen AI Governance
Walk away with a starting point for building or refining an AI governance policy and a stronger foundation for board-level and executive conversations.
Understand the Regulatory Landscape
Develop a clear-eyed understanding of the real risks AI introduces beyond surface-level concerns, along with awareness of where the regulatory environment is heading.
Align Teams Around AI Risk
Gain the language and frameworks needed to align security, legal, and business teams around AI risk, without slowing down innovation.