This tool, featured in the fifth edition of the NACD-ISA Director's Handbook on Cyber-Risk Oversight, provides information to help directors successfully understand rapidly evolving AI capabilities, the risks and opportunities of AI investments, and provide the leadership, governance, and oversight that will enable the business to thrive in the age of AI.
Introduction
Artificial intelligence (AI) capabilities are rapidly transforming the business landscape and our societal institutions. Because there are many types of AI capabilities and various use cases for the incorporation of AI across business units, many organizations may invest in multiple AI capabilities tuned for specific business functions and outcomes.
Key Terms
- Agentic AI: An autonomous AI system of algorithms that can independently observe the environment and proactively act with minimal human interaction (e.g., a system that sees that your connecting flight is canceled, automatically rebooks you to another flight, and notifies you and others, such as your car rental or hotel, of the change).
- Machine Learning (ML): Algorithms that learn from data to find patterns (e.g., your streaming TV service, favorite streaming music platform, or online retail sales platform use ML to predict what you would next like to see, hear, or buy and make tailored recommendations for you).
- Deep Learning: A subset of ML, deep learning uses a layered structure of algorithms called a neural network to intake raw data, identify patterns, and develop an output (e.g., your virtual assistant, your phone’s facial recognition unlock feature, self-driving cars, and other various products use deep learning to translate vast troves of data into meaningful outputs).
- Natural Language Processing (NLP): Algorithms that enable computers to understand human language.
- Computer Vision: Algorithms that allow AI to interpret and understand visual information (e.g., facial recognition).
- Generative AI: Systems that create new content, including images, video, conversations, stories, music, data synthesis, and more (e.g., ChatGPT, Gemini, etc.).
- Expert systems: Algorithms that mimic human decision-making for specific functions (e.g., decision trees).
- Shadow AI: Unauthorized use of AI tools, applications, or features by employees without the knowledge, approval, and oversight of the company, creating significant risks like data leakage, compliance failures, and security vulnerabilities.
Questions Boards Can Ask About AI
General
- How can AI transform our business and how do we measure success?
- How are our market competitors using AI and what risks are presented by their efforts?
- What are the risks of incorporating AI into our business model? What are the risks if we don’t?
- Does the board have the right expertise to make informed decisions regarding incorporating AI capabilities into the organization?
The Board’s Ability to Oversee AI
- Do we need to restructure the board to effectively manage our extended cyber risk due to our current and anticipated use of AI?
- Should our AI/cyber risk be considered as a separate matter for board discussion and action, or should it be integrated as part of our overall operations? Or both?
- What are the governance implications of the use of AI and related policies and controls?
Oversight and Management of AI
- Does our corporate structure ensure management is balancing the potential benefits of AI with potential risks?
- Is the board considering AI risks simultaneously with economic benefits from AI use cases?
- Who are our riskiest vendors, and how is our organization managing that risk?
Regulation of AI
- Have we explored the operational and regulatory challenges related to the proposed use of AI?
- Are policies and procedures in place to address AI risks from third-party software and other supply-chain issues?
- Have we reviewed our insurance policies for AI-related risks and use cases? Are we covered if our AI system fails or acts in a manner that results in an adverse effect external to our organization?
AI Risks and Cybersecurity
- Have we identified AI-related risks across all business functions (e.g., customer experience, user experience, intellectual property, operations, support functions, brand & reputation, etc.)?
- What is the state of our data governance program? Who is responsible for data governance? Do we have a complete inventory of our data, where it resides, appropriate security and access controls?
- What are the governance implications of the use of AI and related policies and controls?
- What is our third-party risk associated with AI?
- Who are our riskiest vendors, and how is our organization managing that risk?
- What actions are we taking to prevent Shadow IT and unauthorized AI use? How are we measuring success?
- What measures are we employing to protect our organizations and its employees from AI-enabled advanced social engineering attacks such as realistic deepfakes (e.g., audio, video, and text) and sophisticated, personalized targeting campaigns? How are we testing the efficacy of these measures?
- Are our cyber defenses capable of detecting and appropriately acting in response to an AI-enabled cyber attack? How do we know? How do we test their effectiveness?
- Are our AI systems hardened against cyber attacks, such as prompt injections, data poisoning, and “jailbreaking” (i.e., manipulating the AI system to bypass its security controls and guardrails to generate content it was designed to withhold)? How is the system tested, certified, and continuously monitored?
- How are we managing AI “identity” (i.e., ensuring AI agents are only granted protected access to authorized data sources, and nothing else)? How is this tested for performance and security effectiveness?
- What is our plan if our AI system fails or is unavailable? What is the impact on business? What is our plan B? How do we test our plan for effectiveness?
- What does the current AI-related threat environment look like, and where are we vulnerable? How are we addressing the risk?
Return: Toolkit For Action