Assessing the Risks and Opportunities of Generative AI

By John Rodi and Cliff Justice


The early months of 2023 have highlighted startling advances in the development and use of generative artificial intelligence (AI), a technology able to create new, original content such as text, images, and videos, and have underscored both its promises and its perils.

Generative AI has been a focus of discussion in nearly every boardroom, as companies and their boards seek to understand the opportunities and risks the technology poses, a challenge given the pace of its evolution.

While generative AI is still in its infancy, it is gaining rapid momentum and entering the mainstream. With most boards focused on understanding generative AI and its potential benefits and risks for the company, we hear three recurring themes in our conversations with directors:

  1. The need for board education so that all directors have a basic understanding of generative AI—its potential benefits and risks, and how the company might use the technology.
  2. The importance of establishing, early on, a governance structure and policies governing employees' use of the technology, and of updating those policies as the technology's risks evolve.
  3. Board and committee oversight of generative AI.

Board education. Many boards are asking management for a high-level training session (with third-party experts, as necessary) on generative AI and its potential benefits and risks. The potential benefits will vary by industry, but might include improvements to business processes such as customer service, content creation, product design, the development of marketing plans, and the creation of new drugs.

The training session should include an overview of the major risks posed by generative AI. Many of these carry additional reputational and legal exposure that can undermine stakeholder trust. Examples include the following:

  • Generative AI may produce inaccurate results. Its accuracy depends on the quality of the data it uses, which often comes from the Internet and other sources and may be inaccurate or biased. It is essential that management closely scrutinize the outputs. Even so, explaining those outputs is a challenge, as generative AI is built on correlations, not causality.
  • Intellectual property (IP) risks include both the unintended disclosure of the company's sensitive or proprietary information to an open generative AI system by an employee and the unintended use of third-party IP when an employee's prompt causes an AI system to generate that material.
  • Data privacy is also a major concern with generative AI, since user inputs are often stored and used to improve the quality of the model.
  • Generative AI also may pose increased cybersecurity risks. Cybercriminals can use the technology to create more realistic and sophisticated phishing scams or to fabricate credentials for breaking into systems.
  • The use of the technology may pose compliance risks, given the rapidly evolving generative AI legislation around the world. Monitoring and complying with that legislation must be a priority for management.
  • Bad actors can use generative AI to create so-called deepfake images or videos with uncanny realism, which might negatively portray the company’s products, services, or executives.

Generative AI governance structure and policies. With a high-level understanding of the risks posed by generative AI, boards can begin to probe management as to what generative AI governance structure and policies are appropriate for the company. It’s important to develop a governance structure and policies regarding the use of this technology early on, while generative AI is still in its infancy. Among the key questions to ask during that process are the following:

  • How and when is a generative AI system or model—including a third-party model—to be developed and deployed, and who makes that decision?
  • How is management mitigating the risks posed by generative AI, and what risk management framework is it using?
  • How is the company monitoring the array of federal, state, local, and global legislative and regulatory proposals to govern the use of generative AI?
  • Does the organization have the necessary generative AI-related talent and resources?

Board and committee oversight of generative AI. At many companies, no single committee has oversight responsibility for generative AI (only 12 percent of global Fortune 500 companies have a standing technology committee, according to McKinsey & Co.). Given the strategic importance of this new and rapidly advancing technology, oversight should be a responsibility of the full board. Here, director education is critical to help ensure that the board, as a whole, is up to speed on the topic. Whether the board has or seeks directors with generative AI expertise, or relies on outside experts, is an issue for each board to consider. Some directors caution against bringing on a single-subject expert, but acknowledge that having board members with significant business technology experience is helpful.

KPMG is an NACD strategic content partner, providing directors with critical and timely information and perspectives. KPMG is a financial supporter of the NACD.


John Rodi is leader of the KPMG Board Leadership Center. 

Cliff Justice is US Leader, Enterprise Innovation, KPMG LLP.