What Audit Committees Should Know About Artificial Intelligence

By Vanessa Teitelbaum

02/21/2024

In a rapidly changing technological landscape, artificial intelligence (AI), and specifically generative AI (GenAI), is a hot topic. As companies embrace this new technology, boards need to be aware of its risks and benefits as part of their oversight responsibilities.

What is AI?

AI has been around for decades, but the public deployment of GenAI over the past 15 months, powered by the likes of OpenAI’s ChatGPT, is capturing the attention of corporate leaders around the world. Traditional AI is a field of computer science focused on creating intelligent machines capable of performing tasks that require human intelligence; as a result, machines can work with increasing degrees of automation. Machine learning is a subfield of AI in which the machine learns from data rather than relying solely on explicit programming. For example, the machine learns that if something works in scenario X, it can be applied to scenario Y. Robotic process automation is an example of this form of AI.
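
To make the idea of learning from data concrete, here is a minimal sketch using scikit-learn (the library choice, the invoice features, and the labels are assumptions for illustration, not tools or examples discussed in the webinar): the classifier infers a review rule from past examples rather than being explicitly programmed with one.

```python
# A minimal sketch of "learning from data" with scikit-learn; the library,
# features, and labels are illustrative assumptions.
from sklearn.tree import DecisionTreeClassifier

# Toy historical invoices: [amount in USD, was pre-approved (1 = yes, 0 = no)]
invoices = [[120, 1], [95, 1], [15000, 0], [80, 1], [22000, 0], [130, 1]]
needs_review = [0, 0, 1, 0, 1, 0]  # labels a reviewer assigned in the past

# The model infers a review rule from the labeled examples.
model = DecisionTreeClassifier(random_state=0).fit(invoices, needs_review)

# The learned pattern generalizes to an invoice the model has never seen.
print(model.predict([[18000, 0]]))  # expected: [1], i.e., flag for review
```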

GenAI creates content. Its unique value proposition is its ability to instantly synthesize large amounts of text and generate insights. Large language models (LLMs) and natural language processing (NLP) power GenAI and are changing the way humans interact with technology. Put simply, NLP means the machine can understand written prompts or verbal commands in everyday language and translate that language into computer code or terminology; it enables the machine to better “read” and understand language. That capability drives the ability to synthesize and summarize large amounts of text, such as a company’s policies and contracts, auditing standards and regulations, and emails to and from customers.
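
As a rough illustration of that summarization capability (a sketch, not anything described in the article): the OpenAI Python SDK is one common way to send such a prompt to an LLM; the model name, file name, and prompt below are assumptions for illustration only.

```python
# A minimal summarization sketch using the OpenAI Python SDK (v1 interface).
# The model name, input file, and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical internal policy document to be summarized.
with open("travel_expense_policy.txt") as f:
    policy_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; use whatever your organization licenses
    messages=[
        {"role": "system",
         "content": "You summarize corporate policies for audit committee review."},
        {"role": "user",
         "content": f"Summarize the key controls and exceptions in this policy:\n\n{policy_text}"},
    ],
)

print(response.choices[0].message.content)
```

The output still needs human review, consistent with the point about oversight made in the webinar discussed below.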

In a webinar organized by the Center for Audit Quality (CAQ) and NACD in January, Microsoft Corp.’s Modern Finance lead, Cory Hrncirik, described GenAI as another tool in the toolkit: like a power saw or drill, it can help with a home renovation, but you don’t use a power saw for every project, and you wouldn’t want it to be your only tool. Hrncirik described stacking GenAI “on top” of other technology. It works easily and powerfully with existing technology, but human oversight remains a necessary component of using generative AI.

Richard Jackson, EY Global Artificial Intelligence Assurance leader, agreed and offered the evolution of the typewriter as an example of how fundamentally technology has changed. With a typewriter, mistakes had to be corrected with Wite-out or correction tape, which raised the question: Wouldn’t it be great if there were a way to avoid correcting mistakes with Wite-out?

Along came the word processor, which offered a screen to automate error correction. Over time the technology evolved to tell users, in real time, when they made a spelling mistake. Now, users don’t even need a keyboard; they can use voice recognition to interact with the machine. The latest evolution is generative AI: instead of typing all the words, you enter a prompt such as, “I want to write a paper about these subjects [insert topics].”

Jackson noted that the transformation of the typewriter occurred over decades; with AI and GenAI, we are seeing a similar evolution in months or less. The example illustrates how technology has evolved over the last 25 years and how intelligence has been injected along the way.

Linda Zukauckas, CAQ Audit Committee Council member and TransUnion board and audit committee member, said that GenAI is likely the next step in a company’s or finance function’s transformation journey. GenAI can contribute to a company’s competitive advantage, including its ability to attract talent. In the financial reporting world, generative AI can be used in a variety of ways, including contract review, forecasting, modeling, and fraud detection.
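
Fraud detection in particular is often handled by pairing GenAI with more traditional machine learning. As a loose sketch of the underlying anomaly-flagging idea (scikit-learn’s IsolationForest, the journal-entry fields, and the contamination setting below are illustrative assumptions, not a method described by the panelists), unusual entries can be scored for human follow-up:

```python
# An anomaly-detection sketch with scikit-learn's IsolationForest; the fields
# and settings are illustrative assumptions only.
from sklearn.ensemble import IsolationForest

# Toy journal entries: [amount in USD, hour posted, days before period close]
entries = [
    [1200, 10, 12], [950, 11, 20], [1100, 9, 15], [1300, 14, 18],
    [980, 10, 22], [1150, 13, 9], [45000, 23, 0],  # the last entry is unusual
]

# contamination is the assumed share of anomalous entries.
model = IsolationForest(contamination=0.15, random_state=0).fit(entries)

# predict() returns 1 for typical entries and -1 for entries flagged
# for human follow-up.
print(model.predict(entries))
```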

How do boards start their AI journeys?

Hrncirik offered the following four steps for boards and management to get started with AI oversight and implementation:

  1. Tone at the top: Lead from the front and be strategic innovators. Embrace and encourage the use of AI in a thoughtful, deliberate manner.
  2. Grassroots learning: Encourage employees to learn about AI and train them on how to use it.
  3. Foster AI innovation: Encourage employees to use new technology.
  4. Strategically adopt AI: Reassess existing processes and strategically embed AI.

How can audit committees mitigate risks?

All new technologies pose risks, especially with a rapid rate of adoption. Management, the board, and the audit committee need to understand the potential risks of AI, including hallucinations and bias, while ensuring that staff have the right skills and training to use AI effectively as a tool that supplements their work. Hrncirik, Jackson, and Zukauckas also cautioned that the more widely available AI is to employees, the more risks there will be.

Company leadership, audit committees, and board members should be asking questions about where, why, and how AI programs are being used. These questions could include the following:

  • How is the company embracing generative AI in the overall business strategy? Is the use internal or customer-facing?
  • How is the company balancing those opportunities with inherent risks? 
  • For companies dependent on third parties or using external technologies such as ChatGPT, how are the third parties implementing AI? What are the boundaries around usage and data, and what are the implications for the company? 
  • How is AI technology being used and deployed within the organization? Do management and directors know where AI is currently being used? Are public or open versions being used?
  • What data is being used and how is it protected?
  • With regulatory frameworks unfolding at state, federal, and global levels, how is the company monitoring and engaging in public policy dialogues? Is the company leveraging existing frameworks?

At the CAQ, we have been monitoring the rapid growth of AI and how it impacts the auditing profession. In our Fall 2023 Audit Partner Pulse Survey, we asked audit partners about how companies are deploying generative AI. We found that while there is interest across a range of industries, the top challenges for deploying generative AI as a tool for employees or customers include data quality concerns; the maturity, or lack thereof, of the technology; data security risks; and gaps in talent and expertise to implement and manage the technology. 

Given the speed of AI technology development, existing quality control processes should be assessed and updated with appropriate frequency. Additionally, companies should consider the increased likelihood of reputational risk as AI becomes more integrated and available for wider public consumption.

CAQ is a NACD partner, providing directors with critical and timely information and perspectives. CAQ is a financial supporter of the NACD.

Vanessa Teitelbaum, CPA, is senior director, Professional Practice at the Center for Audit Quality.