
Online Exclusive

The Founding Fathers Used Their Real Names, But Deepfakes Use Yours

By Richard Torrenzano and Ronald J. Levine

05/21/2025


What is the most lethal artificial intelligence attack an adversary could execute against your company’s reputation, and has that scenario been stress-tested at the board level? As deepfakes hijack identities and synthetic media blurs truth from fiction, boards must be stewards and sentinels, building trust before it’s tested and anticipating threats before they strike.

When the Founding Fathers signed the US Declaration of Independence, they did so publicly and boldly. Each signature was real and every word was traceable. There were no aliases and no shadows to hide behind.

At its core, democratic discourse demands more than free speech; it requires the integrity to take responsibility for what is said. Deepfakes, artificial intelligence (AI)-generated replicas of real people that are often made without consent, shatter that foundation of truth. Designed to deceive, they erase authorship, distort reality, and leave few accountable for the fallout.

Powered by vast datasets, this form of AI can craft convincing imitations of public figures using nothing more than publicly available interviews, earnings calls, or podcasts. These fabrications look and sound realistic enough to mislead stakeholders and manipulate perception. We’ve entered an era where seeing is sometimes no longer believing and hearing offers little certainty. 

Deepfakes are not tools of expression; they are weapons of distortion, designed to obscure information rather than reveal it. Once the domain of elite technologists, deepfake creation is now inexpensive, fast, and easy. A deepfake doesn't need to be flawless, just believable enough to provoke and deceive stakeholders or disrupt operations.

Meanwhile, businesses and financial markets depend on attribution. To make informed decisions, stakeholders need to know who said what, when, and why. Without reliable information, contracts weaken, statements lose weight, reputations suffer, sales drop, and systems fray.

Behind the Illusion: Deepfake Types

As a first step in protecting reputation and market trust, directors should understand the three types of deepfakes outlined below.

Audio deepfakes clone a person's voice from a few seconds of recorded speech. Scammers already use the technology to impersonate political and government figures, CEOs, doctors, or relatives in distress. Victims have been tricked into wiring money, revealing passwords, or approving fake deals.

For example, on April 13, 2025, hackers broke into crosswalk systems in at least three California cities and replaced the standard audio with AI-generated voices imitating Elon Musk and Mark Zuckerberg. The voices delivered bizarre, satirical messages when the buttons designed to help visually impaired pedestrians were pressed. City officials shut down the audio guidance features while they investigated the breach.

Additionally, hackers are using AI-generated voice clones of senior US officials to breach government and personal accounts in a rapidly expanding deception campaign that threatens to unravel trust, compromise identities, and destabilize national security.

Video deepfakes are more potent than their audio counterparts. They are capable of showing a political leader announcing war, a CEO making false claims, or an individual confessing to crimes they never committed.

These videos don’t need to hold up under long-term scrutiny. They only need to go viral long enough to sow confusion or trigger reactions. In the time it takes to verify them, the reputational or geopolitical damage may already be irreversible.

For instance, in February 2024, Hong Kong police reported an incident in which an employee at Arup Group, a multinational design and engineering firm, was tricked into transferring about $25 million through a video call that used deepfake versions of the company's chief financial officer and other colleagues.

Photo and graphic deepfakes manipulate images. This type of deepfake can be used to depict events that never happened, as well as to alter legal evidence or frame individuals. 

In authoritarian regimes, such images are already being used to fabricate dissent, discredit opponents, or construct false narratives. In democracies, they are fueling mis- and disinformation campaigns and reputational smears.

In October 2024, after Hurricane Helene flooded Asheville, North Carolina, and surrounding areas, AI-generated images of destruction and human suffering spread across social media, undermining rescue efforts and damaging public trust.

In March 2023, the Internet was captivated—and deceived—by an image of the late Pope Francis strutting in a sleek, white, Balenciaga-style puffer coat. Created by AI, the image fooled many before the truth of its creation surfaced. 

Deepfake Threats Start at the Top

While multiple states have enacted laws to police deepfakes through civil and criminal penalties, the federal government has been moving slowly. However, Congress recently passed the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, also known as the Take It Down Act, to address the nonconsensual publication of intimate imagery, including deepfake-generated content.

Just as cybersecurity regulations haven't prevented breaches, no law will fully protect a company's reputation once a deepfake video goes viral or a cloned CEO's voice circulates. Even with comprehensive laws in place, deepfakes can still spread lies about a company or individual, and that content can travel widely before a prosecutor or court acts.

Below are key actions boards can work with management to implement to protect the assets most vulnerable to deepfakes: stakeholder trust, executive credibility, brand integrity, and market confidence.

Rehearse rapid-response protocols. Many companies, their leaders, and their advisors remain unprepared and untrained to counter digital threats.

It is not a question of if but rather when deepfakes will threaten a company. To mitigate the risks, boards should engage in scenario analyses and tabletop exercises with management to rehearse responses.

If a CEO’s or CFO’s voice is online, it can be cloned. If their images appear on a public platform, such as social media, they can be manipulated into deepfakes. 

Monitor and detect in real time. The organization should invest in systems capable of identifying AI-generated content in real time across all formats, including text, video, audio, and emerging media platforms. This includes overseeing AI protocols that continuously track mentions or images of executives, products, and brands, and ensuring that keywords and tracked trends are updated quarterly.

Recognizing that bad actors weaponize new technologies before companies or governments can respond, the board's role is to push management to establish detection capabilities that are not only AI-driven but also continuously improved to stay ahead of evolving threats.

Build a trust reserve. Trust must be treated as strategic capital that is earned over time, depleted in seconds, and essential to resilience.

Building that reserve also requires systems that track all stakeholder commentary and detect perception shifts the moment they occur. Crisis protocols should be activated instantly when a deepfake or false narrative emerges; delays create vacuums, and vacuums are filled by speculation.

Board oversight should include routine stress-testing of reputational defenses, just as it does for financial or cyber risk. Trust metrics, such as media tone, stakeholder and investor confidence, and internal morale, should be reviewed alongside earnings.

Outrun the Lie or Own the Fallout

Deepfakes are not merely a technology problem; they present new and challenging reputational and stakeholder trust issues, spreading doubt and disruption.

Directors can take a proactive approach to mitigating deepfake risks. This includes working with third parties, such as advisors with legal and digital proficiency, who understand deepfake threats and can evaluate organizational structures to confirm that the right talent is engaged.

As Warren Buffett once said, “It takes 20 years to build a reputation and five minutes to ruin it. If you think about that, you'll do things differently.”

The views expressed in this article are the authors' own and do not represent the perspective of NACD.


Richard Torrenzano is chief executive of The Torrenzano Group, which helps organizations take control of how they are perceived. For nearly a decade, he was a member of the New York Stock Exchange management (policy) and executive (operations) committees. He coauthored Digital Assassination: Protecting Your Reputation, Brand, or Business Against Online Attacks. His new book, Command the Conversation: Next Level Communications Techniques, was released in May 2025. He is the coauthor of the forthcoming book, Artificial IntelliVENGEANCE™: What Every Leader Should Know.


Ronald J. Levine is an accomplished attorney with more than 45 years of experience in consumer class action defense, technology, product liability, and food and beverage regulation. For 15 years, he cochaired litigation at Herrick, Feinstein, and also served as its general counsel. He is an adjunct professor at Rutgers University. He is the coauthor of the forthcoming book, Artificial IntelliVENGEANCE™: What Every Leader Should Know.