Artificial intelligence is the talk of the day, but board members should be ready to handle issues involving quantum computing, robotics, cybersecurity, and more.
Board members are steering their firms through an era of unprecedented disruption. Artificial intelligence (AI) is rapidly reshaping entire industries, technology giants are making huge strides in quantum computing, cryptocurrencies are proliferating into the upper echelons of Wall Street, and robotics startups are developing increasingly sophisticated humanoid robots and self-driving cars. In short, the future has arrived.
For corporate directors, staying up to speed is a challenge. “There is a limited ability to have specialized expertise in all areas that are important for a company,” said Francesca Odell, leader of the Americas corporate practice at Cleary Gottlieb Steen & Hamilton LLP.
“The need for boards to have technical knowledge varies by company,” she continued. Broadly speaking, however, “if you're a director, you should make sure that you're learning … in order to exercise that oversight duty.”
According to experts consulted by NACD, below are the five areas that directors should have on their radar.
The rise of AI is only beginning. According to a June analysis by PwC, 100 percent of industries are increasing their use of AI, from financial services to agriculture, retail, and health care. For board members, this creates new oversight challenges—along with questions about the right way to invest in this unproven technology.
“The strongest organizations will invest in human capability and pair employees with AI systems that increase performance.”
That's according to Shelly Palmer, CEO of The Palmer Group and a member of the board at 1-800-Flowers.com Inc. “The weakest will treat AI as a short-term cost reduction tool and lose long-term competitiveness,” Palmer said.
There is no denying that artificial intelligence is already leading to job losses. The employment advisory firm Challenger, Gray & Christmas recently found that AI was the third most common reason for layoffs in September (trailing only market dynamics and business closures).
In Palmer’s view, layoffs are inevitable, but boards should also prioritize “institutional memory and a workforce committed to the mission, vision, and values of a company.”
Kirstie Tiernan, digital go-to-market leader and board member at BDO USA, agreed that AI tools should not simply be viewed as cost-cutting opportunities. Boards “should make sure management views AI as part of the talent strategy … They should ask how AI tools are audited for bias, how data is managed, and how fairness and privacy are protected in hiring and performance systems,” she said.
Tiernan noted that, so far, most of the firms generating financial returns from AI have done so through cost savings and productivity gains, rather than new direct revenue. Over the long term, that is likely to change.
In a September report, the Boston Consulting Group found that the financial impact of AI has so far been limited for many companies. Of the 1,250 firms it surveyed, 60 percent had seen “minimal revenue and cost gains [from the technology] despite substantial investment.”
Palmer added that traditional HR metrics, such as headcount, engagement, and attrition, may fail to capture how AI is reshaping a business. “Boards should request new key performance indicators that measure model accuracy, reliability, and compliance,” he said.
He argued for AI governance frameworks that mirror human oversight. “Each system should have a defined role, an accountable owner, and continuous evaluation,” he said. “These structures should be transparent to the board.”
Currently, workers interacting with AI are largely acting as “AI trainers,” Palmer continued. But in the coming years, these roles will transition into “AI supervisors.” He analogized the situation to self-driving cars. During early training periods, a human driver sits behind the wheel and helps train the software, but “over time, the AI learns to drive.” Eventually, he said, “the driver no longer drives. They simply tell the car where they want to go.”
“These profound shifts will require board members to more proactively communicate with stakeholders, including employees and investors,” said Andrew Chrostowski, who has served as a director at numerous firms. For example, “shareholders will want to see if the benefits of AI applications are lowering costs, increasing customer engagement, driving sales, or all of the above.”
Chrostowski also encouraged open communication about data center usage, because these centers consume enormous amounts of power and may be tied to corporate sustainability goals.
In the short term, Chrostowski concluded, “boards should ensure that management is focused on hiring and retaining employees with strong AI skills and a high willingness to apply it.”
The risks of AI are as numerous as the upsides. Board members should consider asking themselves the following questions:
For directors, the stakes could not be higher. (For more, see the NACD Director Essentials Implementing AI Governance.)
In October, Google announced a breakthrough from its quantum computing research division. Quantum computers leverage principles of quantum physics to solve certain problems exceptionally quickly; according to The New York Times, Google’s “quantum algorithm runs 13,000 times as fast as software written for a traditional supercomputer.”
In plain English, that means the machine, and others soon to be developed, could be used to more easily achieve breakthroughs in pharmaceuticals, civil engineering, and myriad other fields.
“It’s a big deal,” Chrostowski said.
Chrostowski expects that quantum computing will affect the fields of telecommunications, banking, oil and gas exploration, aerospace, and beyond.
One of the risks, however, is that these ultrapowerful computers will be able to “break down encryption standards,” Tiernan said, which could have massive ramifications for firms’ ability to safeguard data and other proprietary information.
“It's not quite mainstream yet,” she continued. “But boards maybe should at least be aware of it and be understanding that it's going to have an impact in a variety of areas.”
Potential questions for directors include:
“A new field of science—synthetic biology—will create enormous optionality and risk for nearly every industry,” forecast Amy Webb, CEO of Future Today Strategy Group. “We’ve started programming biology the way we program software. Startups are now designing microbes that sequester carbon, produce plastics, grow textiles, and synthesize pharmaceuticals, all without mining, refining, or shipping.”
“Syn-bio’s inflection point is nearing,” the Boston Consulting Group concurred in a 2024 report. And since then, startups have made tremendous progress.
Webb predicted that a new “bioeconomy” will soon emerge, and she argued that these advancements will become “the most important technology platform of the 21st century.”
For board members, she said, the looming question is how to oversee these developments responsibly. “When you can design living systems, you also inherit the risks of mutation, misuse, and unintended consequences,” she said. Potential issues include:
“These technologies all seem far off, but the truth is that they are present-day reality, and they really do impact every leader,” Webb said.
The future of robotics looks nothing like Star Wars. Rather, said Marva Bailer, CEO of the tech advisory firm Qualaix, robotics is being applied far beyond high-tech factories, from remote mining projects to autonomous vehicles and precision surgical tools.
These machines hold the potential to become vastly more sophisticated as AI “brains” become more widely adopted.
“Generalist robots will emerge over the next decade,” Accenture predicted earlier this year. These machines could then evolve into “specialist robots” that can rapidly learn to perform different tasks, such as helping fulfill warehouse orders.
Bailer cited Caterpillar Inc. as a firm that has “deployed fleets of autonomous mining trucks and excavators” that can operate in difficult environments. The equipment features advanced sensors that can “transmit constant data on terrain, fuel, and performance, allowing predictive maintenance and safer, more efficient operations.”
For boards across industries, Bailer said, developments in robotics will require careful oversight. Sensors “generate vast streams of sensitive operational and personal data, raising questions of privacy, ownership, and monetization,” she noted.
Moreover, companies may need to reskill employees if robotic machines render certain jobs obsolete. Meanwhile, Bailer continued, boards will need to monitor their competitors’ investments in robotics, because “laggards risk disruption.”
Questions for directors:
Many near-term cybersecurity risks, along with some of the corresponding improvements, are partly a byproduct of AI. On the risk side, AI systems can be used to write malicious code and mount attacks against computer networks.
According to a June report from BDO Digital, 68 percent of corporate leaders “say technological advances are intensifying cyber risks and generating new forms of cybercrime.”
To combat these growing threats, Tiernan said, “boards have to ask, are we investing in layered cyber defenses?”
Palmer outlined other key considerations for directors, such as:
As with all technologies, boards need to consider: What are the cybersecurity and other risks to our organization if we adopt this technology, and what is the risk if we don’t?
Still, even as these changes unfold, boards must continue paying attention to “old-fashioned” security risks, advised Niall Brennan, a board member at the Internet Security Alliance. That includes investigating fraud concerns, protecting executives, and monitoring “insider” technological risks from staff members and contractors.
Michael Gerstenzang, managing partner at Cleary Gottlieb, noted that new technologies may also diminish certain cybersecurity risks. AI, for example, could be harnessed to devise new methods for combating cyberattacks. How these transformations play out remains to be seen.
In summary, experts told NACD, board members need to study up on the many advancements on the horizon. (See the 2024 Blue Ribbon Commission Report, Technology Leadership in the Boardroom: Driving Trust and Value, for guidance and tools to strengthen technology oversight in the boardroom.)
“Technology is no longer a sector,” Webb said. “It’s the substrate of the global economy, the invisible infrastructure shaping every industry, every market, every decision.”
“2026 is not the year to delegate technology understanding,” Palmer concurred. “It’s the year every director becomes fluent in AI strategy, data ethics, and digital accountability. This is a fundamental fiduciary responsibility.”
Noah Kirsch is a contributing writer for Directorship and Directorship Online.
This article is part of the 2026 Governance Outlook report that provides governance insights for the year ahead.