Online Exclusive

Agentic AI: A Governance Wake-Up Call

By Syed Quiser Ahmed

07/17/2025


As AI becomes autonomous, it will impact board oversight, regulatory compliance, and risk exposure.

The artificial intelligence landscape is undergoing a fundamental transformation that demands attention from corporate boards. While traditional AI systems have functioned as tools for automation with human oversight, agentic AI systems, technology that operates with unprecedented autonomy, have emerged. These systems do not just follow instructions; they set goals, make decisions, and take actions independently.

Consider the difference between a traditional customer service chatbot that follows scripted responses and an agentic AI system that can analyze customer complaints, research company policies, coordinate with multiple departments, negotiate solutions, and even authorize refunds or service credits—all without human intervention. The efficiency gains are remarkable, but so are the risks.

For the boardroom, this shift presents unprecedented governance challenges.

Compliance in the Age of Autonomous Action

Regulatory compliance becomes exponentially more complex when AI systems can take thousands of actions daily without human review. Traditional compliance approaches that rely on periodic audits, approval workflows, and after-the-fact review are insufficient for systems that operate in real time across multiple jurisdictions and regulatory domains.

Forward-thinking organizations are implementing "embedded compliance," or building regulatory requirements directly into an AI system’s design and operation. This includes real-time monitoring systems that can detect potential violations before they occur, automated compliance checks that halt potentially risky actions and flag them for human review, and comprehensive audit trails that document every decision the system makes and its rationale.
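To make the embedded-compliance pattern concrete, the sketch below gates an agent's actions through a pre-action policy check and records an audit trail for each decision. It is a minimal illustration, not a production design: the names (`ComplianceGate`, `refund_limit`) and the $100 autonomous-refund threshold are hypothetical assumptions chosen for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    """One line of the audit trail: what was attempted, the outcome, and why."""
    timestamp: str
    action: str
    amount: float
    decision: str    # "approved" or "escalated"
    rationale: str

@dataclass
class ComplianceGate:
    """Pre-action compliance check for an AI agent (illustrative only).

    Actions within the autonomous limit are approved; anything larger is
    escalated for human review. Every decision is logged with a rationale.
    """
    refund_limit: float = 100.0  # hypothetical policy threshold, not a real rule
    audit_log: list = field(default_factory=list)

    def review(self, action: str, amount: float) -> str:
        if amount <= self.refund_limit:
            decision = "approved"
            rationale = f"within ${self.refund_limit:.2f} autonomous limit"
        else:
            decision = "escalated"
            rationale = "exceeds autonomous limit; human review required"
        # Comprehensive audit trail: record every decision and its rationale.
        self.audit_log.append(AuditEntry(
            timestamp=datetime.now(timezone.utc).isoformat(),
            action=action,
            amount=amount,
            decision=decision,
            rationale=rationale,
        ))
        return decision
```

In use, the agent would call `review` before committing resources, so a $45 refund proceeds autonomously while a $500 refund is flagged for a person, and both decisions land in the audit log:

```python
gate = ComplianceGate()
gate.review("refund", 45.0)   # -> "approved"
gate.review("refund", 500.0)  # -> "escalated"
```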

However, embedded compliance is only as good as the regulations it is designed to follow. AI development often outpaces regulatory frameworks, leaving organizations to navigate compliance in an environment of regulatory uncertainty. Boards should ensure that their organizations can adapt quickly as new regulations emerge while maintaining robust compliance postures in the interim. Creating a system to track and then assess the impact of regulatory updates from relevant sources, such as government bodies, industry associations, and regulatory intelligence providers, is a good first step toward ensuring compliance.

Risk Exposure in New Dimensions

Agentic AI introduces risks that many organizations have never encountered. Operational risks multiply when systems can initiate actions across multiple business functions simultaneously. Reputational risks escalate when AI agents interact directly with customers, partners, and the public without human oversight. Financial risks compound when systems can commit organizational resources autonomously.

Perhaps most concerning are the emergent risks, such as unexpected behaviors that arise from the complex interactions between AI systems, business processes, and external environments. These risks are difficult to predict, model, or prepare for using traditional risk management approaches. They require new forms of monitoring, such as real-time feedback and AI-powered analytics, along with rapid response capabilities and adaptive governance frameworks.

The interconnected nature of modern business amplifies these risks. An AI agent's decision in one part of the organization can cascade through supply chains, partner networks, and customer relationships in ways that are difficult to anticipate or control. Boards should understand these interconnections and ensure that risk management frameworks spanning functions and processes are in place to account for the systemic nature of AI-driven decisions.

Building Effective AI Governance Frameworks

Effective governance of agentic AI, where AI makes more nuanced decisions, requires boards to move beyond traditional oversight models toward more dynamic, adaptive approaches. In some cases, this translates into real-time feedback loops and escalation matrices designed for immediate intervention and response. It begins with establishing clear AI governance principles that articulate the organization's values, risk tolerance, and operational boundaries for AI systems. These principles should be specific enough to guide system design and operation yet flexible enough to accommodate rapidly evolving technology.

Board-level AI oversight should include dedicated expertise, whether through board composition, advisory relationships, or comprehensive director education. The technical complexity and rapid evolution of AI systems make it essential for boards to have access to a deep technical understanding of both the risks and the rewards, even if not every director needs to become an AI expert.

Boards should ask management to ensure that governance frameworks address the full lifecycle of AI systems, from initial development through deployment, operation, and eventual retirement. This includes oversight of data governance, model development, testing and validation, deployment processes, ongoing monitoring, and incident response. Each stage presents unique governance challenges that require board attention.

A Call to Action

The emergence of agentic AI is a governance inflection point comparable to the advent of other transformative technologies that have reshaped business and society.

For directors, the path forward requires acknowledging that traditional governance approaches are insufficient in the age of autonomous AI. It demands investing in new capabilities, frameworks, and mindsets suited to operating in, and adapting to, a rapidly evolving technology landscape. Most importantly, it requires taking action now, such as by instituting robust feedback and response systems, while there is still time to shape the trajectory of AI deployment within their organizations.

The question is not whether agentic AI will transform business operations but whether boards will lead that transformation through effective oversight or find their companies struggling to govern systems and risks they do not understand. The time to make this choice is now.

The views expressed in this article are the author's own and do not represent the perspective of NACD.

Infosys is a NACD partner, providing directors with critical and timely information and perspectives. Infosys is a financial supporter of the NACD.


Syed Quiser Ahmed is head of Infosys’s Responsible AI Office, where he leads AI compliance and responsible AI research for the development of technical, process, and policy guardrails that ensure the technology meets legal, security, and privacy standards.