Online Exclusive

Rebuilding AI Governance to Oversee Agents

By Markus Bernhardt

04/29/2026

Key Points
  • AI governance must transition from measuring simple system activities to overseeing the organizational outcomes produced by autonomous agents.
  • Many organizations are unprepared for AI agent deployment because they lack the framework necessary to diagnose and classify the differing tiers of autonomy currently operating within their systems.
  • The rise of employees deploying unsanctioned, cross-platform AI agents creates a critical blind spot for management teams and boards that carries heavy legal weight under the EU AI Act, among other regulations.

This AI-generated summary, based on content on this page, was reviewed by NACD editors for accuracy.

These are the five questions boards should ask management about AI agents before the next audit committee meeting.

Your artificial intelligence governance framework was almost certainly written for a system that waits to be asked, does what it is told, and stops when it is done. That describes a tool. It does not describe what is now running inside many organizations.

Governance built for tools measures activity: what the system processed, how many queries it handled, and what tasks it completed. It does not measure outcomes, or what changed in the organization because the system acted. For example, an agent that reads emails ...


Markus Bernhardt


Markus Bernhardt, PhD, is principal of Endeavor Intelligence, an independent research and advisory practice. He advises enterprise leaders and boards globally on AI strategy and organizational transformation.
