Online Exclusive

Five Questions Boards Should Ask about the AI Road Map

By Syed Quiser Ahmed

05/20/2025

Artificial Intelligence Strategy Article

These questions can guide discussions with management to drive strategic alignment and ensure AI initiatives deliver long-term value.

Artificial intelligence (AI) has rapidly evolved from a niche technology reserved for early adopters into a mainstream tool that is fundamentally reshaping the business landscape. The accelerating pace of innovation, driven by agentic AI frameworks, improved model architectures, and emerging communication protocols, is shortening innovation cycles and amplifying the potential for disruption across every sector.

For boards, effective AI oversight is no longer optional; it is a strategic necessity. To fulfill their fiduciary duties and ensure long-term value creation, boards should engage CEOs with precise, forward-looking questions that aim to reveal the technology’s strategic implications, risks, and opportunities.

Answers to the five essential questions below will help boards navigate the complex AI landscape and form a framework for how AI should be discussed in the boardroom.

1. How is AI integrated into the company’s core business strategy?

AI should be embedded into the company’s overarching strategy and transformation road map. The board’s role is to ensure AI investments are not treated as isolated technical projects that solve narrow, immediate problems but as enablers of the company's long-term strategic vision. A clear connection between AI initiatives and business strategy helps prevent “AI washing,” or superficial investments made primarily for market positioning instead of genuine business value creation.

2. What is the company's approach to deploying “responsible AI by design” and managing risks?

AI introduces novel risks, such as bias in decision-making, privacy violations, security vulnerabilities, hallucinations, and copyright infringement, that can harm all stakeholders. AI can also fail in unpredictable ways, leading to operational disruptions, regulatory penalties, litigation, or reputational harm.

The board should seek assurance from the CEO that the company has clearly defined ethical principles for AI development and deployment, backed by concrete governance structures, processes, and technical guardrails. It should ensure that responsible AI principles are integrated across the technology deployment lifecycle. Risk management practices for AI should also be as rigorous as those applied to financial, operational, and cybersecurity risks. The CEO should explain how the organization systematically identifies, monitors, and mitigates AI-specific risks with appropriate controls, contingency planning, and escalation procedures.

3. How are outcomes from AI programs measured?

It is important for the board to focus on how the success of AI initiatives will be measured. This involves understanding the key performance indicators that will be used to track progress, the expected return on investment, and the methods for determining the impact of AI initiatives on business outcomes. The board should ensure that there is visibility into how progress compares to industry benchmarks, the criteria used to scale or terminate initiatives, and how the road map is updated with upcoming innovations or capabilities.

4. How does the organization plan to scale AI across the enterprise?

For AI initiatives to successfully scale, a robust foundation is required. This calls for access to high-quality, well-governed data; skilled technical talent; and scalable, secure technology infrastructure. Without these building blocks, even the most visionary AI strategies will falter.

With this foundation in mind, the board should assess whether the company is well-positioned to execute on its AI road map. The CEO should detail how internal centers of excellence and teams are delivering on AI projects by sharing how the company plans to upskill existing employees, how change is managed, and how partnerships or acquisitions are leveraged to fill gaps. The CEO should also clearly detail specific initiatives to foster an AI-ready culture in which experimentation is encouraged, cross-functional collaboration thrives, and employees at all levels leverage AI assistants safely in their daily work.

5. How will the organization adjust its AI initiatives to remain current and forward-looking?

Boards should ask how the company stays abreast of AI innovation, whether through internal research and development, strategic alliances, venture investments, or broader innovation ecosystems. The CEO should explain how the organization strategically balances building in-house AI capabilities and leveraging external partnerships. Continuous learning and strategic adaptability are vital. If AI initiatives are not periodically reviewed, recalibrated, and refreshed, the company risks falling behind competitors that capitalize on emerging AI advancements. Mechanisms should exist to scan the market and gather intelligence on upcoming developments, best practices, and risks, along with channels to act on that intelligence.

In an era in which AI is reshaping entire industries, boards can no longer afford to take a passive stance. Effective AI oversight requires boards to ask pointed, strategic questions that probe beyond surface-level progress updates. By engaging CEOs on the integration, governance, risk management, innovation, and capabilities of AI, boards fulfill a critical role: ensuring that AI serves as a true driver of sustainable enterprise value rather than an unmanaged source of risk.

The views expressed in this article are the author's own and do not represent the perspective of NACD.

Infosys is a NACD partner, providing directors with critical and timely information and perspectives. Infosys is a financial supporter of the NACD.

Syed Quiser Ahmed is head of Infosys’s Responsible AI Office, where he leads AI compliance and responsible AI research for the development of technical, process, and policy guardrails that ensure the technology meets legal, security, and privacy standards.