Enterprises Want More AI, Just Not the Autonomous Kind
Enterprises are widening AI use but slowing the autonomy dial. S&P Global's Capital IQ Pro and McKinsey's adoption data sketch a market that values grounded outputs, traceable sources and human-in-the-loop accountability over flashy agents.

Many companies are taking a slower, more controlled approach to autonomous systems even as AI adoption grows. Instead of deploying systems that act on their own, they are leaning on tools that assist human decisions and keep control over outputs. The pattern is sharpest in sectors where errors carry real financial or legal risk.
The new default: assistance over autonomy
A clear example comes from S&P Global Market Intelligence, which threads AI features into its Capital IQ Pro platform. Analysts use the system to review company filings, earnings calls and market data, and the AI is explicitly designed to stay grounded in source material.
According to S&P Global Market Intelligence, the tools extract insights from structured and unstructured data, including transcripts and reports, while remaining tied to verified source data. That phrasing, "verified source data", is doing a lot of work; it's the bit that makes the output usable for decisions auditors will eventually examine.
Adoption is ahead of autonomy
The current wave of enterprise AI is often pitched as a step toward fully autonomous agents: systems that plan tasks and act without direct human input. But most companies aren't there yet.
Adoption is already widespread, with a majority of organisations using AI in at least one part of their business, according to research from McKinsey & Company. Many of those organisations have yet to scale AI across the enterprise, exposing a gap between initial use and broader deployment.
Where AI is in production today, it tends to handle work like:
- Summarising documents
- Answering queries against trusted content
- Surfacing trends and outliers across large datasets
It rarely acts independently. S&P Global Market Intelligence's tools, for example, let users query large datasets through a chat interface, but the answers are tied to verified financial content. In many cases users can drill back into the underlying documents, which lowers the risk of unsupported outputs.
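The grounding pattern described above can be sketched in a few lines. This is a minimal illustration, not S&P Global's implementation; the corpus, function name and matching logic are all hypothetical, and a production system would use proper retrieval rather than word overlap.

```python
# Sketch of a "grounded answering" pattern: every response must cite a
# document from a trusted corpus, and queries with no supporting source
# are refused rather than answered. All data here is hypothetical.

TRUSTED_DOCS = {
    "q3-earnings-call": "Revenue grew 12% year over year, driven by subscriptions.",
    "annual-report": "The company reported an operating margin of 18% for the fiscal year.",
}

def grounded_answer(query: str) -> dict:
    """Return an answer only when a trusted document supports it."""
    terms = set(query.lower().split())
    best_id, best_overlap = None, 0
    for doc_id, text in TRUSTED_DOCS.items():
        overlap = len(terms & set(text.lower().split()))
        if overlap > best_overlap:
            best_id, best_overlap = doc_id, overlap
    if best_id is None:
        # No supporting source: decline instead of generating an answer.
        return {"answer": None, "source": None}
    # The source id lets a reviewer drill back into the underlying document.
    return {"answer": TRUSTED_DOCS[best_id], "source": best_id}

print(grounded_answer("How did revenue grow this year?"))
```

The point of the sketch is the refusal branch: when no trusted source matches, the system returns nothing rather than an unsupported answer, which is what keeps the output auditable.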
In its own research, the company describes AI governance as a process where systems are deliberately designed and continuously monitored, with attention to fairness and accountability.
High-risk sectors raise the bar
In finance, small errors compound into large consequences. That shapes how AI is built and used. Tools like Capital IQ Pro are designed to support analysts, not replace them. The system might surface insights or highlight trends, but final decisions still rest with a human.
The gap between adoption and value is starting to surface. Many organisations report a lag between AI deployment and measurable business outcomes, according to McKinsey & Company's findings. While autonomous systems may handle certain tasks well, businesses often need clear lines of accountability. When decisions affect investments, compliance or reporting, there has to be a way to explain how the decision was made.
S&P Global's research notes that organisations are increasingly focused on building governance frameworks to manage AI risks, including data quality issues and model bias. That isn't optional plumbing; it's the spine that makes the rest of the stack defensible.
Toward future systems
The distance between today's controlled AI tools and tomorrow's autonomous systems is still wide. Interest in more autonomous and agent-driven systems is growing, even as most organisations remain in early stages. The systems most likely to be trusted are the ones that:
- Explain their outputs.
- Show their sources.
- Operate inside clearly defined limits.
Autonomous agents may one day handle financial analysis or supply-chain planning with minimal input. But without clear control mechanisms, their use will stay limited.
These themes will feature at AI & Big Data Expo North America 2026 on May 18-19. S&P Global Market Intelligence is listed as a bronze sponsor of the event, with an agenda that includes AI governance and the use of AI in regulated industries.
Balancing capability and control
The push toward autonomous AI is unlikely to slow down. Advances in large language models and agent-based systems keep expanding what AI can do.
Enterprise users are asking the more interesting question: how do you keep those systems under control? S&P Global Market Intelligence's approach reflects that concern. By keeping AI grounded in verified data and putting humans at the centre of decision-making, it prioritises trust over autonomy.
As systems grow more capable, the ability to govern and control them could become just as important as the tasks they perform.
(Photo by Hitesh Choudhary.)