Taming the AI Agent Wild West: 4 steps for CIOs to take control 

Published July 1, 2025 in Technology • 8 min read

From the untamed rise of semi-autonomous agents to growing concerns around accountability and security risks, AI agents are creating new headaches for CIOs. Here’s how tech leaders can navigate the challenges.

Tipped as one of 2025’s biggest trends, AI agents are rapidly transforming day-to-day workflows. These virtual colleagues now handle tedious administrative tasks – from data entry to appointment scheduling to customer requests. Yet behind the efficiency gains lurks a potential headache for tech leaders. As Meredith Whittaker, President of encrypted communications app Signal, puts it, “There’s a profound issue with security and privacy that is haunting this hype around agents.”

The easy availability of agent-building tools enables unsanctioned “shadow AI” – an uncontrolled proliferation of semi-autonomous agents scattered across departments without consistent standards. This creates murky accountability: when an agent makes an error causing reputational or regulatory damage, who bears responsibility – the deploying organization, the agent provider, or the underlying model developer?

The risks compound quickly. Agents’ broad access privileges expand attack surfaces. Growing ethical concerns around transparency, bias, and human oversight demand strict governance frameworks. Perhaps most insidiously, agents can undermine organizational AI efforts by rapidly generating vast amounts of new data, creating “data pollution” through silos, duplicates, and inaccuracies without robust oversight.

This poses a fundamental question for Chief Digital and Information Officers: How can you design compliance guardrails, monitoring systems, and ethical guidelines that ensure AI agents deliver transformative value without compromising safety, quality, or trust?

What’s different about today’s agents?

Despite the current hype, AI agents aren’t new. Early autonomous and semi-autonomous systems performed specific tasks using predefined rules and decision trees. Machine learning agents based on structured data remain the backbone of platforms like Uber’s marketplace balancing system.

Similarly, Robotic Process Automation (RPA) used software “bots” to automate repetitive, rule-based tasks such as entering data, processing transactions, generating reports, or handling simple customer inquiries. Rather than working through predefined decision trees, RPA bots mimic user actions on a screen, with Excel macros serving as an early example.

Today’s AI agents differ fundamentally in their data-handling capabilities. Think of them as “RPA for unstructured data.” Like their predecessors, they’re autonomous, software-based tools that carry out tasks, make decisions, and interact with users and data in real time. What makes them special is their ability to leverage Large Language Models (LLMs) for natural language interaction. As Tobias Guenthoer, CIO of German pharmaceutical company STADA explains, “AI agents are the natural evolution from RPA and GenAI – combining structure with creativity.” This creates unprecedented opportunities for value creation, alongside significant organizational risks.

These agents typically follow a three-layer architecture. The first layer interprets natural language requests and breaks them into specific tasks. The middle layer consists of specialized components acting outside the main agent environment – retrieving information, accessing databases, making purchases, or sending emails. This layer can include traditional tools like Excel spreadsheets and any software the agent can access from CRM systems to financial platforms. The final layer uses another LLM to combine the disparate outputs of the middle layer and convert them back into natural language for users.
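The three layers described above can be sketched as a minimal pipeline. This is an illustrative assumption of how the flow fits together, not any vendor’s implementation: the “LLM” steps are faked with lookups, and the tools are stubs.

```python
# Minimal sketch of the three-layer agent architecture described above.
# The keyword lookups and canned tool results are stand-ins for the
# LLM calls and outside systems a real agent would use.

def interpret_request(user_text: str) -> list[str]:
    """Layer 1: an LLM would parse a natural language request into
    concrete tasks. Faked here with a keyword lookup."""
    tasks = []
    if "order" in user_text.lower():
        tasks.append("place_order")
    if "email" in user_text.lower():
        tasks.append("send_email")
    return tasks

def execute_task(task: str) -> str:
    """Layer 2: specialized components act on outside systems
    (databases, CRM, email). Stubbed with canned results."""
    results = {
        "place_order": "order #123 placed",
        "send_email": "confirmation email sent",
    }
    return results.get(task, "unknown task skipped")

def summarize(outputs: list[str]) -> str:
    """Layer 3: a second LLM would merge the disparate outputs back
    into natural language. Here we simply join them."""
    return "Done: " + "; ".join(outputs)

def run_agent(user_text: str) -> str:
    tasks = interpret_request(user_text)        # layer 1
    outputs = [execute_task(t) for t in tasks]  # layer 2
    return summarize(outputs)                   # layer 3

print(run_agent("Please order groceries and email me a confirmation"))
```

The point of the structure is that the middle layer acts on the outside world, which is exactly where the governance risks discussed later arise.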

This general-purpose design – strong unstructured-data and natural language capabilities combined with access to outside systems – makes agents extremely versatile, spanning everything from virtual assistants and chatbots to advanced process automation systems.

Real-world applications

Several products illustrate how AI agents are already transforming workflows. Anthropic demonstrated agents that handle data entry by observing a user’s screen, moving cursors, and filling out forms automatically. OpenAI’s Operator, announced in January 2025, browses the web independently, filling in forms, ordering groceries, and even creating memes. Though still in research preview, Operator is intended to save users time and effort while expanding how AI can interact with standard web interfaces.

Enterprise solutions are equally ambitious. Salesforce’s Agentforce helps teams handle surges in customer requests around the clock, using pre-built skills spanning CRM and Slack integrations that are extensible through APIs and MuleSoft connectors. Microsoft Copilot Studio lets users build custom agents within their existing ecosystem – scheduling appointments in Outlook, updating Excel with real-time data, or summarizing Teams discussions. China-based Manus recently launched what it claims is the world’s first general-purpose agent, promising “hands that work following users’ minds.”

Beyond tech giants, startups are driving innovation. Lyzr AI develops cloud-based agents that let firms scale rapidly. As founder and CEO Siva Surendira explains, “We asked a different question: ‘What if we built a framework that allows enterprises to go beyond prototyping to scale from 10 to 100?’”

These agents are poised to proliferate across business and personal environments, growing more powerful and capable, while creating new challenges in the process.

Why agents can become a nightmare for CDOs and CIOs

Unlike earlier automation tools, AI agents introduce unique risks due to their autonomy, learning capabilities, and potential deep integration with organizational data and systems.

Data pollution

AI agents consume and generate vast amounts of data. Without governance, they can quickly produce “data pollution” – creating silos, duplicate records, or inaccurate information that contaminates enterprise data assets. The old principle “garbage in, garbage out” not only persists but is amplified: agents trained on bad data produce flawed outputs that cascade across interconnected systems.

Demis Hassabis, CEO of DeepMind and winner of the 2024 Nobel Prize in Chemistry, warns, “If your AI model has a 1% error rate and you plan over 5,000 steps, that 1% compounds like compound interest.”
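Hassabis’s point can be checked with simple arithmetic: if each step succeeds with probability 0.99 and steps are assumed independent, the chance that an entire multi-step plan completes without error decays geometrically.

```python
# Probability that an agent completes an n-step plan with no errors,
# given a 1% per-step error rate (steps assumed independent).
def success_probability(per_step_error: float, steps: int) -> float:
    return (1 - per_step_error) ** steps

for n in [10, 100, 1000, 5000]:
    print(f"{n:>5} steps: {success_probability(0.01, n):.2e}")
```

At 10 steps the plan still succeeds about 90% of the time; by 5,000 steps the success probability is vanishingly small, which is why per-step error rates matter so much for long-horizon agents.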

Uncontrolled agent proliferation

Easy-to-use agent-building tools enable employees to spin up their own AI agents without central oversight, creating “Shadow AI.” This uncontrolled proliferation mirrors past issues with spreadsheets and shadow IT – dozens of semi-autonomous agents across departments can conflict, duplicate work, or introduce security vulnerabilities. Forrester identified around 400 vendors in this space, illustrating how the explosion of platforms is fueling adoption.

Accountability gaps

When agents autonomously make decisions, accountability becomes murky. As Cisco’s Anand Raghavan asks, “How do you audit what data different agents have accessed and constantly ensure compliance?” If an agent makes a costly error or a biased decision, does the blame fall on the deploying organization, the agent software provider, or the underlying AI model developer?

Suppose Manus, the autonomous agent built on Anthropic’s Claude, produces a defamatory report or triggers an unintended action that causes damage – responsibility could fall anywhere along the chain. Agents may also unknowingly violate industry regulations or privacy laws, putting companies at compliance risk.

Expanded attack surface

AI agents may have broad access – reading and writing data, controlling applications, and sometimes even operating web browsers or system functions. This dramatically expands attack surfaces. Gartner predicts that abuse of AI agents will account for 25% of enterprise security breaches by 2028. Key risks include agents tricked through prompt injection into leaking sensitive information, hijacked agents executing unauthorized transactions, imposter agents, and mistakes that expose data. Agents working autonomously could bypass traditional security checkpoints, and a rogue or compromised agent could have cascading effects across systems. Robust authentication, access control, and monitoring become essential, alongside careful data permissioning and privacy impact assessments for AI processes.

What can CDOs and CIOs do to prepare for the age of agentic AI?

To navigate these challenges effectively, technology leaders need a systematic governance approach. Our four-part framework of Map, Monitor, Model, and Manage provides the foundation for responsible agent deployment.

Map and create an inventory of AI Agents.

A structured, step-by-step approach is essential to deploy AI agents safely and effectively. The first step is to map and create an inventory of all AI agents and their associated data flows. CDOs and CIOs should document each agent’s inputs and outputs, identifying which systems it interacts with and the information it processes. This detailed mapping will reveal any emerging data streams, overlaps, or silos. Organizations can then address gaps or unnecessary duplication, ensuring no agents operate under the radar, and that sensitive data access is tightly controlled.
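An inventory like this can start as a simple structured record per agent. The field names, agents, and checks below are illustrative assumptions, but they show how documented inputs and outputs make overlaps and sensitive-data exposure mechanically detectable.

```python
# A minimal agent inventory: one record per agent, documenting owner,
# systems read from and written to, and data sensitivity.
# All names and fields here are illustrative.
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    name: str
    owner: str  # accountable team or person
    reads_from: list = field(default_factory=list)
    writes_to: list = field(default_factory=list)
    handles_sensitive_data: bool = False

inventory = [
    AgentRecord("invoice-bot", "finance", ["ERP"], ["ERP"], True),
    AgentRecord("dunning-bot", "finance", ["ERP"], ["ERP"], True),
    AgentRecord("faq-bot", "support", ["knowledge-base"], [], False),
]

# Flag overlaps: two agents writing to the same system is a
# candidate for duplication or conflict.
write_targets = {}
for rec in inventory:
    for system in rec.writes_to:
        write_targets.setdefault(system, []).append(rec.name)
overlaps = {s: names for s, names in write_targets.items() if len(names) > 1}
print("write overlaps:", overlaps)

# Flag agents touching sensitive data for tighter access review.
sensitive = [r.name for r in inventory if r.handles_sensitive_data]
print("sensitive-data agents:", sensitive)
```

Even this toy version surfaces the two questions the mapping exercise is meant to answer: which agents touch the same systems, and which ones need tightly controlled data access.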

Monitor agents and their data quality.

Setting clear standards for agent-generated data, such as accuracy, completeness, and timeliness, can prevent agents from injecting faulty information into enterprise systems. This may involve adopting data dictionaries, integration pipelines, and validation checks to unify both human and agent-created data in a single source of truth. Also, continuous monitoring and auditing is critical – AI agents must be tracked like high-privilege users, with automated logs, anomaly alerts, and regular performance reviews. These measures will help detect security threats, compliance breaches, and quality issues before they escalate.
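The validation checks mentioned above can begin as a simple gate that agent-generated records must pass before reaching the system of record. The required fields, thresholds, and check names below are assumptions for illustration, mapped to the three standards named in the text: completeness, accuracy, and timeliness.

```python
# Sketch of a validation gate for agent-generated records.
# Required fields and thresholds are illustrative examples.
import datetime

REQUIRED_FIELDS = {"customer_id", "amount", "timestamp"}

def validate(record: dict) -> list[str]:
    """Return a list of quality problems; an empty list means accept."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")  # completeness
    if "amount" in record and not record["amount"] > 0:
        problems.append("non-positive amount")                 # accuracy
    if "timestamp" in record:
        age = datetime.datetime.now(datetime.timezone.utc) - record["timestamp"]
        if age > datetime.timedelta(days=1):
            problems.append("stale record")                    # timeliness
    return problems

good = {"customer_id": "C1", "amount": 99.0,
        "timestamp": datetime.datetime.now(datetime.timezone.utc)}
bad = {"amount": -5}
print(validate(good))  # accepted: no problems
print(validate(bad))
```

Rejected records can then be logged and surfaced in the anomaly alerts described above, rather than silently polluting the single source of truth.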

Model a structured AI governance framework.

Governance policies should outline how agents are approved, deployed, and managed, ensuring that risk assessments, ethical guidelines, and security considerations are built into every stage. Like the technology itself, governance protocols are constantly emerging and evolving. The Model Context Protocol (MCP), popularized by Anthropic, establishes a universal, open standard for connecting AI systems with data sources. This framework enables organizations and individuals to seamlessly integrate their AI systems with the data they require, streamlining access and improving operational efficiency. Some companies establish a dedicated Center of Excellence (CoE) that brings together cross-functional teams, including legal, security, and IT, to oversee agent-related processes. This centralized approach keeps AI expansions controlled, prevents reactive fixes, and provides a shared framework for addressing complexities like regulatory compliance and bias mitigation.
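An approval workflow of the kind a Center of Excellence would run can be made explicit in code: an agent deploys only when every governance stage has signed off. The stage names below are illustrative, not a prescribed standard.

```python
# Sketch of a deployment gate under a governance framework: an agent
# is approved only if every stage has an explicit sign-off.
# Stage names are illustrative examples.
GOVERNANCE_STAGES = ["risk_assessment", "security_review", "ethics_review"]

def approved_for_deployment(signoffs: dict) -> bool:
    """True only if every governance stage has explicitly signed off."""
    return all(signoffs.get(stage) is True for stage in GOVERNANCE_STAGES)

agent_signoffs = {"risk_assessment": True, "security_review": True}
print(approved_for_deployment(agent_signoffs))  # False: ethics review missing
agent_signoffs["ethics_review"] = True
print(approved_for_deployment(agent_signoffs))  # True
```

Requiring an explicit `True` per stage means a skipped or pending review blocks deployment by default, which is the behavior a governance framework needs.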

Manage via human oversight: Define an ‘Agent Controller’ role.

Organizations may introduce roles such as an “Agent Controller” or “AI Wrangler” (reporting to the CDO or CIO) to supervise day-to-day agent operations, handle exceptions, and continuously refine agent performance. By clearly assigning accountability, companies avoid confusion over who monitors the AI’s outputs, retrains the model, or responds to malfunctions. As agents proliferate, organizations may soon have more AI agents than employees, necessitating an ‘Agent HR’ department led by a Chief Agent Officer.

The way forward: trust but verify.

Agents offer both an incredible opportunity and a set of serious challenges for CDOs and CIOs. On the one hand, they can dramatically improve how organizations handle unstructured data, automate complex tasks, and extract valuable insights. On the other hand, they raise real concerns around data governance, security, and regulatory compliance if left unchecked. By mapping out agent data flows, establishing strong governance, keeping a close eye on performance, and setting clear accountability, leaders can turn these digital tools into powerful partners. Still, without proper guardrails, AI agents can quickly become the problem instead of the solution. Whether they end up as friends or foes ultimately depends on the decisions and oversight of the people implementing them.

“The AI-Centered Enterprise” by Ram Bala, Natarajan Balasubramanian, and Amit Joshi is available to order here.

Authors

Amit M. Joshi

Professor of AI, Analytics and Marketing Strategy at IMD

Amit Joshi is Professor of AI, Analytics, and Marketing Strategy at IMD and Program Director of the AI Strategy and Implementation program, the Generative AI for Business Sprint, and the Business Analytics for Leaders course. He specializes in helping organizations use artificial intelligence and develop their big data, analytics, and AI capabilities. An award-winning professor and researcher, he has extensive experience of AI- and analytics-driven transformations in industries such as banking, fintech, retail, automotive, telecoms, and pharma.

José Parra Moyano

Professor of Digital Strategy

José Parra Moyano is Professor of Digital Strategy. He focuses on the management and economics of data and privacy and how firms can create sustainable value in the digital economy. An award-winning teacher, he also founded his own successful startup, was appointed to the World Economic Forum’s Global Shapers Community of young people driving change, and was named on the Forbes ‘30 under 30’ list of outstanding young entrepreneurs in Switzerland. At IMD, he teaches in a variety of programs, such as the MBA and Strategic Finance programs, on the topics of AI, strategy, and innovation.

Ram Bala

Associate Professor of AI & Analytics at Santa Clara University’s Leavey School of Business

Ram Bala is Associate Professor of AI & Analytics at Santa Clara University’s Leavey School of Business. He is actively involved in AI-driven business transformation, co-leading the Prometheus Lab on AI and Business, and co-founding Samvid, an Agentic AI startup for enterprise decision-making and collaboration.

Natarajan Balasubramanian

Betty & Albert Hill Endowed Professor at the Whitman School of Management at Syracuse University

Natarajan Balasubramanian is the Betty & Albert Hill Endowed Professor at the Whitman School of Management at Syracuse University. He studies how technology, human capital, organizational learning, and innovation contribute to business value creation.
