Dremio Blog

7 minute read · January 9, 2026

Every Rose Has Its Thorn: 5 Risks of AI Agents

Will Martin · Technical Evangelist

Key Takeaways

  • Agentic AI enables autonomous problem solving and decision making but poses risks like loss of control and unpredictability.
  • Cascading errors can occur with Agentic AI: an early mistake can compound through a multi-step process quickly and often goes unnoticed.
  • Security vulnerabilities arise as Agentic AI has access to sensitive systems, creating potential points of exploitation and data leaks.
  • Over-reliance on AI agents can lead to deskilling, making users dependent on technology for critical tasks.
  • Bias amplification is a concern since AI agents may perpetuate existing biases from their training data, affecting sensitive decision-making processes.

Agentic AI offers several compelling benefits that are transforming how we work and solve problems. The key difference from traditional AI is agency: the agent’s ability to plan, use tools, take actions in sequence, and work toward goals rather than simply responding to individual prompts.

There are many upsides to embracing AI agents in your work, such as autonomous problem solving, enhanced decision making, and cost efficiencies. However, these potential benefits are not without risks. Below we discuss some of the most relevant pitfalls to consider when implementing Agentic AI in your workflows or organisation.

Try Dremio’s Interactive Demo

Explore this interactive demo and see how Dremio's Intelligent Lakehouse enables Agentic AI

1) Loss of control and unpredictability: 

By making its own decisions about how to achieve a goal, agentic AI can take unexpected actions or find loopholes in its instructions. This ranges from inconveniences, like misinterpreting instructions, to serious issues, such as executing harmful or irreversible actions the user never intended. There have been several horror stories from developers whose AI agents bypassed guardrails and wiped their hard disks without permission or even notification.

Remember that agents can use tools, call APIs, access financial systems, and execute code. Without proper constraints, they could delete or change important files, make unapproved purchases, launch harmful code, or make unauthorised system changes.
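One practical safeguard is to fail closed: wrap every tool call in a policy check so the agent can only use an approved set of tools, and destructive actions require explicit human sign-off. Below is a minimal sketch in Python; the tool names and policy sets are hypothetical, not from any specific agent framework.

```python
# Minimal sketch of a tool-call guardrail. All tool names and policy
# sets here are illustrative, not from any specific agent framework.

SAFE_TOOLS = {"search_docs", "read_file"}                         # run freely
APPROVAL_REQUIRED = {"delete_file", "execute_code", "make_purchase"}

def run_tool(name: str, args: dict, tools: dict):
    """Execute a tool call only if policy allows it."""
    if name in SAFE_TOOLS:
        return tools[name](**args)
    if name in APPROVAL_REQUIRED:
        answer = input(f"Agent wants to run {name}({args}). Approve? [y/N] ")
        if answer.strip().lower() == "y":
            return tools[name](**args)
        return {"error": f"{name} denied by operator"}
    # Fail closed: anything not explicitly listed is rejected.
    return {"error": f"{name} is not an allowed tool"}
```

The key design choice is the default: an unrecognised tool is rejected rather than executed, so new capabilities must be deliberately granted.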

The topic of control also relates to the concept of accountability. If AI agents are critical to making decisions at an organisation, who’s to blame when those decisions are bad? Is it the AI or the humans involved, such as the developer, vendor, or user? This ambiguity complicates legal and ethical issues around many automation technologies, most prominently in the self-driving car industry.

2) Cascading errors: 

When an agentic system makes a mistake early in a multi-step process, that error can compound through the whole workflow and all subsequent actions. Unlike a simpler Q&A interaction where you can catch mistakes immediately, an autonomous agent might execute an entire flawed strategy before anyone notices.

This becomes a bigger issue when you take into account the speed at which agentic AI systems operate. An agent can replicate or compound a mistake many times over, at a pace far beyond what any person could manage. Unfortunately, these mistakes, while quick to make, are rarely as fast (or easy) to remedy.
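A common containment pattern is to put a cheap validation checkpoint between steps, so the workflow halts at the first bad output instead of building on it. A minimal sketch, assuming each step is a pair of hypothetical action and validator callables:

```python
def run_plan(steps, max_retries=2):
    """Run (action, validator) pairs in order, halting on a bad step."""
    results = []
    for i, (action, validate) in enumerate(steps):
        for _ in range(max_retries + 1):
            output = action(results)      # each step sees prior results
            if validate(output):          # cheap sanity check on the output
                results.append(output)
                break
        else:
            # Abort the whole plan rather than compound a bad output.
            raise RuntimeError(f"step {i} failed validation; halting plan")
    return results
```

Checkpoints like this trade a little latency for the ability to catch a flawed step before the agent executes the rest of its strategy on top of it.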

3) Security vulnerabilities: 

Agentic AI, by having access to tools, APIs, and systems, creates new vulnerabilities and attack surfaces for hackers to exploit. AI agents are attractive targets: they have access to systems attackers want, they can run actions autonomously, and they often lack robust authentication or sandboxing. All of these are powerful capabilities in the wrong hands.

Agents can be manipulated through prompt injection, grant system access to unauthorised users, or expose sensitive data. Leaks can also be unintentional: information may be exposed inadvertently while the agent pursues its task objectives, or through backend processes that store logs or outputs insecurely. These are important vulnerabilities to consider and protect against when integrating agentic AI into your work.
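Insecure logging is one of the easier leaks to close. Here is a minimal sketch of scrubbing likely secrets from agent output before it is persisted; the patterns and function name are illustrative only, and a real deployment would need a far more complete set.

```python
import re

# Sketch: treat agent output as untrusted before it reaches the logs.
# These patterns are illustrative, not a complete redaction policy.
REDACTIONS = [
    (re.compile(r"(?i)bearer\s+[A-Za-z0-9._\-]+"), "[REDACTED_TOKEN]"),
    (re.compile(r"\b\d{13,16}\b"), "[REDACTED_CARD]"),
    (re.compile(r"(?i)(api[_-]?key\s*[:=]\s*)\S+"), r"\1[REDACTED]"),
]

def safe_log(message: str) -> str:
    """Scrub likely secrets from a message before it is persisted."""
    for pattern, replacement in REDACTIONS:
        message = pattern.sub(replacement, message)
    return message

print(safe_log("api_key=sk-12345 used with Bearer abc.def"))
# -> api_key=[REDACTED] used with [REDACTED_TOKEN]
```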

4) Over-reliance and deskilling: 

As people delegate more complex tasks to AI agents, they risk losing the skills and judgment needed to verify the AI’s work or step in when it fails. This can create dangerous dependencies, especially in critical domains like healthcare or infrastructure. The risk applies not only to the execution of tasks but also to the cognitive work of problem solving and planning, which can likewise be delegated to agents. The phenomenon resembles autopilot dependence in aviation: a growing safety concern in which pilots become reliant on automation, leading to diminished flying skills, reduced vigilance, and complacency.

Over-reliance on automation via AI also presents a cost risk to organisations. If business operations depend on AI, they become highly susceptible to pricing and performance changes in the underlying models.

5) Bias amplification: 

AI agents are powered by LLMs that have been pre-trained on huge volumes of data. Any biases in that data can be “learnt” by agents and perpetuated through the decisions they make and the actions they take. This can have disastrous consequences in tasks sensitive to discrimination, such as hiring, loan approvals, or even data analysis.

This risk is an important consideration in the EU Artificial Intelligence Act, which includes provisions on the use of AI for social scoring and for assessing the risk of an individual committing criminal offences: in other words, any process that evaluates, classifies, or profiles individuals in ways that can materially affect their treatment or livelihood. Such processes should include human supervision, with a person reviewing decisions and signing off on any actions.
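In code, human supervision can be as simple as a review queue that blocks an agent’s recommendations until a person approves each one. A minimal sketch; the Decision fields and queue shape are hypothetical, not from any specific framework.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str                   # who or what the decision affects
    recommendation: str            # what the agent proposes
    rationale: str                 # the agent's stated reasoning
    approved: bool | None = None   # None until a human reviews it

def review_queue(decisions: list[Decision]) -> list[Decision]:
    """Block consequential decisions until a human signs off on each."""
    for d in decisions:
        print(f"{d.subject}: {d.recommendation}\n  rationale: {d.rationale}")
        d.approved = input("Approve? [y/N] ").strip().lower() == "y"
    # Only explicitly approved decisions proceed to execution.
    return [d for d in decisions if d.approved]
```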

Summary

AI agents excel when given autonomy and the ability to take the actions they deem necessary. This means adopting a “hands-off” management approach and accepting some loss of direct control. However, the actions taken and decisions made will not always be correct. To confidently utilise agentic AI, your processes must include robust guardrails, monitoring, and governance frameworks. These are critical capabilities that let you prevent, catch, and handle any issues your agents cause.

Try Dremio Cloud free for 30 days

Deploy agentic analytics directly on Apache Iceberg data with no pipelines and no added overhead.