10 minute read · April 29, 2025
A Journey from AI to LLMs and MCP – 4 – What Are AI Agents — And Why They’re the Future of LLM Applications

Head of DevRel, Dremio

We’ve explored how Large Language Models (LLMs) work, and how we can improve their performance with fine-tuning, prompt engineering, and retrieval-augmented generation (RAG). These enhancements are powerful, but the systems they produce are still fundamentally stateless and reactive.
To build systems that act with purpose, adapt over time, and accomplish multi-step goals, we need something more.
That “something” is the AI Agent.
In this post, we’ll explore:
- What AI agents are
- How they differ from LLMs
- What components make up an agent
- Real-world examples of agent use
- Why agents are a crucial next step for AI
What Is an AI Agent?
At a high level, an AI agent is an autonomous or semi-autonomous system built around an LLM, capable of:
- Observing its environment (inputs, tools, data)
- Reasoning or planning
- Taking actions
- Learning or adapting over time
LLMs generate responses, but agents make decisions. They don’t just answer; they think, decide, and act.
Think of the difference between a calculator and a virtual assistant. One gives answers. The other gets things done.
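To make that distinction concrete, here is a minimal sketch of the observe–reason–act loop. Everything here is illustrative: `llm()` is a placeholder for any chat-completion call, and the tool registry is a stub.

```python
import json

def llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call; wire in any LLM API."""
    raise NotImplementedError

TOOLS = {
    "search": lambda query: f"results for {query!r}",  # stub tool for illustration
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = []  # working memory for this one task
    for _ in range(max_steps):
        # Reason: ask the model to choose the next action, encoded as JSON.
        decision = json.loads(llm(
            f"Goal: {goal}\nHistory: {history}\n"
            'Reply with JSON: {"action": "search" or "finish", "arg": "..."}'
        ))
        if decision["action"] == "finish":
            return decision["arg"]  # the agent decided it is done
        # Act: invoke the chosen tool, then observe and remember the result.
        observation = TOOLS[decision["action"]](decision["arg"])
        history.append((decision, observation))
    return "stopped: step budget exhausted"
```

A single LLM call would end after the first response; this loop keeps reasoning and acting until the model signals it is done or the step budget runs out.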
The Core Ingredients of an AI Agent
Let’s break down what typically makes up an agentic system:
1. LLM Core
The brain of the operation. Handles natural language understanding and generation.
2. Tools / Actions
Agents can take external actions, such as calling APIs, querying databases, or running code.
3. Memory
Persistent memory lets agents recall previous interactions, facts, or task states.
4. Planner / Executor Logic
This is where agents shine. They can:
- Break down complex goals into subtasks
- Decide which tools or steps to take
- Loop, retry, or adapt based on results
5. Context Manager
Decides what information (memory, documents, tool results) gets included in each LLM prompt.
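Here is one way those five pieces might fit together in code. This is a framework-free sketch; every class, method, and convention below is our own illustration, not any particular library’s API.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    llm: Callable[[str], str]                # 1. LLM core: text in, text out
    tools: dict[str, Callable[[str], str]]   # 2. Tools/actions, looked up by name
    memory: list[str] = field(default_factory=list)  # 3. Persistent memory

    def build_context(self, task: str) -> str:
        # 5. Context manager: choose what the model sees on this turn.
        recent = "\n".join(self.memory[-5:])
        return f"Task: {task}\nRecent memory:\n{recent}"

    def step(self, task: str) -> str:
        # 4. Planner/executor: one reason-then-act cycle.
        plan = self.llm(self.build_context(task))
        if plan.startswith("TOOL:"):         # convention: "TOOL:<name>:<arg>"
            _, name, arg = plan.split(":", 2)
            result = self.tools[name](arg)
            self.memory.append(f"{name}({arg}) -> {result}")
            return result
        return plan                          # plain answer, no tool needed
```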
LLM vs AI Agent — Key Differences

| | Plain LLM | AI Agent |
| --- | --- | --- |
| Behavior | Reactive: responds to a single prompt | Proactive: plans toward a goal |
| State | Stateless between calls | Persistent memory of facts and task state |
| Tools | None; text in, text out | Calls APIs, queries databases, runs code |
| Scope | One response at a time | Multi-step loops, retries, and adaptation |

LLMs are the engine. Agents are the vehicle.
Examples of AI Agents in the Wild
Let’s explore how AI agents are already showing up in real-world applications:
1. Developer Copilots
Tools like GitHub Copilot or Cursor act as coding assistants, not just autocomplete engines. They:
- Read your project files
- Ask clarifying questions
- Suggest multi-line changes
- Run code against test cases
2. Document Q&A Assistants
Instead of just answering questions, agents:
- Search relevant documents
- Summarize findings
- Ask follow-up questions
- Offer next actions (e.g., generate reports)
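A compact sketch of that workflow, assuming two hypothetical helpers: `search_docs()` standing in for any document retriever and `llm()` for any chat-completion call:

```python
def answer_with_followups(question: str, search_docs, llm) -> dict:
    # search_docs() and llm() are hypothetical stand-ins for a document
    # retriever and a chat-completion call, respectively.
    docs = search_docs(question, top_k=3)                 # search relevant documents
    answer = llm(f"Answer {question!r} using:\n{docs}")   # summarize findings
    follow_up = llm(f"Given this answer: {answer!r}, suggest one clarifying question.")
    return {
        "answer": answer,
        "follow_up_question": follow_up,                  # ask follow-up questions
        "suggested_actions": ["generate report"],         # offer next actions
    }
```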
3. Research Agents
Given a broad prompt like “summarize recent news on AI regulation,” agents:
- Plan a research strategy
- Browse the web or internal data
- Synthesize and refine results
- Ask for confirmation before continuing
Agents Enable Autonomy and Feedback Loops
Unlike plain LLMs, agents can:
- Use tools to gather more info
- Loop on tasks until a condition is met
- Store and recall what they’ve seen
- Chain multiple steps together
For example:
Task: Schedule a meeting with Alice
Agent:
- Search calendar availability
- Find Alice’s preferred times
- Draft an email proposal
- Wait for response
- Reschedule if needed
That’s not a single LLM prompt — that’s an intelligent system managing an evolving task.
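Sketched as code, reusing the illustrative Agent object from earlier, it might look like the loop below. Every tool name here is invented for the example; in practice each would wrap a real calendar or email API.

```python
def schedule_meeting(agent, attendee: str = "Alice", max_attempts: int = 3) -> str:
    for attempt in range(max_attempts):
        slots = agent.tools["calendar_free_slots"]("this week")   # search availability
        prefs = agent.tools["lookup_preferences"](attendee)       # preferred times
        email = agent.llm(f"Draft a meeting proposal. Slots: {slots}. Prefs: {prefs}.")
        reply = agent.tools["send_and_await_reply"](email)        # wait for response
        if "accept" in reply.lower():
            return f"Scheduled: {reply}"
        # Not accepted: remember why, then loop and reschedule.
        agent.memory.append(f"attempt {attempt + 1} declined: {reply}")
    return "Escalate: could not schedule automatically."
```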
How Are Agents Built Today?
A number of popular AI agent frameworks have emerged:
- LangChain: Modular orchestration of LLMs, tools, and memory
- AutoGPT: Autonomous task completion with iterative planning
- Semantic Kernel: Microsoft’s framework for embedding LLMs into software
- CrewAI / MetaGPT: Multi-agent systems with defined roles
These frameworks let developers prototype powerful workflows, but they come with challenges — especially around complexity, tool integration, and portability.
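As one illustration, here is what wiring an LLM to a custom tool looked like with LangChain’s classic `initialize_agent` API. That API has since been deprecated in favor of newer abstractions, and exact imports vary by version, so treat this as a sketch rather than current best practice.

```python
# Classic LangChain agent sketch; the initialize_agent API shown here is
# from older releases and is deprecated in newer ones.
from langchain.agents import AgentType, initialize_agent
from langchain.tools import Tool
from langchain_openai import ChatOpenAI

def word_count(text: str) -> str:
    """A trivial custom tool the agent can decide to call."""
    return str(len(text.split()))

agent = initialize_agent(
    tools=[Tool(name="word_count", func=word_count,
                description="Counts words in the given text.")],
    llm=ChatOpenAI(model="gpt-4o-mini"),          # any chat model works here
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,  # ReAct-style reasoning loop
    verbose=True,
)
agent.run("How many words are in 'agents plan, act, and adapt'?")
```

Even in this small example you can see the coupling concern: the tool schema, the prompt format, and the loop behavior are all owned by one framework.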
We’ll explore those challenges in the next post.
Limitations of Today’s Agent Implementations
While agents are promising, current frameworks have some limitations:
- Tight coupling to specific models or tools
- Difficult interoperability between agent components
- Context juggling: hard to manage what the model sees
- Security and control: risk of unsafe tool access
- Hard to debug: agents can go rogue or get stuck in loops
To address these, we need standardization — a modular way to plug in data, tools, and models securely and flexibly.
That’s where the Model Context Protocol (MCP) enters the picture.
Coming Up Next: AI Agent Frameworks — Benefits and Limitations
In our next post, we’ll explore:
- How modern agent frameworks work
- What they enable (and where they fall short)
- The missing layer that MCP provides