In the previous post, we looked at Resources in the Model Context Protocol (MCP): how LLMs can securely access real-world data to ground their understanding. But sometimes, reading isn’t enough.
Sometimes, you want the model to do something.
That’s where Tools in MCP come in.
In this post, we’ll explore:
- What tools are in MCP
- How tools are discovered and invoked
- How LLMs can use tools (with user control)
- Common tool patterns and security practices
- Real-world examples: from file system commands to API wrappers
Let’s dive in.
What Are Tools in MCP?
Tools are executable functions that an LLM (or the user) can call via the MCP client. Unlike resources — which are passive data — tools are active operations.
Examples include:
- Running a shell command
- Calling a REST API
- Summarizing a document
- Posting a GitHub issue
- Triggering a build process
Each tool includes:
- A name (a unique identifier)
- A description (for UI and model understanding)
- An input schema (a JSON Schema describing the expected parameters)
Tools allow models to interact with the world beyond natural language — under user oversight.
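The three fields above map directly onto the tool descriptor a server advertises during discovery. A minimal sketch of one such descriptor, shaped like an entry in a `tools/list` response (the field names `name`, `description`, and `inputSchema` follow the MCP specification; the weather tool and its schema contents are illustrative):

```python
import json

# Illustrative tool descriptor, shaped like one entry in a tools/list
# response. Field names (name, description, inputSchema) follow the MCP
# specification; the tool itself is a hypothetical example.
tool = {
    "name": "get_weather",
    "description": "Return current weather for a city.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}

print(json.dumps(tool, indent=2))
```

Because `inputSchema` is standard JSON Schema, both the client UI and the model can validate arguments before a call ever reaches the server.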
For example, here is a simple tool defined with the `@mcp.tool()` decorator from the MCP Python SDK:

```python
@mcp.tool()
async def get_weather(city: str) -> str:
    """Return current weather for a city."""
    data = await fetch_weather(city)  # fetch_weather: async helper defined elsewhere
    return f"The temperature in {city} is {data['temp']}°C."
```
This tool will automatically appear in the tools/list response and can be invoked by the LLM or user.
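Under the hood, discovery and invocation are plain JSON-RPC messages. A sketch of the request a client might send to invoke the tool above (the `tools/call` method and the `{"name", "arguments"}` params shape come from the MCP specification; the request id and argument values are illustrative):

```python
import json

# JSON-RPC 2.0 request invoking a tool by name. The "tools/call" method
# and the params shape follow the MCP spec; id and arguments are
# illustrative values.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Berlin"},
    },
}

wire = json.dumps(request)
print(wire)

# The server replies with a result containing content blocks, e.g.:
# {"jsonrpc": "2.0", "id": 7,
#  "result": {"content": [{"type": "text", "text": "..."}]}}
```

The client is expected to surface this call to the user for approval before sending it, which is where the "user control" guarantee lives.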
Why Tools Matter for Agents
Agents aren’t just chatbots — they’re interactive systems. Tools give them the ability to:
- Take real-world actions
- Build dynamic workflows
- Chain reasoning across multiple steps
- Drive automation in safe, auditable ways
Combined with resources, prompts, and sampling, tools make LLMs feel like collaborative assistants, not just text predictors.
Tool Concepts Overview
Coming Up Next: Sampling and Prompts — Letting the Server Ask the Model for Help
In the final two posts of this series, we’ll explore:
- Sampling — how servers can request completions from the LLM during workflows
- Prompts — reusable templates for user-driven or model-driven actions
Tools give LLMs the power to act. With proper controls and schemas, they become safe, composable building blocks for real-world automation.