22 minute read · September 12, 2025

The Model Context Protocol (MCP): A Beginner’s Guide to Plug-and-Play Agents

Artificial Intelligence is moving beyond static chatbots and single-model applications into a new era of agentic AI: systems that can reason, plan, and act by coordinating with multiple tools and data sources. At the heart of this shift is the Model Context Protocol (MCP), an open standard designed to make connecting AI systems to external services as simple and modular as possible.

Think of MCP as the universal adapter for AI: instead of every application building custom integrations for each API or database, MCP defines a common way for clients (like desktop apps or IDEs) to talk to servers (wrappers around APIs, databases, or tools). This means that whether your model of choice is Claude, GPT, or another LLM, it can access the same set of capabilities without modification.

For those new to the space, MCP is worth understanding because it illustrates a core principle of agentic AI: flexibility. You’re no longer locked into a single vendor, model, or integration pattern. With MCP, you can plug in a server for querying your data warehouse, another for sending emails, and another for running analytics, and have them all work together in a single workflow.

In this blog, we’ll explore how MCP works, what types of responses servers can provide, how clients orchestrate them, and walk through an example where a Dremio MCP server and a SendGrid MCP server combine forces to identify customers and send personalized emails, all automated through the protocol.

What Exactly Is the Model Context Protocol?

The Model Context Protocol (MCP) is a specification for how AI systems exchange information with external tools, data sources, and applications. It isn’t tied to any specific large language model (LLM) or vendor; instead, it defines a set of rules that make interoperability possible.

At its core, MCP revolves around three main roles:

  • Host – The environment where the AI model lives. This could be a desktop app like Claude Desktop or another AI-enabled platform.
  • Client – The connector that allows the host to communicate with external systems. Each client is tied to a single server.
  • Server – The wrapper around a tool, database, or service. A server exposes capabilities, such as querying a database, sending an email, or fetching a document, in a way that any MCP-compliant client can understand.

The brilliance of MCP is its modularity. Servers don’t care which model you’re using, and hosts don’t care which specific tool a server wraps. This means you could swap out an OpenAI-powered host for an Anthropic-powered one, or change your email server from SendGrid to another provider, without breaking the workflow.

Under the hood, MCP uses a JSON-RPC–based message format. This ensures every request and response is structured and predictable, making it easy for developers to implement. Hosts can discover what a server offers (through a tools/list call), invoke those tools with parameters (tools/call), and process the structured responses that come back.
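
As a sketch of what these messages look like on the wire (the JSON-RPC 2.0 framing is from the protocol; the query_sql tool name and SQL text are illustrative):

```python
import json

# A tools/list request: the host asks the server what it can do.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A tools/call request invoking a hypothetical query_sql tool with
# parameters that must match the server's declared JSON Schema.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "query_sql",
        "arguments": {"sql": "SELECT COUNT(*) FROM customers"},
    },
}

# Serialized, this is what travels between client and server.
wire = json.dumps(call_request)
print(wire)
```

Because every message follows this same envelope, a client can talk to any compliant server without knowing in advance what it wraps.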

This model-agnostic design is what makes MCP so powerful. Instead of building bespoke integrations for every AI use case, developers can now create reusable building blocks that plug into any MCP-enabled ecosystem.

The Core Building Blocks of MCP

To really understand how MCP works, it helps to look at the primitives: the foundational building blocks that every server can expose and every client can use. These are the “verbs” of the protocol that make meaningful interactions possible:

  • Tools – Think of tools as actions the server can perform. For example, a Dremio MCP server might expose a query_sql tool to run database queries, while a SendGrid MCP server might expose a send_email tool. Tools are discovered with a tools/list call and invoked with a tools/call message that passes parameters defined by a JSON Schema.
  • Resources – These are read-only pieces of data a server can provide, like a database schema, a CSV file, or even a snippet of documentation. They’re deterministic and side-effect-free, meaning they always return the same result for the same request. This makes them useful for giving an agent structured knowledge to work with.
  • Prompts – Servers can also provide reusable, parameterized prompt templates. For example, a SendGrid server might define a “Re-engagement Email” prompt that takes a customer name and offer as inputs. This allows the host to pull in prompts as reusable components for the agent to adapt.
  • Notifications – Unlike tools and resources, notifications don’t wait for a request. They’re a way for servers to push updates back to clients, such as “a long-running query is 50% complete” or “the list of available tools has changed.”

Together, these primitives create a common language for AI applications and external systems. No matter which model you’re using or which service you’re connecting to, these building blocks ensure the interactions follow a consistent pattern.
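
To make the tool primitive concrete, here is a sketch of how a single tool might appear in a tools/list response, with its parameters declared as a JSON Schema. The tool name and fields are invented for illustration, not taken from any real SendGrid server:

```python
# A hypothetical tool entry as a server might advertise it during discovery.
send_email_tool = {
    "name": "send_email",
    "description": "Send an email through the provider's API.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "to": {"type": "string"},
            "subject": {"type": "string"},
            "body": {"type": "string"},
        },
        "required": ["to", "subject", "body"],
    },
}

def has_required_arguments(tool: dict, arguments: dict) -> bool:
    """Minimal check that every required schema field was supplied."""
    required = tool["inputSchema"].get("required", [])
    return all(key in arguments for key in required)

args = {"to": "pat@example.com", "subject": "Hello", "body": "We miss you!"}
print(has_required_arguments(send_email_tool, args))
```

A real client would validate against the full schema, but even this minimal check shows why publishing the schema matters: the client can reject a malformed call before it ever reaches the server.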


How Clients and Servers Interact

Now that we’ve looked at the building blocks, let’s put them in motion. The interaction between a client and a server follows a clear, predictable lifecycle that makes MCP both powerful and beginner-friendly:

  1. Initialization – When a client connects to a server, it begins with a handshake. This step confirms both sides support the same MCP version and establishes what features are available.
  2. Discovery – Once initialized, the client asks the server what it can do. This is where calls like tools/list or resources/list come in. The server responds with a catalog of its capabilities: which tools it has, what parameters they accept, what resources are available, and so on.
  3. Invocation – After discovery, the client can start calling tools. For example, it might send a tools/call request to run a query on a Dremio MCP server. The parameters passed must match the JSON Schema the server defined, ensuring structured and validated input.
  4. Response – The server executes the tool and responds with structured data. This could be plain text, JSON, images, or references to resources. The client can then pass the response back to the host (the AI model) for reasoning or further processing.
  5. Notifications (Optional) – While most calls are request/response, servers can also push updates asynchronously. This is useful for long-running jobs, progress tracking, or alerting the client to changes (like a new resource becoming available).

This workflow keeps things simple for developers: instead of every tool reinventing how it connects to AI systems, MCP defines a universal pattern. From the AI’s perspective, it doesn’t matter whether it’s talking to a database, a file system, or an email service: the interaction looks the same.
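
The lifecycle above can be walked through end to end with a toy, in-memory stand-in for a server. The method names come from the protocol; the version string, tool name, and canned responses are illustrative:

```python
def toy_server(message: dict) -> dict:
    """A plain function standing in for a real MCP server process."""
    responses = {
        # 1. Initialization: version negotiation and capability exchange.
        "initialize": {"protocolVersion": "2025-06-18", "capabilities": {"tools": {}}},
        # 2. Discovery: a catalog of what this server can do.
        "tools/list": {"tools": [{"name": "query_sql", "description": "Run a SQL query"}]},
        # 3-4. Invocation and response: structured content for the host.
        "tools/call": {"content": [{"type": "text", "text": "124 customers matched"}]},
    }
    return {"jsonrpc": "2.0", "id": message["id"], "result": responses[message["method"]]}

init = toy_server({"jsonrpc": "2.0", "id": 1, "method": "initialize"})
catalog = toy_server({"jsonrpc": "2.0", "id": 2, "method": "tools/list"})
result = toy_server({"jsonrpc": "2.0", "id": 3, "method": "tools/call"})
print(result["result"]["content"][0]["text"])
```

A real server would also validate arguments and push notifications for long-running work, but the request/response skeleton is exactly this simple.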

What Types of Responses Can MCP Servers Provide?

When a client calls a tool on an MCP server, the reply isn’t limited to plain text. Instead, responses are returned as a content array, which gives servers flexibility in what they send back and lets hosts render or route the results in useful ways. Here are the most common response types:

  • Text – The simplest form, often used for short summaries, explanations, or query results. For example, a Dremio MCP server might return “Query executed successfully: 124 customers matched.”
  • Structured Data (JSON) – Many servers return JSON objects or arrays so the host or AI agent can reason with the data directly. A database server could provide rows of results in JSON, while a SendGrid server might return an object containing delivery status codes.
  • Resources – Instead of embedding large data inline, servers can return a reference to a resource (like a file, schema, or CSV export). This is useful when the response is large, deterministic, and may be reused later.
  • Rich Media – MCP doesn’t stop at text and JSON. Servers can return images or links to other binary assets, allowing hosts to display charts, dashboards, or visual analytics generated on demand.
  • Mixed Arrays – Because responses are arrays, servers can combine formats. For instance, a query result could include both a plain-text summary and a downloadable CSV file as a resource.

This flexibility is critical because it means MCP isn’t tied to a single use case or application type. Whether the agent needs structured data to feed into another tool, a human-readable summary, or a file to share with a colleague, MCP servers can provide the right format for the task.
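
A mixed content array, and the way a host might route its parts, can be sketched as follows. The resource URI and text are invented for illustration:

```python
# A hypothetical tools/call response combining a summary and a file reference.
response_content = [
    {"type": "text", "text": "Query returned 124 rows; full export attached."},
    {
        "type": "resource",
        "resource": {
            "uri": "dremio://exports/high_value_customers.csv",
            "mimeType": "text/csv",
        },
    },
]

def route(content: list) -> dict:
    """Split a content array into pieces a host can render differently."""
    return {
        "display": [item["text"] for item in content if item["type"] == "text"],
        "attachments": [
            item["resource"]["uri"] for item in content if item["type"] == "resource"
        ],
    }

routed = route(response_content)
print(routed["attachments"])
```

The host shows the text to the user, while the resource reference can be fetched later or handed to another tool without re-running the query.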

How Clients Tie It All Together

With tools, resources, and responses defined, the last piece of the puzzle is the client: the connector that makes sure hosts (like your AI assistant) and servers (like Dremio or SendGrid) can talk to each other smoothly.

Here’s how clients operate in practice:

  • Hosting the AI model – The host (e.g., Claude Desktop, or another MCP-capable app) provides the AI environment. This is where your queries or instructions originate.
  • Connecting to servers – The client sets up the actual link between the host and each server. You can run servers locally via stdio (good for private data) or remotely via HTTP with secure authentication (good for SaaS services).
  • Capability discovery – When the client connects to a server, it automatically requests a list of what that server can do. This means you don’t have to hard-code APIs; the host simply learns what’s available dynamically.
  • Coordinating tool calls – The client makes sure every tools/call request is properly formatted, passes inputs that match the server’s schema, and routes the response back to the host in a way the AI can understand.
  • Managing multiple servers – Perhaps the biggest strength of MCP: one client can connect the AI to many servers at once. This lets you orchestrate workflows that pull data from one system and push actions into another, all from a single conversation.

From the user’s point of view, this is where the magic happens. Instead of writing glue code or juggling APIs, you just tell the AI what you want. The client then discovers the right tools across connected servers, calls them in the right order, and streams the results back into your session.
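
One way to picture the multi-server coordination is a routing table the client builds after discovery: each advertised tool name maps to the server that exposes it. The server and tool names below are the ones used in this post’s examples; the registry itself is a sketch, not a real client implementation:

```python
# What the client learned from calling tools/list on each connected server.
discovered = {
    "dremio": ["query_sql", "list_schemas"],
    "sendgrid": ["upsert_contacts", "send_campaign"],
}

# Invert the catalog: tool name -> server that can handle it.
routing_table = {
    tool: server for server, tools in discovered.items() for tool in tools
}

def route_call(tool_name: str) -> str:
    """Return which connected server should receive this tools/call."""
    return routing_table[tool_name]

print(route_call("send_campaign"))
```

When the AI decides to call send_campaign, the client consults this table and forwards the request to the SendGrid server, with no glue code from the user.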

A Practical Example: Using Dremio and SendGrid Together

To make this concrete, let’s walk through how MCP can connect two very different systems, Dremio for analytics and SendGrid for email marketing, into a seamless workflow.

Scenario: You want to identify customers who haven’t engaged recently but have a high lifetime value (LTV) and then send them a re-engagement email campaign. Traditionally, this would require exporting data, cleaning it, uploading it into a marketing tool, and manually configuring a campaign. With MCP, the entire flow can be automated in one AI-driven conversation.

Here’s how it works step by step:

  1. Connect the servers
    • The host (e.g., Claude Desktop) connects to both the Dremio MCP server and the SendGrid MCP server.
    • Each server exposes its available tools: Dremio might list query_sql and list_schemas, while SendGrid might list upsert_contacts and send_campaign.
  2. Discover capabilities
    • The client calls tools/list on both servers. Now the host knows exactly what actions it can ask for, without you hard-coding API calls.
  3. Run the analytics in Dremio
    • You ask the AI: “Find all customers with LTV over $5,000 who haven’t engaged in the last 30 days.”
    • The client sends a tools/call to Dremio’s query_sql tool, passing in the SQL query.
    • Dremio returns structured JSON results or a resource (like a CSV file) containing the targeted customer list.
  4. Send the campaign via SendGrid
    • The AI takes those results and calls SendGrid’s upsert_contacts tool to upload the list.
    • Then it calls send_campaign, filling in personalization details such as customer name or discount offer.
    • The server may also request a quick elicitation step for confirmation: “Ready to send to 124 contacts?”
  5. Receive results
    • SendGrid returns a structured response confirming delivery status. The client passes this back to the host, where the AI can summarize: “Campaign sent successfully to 124 high-value customers.”
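
The five steps above can be sketched as a single orchestration, with both servers stubbed out. The tool names (query_sql, upsert_contacts, send_campaign) come from the scenario; the customer data, SQL, and responses are invented for illustration:

```python
def call_tool(server: str, name: str, arguments: dict) -> dict:
    """Stand-in for the client's tools/call round trip to a server."""
    if name == "query_sql":
        return {"rows": [{"email": "pat@example.com", "ltv": 7200}]}
    if name == "upsert_contacts":
        return {"uploaded": len(arguments["contacts"])}
    if name == "send_campaign":
        return {"status": "sent", "recipients": arguments["count"]}
    raise ValueError(f"unknown tool: {name}")

# Step 3: run the analytics query on the Dremio server.
sql = (
    "SELECT email, ltv FROM customers "
    "WHERE ltv > 5000 AND last_engaged < CURRENT_DATE - 30"
)
result = call_tool("dremio", "query_sql", {"sql": sql})

# Step 4: push the matched list to SendGrid and launch the campaign.
contacts = result["rows"]
call_tool("sendgrid", "upsert_contacts", {"contacts": contacts})
receipt = call_tool("sendgrid", "send_campaign", {"count": len(contacts)})

# Step 5: the host summarizes the structured confirmation for the user.
print(receipt["status"])
```

In a real session the AI composes these calls itself from the discovered catalogs; the stub just makes the data flow between the two servers visible.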

What makes this powerful is the model-agnostic nature of MCP. Whether the host is running Claude, GPT, or another LLM, the workflow doesn’t change. You can swap out the model, or even replace SendGrid with a different email provider’s MCP server, and everything still works.

This example shows how MCP turns what used to be a multi-step, manual integration into a plug-and-play automation layer, unlocking agentic AI workflows that bridge analytics and action.

Why Model-Agnostic Design Matters

One of the most important features of MCP is that it is completely model-agnostic. Unlike older approaches where integrations were tightly coupled to a specific AI model or vendor, MCP separates how tools and data are accessed from which model is doing the reasoning.

Here’s why that matters:

  • Flexibility of choice – You can switch between Claude, GPT, Gemini, or any other model without breaking your integrations. The same Dremio and SendGrid servers work regardless of which LLM you’re running.
  • Future-proofing – As new models appear, you don’t need to rebuild your automation workflows. Your servers remain reusable building blocks, and your host simply plugs in a new model.
  • Separation of concerns – MCP servers only define what tools and resources they provide. The model is just another consumer of those capabilities. This makes servers lighter, more reusable, and easier to maintain.
  • Easier collaboration – Different teams in your organization might prefer different models for different tasks. With MCP, they can still share the same tool servers without being forced into the same AI vendor.

In short, MCP’s modularity means you’re not locked into one ecosystem. You can bring your data, tools, and workflows with you, no matter which model or host you decide to run. That’s a huge win for organizations that want to avoid vendor lock-in while still getting the benefits of agentic AI.

Getting Started: Your First MCP Workflow

If you’re new to the agentic AI space, it’s easy to feel overwhelmed. The good news is that MCP was designed to make experimentation approachable. Here’s a simple checklist you can follow to spin up your first workflow:

  1. Pick a Host
    • Start with a user-friendly host like Claude Desktop, which natively supports MCP.
    • Other MCP-capable environments are emerging, so choose one that matches where you plan to run your AI.
  2. Run a Local Server (Optional but Recommended)
    • Try a simple MCP server on your machine using stdio.
    • Many tutorials provide starter servers (e.g., a file server that lets you browse and read files). This helps you learn the basics in a safe, local environment.
  3. Explore the Tools
    • Once connected, use the host’s interface to list available tools (tools/list).
    • Try calling one or two tools to see the request/response flow in action.
  4. Add a Remote Server
    • Connect a real service such as the Dremio MCP server for querying data or the SendGrid MCP server for email campaigns.
    • Configure authentication (API keys, bearer tokens, or OAuth depending on the service).
  5. Experiment with a Simple Workflow
    • Example: Ask the AI to “query the customers table for all accounts created this month and export the results.”
    • Watch as the client handles discovery, calls the right tool, and returns structured results.
  6. Combine Servers for Automation
    • Once comfortable, connect two servers and create a cross-service workflow (like the Dremio + SendGrid scenario from earlier).
    • Observe how the host seamlessly coordinates multiple tools without custom integration code.
  7. Iterate and Scale
    • Add more servers as needed: CRM, cloud storage, marketing APIs, or analytics tools.
    • Because MCP is modular, each new connection expands what your AI can do without rewriting what you’ve already built.
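
For steps 1 and 2, registering servers with a host usually comes down to a small configuration file. The sketch below is shaped like the “mcpServers” block that hosts such as Claude Desktop read; the commands, file names, and flags are placeholders, not real install locations:

```python
import json

# A hypothetical host configuration registering one local stdio server
# and one launcher-based server. Adjust commands and paths to your setup.
config = {
    "mcpServers": {
        "files": {
            "command": "python",
            "args": ["my_file_server.py"],        # local stdio server
        },
        "dremio": {
            "command": "dremio-mcp",
            "args": ["--config", "dremio.yaml"],  # hypothetical launcher
        },
    }
}

print(json.dumps(config, indent=2))
```

Once the host reads a file like this, the discovery and tool calls described earlier happen automatically; you only edit this block when adding or removing servers.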

By following these steps, you’ll quickly see why MCP is being called the “USB-C of AI.” It standardizes how AI connects to the outside world, giving you confidence that the workflows you build today will still work tomorrow, no matter which model or vendor you’re using.

Conclusion

The Model Context Protocol (MCP) represents a major step forward in making agentic AI practical, flexible, and scalable. Instead of reinventing integrations for every model and every service, MCP provides a universal way for AI systems to discover tools, access resources, and take meaningful actions, no matter which LLM is in use.

By standardizing the interaction between hosts, clients, and servers, MCP unlocks true modularity. You can swap models without breaking workflows, mix and match servers for analytics, email, or storage, and grow your AI capabilities incrementally. The Dremio + SendGrid example shows how easily analytics and action can come together, transforming what used to be manual, multi-step processes into fully automated workflows.

For newcomers to the agentic AI space, MCP lowers the barrier to experimentation. With a single host and just a couple of servers, you can start building workflows that blend data access, reasoning, and action. As more servers emerge, the ecosystem only becomes richer, giving you plug-and-play building blocks to create ever more powerful AI-driven systems.

In short, MCP is the connective tissue that lets AI agents move from isolated intelligence to collaborative, action-oriented ecosystems. If you’re exploring how to make AI not just smarter, but more useful, MCP is the standard to watch, and to start using today.
