Dremio Blog: Open Data Insights
A Journey from AI to LLMs and MCP – 10 – Sampling and Prompts in MCP — Making Agent Workflows Smarter and Safer
That’s where Sampling comes in. And what if you want to give the user — or the LLM — reusable, structured prompt templates for common workflows? That’s where Prompts come in. In this final post of the series, we’ll explore:
- How sampling allows servers to request completions from LLMs
- How prompts enable reusable, guided AI interactions
- Best practices for both features
- Real-world use cases that combine everything we’ve covered so far
The Case for Apache Polaris as the Community Standard for Lakehouse Catalogs
The future of the lakehouse depends on collaboration. Apache Polaris embodies the principles of openness, vendor neutrality, and enterprise readiness that modern data platforms demand. By aligning around Polaris, the data community can reduce integration friction, encourage ecosystem growth, and give organizations the freedom to innovate without fear of vendor lock-in.
A Journey from AI to LLMs and MCP – 9 – Tools in MCP — Giving LLMs the Power to Act
Tools are executable functions that an LLM (or the user) can call via the MCP client. Unlike resources — which are passive data — tools are active operations.
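To make the distinction concrete, here is a minimal, self-contained sketch of the idea: named tools registered on a server, dispatched when a client requests a call. This is not the official MCP SDK — the `tool` decorator and `handle_tool_call` function are illustrative names invented for this example.

```python
# Illustrative sketch (not the MCP SDK): tools as registered, callable
# functions that a client can invoke by name with structured arguments.
from typing import Any, Callable, Dict

TOOLS: Dict[str, Callable[..., Any]] = {}

def tool(name: str):
    """Register a function as a callable tool under the given name."""
    def decorator(fn: Callable[..., Any]) -> Callable[..., Any]:
        TOOLS[name] = fn
        return fn
    return decorator

@tool("add")
def add(a: float, b: float) -> float:
    """An active operation the model can invoke, unlike a passive resource."""
    return a + b

def handle_tool_call(name: str, arguments: Dict[str, Any]) -> Any:
    """Dispatch a tool call the way a server handles a client's request."""
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return TOOLS[name](**arguments)

# Example: the client asks to run the "add" tool with two arguments.
result = handle_tool_call("add", {"a": 2, "b": 3})
print(result)  # prints 5
```

In the real protocol the call arrives as a JSON-RPC message and arguments are validated against a declared schema, but the core shape — named function, structured arguments, returned result — is the same.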
A Journey from AI to LLMs and MCP – 8 – Resources in MCP — Serving Relevant Data Securely to LLMs
One of MCP’s most powerful capabilities is its ability to expose resources to language models in a structured, secure, and controllable way.
A Journey from AI to LLMs and MCP – 7 – Under the Hood — The Architecture of MCP and Its Core Components
By the end, you’ll understand how MCP enables secure, modular communication between LLMs and the systems they need to work with.
A Journey from AI to LLMs and MCP – 6 – Enter the Model Context Protocol (MCP) — The Interoperability Layer for AI Agents
What if we had a standard that let any agent talk to any data source or tool, regardless of where it lives or what it’s built with? That’s exactly what the Model Context Protocol (MCP) brings to the table.
A Journey from AI to LLMs and MCP – 5 – AI Agent Frameworks — Benefits and Limitations
Enter agent frameworks — open-source libraries and developer toolkits that let you create goal-driven AI systems by wiring together models, memory, tools, and logic. These frameworks enable some of the most exciting innovations in the AI space… but they also come with trade-offs.
What’s New in Apache Iceberg Format Version 3?
Now, with the introduction of format version 3, Iceberg pushes the boundaries even further. V3 is designed to support more diverse and complex data types, offer greater control over schema evolution, and deliver performance enhancements suited for large-scale, high-concurrency environments. This blog explores the key differences between V1, V2, and the new V3, highlighting what makes V3 a significant step forward in Iceberg's evolution.
A Journey from AI to LLMs and MCP – 4 – What Are AI Agents — And Why They’re the Future of LLM Applications
We’ve explored how Large Language Models (LLMs) work, and how we can improve their performance with fine-tuning, prompt engineering, and retrieval-augmented generation (RAG). These enhancements are powerful — but they’re still fundamentally stateless and reactive.
A Journey from AI to LLMs and MCP – 3 – Boosting LLM Performance — Fine-Tuning, Prompt Engineering, and RAG
In this post, we’ll walk through the three most popular and practical ways to boost the performance of Large Language Models (LLMs):
- Fine-tuning
- Prompt engineering
- Retrieval-Augmented Generation (RAG)

Each approach has its strengths, trade-offs, and ideal use cases. By the end, you’ll know when to use each — and how they work under the hood.
Building a Basic MCP Server with Python
In this tutorial, we’ll walk you through building a beginner-friendly MCP server that acts as a simple template for future projects. You don’t need to be an expert in AI or server development—we’ll explain each part as we go.
Disaster Recovery for Apache Iceberg Tables – Restoring from Backup and Getting Back Online
Unlike traditional databases, Iceberg doesn’t bundle storage, metadata, and catalog into a single system. Instead, it gives you flexibility—with the tradeoff that restoring from a backup requires understanding how those components fit together.
Demystifying Apache Iceberg Table Services – What They Are and Why They Matter
While the table spec and catalog spec laid the groundwork for interoperability and governance, it’s Table Services that will determine whether your Iceberg tables thrive or degrade in the real world. They’re the unseen engine room that keeps data performant, cost-effective, and reliable—especially at scale.
What is the Model Context Protocol (MCP) and Why It Matters for AI Applications
The Model Context Protocol is quietly reshaping how we build with language models — not by making the models smarter, but by making their environments smarter.
Securing Your Apache Iceberg Data Lakehouse
In conclusion, securing an Apache Iceberg lakehouse demands a holistic strategy that encompasses multiple layers of control. By implementing robust security measures at the object storage level, such as encryption and access restrictions, organizations can protect the raw data.