Featured Articles
Popular Articles
Dremio Blog: Open Data Insights
What’s New in Apache Iceberg 1.10.0, and what comes next!
Dremio Blog: Various Insights
The Model Context Protocol (MCP): A Beginner’s Guide to Plug-and-Play Agents
Product Insights from the Dremio Blog
How Dremio Reflections Give Agentic AI a Unique Edge
Product Insights from the Dremio Blog
MCP & Dremio: Why a Standard Protocol and a Semantic Layer Matter for Agentic Analytics
Browse All Blog Articles
Product Insights from the Dremio Blog
Who Benefits From MCP on an Analytics Platform?
The MCP Server is a powerful alternative to the command line or UI for interacting with Dremio. But are data analysts the only ones who benefit from this transformative technology?
Dremio Blog: Open Data Insights
Celebrating the Release of Apache Polaris (Incubating) 1.0
With the release of Apache Polaris 1.0, the data ecosystem takes a meaningful step forward in establishing a truly open, interoperable, and production-ready metadata catalog for Apache Iceberg. Polaris brings together the reliability enterprises expect with the openness developers and data teams need to innovate freely.
Dremio Blog: Open Data Insights
Quick Start with Apache Iceberg and Apache Polaris on your Laptop (quick setup notebook environment)
By following the steps in this guide, you now have a fully functional Iceberg and Polaris environment running locally. You have seen how to spin up the services, initialize the catalog, configure Spark, and work with Iceberg tables. Most importantly, you have set up a pattern that closely mirrors what modern data platforms are doing in production today.
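The article walks through that setup step by step. As a rough illustration of the "configure Spark" piece only, here is a minimal PySpark sketch against a Polaris-backed Iceberg REST catalog; the endpoint, credential, catalog name, and runtime version are placeholders, not the notebook's exact values.

```python
# Minimal sketch (assumptions: local Polaris at :8181, catalog "quickstart_catalog",
# Spark 3.5 / Scala 2.12). Swap in your own endpoint, credential, and versions.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("iceberg-polaris-quickstart")
    # Pull the Iceberg Spark runtime matching your Spark build.
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.6.1")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    # Register a catalog named "polaris" that speaks the Iceberg REST protocol.
    .config("spark.sql.catalog.polaris", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.polaris.type", "rest")
    .config("spark.sql.catalog.polaris.uri", "http://localhost:8181/api/catalog")
    .config("spark.sql.catalog.polaris.warehouse", "quickstart_catalog")
    .config("spark.sql.catalog.polaris.credential", "CLIENT_ID:CLIENT_SECRET")
    .getOrCreate()
)

# Create a namespace and an Iceberg table, then read it back.
spark.sql("CREATE NAMESPACE IF NOT EXISTS polaris.demo")
spark.sql("""CREATE TABLE IF NOT EXISTS polaris.demo.events
             (id BIGINT, ts TIMESTAMP) USING iceberg""")
spark.sql("INSERT INTO polaris.demo.events VALUES (1, current_timestamp())")
spark.sql("SELECT * FROM polaris.demo.events").show()
```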
Engineering Blog
Query Results Caching on Iceberg Tables
Seamless result cache for Iceberg was enabled for all Dremio Cloud organizations in May 2025. Since then, our telemetry has shown that between 10% and 50% of a single project’s queries have been accelerated by the result cache. That’s a huge cost saving on executors. Looking forward, Dremio is researching how to bring its reflection-matching query rewrite capabilities to the result cache. For example, once a user generates a result cache entry, it should be possible to trim, filter, sort, and roll up from that entry. Limiting the search space and efficient matching through hashes will be key to making matching on the result cache possible. Stay tuned for more!
Product Insights from the Dremio Blog
Test Driving MCP: Is Your Data Pipeline Ready to Talk?
Back in April of this year, Dremio debuted its own MCP server, giving the LLM of your choice intelligent access to Dremio’s powerful lakehouse platform. With the Dremio MCP Server, the LLM knows how to interact with Dremio: the server facilitates authentication, executes requests against the Dremio environment, and returns results to the LLM. The intention is […]
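For readers unfamiliar with the plumbing, here is a rough sketch of what any MCP client session looks like using the open-source mcp Python SDK. The server command, config file, and tool name below are placeholders for illustration, not the Dremio MCP Server's actual interface.

```python
# Hypothetical sketch: connect to an MCP server over stdio, list its tools,
# and invoke one. The command, args, and "run_query" tool name are assumptions.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    server = StdioServerParameters(
        command="dremio-mcp-server",          # placeholder launch command
        args=["--config", "config.yaml"],     # placeholder config file
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # The LLM is shown the advertised tools and decides which to call.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # The client executes the call and hands the result back to the LLM.
            result = await session.call_tool("run_query", {"sql": "SELECT 1"})
            print(result)

asyncio.run(main())
```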
Dremio Blog: Open Data Insights
Benchmarking Framework for the Apache Iceberg Catalog, Polaris
The Polaris benchmarking framework provides a robust mechanism to validate performance, scalability, and reliability of Polaris deployments. By simulating real-world workloads, it enables administrators to identify bottlenecks, verify configurations, and ensure compliance with service-level objectives (SLOs). The framework’s flexibility allows for the creation of arbitrarily complex datasets, making it an essential tool for both development and production environments.
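The framework defines its own workload models; as a conceptual stand-in only (not the Polaris benchmarking framework itself), the snippet below times a burst of namespace creations against an Iceberg REST endpoint to show the kind of latency profiling such a framework automates. The base URL, token, and catalog prefix are placeholders.

```python
# Conceptual latency probe against an Iceberg REST catalog endpoint.
# Assumptions: Polaris at localhost:8181, catalog "quickstart_catalog", bearer auth.
import time
import requests

BASE = "http://localhost:8181/api/catalog/v1"
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

latencies = []
for i in range(50):
    start = time.perf_counter()
    resp = requests.post(
        f"{BASE}/quickstart_catalog/namespaces",
        json={"namespace": [f"bench_ns_{i}"], "properties": {}},
        headers=HEADERS,
        timeout=10,
    )
    resp.raise_for_status()
    latencies.append(time.perf_counter() - start)

# Report rough percentiles for the create-namespace operation.
latencies.sort()
p50 = latencies[len(latencies) // 2]
p95 = latencies[int(len(latencies) * 0.95)]
print(f"p50={p50 * 1000:.1f} ms, p95={p95 * 1000:.1f} ms")
```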
Dremio Blog: Open Data Insights
Why Dremio co-created Apache Polaris, and where it’s headed
Polaris is a next-generation metadata catalog, born from real-world needs, designed for interoperability, and open-sourced from day one. It’s built for the lakehouse era, and it’s rapidly gaining momentum as the new standard for how data is managed in open, multi-engine environments.
Dremio Blog: Open Data Insights
Understanding the Value of Dremio as the Open and Intelligent Lakehouse Platform
With Dremio, you’re not locked into a specific vendor’s ecosystem. You’re not waiting on data engineering teams to build yet another pipeline. You’re not struggling with inconsistent definitions across departments. Instead, you’re empowering your teams to move fast, explore freely, and build confidently, on a platform that was designed for interoperability from day one.
Product Insights from the Dremio Blog
Using the Dremio MCP Server with any LLM Model
With traditional setups like Claude Desktop, that bridge is tightly coupled to a specific LLM. But with the Universal MCP Chat Client, you can swap out the brain (GPT, Claude, Gemini, Cohere, you name it) and still connect to the same tool ecosystem.
Dremio Blog: News Highlights
Breakthrough Announcement: Dremio is the Fastest Lakehouse, 20x Faster on TPC-DS
At Dremio, we have spent the last few years developing not only query execution improvements but also game-changing autonomous data optimization capabilities. Dremio is far and away the fastest lakehouse: these capabilities deliver 20x faster query performance compared to other platforms, without requiring any manual actions.
Dremio Blog: Various Insights
Why Companies Are Migrating from Redshift to Dremio
Companies today are under constant pressure to deliver faster insights, support advanced analytics, and enable AI-driven innovation. Many organizations chose Amazon Redshift as their cloud data warehouse. However, as data volumes grow and workloads change, Redshift’s legacy warehouse architecture is not meeting their needs—driving many organizations to consider alternatives. Dremio’s intelligent lakehouse platform: a modern, […]
Product Insights from the Dremio Blog
Building AI-Ready Data Products with Dremio and dbt
This guide will equip you with the expertise to easily build an AI-ready data product using Dremio and dbt.
Dremio Blog: Open Data Insights
Extending Apache Iceberg: Best Practices for Storing and Discovering Custom Metadata
By using properties, Puffin files, and REST catalog APIs wisely, you can build richer, more introspective data systems. Whether you're developing an internal data quality pipeline or a multi-tenant ML feature store, Iceberg offers clean integration points that let metadata travel with the data.
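As a small, hypothetical example of the table-properties integration point, the sketch below attaches custom metadata to an Iceberg table with the open-source pyiceberg client. The catalog settings and property keys are placeholders for whatever your environment and naming conventions use.

```python
# Attach application-level metadata to an Iceberg table so it travels with the data.
# Assumptions: a REST catalog at localhost:8181, catalog "quickstart_catalog",
# and an existing table demo.events; property keys are illustrative only.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "polaris",
    **{
        "type": "rest",
        "uri": "http://localhost:8181/api/catalog",
        "warehouse": "quickstart_catalog",
        "credential": "CLIENT_ID:CLIENT_SECRET",
    },
)

table = catalog.load_table("demo.events")

# Commit custom key/value metadata as table properties.
with table.transaction() as tx:
    tx.set_properties({
        "quality.last_validated": "2025-01-01",
        "owner.team": "data-platform",
    })

# Reload and confirm the properties are visible to any engine reading the table.
print(catalog.load_table("demo.events").properties)
```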
Engineering Blog
Too Many Roundtrips: Metadata Overhead in the Modern Lakehouse
The traditional approach of caching table metadata and periodically refreshing it has various drawbacks and limitations. With seamless metadata refresh, Dremio now gives users an effortless way to query the most up-to-date versions of their Iceberg tables without wrecking query performance. So a user querying a shared table in the Dremio Enterprise Catalog (powered by Apache Polaris), for example, can see updates from an external Spark job immediately, and never even has to think about it.
Dremio Blog: Partnerships Unveiled
Using Dremio with Confluent’s TableFlow for Real-Time Apache Iceberg Analytics
Confluent’s TableFlow and Apache Iceberg unlock a powerful synergy: the ability to stream data from Kafka into open, queryable tables with zero manual pipelines. With Dremio, you can instantly access and analyze this real-time data without having to move or copy it—accelerating insights, reducing ETL complexity, and embracing the power of open lakehouse architecture.