Dremio Blog: Open Data Insights
A Journey from AI to LLMs and MCP – 4 – What Are AI Agents — And Why They’re the Future of LLM Applications
We’ve explored how Large Language Models (LLMs) work, and how we can improve their performance with fine-tuning, prompt engineering, and retrieval-augmented generation (RAG). These enhancements are powerful, but the applications built on them are still fundamentally stateless and reactive.
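To make that contrast concrete, here is a minimal, purely illustrative sketch of the loop an agent adds on top of a plain LLM call: persistent message history plus tool execution. The call_llm and tools values are hypothetical stand-ins, not part of any particular framework.

```python
# Minimal agent-loop sketch: unlike a single stateless LLM call, the agent keeps
# state (its message history) and acts by invoking tools between model calls.
# call_llm and tools are hypothetical stand-ins for a real model client and tool set.

def run_agent(user_goal, call_llm, tools, max_steps=5):
    history = [{"role": "user", "content": user_goal}]          # persistent state
    for _ in range(max_steps):
        reply = call_llm(history)                                # e.g. {"action": "search", "input": "..."}
        if reply.get("action") == "final":
            return reply["answer"]                               # model decided it is done
        observation = tools[reply["action"]](reply["input"])     # execute the requested tool
        history.append({"role": "tool", "content": str(observation)})
    return "Stopped after max_steps without a final answer."
```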
Engineering Blog
Dremio’s Apache Iceberg Clustering: Technical Blog
Clustering is a data layout strategy that organizes rows based on the values of one or more columns, without physically splitting the dataset into separate partitions. Instead of creating distinct directory structures as traditional partitioning does, clustering sorts and groups related rows together within the existing storage layout.
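As a rough mental model (a conceptual sketch, not Dremio's implementation), the difference can be shown on a handful of rows: partitioning splits the data into a separate group per key, while clustering keeps one dataset and sorts it so related rows sit next to each other.

```python
# Illustrative contrast between partitioning and clustering on a list of rows.
# This is a conceptual sketch, not Dremio's implementation.

rows = [
    {"region": "EU", "order_id": 42},
    {"region": "US", "order_id": 7},
    {"region": "EU", "order_id": 3},
    {"region": "US", "order_id": 19},
]

# Partitioning: physically split the dataset into one group (directory) per key.
partitions = {}
for r in rows:
    partitions.setdefault(r["region"], []).append(r)

# Clustering: keep a single dataset, but sort it so related rows sit together,
# which lets an engine skip file or row-group ranges that cannot match a filter.
clustered = sorted(rows, key=lambda r: (r["region"], r["order_id"]))

print(list(partitions.keys()))   # ['EU', 'US'] -> separate directories
print(clustered)                 # one layout, rows grouped by region then order_id
```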
Dremio Blog: Open Data Insights
A Journey from AI to LLMs and MCP – 3 – Boosting LLM Performance — Fine-Tuning, Prompt Engineering, and RAG
In this post, we’ll walk through the three most popular and practical ways to boost the performance of Large Language Models (LLMs): fine-tuning, prompt engineering, and Retrieval-Augmented Generation (RAG). Each approach has its strengths, trade-offs, and ideal use cases. By the end, you’ll know when to use each and how they work under the hood.
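As a taste of the third approach, here is a minimal Retrieval-Augmented Generation sketch; retrieve and generate are hypothetical stand-ins for a vector-store lookup and an LLM call.

```python
# Minimal RAG sketch: retrieve relevant passages, then prompt the model with them.
# retrieve() and generate() are hypothetical stand-ins, not a specific library API.

def answer_with_rag(question, retrieve, generate, top_k=3):
    passages = retrieve(question, k=top_k)          # most similar documents
    context = "\n\n".join(passages)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)                         # plain prompting on top of retrieval
```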
Dremio Blog: Various Insights
Accelerate Insights While Reducing TCO with An Intelligent Lakehouse Platform
Enterprises today face increasing pressure to extract insights from data quickly while controlling spend. Yet, as data volumes explode across cloud and on-prem environments, traditional architectures often fall short, resulting in higher costs, rigid pipelines, and slower decision-making. The Dremio Intelligent Lakehouse Platform addresses these challenges by delivering faster insights and significant total cost of ownership […]
Dremio Blog: Various Insights
A Journey from AI to LLMs and MCP — 2 — How LLMs Work — Embeddings, Vectors, and Context Windows
In this post, we’ll peel back the curtain on the inner workings of LLMs. We’ll explore the fundamental concepts that make these models tick: embeddings, vector spaces, and context windows. You’ll walk away with a clearer understanding of how LLMs “understand” language, and what their limits are.
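A toy example makes the embedding idea tangible: words with related meanings get vectors that point in similar directions, and the context window caps how many tokens the model can attend to at once. The three-dimensional vectors below are invented for illustration; real models use hundreds or thousands of dimensions.

```python
# Toy illustration of embeddings and a context window.
import math

embeddings = {
    "king":   [0.90, 0.80, 0.10],
    "queen":  [0.88, 0.82, 0.12],
    "banana": [0.10, 0.20, 0.95],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine(embeddings["king"], embeddings["queen"]))   # ~1.0: similar meaning
print(cosine(embeddings["king"], embeddings["banana"]))  # much lower: unrelated

# Context window: a model can only attend to a fixed token budget at once.
CONTEXT_WINDOW = 8          # tiny budget for illustration
tokens = "the quick brown fox jumps over the lazy sleeping dog".split()
visible = tokens[-CONTEXT_WINDOW:]  # older tokens fall out of the window
print(visible)
```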
Dremio Blog: Various Insights
Enabling companies with AI-Ready Data: Dremio and the Intelligent Lakehouse Platform
Artificial Intelligence (AI) has become essential for modern enterprises, driving innovation across industries by transforming data into actionable insights. However, AI's success depends heavily on having consistent, high-quality data readily available for experimentation and model development. It is estimated that data scientists spend more than 80% of their time on data acquisition and preparation, compared to model […]
Dremio Blog: Various Insights
A Journey from AI to LLMs and MCP – 1 – What Is AI and How It Evolved Into LLMs
This post kicks off our 10-part series exploring how AI evolved into LLMs, how to enhance their capabilities, and how the Model Context Protocol (MCP) is shaping the future of intelligent, modular agents.
Dremio Blog: Product Insights
AI Agents for Dremio Utilizing MCP
Why SQL Must Evolve in the Era of Agentic Apps and Data-Aware AI: SQL has long been the universal language of data. But with the rise of Generative AI and agentic applications, a major shift is underway. We're entering an era where natural language is the interface, and agents are the client. There are two […]
Dremio Blog: Product Insights
Syncing Documentation with Dremio + dbt
By leveraging the dbt-dremio adapter, Analytics Engineers can seamlessly sync model descriptions and tags from dbt projects to Dremio, generating wikis and labels for Business Users and Data Analysts.
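For orientation, the sketch below shows the mapping the adapter automates: model descriptions and tags recorded in dbt's manifest.json become wiki text and labels in Dremio. The push_to_dremio function is a hypothetical placeholder; the actual sync is handled by the dbt-dremio adapter.

```python
# Sketch of the dbt -> Dremio documentation mapping. push_to_dremio() is a
# hypothetical placeholder, not a real client; the dbt-dremio adapter does the push.
import json

def collect_docs(manifest_path="target/manifest.json"):
    with open(manifest_path) as f:
        manifest = json.load(f)
    docs = {}
    for node in manifest["nodes"].values():
        if node["resource_type"] == "model":
            docs[node["name"]] = {
                "wiki_text": node.get("description", ""),  # -> Dremio wiki
                "labels": node.get("tags", []),             # -> Dremio labels
            }
    return docs

def push_to_dremio(model_name, wiki_text, labels):
    """Hypothetical placeholder for the sync the adapter performs."""
    print(f"{model_name}: wiki={wiki_text!r}, labels={labels}")

for name, doc in collect_docs().items():
    push_to_dremio(name, doc["wiki_text"], doc["labels"])
```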
Engineering Blog
Pre-Computing Secure Materializations
Integrating row- and column-level access control with materializations enables Dremio Reflections to deliver high-performance query execution without compromising on security or flexibility, making it an ideal solution for scalable, secure data access in the lakehouse architecture. Furthermore, by making pre-computed materializations reusable across users and roles, significant cost savings can be achieved through more efficient engine resource utilization.
Engineering Blog
Autonomous Reflections: Technical Blog
At Dremio, we implemented Autonomous Reflections in our own internal Data Lakehouse. We are happy to report that Autonomous Reflections exceeded our expectations. In just days, we saw significant improvements.
Engineering Blog
Credential Vending with Iceberg REST Catalogs in Dremio
Credential vending support in Dremio opens up a more secure and convenient way to query external Iceberg catalogs. By obtaining temporary, table-scoped credentials on the fly, Dremio minimizes long-lived secrets and ensures access is tightly controlled by the catalog’s policies.
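To illustrate the mechanism from a client's perspective, here is a PyIceberg sketch that requests vended credentials from an Iceberg REST catalog (such as Apache Polaris). The URI, warehouse, and credential values are placeholders, and Dremio performs the equivalent handshake internally when querying an external catalog.

```python
# Credential-vending sketch with PyIceberg against an Iceberg REST catalog.
# Endpoint, warehouse, and credential values are placeholders for illustration.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "polaris",
    **{
        "type": "rest",
        "uri": "https://catalog.example.com/api/catalog",  # placeholder endpoint
        "credential": "CLIENT_ID:CLIENT_SECRET",            # placeholder OAuth client
        "warehouse": "analytics",                            # placeholder catalog name
        # Ask the catalog to vend temporary, table-scoped storage credentials
        # instead of relying on long-lived keys configured on the client.
        "header.X-Iceberg-Access-Delegation": "vended-credentials",
    },
)

table = catalog.load_table("sales.orders")  # response includes short-lived storage credentials
print(table.schema())
```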
Dremio Blog: Product Insights
What’s New in Dremio’s Newest Release: Accelerate AI with Intelligent Automation
Today, we're excited to announce the general availability of Dremio's latest release, delivering accelerated AI and analytics through intelligent automation. Marking the next generation of Dremio, this release represents a significant milestone in our mission to eliminate technical complexity and resource waste through autonomous capabilities, empowering teams to innovate rather than maintain. In today's economic […]
Dremio Blog: Product Insights
Autonomous Reflections: Intelligent Automation for Accelerated AI and Analytics
Is Query Performance Slowing Down Your AI and Analytics Initiatives? Slow analytics and AI workloads frustrate users and delay critical insights, draining productivity. If waiting for queries to load feels like the norm, you're not alone. But what if query performance could be accelerated automatically, without requiring any specialized expertise or manual intervention? Enter Autonomous Reflections. […]
Dremio Blog: Product Insights
Introducing the Enterprise Catalog, Powered By Apache Polaris (Incubating)
Companies of all sizes now use lakehouse architectures to power their analytics and AI workloads. Lakehouses give companies a single, trusted source of data for analytics and AI tools to access, and eliminate the need for data duplication and vendor lock-in. The catalog, or metastore, is an integral part of the lakehouse that enables tools […]