November 13, 2025

Performance Meets Automation: Powering AI-Driven Analytics with Dremio

This session explores how Dremio’s latest autonomous performance features supercharge both AI and BI workloads without placing extra burden on data teams. As AI agents, MCP-connected LLMs, and RAG-style applications generate increasingly dynamic and unpredictable query patterns, traditional manual tuning simply doesn’t scale. You’ll learn how Dremio’s result cache instantly returns answers for repeat queries on unchanged data, and how autonomous reflections continuously analyze seven days of workload history to build, refresh, and retire materializations that accelerate common query patterns, never serving stale data and never requiring users to design or manage reflections themselves.
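To make the result-cache idea concrete, here is a minimal Python sketch (illustrative only, not Dremio’s implementation): cached results are keyed on the normalized query text plus a data-version identifier, so a repeat query over unchanged data is answered from the cache, while any change to the underlying data yields a new key and the query runs fresh. The `ResultCache` class and its key scheme are hypothetical.

```python
import hashlib


class ResultCache:
    """Toy result cache keyed on (normalized query, data version).

    A repeat query against the same data version is a cache hit; once
    the data version changes, the old entry can never be served, so
    the cache cannot return stale results.
    """

    def __init__(self):
        self._entries = {}

    def _key(self, query: str, data_version: str) -> str:
        # Normalize whitespace and case so trivially different spellings
        # of the same query share one cache entry.
        normalized = " ".join(query.lower().split())
        return hashlib.sha256(f"{normalized}|{data_version}".encode()).hexdigest()

    def get(self, query: str, data_version: str):
        return self._entries.get(self._key(query, data_version))

    def put(self, query: str, data_version: str, result) -> None:
        self._entries[self._key(query, data_version)] = result
```

For example, a result stored under data version `v1` is returned for the same query at `v1`, but a lookup at `v2` misses and forces re-execution.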

We’ll then dive into how Dremio optimizes performance when queries need to hit the tables directly. You’ll see how Iceberg clustering (powered by Z-ordering) delivers partition-like performance without the complexity and ingestion penalties of traditional partitioning and bucketing, and how the enhanced OPTIMIZE operation compacts small files, cleans up tombstoned deletes, and trims metadata bloat efficiently, even on very large, heavily partitioned tables. Finally, we’ll cover Columnar Cloud Cache (C3), which keeps hot Parquet blocks close to compute to minimize object store latency and reduce cost. All five capabilities (result cache, autonomous reflections, clustering, OPTIMIZE, and C3) work together behind the scenes to deliver fast, consistent query experiences for AI agents and human users alike, so teams can focus on building data products instead of babysitting performance.
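Z-ordering itself is a well-known technique: interleave the bits of several column values (a Morton code) so that rows close in every clustered dimension sort near each other, which lets a range filter on any one of those columns skip whole files. A minimal Python sketch of the two-column case (illustrative only, not Dremio’s code):

```python
def z_order_key(x: int, y: int, bits: int = 16) -> int:
    """Morton-encode two non-negative ints by interleaving their bits.

    Bit i of x lands at position 2*i and bit i of y at position 2*i + 1,
    so sorting by this key keeps rows that are close in both x and y
    close together on disk.
    """
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)      # x bits -> even positions
        key |= ((y >> i) & 1) << (2 * i + 1)  # y bits -> odd positions
    return key
```

Sorting a table by this key and writing files in that order is what gives clustering its partition-like pruning: each file covers a small range in both columns, so min/max file statistics let either filter prune aggressively.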

Topics Covered

Agentic AI
AI
Apache Iceberg
Data Lake
Use Cases

Sign up to watch all Subsurface 2025 sessions