Dremio Blog

9 minute read · February 16, 2026

Stop Waiting on Data: How 4 Dremio Customers Slashed Time to Insight

Will Martin · Technical Evangelist

Key Takeaways

  • Data engineering teams face challenges translating business questions into answers due to fragmented data and outdated processes.
  • The Dremio data lakehouse architecture accelerates time to insight by streamlining data access and eliminating complex ETL processes.
  • Tanobel saved over 3,000 hours monthly by reducing query times to under 1 second after implementing Dremio.
  • A large healthcare provider shrank month-end reporting from 2 weeks to 90 minutes using Dremio's unified platform.
  • The World Bank Treasury cut trade processing time by 96% with a Dremio-powered strategy, enhancing efficiency and accuracy.

Data engineering teams are often defined by the gap between a business question and a verified answer. Requests languish in backlogs while engineers wrestle with ETL pipelines and the technical debt of siloed warehouses. When reports finally arrive, the data is frequently stale, leaving teams debating the accuracy of the numbers rather than the strategy those numbers should inform.

To scale, organisations must prioritise time to insight: the speed at which raw data becomes a business decision. The Dremio data lakehouse architecture shortens that path by eliminating complex ETL pipelines and data movement, letting organisations query data directly in the data lake.

Read on to learn how four (of many) companies collapsed their time to insight and unified siloed data by adopting Dremio.

Try Dremio’s Interactive Demo

Explore this interactive demo and see how Dremio's Intelligent Lakehouse enables Agentic AI

Tanobel: Saving 3,000 Hours Monthly

Tanobel, a publicly traded beverage leader in Indonesia, faced a classic scalability crisis. Their data landscape was fragmented across multiple sites, creating performance bottlenecks that hindered operational agility. Standard reports required 15 minutes to process. The data preparation needed for new insights was worse, spanning two weeks.

This technical friction directly impacted their enterprise resource planning (ERP) systems. Reporting queries frequently locked up real-time workloads, forcing the IT team to segregate data manually just to maintain business continuity.

To solve this, Tanobel built a high-performance architecture on the Red Hat OpenShift Container Platform. Using OpenShift Data Foundation (ODF) with SSDs for block and object storage, they deployed Dremio as their lakehouse engine. By implementing Dremio Reflections and Apache Iceberg, Tanobel reduced query times from 15 minutes to under 1 second.
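Reflections are Dremio's transparently maintained materialisations: queries that match a reflection are rewritten to read the precomputed data instead of scanning the raw files. As an illustrative sketch (the table and column names here are hypothetical, and exact DDL varies by Dremio version), an aggregate reflection over an Iceberg sales table might look like:

```sql
-- Hypothetical Iceberg table of sales transactions.
-- The aggregate reflection precomputes grouped results so that
-- dashboard queries by date, site, and product no longer scan raw data.
ALTER TABLE lakehouse.sales.transactions
CREATE AGGREGATE REFLECTION daily_sales_by_site
USING
  DIMENSIONS (txn_date, site_id, product_id)
  MEASURES (quantity (SUM), amount (SUM, COUNT));
```

A report query such as `SELECT site_id, SUM(amount) FROM lakehouse.sales.transactions GROUP BY site_id` would then be accelerated automatically, with no change to the report itself.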

This overhaul saved 3,000+ employee hours every month. Beyond the speed, it enabled a shift toward "reusable datasets." Instead of building isolated tables for every request, the team now creates curated data products that can be shared across manufacturing, supply chain, and finance. Consequently, report creation time dropped from two weeks to just one day.

Healthcare Provider: Month-End Reporting Compressed from 2 Weeks to 90 Minutes

A large integrated healthcare system with over 14,000 employees struggled with a siloed infrastructure that spanned hospitals, clinics, and research institutes. Data lived in 13 different databases, including Oracle and SQL Server environments. For the Revenue Services team, the lack of integration made month-end reporting a grueling two-week manual process.

The organisation replaced this legacy setup with the Dremio lakehouse platform running on Amazon S3. The architecture uses Apache Iceberg for table management and automated file ingestion to eliminate manual extraction. By unifying these 13 sources into a single environment, the Revenue Services team reduced their month-end reporting cycle from two weeks to 90 minutes.
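Dremio's `COPY INTO` command is one way such automated file ingestion can be expressed in SQL. As a hedged sketch (the source path and table names are hypothetical, not taken from this deployment), loading newly landed files from S3 into an Iceberg table might look like:

```sql
-- Hypothetical: load newly arrived Parquet files from an S3 landing
-- zone into an Iceberg table, replacing manual extraction work.
COPY INTO lakehouse.revenue.monthly_charges
FROM '@s3_landing/revenue/current/'
FILE_FORMAT 'parquet';
```

Scheduled through an orchestrator, a statement like this keeps the Iceberg table current without anyone exporting files by hand.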

To sustain this speed, the organisation established an Analytics Center of Excellence (ACE). This program empowers individual departments to manage their own reporting needs through self-service analytics. With automation tools like Mizer and Airflow handling orchestration, the core data engineering team is no longer a bottleneck.

KION Group: Complex Queries Accelerated from 30 Minutes to 3 Seconds

KION Group, the largest manufacturer of forklift trucks in Europe, manages over 1.7 million trucks worldwide. After years of acquisitions, they were left with a monolithic IT infrastructure that traditional BI cubes could not handle. In the Warranty Division, analysing repair data across 200 million records was critical for cost savings, but their Power BI dashboards took over 30 minutes to refresh.

The KION Analytics Platform was designed to reduce the total cost of ownership by moving away from expensive data extracts in Azure Synapse. By using Dremio to query data directly in Azure Data Lake Storage, KION achieved "live query" performance.

The result was a performance jump from 30 minutes to 3 seconds. KION introduced the "Analytics Toolbox", allowing users to choose the tools best suited for their specific needs. While data scientists use Databricks for machine learning, the majority of reporting relies on the Dremio and Power BI combination. This allows managers to switch product models or locations in their dashboards instantly, providing the transparency needed to identify manufacturing defects in near real-time.

World Bank Treasury: Reducing Trade Processing Time by 96%

The World Bank Treasury manages a global portfolio exceeding $100 billion. Historically, their financial landscape was fragmented across 80+ systems, leading to conflicting metrics and a lack of trust in reporting. The most severe bottleneck involved capital markets trading. Traders were spending six to eight hours per trade manually extracting data from term sheets and entering it into systems.

The Treasury launched its "Finance One Lake" strategy, powered by Dremio, to unify 70% of its finance data. A centralised semantic layer ensures all tools, from Power BI to Tableau, pull from a golden copy of the data. This architecture supports SHASTRA, an AI-powered trade automation system using Azure OpenAI on Azure Kubernetes Service (AKS).
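In Dremio, a semantic layer of this kind is typically built from layered views, so every BI tool queries the same governed definition rather than maintaining its own extract. A minimal sketch, with hypothetical view and column names:

```sql
-- Hypothetical "golden copy" view: one governed definition of trade
-- positions that Power BI, Tableau, and downstream AI pipelines all query.
CREATE VIEW finance_one_lake.gold.trade_positions AS
SELECT
  trade_id,
  counterparty,
  instrument,
  notional_usd,
  settlement_date
FROM finance_one_lake.silver.trades
WHERE status = 'CONFIRMED';
```

Because every tool reads the same view, a change to the definition propagates everywhere at once, which is what eliminates conflicting metrics across reports.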

By integrating AI with the lakehouse, the Treasury reduced trade processing time from eight hours to 15 minutes with 95%+ accuracy. This shift allowed highly skilled traders to move to an "exception-based review" model. Instead of performing clerical data entry, they now focus on market analysis and pricing strategy.

Summary: The Performance Leap

The following table highlights the impact of moving to a modernised lakehouse architecture across these organisations.

| Organisation | Previous Timeframe | Dremio Timeframe | Key Features |
| --- | --- | --- | --- |
| Tanobel | 15 minutes (query) | < 1 second | SSD-backed Iceberg & Reflections |
| Healthcare Provider | 2 weeks (reporting) | 90 minutes | Unified semantic layer |
| KION Group | 30 minutes (query) | 3 seconds | Live queries on ADLS Gen2 |
| World Bank Treasury | 6–8 hours (processing) | 15 minutes | Zero-ETL, open lakehouse architecture |

Time to insight is a competitive necessity. By collapsing silos and enabling self-service analytics through a unified engine, organisations like Tanobel, KION, and the World Bank (and many more!) have recovered thousands of hours and eliminated the engineering bottlenecks that previously stalled their growth.

Try Dremio Cloud free for 30 days

Deploy agentic analytics directly on Apache Iceberg data with no pipelines and no added overhead.