High Frequency Small Files vs. Slow Moving Datasets

Before implementing Apache Iceberg, we had a small file problem at Adobe. In Adobe Experience Platform’s (AEP) data lake, one of our internal solutions was replicating small files to us at a very high frequency: 50K files per day for a single dataset. A streaming service we called Valve processed those requests in parallel, writing them to our data lake and asynchronously triggering a compaction process. This worked for some time but had two major drawbacks. First, if the compaction process was unable to keep up, queries on the data lake would suffer due to expensive file listings. Second, with our journey with Iceberg underway, we quickly realized that creating thousands of snapshots per day for a single dataset would not scale. We needed an upstream solution to consolidate data prior to writing to Iceberg.

In response, we created a service called Flux to solve the problem of small files being pushed into a slow-moving tabular dataset (aka Iceberg v1). In this presentation, we will review the design of Flux, its place in AEP’s data lake, the challenges we faced in operationalizing it, and the final results.
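The abstract does not describe how Flux is implemented, but the upstream consolidation pattern it refers to can be sketched. The example below is a minimal, hypothetical PySpark sketch: it assumes incoming small Parquet files are grouped into a window and then appended to an Iceberg table as a single commit, so the table gains one consolidated snapshot instead of thousands of tiny ones. The catalog name (aep), table name (aep.db.events), warehouse location, and coalesce factor are illustrative assumptions, not details of the actual service.

```python
# Hypothetical consolidation sketch, not Flux's actual code.
# Assumes the Iceberg Spark runtime jar is on the classpath and that the
# catalog/table names used here exist in your environment.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("small-file-consolidator")
    # Iceberg catalog configuration is environment-specific; shown only to
    # keep the example self-contained.
    .config("spark.sql.catalog.aep", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.aep.type", "hadoop")
    .config("spark.sql.catalog.aep.warehouse", "adl://example/warehouse")
    .getOrCreate()
)


def consolidate_window(small_file_paths):
    """Read one window's worth of small Parquet files and commit them to
    Iceberg as a single append, instead of one commit per incoming file."""
    if not small_file_paths:
        return
    batch = spark.read.parquet(*small_file_paths)
    # Coalesce to a handful of output files so each snapshot adds a few
    # large data files rather than thousands of tiny ones.
    batch.coalesce(8).writeTo("aep.db.events").append()
```

In practice such a consolidator would be driven by the upstream ingestion layer (Valve-like write requests batched per dataset); the batching window, file-size targets, and failure handling are design decisions the talk itself covers and are not specified here.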

Topics Covered

Azure Data Lake Storage
Unlocking Potential with Apache Iceberg