Iceberg at Adobe: Challenges, Lessons & Achievements

We were on our second iteration of Adobe Experience Platform’s (AEP) data lake when Apache Iceberg first came up on our radar. At the time, we were struggling with scalability and consistency issues in our internal metastore that we had to solve. We quickly saw the synergy with Iceberg and appreciated its design, which was both lightweight and extensible. Iceberg addressed critical data reliability issues for us, such as filtering out partial results left by failed writes and keeping data consistent under parallel reads and writes. It also supported partition pruning with partition ranges and file skipping with column-level stats, which our queries could benefit from immediately. Iceberg enabled new possibilities for us around time travel, point-in-time recovery, and incremental reads. Finally, Iceberg was open source and had a vibrant community evolving it.

This presentation will share our journey at Adobe building a data lake on top of Iceberg. Customers use AEP to centralize and standardize their data across the enterprise, resulting in a 360-degree view of the data they care about. That view can then be used with intelligent services to drive experiences across multiple devices, run targeted campaigns, classify profiles and other entities into segments, leverage advanced analytics, and more. As a consequence, we are processing millions of batches per day (terabytes of data), writing to thousands of datasets, and scanning petabytes of data on the data lake.

The journey wasn’t all rainbows and unicorns; we hit some bumps along the way. This talk will go into detail on how we solved problems like high-frequency small files, exactly-once writes, support for Adobe’s Experience Data Model (XDM) schemas, fine-tuning query optimizations, and tombstoning data to comply with the EU’s General Data Protection Regulation (GDPR) prior to the v2 format.
Having successfully migrated our customers to Iceberg-backed datasets, we will share how Iceberg is performing in production and what’s next with Iceberg at Adobe.
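To make the time-travel and snapshot capabilities mentioned above concrete, here is a minimal Spark SQL sketch against an Iceberg table. The table name `aep.profiles` and the snapshot ID are hypothetical placeholders, and the example assumes a Spark session configured with the Iceberg runtime and catalog extensions.

```sql
-- Time travel: query the table as of a specific snapshot ID
-- (snapshot ID shown is a placeholder).
SELECT count(*)
FROM aep.profiles VERSION AS OF 8744736658442914487;

-- Point-in-time read: query the table as of a timestamp.
SELECT *
FROM aep.profiles TIMESTAMP AS OF '2021-06-01 00:00:00';

-- Inspect snapshot history via Iceberg's metadata tables
-- to find snapshot IDs for time travel or incremental reads.
SELECT snapshot_id, committed_at, operation
FROM aep.profiles.snapshots;
```

Incremental reads between two snapshots are exposed through the DataFrame reader (`start-snapshot-id` / `end-snapshot-id` options) rather than SQL; the metadata tables above are how you discover the snapshot boundaries to feed it.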

Topics Covered

Azure Data Lake Storage - Dremio
Dremio Subsurface for Apache Spark
Unlocking Potential with Apache Iceberg
