
Lessons Learned From Running Apache Iceberg at Petabyte Scale
Apache Iceberg is an open table format that lets data engineers and data scientists build efficient and reliable data lakes with features normally found only in data warehouses. Specifically, Iceberg provides ACID guarantees on any object store or distributed file system, boosts the performance of highly selective queries, supports reliable schema evolution, and offers time travel and rollback capabilities. Iceberg lets companies simplify their current architectures as well as unlock new use cases on top of data lakes.

This talk will describe how to keep Iceberg tables in optimal shape while running at petabyte scale. In particular, the presentation will focus on how to efficiently perform metadata and data compaction on Iceberg tables with millions of files without impacting concurrent readers and writers.
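As a concrete illustration of the kind of table maintenance the talk covers, Iceberg exposes data and metadata compaction as Spark stored procedures. The sketch below assumes a Spark session configured with an Iceberg catalog named `my_catalog` and a table `db.events` (both hypothetical names); because each procedure commits its result as a new table snapshot, it can run alongside concurrent readers and writers:

```sql
-- Compact many small data files into fewer, larger ones.
-- The rewrite commits as a new snapshot, so concurrent
-- readers and writers are not blocked.
CALL my_catalog.system.rewrite_data_files(table => 'db.events');

-- Metadata compaction: rewrite the table's manifest files
-- to keep metadata reads fast as the file count grows.
CALL my_catalog.system.rewrite_manifests('db.events');
```

For tables with millions of files, these procedures accept additional options (e.g., filters and file-size targets) to bound the work done per run; see the Iceberg documentation for the full parameter list.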