6 minute read · July 11, 2024

Apache Iceberg Crash Course: What is a Data Lakehouse and a Table Format?

Alex Merced · Senior Tech Evangelist, Dremio

Welcome to the "Apache Iceberg Crash Course" blog series, which complements our webinar series of the same name. Each blog post in this series is designed to provide a comprehensive summary of the content covered in the corresponding webinar session. 

In this inaugural post, we will explore the fundamentals of Data Lakes and Data Lakehouse table formats, setting the stage for a deeper dive into Apache Iceberg. Whether you're joining us for the webinars or just following along with the blog series, you'll gain valuable insights into the latest advancements in data architecture and management. Let's get started!

What is a Data Lake?

In traditional data systems such as databases and data warehouses, several abstractions play a crucial role in managing and utilizing data. These systems abstract how the data is stored, allowing users to focus on data manipulation and analysis without needing to understand the underlying storage mechanisms. They provide a way to recognize and organize data as distinct tables, simplifying data management and retrieval. Cataloging these tables is another essential feature, ensuring users can easily find and reference the data they need. Furthermore, these systems handle the parsing, planning, and execution of queries, optimizing for performance and efficiency.
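To make these abstractions concrete, here is a minimal sketch using SQLite, chosen purely for illustration: the database stores the data, tracks tables in its own catalog, and plans and executes queries without the user ever touching the underlying files.

```python
# A minimal sketch of the abstractions a traditional database provides,
# using SQLite purely for illustration.
import sqlite3

con = sqlite3.connect(":memory:")

# The table abstraction: we describe the data; storage is handled for us.
con.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
con.execute("INSERT INTO orders VALUES (1, 9.99), (2, 24.50)")

# The catalog abstraction: the database knows which tables exist.
print(con.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall())

# The query engine: the system parses, plans, and executes the SQL.
print(con.execute("SELECT SUM(amount) FROM orders").fetchone())
```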

While these features make databases and data warehouses excellent for working with data, they also create limitations. Specifically, the data in each system is confined to that system, necessitating data movement across multiple systems to meet various analytical and operational needs. This movement can be cumbersome and resource-intensive.

Enter the data lake paradigm. A data lake uses a storage layer such as Hadoop's HDFS or cloud object storage to hold both structured and unstructured data. Structured data can be stored in file formats like CSV, JSON, Parquet, and ORC, while unstructured data, including videos, images, and audio, can also be accommodated. This approach consolidates diverse data types in a single location. In the data lake world, many different tools can operate on the stored data, providing greater flexibility and enabling more agile data processing and analysis.
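As a simple illustration, here is a sketch of landing structured data in a data lake as a Parquet file using PyArrow. The file and path names are hypothetical; with credentials configured, the same call works against an object storage URI.

```python
# A minimal sketch of writing structured data to a data lake as Parquet.
# Path names are hypothetical; with credentials configured, the same call
# works against object storage URIs (e.g., "s3://my-data-lake/raw/events.parquet").
import pyarrow as pa
import pyarrow.parquet as pq

# A small in-memory table of structured data.
events = pa.table({
    "event_id": [1, 2, 3],
    "event_type": ["click", "view", "click"],
})

# Write it as a Parquet file in the lake.
pq.write_table(events, "events.parquet")

# Any Parquet-aware tool can now read the same file.
print(pq.read_table("events.parquet").to_pydict())
```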

What is a Data Lakehouse?

While data lakes democratized access to data, they were difficult to use because they weren't designed to replace databases and data warehouses. Without the table and catalog abstractions, users didn't get the same performance, ease of use, and guarantees provided by traditional systems. However, modern advancements have introduced table formats like Apache Iceberg and catalogs like Nessie and Polaris to fill these gaps. These innovations allow a data warehouse-like experience on a data lake, a pattern known as the data lakehouse. The core pillar of a data lakehouse is the table format, which enables database-like tables and SQL operations directly on data lake storage.
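For a flavor of what this looks like in practice, here is a minimal, hedged sketch of running standard SQL against an Iceberg table from PySpark. The catalog name, warehouse path, and runtime package version below are assumptions for illustration; consult the Iceberg documentation for the settings that match your Spark version.

```python
# A minimal sketch of SQL on an Iceberg table via Spark. The catalog name
# ("lakehouse"), warehouse path, and runtime version below are assumptions;
# match the iceberg-spark-runtime artifact to your Spark/Scala versions.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("lakehouse-demo")
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.0")
    .config("spark.sql.catalog.lakehouse", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lakehouse.type", "hadoop")
    .config("spark.sql.catalog.lakehouse.warehouse", "/tmp/warehouse")
    .getOrCreate()
)

# Ordinary SQL, but the table lives as files in the data lake.
spark.sql("CREATE NAMESPACE IF NOT EXISTS lakehouse.db")
spark.sql("CREATE TABLE IF NOT EXISTS lakehouse.db.events (id BIGINT, type STRING) USING iceberg")
spark.sql("INSERT INTO lakehouse.db.events VALUES (1, 'click'), (2, 'view')")
spark.sql("SELECT type, COUNT(*) AS n FROM lakehouse.db.events GROUP BY type").show()
```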

What is a Table Format?

In the early days of data lakes on Hadoop, analytics were performed using a framework called MapReduce, which required writing code in Java, a barrier for many analysts. The goal was to enable SQL-like querying capabilities similar to those in traditional databases and data warehouses. This led to the creation of Hive, a framework that converted SQL statements into equivalent MapReduce jobs.

To achieve this, there needed to be an intuitive way to group files into predefined datasets. Hive's approach treated all files in a particular folder as the contents of a table, as sketched below. While this allowed Hive to bring SQL to data lakes, it had performance limitations and couldn't provide the ACID guarantees expected from traditional data systems.
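Here is a hypothetical directory layout illustrating the Hive model, where the folder itself is the table definition:

```
warehouse/
└── sales/                      <- the "sales" table is simply this folder
    ├── region=us/              <- partitions are subfolders
    │   ├── part-0001.parquet
    │   └── part-0002.parquet
    └── region=eu/
        └── part-0003.parquet
```

Because the table is defined by whatever files happen to sit in the folder at query time, a reader can observe a half-written dataset mid-update, which is one reason this model struggles to offer ACID guarantees.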

This is where modern table formats like Apache Iceberg, Apache Hudi, and Delta Lake come into play. These formats list a table's files in a separate metadata structure, reducing reliance on how the data is physically laid out. This enables ACID transactions, table evolution, and granular updates, truly realizing the data lakehouse dream. The sketch below shows a simplified view of how Apache Iceberg organizes this metadata.
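The file names below are illustrative; real Iceberg file names include UUIDs and snapshot IDs, but the structure follows this shape:

```
my_table/
├── metadata/
│   ├── v1.metadata.json    <- table metadata: schema, partition spec, snapshots
│   ├── snap-1.avro         <- manifest list: which manifests make up a snapshot
│   └── manifest-1.avro     <- manifest: which data files (and stats) it tracks
└── data/
    ├── part-0001.parquet
    └── part-0002.parquet
```

Each commit writes a new metadata file and atomically swaps the catalog's pointer to it, which is what gives readers a consistent snapshot of the table and writers their ACID guarantees.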

Conclusion

Understanding the evolution from traditional databases and data warehouses to data lakes and now to data lakehouses is crucial for navigating the modern data landscape. While data lakes democratized data access, they also introduced challenges that hindered their usability compared to traditional systems. The advent of table formats like Apache Iceberg and catalogs like Nessie and Polaris has bridged this gap, enabling the data lakehouse architecture to combine the best of both worlds. By leveraging these modern technologies, organizations can achieve the performance, ease of use, and data management capabilities of databases and data warehouses while also benefiting from the flexibility and scalability of data lakes. Stay tuned for the next installment in our "Apache Iceberg Crash Course" series.

Want to learn more about how Apache Iceberg can enhance your current data architecture? Contact us!
