
Rethinking Ingestion: CI/CD for Data Lakes
At first glance, ingesting data into a data lake may seem like a one-step process: you simply add files to an object store. What else is there to do? It turns out there is more, and blindly writing new data introduces a host of potential problems. For example, how do you know the data you write is accurate and conforms to the schema? The truth is, once you’ve written it to the lake, in a sense it’s already too late.

What we propose, and will cover in this talk, is a new strategy for data lake ingestion: one where new data is added in isolation, then tested and validated, before “going live” in a production table. Finally, we’ll show how git-for-data tools like lakeFS and Nessie enable this ingestion paradigm in a seamless way.
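To make the branch-test-merge flow concrete, here is a minimal sketch using the lakeFS high-level Python SDK (`pip install lakefs`). The repository name, branch names, object path, and the `validate` check are illustrative placeholders, and the SDK is assumed to be configured with lakeFS credentials (via environment variables or `~/.lakectl.yaml`); Nessie offers an analogous branch-and-merge workflow for Iceberg tables.

```python
# A sketch of branch-based ingestion with the lakeFS Python SDK.
# Assumes lakeFS credentials are configured via environment variables
# or ~/.lakectl.yaml; all names below are illustrative placeholders.
import lakefs


def validate(branch) -> bool:
    """Placeholder for real checks (schema conformance, row counts, nulls).

    Here we only confirm the uploaded object is present and non-empty.
    """
    stats = branch.object("tables/events/events.parquet").stat()
    return stats.size_bytes > 0


repo = lakefs.repository("example-repo")

# 1. Ingest in isolation: create a short-lived branch off production.
ingest = repo.branch("ingest-2024-01-01").create(source_reference="main")

# 2. Write the new data to the branch, not to the production table.
with open("events.parquet", "rb") as f:
    ingest.object("tables/events/events.parquet").upload(data=f.read(), mode="wb")

ingest.commit(message="Ingest daily events batch")

# 3. Validate the isolated snapshot before it can affect any consumer.
if validate(ingest):
    # 4. Only data that passes the checks "goes live" on main.
    ingest.merge_into(repo.branch("main"))
else:
    # Bad data never reaches production; just drop the branch.
    ingest.delete()
```

Because the merge is a metadata operation applied atomically, readers of `main` never observe a partially ingested or unvalidated batch.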
Ready to Get Started? Here Are Some Resources to Help


Guide: What Is a Data Lakehouse?
The data lakehouse is an architecture that combines the best parts of data lakes and data warehouses. Learn more about the data lakehouse and its key advantages.
Whitepaper: Simplifying Data Mesh for Self-Service Analytics on an Open Data Lakehouse
Data mesh, a decentralized data management approach, has grown popular in recent years, helping teams overcome the challenges of centralized data architectures.