
Compression, Dedupe and Encryption Conundrums in Cloud Data Lakes
Cloud data lake footprints have reached exabytes and are growing exponentially, and companies pay billions of dollars to store and retrieve data. In this talk, we will cover space and time optimizations that have historically been applied to on-premises file storage, and how they can be applied to objects stored in cloud data lakes.

Deduplication and compression are techniques traditionally used to reduce the amount of storage consumed by applications. Data encryption is table stakes for any remote storage offering, and cloud providers today support both client-side and server-side encryption.

Combining compression, encryption and deduplication for object stores in the cloud is challenging because of overwrites and versioning, but the right strategy can save an organization millions of dollars. We will cover strategies for employing these techniques, depending on whether an organization prefers client-side or server-side encryption, and discuss online and offline deduplication of objects.

Companies such as Box and Netflix employ a subset of these techniques to reduce their cloud footprint and gain agility in their cloud operations.
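To make one of these conundrums concrete: naive client-side encryption defeats deduplication, because identical plaintext chunks encrypt to different ciphertexts. Convergent encryption, where a chunk's key is derived from its own content, is one well-known workaround. Below is a minimal sketch of the compress-then-encrypt-then-dedupe ordering under that scheme; it is not taken from the talk, and the chunk size, function names, and the in-memory dict standing in for an object store are all illustrative.

```python
# Sketch of convergent encryption: the chunk key is derived from the
# chunk's own content, so identical plaintext chunks produce identical
# ciphertext and remain dedupable by content address.
# Requires the third-party `cryptography` package.
import hashlib
import zlib

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

CHUNK_SIZE = 4 * 1024 * 1024  # fixed-size chunking, for simplicity


def store_chunk(chunk: bytes, object_store: dict) -> str:
    """Compress, convergently encrypt, and dedupe a single chunk.

    Returns the chunk's content address; a real system would also record
    the per-chunk key (wrapped with a user key) in the object's manifest
    so the chunk can later be decrypted.
    """
    compressed = zlib.compress(chunk)          # compress BEFORE encrypting;
                                               # ciphertext does not compress
    key = hashlib.sha256(compressed).digest()  # content-derived 256-bit key
    nonce = key[:12]                           # deterministic nonce is safe
                                               # here: each key encrypts
                                               # exactly one plaintext
    ciphertext = AESGCM(key).encrypt(nonce, compressed, None)
    address = hashlib.sha256(ciphertext).hexdigest()
    if address not in object_store:            # dedupe: upload new chunks only
        object_store[address] = ciphertext
    return address


def store_object(data: bytes, object_store: dict) -> list[str]:
    """Split an object into chunks, store each, and return the manifest."""
    return [
        store_chunk(data[i:i + CHUNK_SIZE], object_store)
        for i in range(0, len(data), CHUNK_SIZE)
    ]


# Two objects sharing a chunk: the shared chunk is stored only once,
# even though everything in the store is encrypted.
store: dict[str, bytes] = {}
m1 = store_object(b"A" * CHUNK_SIZE + b"B" * CHUNK_SIZE, store)
m2 = store_object(b"A" * CHUNK_SIZE + b"C" * CHUNK_SIZE, store)
assert m1[0] == m2[0] and len(store) == 3
```

The ordering matters: compression must precede encryption (encrypted data looks random and will not compress), and the deterministic, content-derived key is what lets deduplication still find identical chunks after encryption. The trade-off, which this sketch glosses over, is that convergent encryption leaks equality of plaintexts, which is exactly what makes the dedupe possible.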