Build Data Lake Pipelines at Scale – Using Only SQL
Building data pipelines for cloud data lakes is fraught with complexity as organizations aspire to analyze every data type, especially semi-structured event data. Pipelines have become painful and tedious for data engineers to develop and maintain in the face of accelerating scale and frequent change cycles.
This talk will cover:
- The pipeline operations work that burdens data engineering, including orchestration, data lake table management, and infrastructure management.
- Upsolver’s declarative approach, where you define pipelines using only SQL transformations on raw data, while the mundane engineering work is automated.
- High-scale pipeline examples across several industries and use cases.
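To make the declarative idea concrete, here is a minimal, generic sketch (not Upsolver's actual syntax) using Python's built-in sqlite3 module: raw semi-structured JSON events are flattened into a queryable table with a single SQL transformation. The table and field names are hypothetical.

```python
import json
import sqlite3

# Stage raw semi-structured events as opaque JSON payloads.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (payload TEXT)")
events = [
    {"user_id": 1, "event": "click", "props": {"page": "/home"}},
    {"user_id": 2, "event": "purchase", "props": {"amount": 42.5}},
]
conn.executemany(
    "INSERT INTO raw_events VALUES (?)",
    [(json.dumps(e),) for e in events],
)

# One declarative SQL statement turns raw JSON into flat, typed columns;
# the engine handles execution, not the engineer.
rows = conn.execute("""
    SELECT json_extract(payload, '$.user_id')    AS user_id,
           json_extract(payload, '$.event')      AS event,
           json_extract(payload, '$.props.page') AS page
    FROM raw_events
""").fetchall()
print(rows)
```

In a platform like the one described in the talk, the engineer writes only the SELECT-style transformation; orchestration, table management, and infrastructure are handled automatically.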
Ori Rafael is co-founder and CEO of Upsolver, the only no-code data lake engineering platform. He has more than 15 years of experience in databases, data integration, and big data. Before founding Upsolver, he held a variety of technology management roles in an elite technology intelligence unit of the Israel Defense Forces (IDF).