Organizations today are looking beyond traditional approaches to BI and analytics so they can accelerate time to insight, enable self-service, and gain flexibility.
What's needed is an open data stack that brings multiple tools and engines directly to the data, rather than the other way around, meeting business goals while avoiding silos and lock-in. This has long been the goal, but only recently have data and cloud technologies matured to the point where it's achievable for organizations of all sizes.
But what makes up an open data stack that you can trust? And how can it work to achieve your goals?
Hear from these founders and pioneers of open source projects like Apache Arrow, Apache Iceberg, Project Nessie, dbt, Apache Airflow, and Apache Superset on how the data community is tackling some of the biggest challenges in data architecture.
Tomer Shiran is the CPO and founder of Dremio. Prior to Dremio, he was VP Product and employee #5 at MapR, where he was responsible for product strategy, roadmap, and new feature development. As a member of the executive team, Tomer helped grow the company from five employees to over 300 employees and 700 enterprise customers. Prior to MapR, Tomer held numerous product management and engineering positions at Microsoft and IBM Research. He holds a master’s degree in electrical and computer engineering from Carnegie Mellon University and a bachelor’s in computer science from Technion – Israel Institute of Technology, as well as five U.S. patents.
Tristan Handy is the Founder and CEO of dbt Labs (formerly Fishtown Analytics), a Philadelphia startup pioneering the practice of modern analytics engineering. dbt is used by over 5,000 companies to organize, catalog, and distill knowledge from the data in their data warehouses, including organizations such as JetBlue, HubSpot, GitLab, and the ACLU.
Tristan has worked in data for two decades, in both in-house and consulting roles, with organizations ranging from large enterprises to small startups.
Maxime Beauchemin recently joined Lyft as a software engineer after time at Airbnb as a data engineer, where he developed tools to streamline and automate data engineering processes. He is also the creator and lead committer of Apache Airflow and Apache Superset. He mastered his data warehousing fundamentals at Ubisoft and was an early adopter of Hadoop/Pig while at Yahoo in 2007. More recently, while at Facebook, he developed analytics-as-a-service frameworks for engagement and growth metrics computation, anomaly detection, and cohort analysis. He's a father of three, and in his free time he's a digital artist.
Ryan Blue is the co-creator of Apache Iceberg and works on open source data infrastructure. He is a committer on Apache Avro, Parquet, and Spark.