Unlocking the Future of Analytics: Dremio’s Updated Architecture Guide
Discover how Dremio delivers lightning-fast queries, seamless collaboration, and cost-efficient scalability with cutting-edge technologies like Apache Arrow and Iceberg. Unlock the full potential of your data lakehouse today!
Mastering Dremio’s Well-Architected Framework: Overview and Security
This session introduces Dremio’s Well-Architected Framework and takes a deep dive into its Security pillar.
Mastering Dremio’s Well-Architected Framework: Performance Efficiency and Cost Optimization
This session explores the Performance Efficiency and Cost Optimization pillars of Dremio’s Well-Architected Framework.
Mastering Dremio’s Well-Architected Framework: Reliability and Operational Excellence
This session explores the Reliability and Operational Excellence pillars of Dremio’s Well-Architected Framework.
Revolutionizing Data Analytics: How Dremio, Snowflake, and the Polaris Catalog unlock the power of the Hybrid Lakehouse
In today's data-driven world, enterprises face the challenge of managing, analyzing, and deriving insights from massive amounts of data across different environments. Join industry experts from Dremio, Snowflake, and the Polaris Catalog for an exclusive webinar that explores the future of the Hybrid Iceberg Lakehouse and data warehousing.
Part 3: Apache Iceberg Catalogs, Deep Dive Course – Other Catalogs
Our final session will explore other catalog options, including Unity and Gravitino. We’ll compare different lakehouse catalog solutions, highlighting their unique capabilities and how to choose the right one for your organization.
Mastering Self-Service Analytics with Dremio’s Semantic Layer
Learn how to leverage the Semantic Layer to create a unified view of your data, simplify data governance, and enable seamless access for both technical and non-technical users. Through expert-led demonstrations and actionable use cases, you'll gain the skills to optimize your Dremio implementation and drive greater adoption across your organization.
Part 2: Apache Iceberg Catalogs, Deep Dive Course – Nessie and Polaris
In this session, we’ll take a closer look at the Nessie and Polaris catalogs and how they enable efficient data management in Apache Iceberg environments. We’ll cover their key features, implementation strategies, and how they improve upon traditional approaches.
Moving Past Hadoop to a Modern Data Platform with Pure Storage & Dremio
Discover how Dremio’s Hybrid Iceberg Lakehouse, paired with Pure Storage’s data platform, empowers your teams to accelerate access to insights, simplify data management, and reduce operational costs. Learn best practices for moving from Hadoop to a modern object storage based lakehouse, unlocking performance gains, simplifying management, and achieving environmental sustainability.
An In-Depth Exploration of the World of Data Lakehouse Catalogs (Iceberg, Polaris, Nessie, Unity, etc.) – What are Data Lakehouse Catalogs?
Watch our kickoff session to explore how catalogs like Nessie, Polaris, and Unity drive data versioning, governance, and optimization in modern data ecosystems. Gain insights into choosing the right catalog to elevate your data management strategy.
An Apache Iceberg Lakehouse Crash Course – Ingesting Data into Apache Iceberg with Dremio
"Ingesting Data into Apache Iceberg with Dremio” is the final episode of our special edition series, focused on optimizing data ingestion into Apache Iceberg. Learn to connect Dremio to various data sources, prepare and analyze data, and apply techniques through live demos and practical examples.
Charting the Course: The Evolution and Future of Apache Iceberg and Polaris
Join us for an in-depth exploration of Apache Iceberg and Apache Polaris (incubating), where we delve into the past, present, and future of these transformative technologies. This session will provide a comprehensive overview of Iceberg’s journey, its current role within the data ecosystem, and the promising future it holds with the integration of Polaris.
An Apache Iceberg Lakehouse Crash Course – Ingesting Data into Apache Iceberg with Apache Spark
"An Apache Iceberg Lakehouse Crash Course," a comprehensive webinar series designed to deepen your understanding of Apache Iceberg and its role in modern data lakehouse architectures. Over ten sessions, we'll cover everything from the basics of data lakehouses and table formats to advanced topics like partitioning, optimization, and real-time streaming with Apache Iceberg. Don't miss this opportunity to enhance your data platform skills and learn from industry experts.
An Apache Iceberg Lakehouse Crash Course – Versioning with Apache Iceberg
Watch Alex Merced delve into the versioning features of Apache Iceberg and their benefits. This episode will explain:
- How versioning works in Apache Iceberg
- Benefits of data versioning for analytics and compliance
- Strategies for managing versions and rollbacks
- Examples of versioning in action (see the sketch after this list)
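For readers who want to experiment before the episode, here is a minimal sketch of Iceberg time travel and rollback in Spark SQL. It assumes a SparkSession `spark` configured with the Iceberg runtime and SQL extensions (as in the ingestion sketch above); the catalog, table, and snapshot ID are placeholders.

```python
# Minimal sketch (assumptions: `spark` is a SparkSession configured with
# the Iceberg runtime and SQL extensions; table names and the snapshot
# ID are placeholders).

# Time travel: query the table as it existed at an earlier point in time.
spark.sql(
    "SELECT * FROM lakehouse.db.orders TIMESTAMP AS OF '2024-01-01 00:00:00'"
).show()

# Every commit produces a snapshot; the metadata table lists them.
spark.sql(
    "SELECT snapshot_id, committed_at FROM lakehouse.db.orders.snapshots"
).show()

# Rollback: restore a known-good snapshot via a stored procedure
# (requires Iceberg's Spark SQL extensions).
spark.sql(
    "CALL lakehouse.system.rollback_to_snapshot('db.orders', 1234567890123)"
)
```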
From Hadoop & Hive to Minio & Dremio: Moving Towards a Next Gen Data Architecture
Legacy data platforms often struggle with performance, processing, and scaling for robust AI/ML initiatives, particularly in complex multi-cloud environments. Join this session to learn how the combined power of MinIO and Dremio creates a data lakehouse platform that overcomes these challenges. Learn how this architecture simplifies critical AI tasks, scales seamlessly to meet demand, and streamlines data management for IT teams.
An Apache Iceberg Lakehouse Crash Course – The Role of Apache Iceberg Catalogs
Learn the importance of catalogs in Apache Iceberg:
- Different catalog options (Nessie, Polaris, AWS Glue)
- How catalogs facilitate data management and discovery
- Integrating catalogs with your existing data infrastructure
- Practical usage scenarios and tips (see the sketch after this list)
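As a taste of what catalog integration looks like in practice, here is a minimal sketch of connecting to an Iceberg REST catalog from PyIceberg. The catalog name, endpoint URI, namespace, and table names are placeholders; Polaris, Nessie, and other catalogs that implement the Iceberg REST spec expose endpoints of this shape.

```python
# Minimal sketch (assumptions: a REST-spec catalog is reachable at the
# placeholder URI; catalog, namespace, and table names are illustrative).
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "lakehouse",
    **{"type": "rest", "uri": "http://localhost:8181"},
)

# Discovery: a catalog answers what namespaces and tables exist.
print(catalog.list_namespaces())
print(catalog.list_tables("db"))

# Resolution: it maps a table name to current metadata for planning.
table = catalog.load_table("db.orders")
print(table.schema())
```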
The "Apache Iceberg Q&A" is an interactive event featuring Alex Merced, Co-author of Apache Iceberg: The Definitive Guide. Attendees will have the unique opportunity to ask Alex their most pressing questions about Apache Iceberg, whether pre-submitted or asked live during the event. This session is perfect for both beginners and experienced professionals looking to deepen their understanding of Apache Iceberg. Make sure to register and submit your questions ahead of time to ensure they are addressed during the event.
What’s New in Dremio: Improved Automation, Performance + Catalog for Iceberg Lakehouses
Discover the new Dremio capabilities designed to make your Apache Iceberg data lakehouse the most efficient, scalable, and manageable platform for analytics and AI. We’ll cover enhancements in performance, data ingestion, data processing, and federated query capabilities, aimed at helping you achieve the fastest, most scalable, and easiest-to-use lakehouse for all of your data.
An Apache Iceberg Lakehouse Crash Course – Streaming with Apache Iceberg
"An Apache Iceberg Lakehouse Crash Course," a comprehensive webinar series designed to deepen your understanding of Apache Iceberg and its role in modern data lakehouse architectures. Over ten sessions, we'll cover everything from the basics of data lakehouses and table formats to advanced topics like partitioning, optimization, and real-time streaming with Apache Iceberg. Don't miss this opportunity to enhance your data platform skills and learn from industry experts.
"An Apache Iceberg Lakehouse Crash Course," a comprehensive webinar series designed to deepen your understanding of Apache Iceberg and its role in modern data lakehouse architectures. Over ten sessions, we'll cover everything from the basics of data lakehouses and table formats to advanced topics like partitioning, optimization, and real-time streaming with Apache Iceberg. Don't miss this opportunity to enhance your data platform skills and learn from industry experts.
Unite Data Across Dremio, Snowflake, Iceberg, and Beyond
Join this session to learn how Dremio, the Unified Lakehouse Platform for Self-Service Analytics and AI, empowers Snowflake customers to harness their data's full potential. Learn how Dremio eliminates data silos by federating analytics across Snowflake, Iceberg and other data sources, and combining that with an intelligent semantic layer.
An Apache Iceberg Lakehouse Crash Course – Understanding Apache Iceberg’s Partitioning Features
Gain a comprehensive understanding of Apache Iceberg’s partitioning capabilities. This episode will discuss:
- Different partitioning strategies in Apache Iceberg
- How partitioning enhances query performance
- How Hidden Partitioning works
- How Partition Evolution works (see the sketch after this list)
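Here is a minimal sketch of hidden partitioning and partition evolution in Spark SQL, assuming a SparkSession `spark` with the Iceberg runtime and SQL extensions enabled; the catalog and table names are placeholders.

```python
# Minimal sketch (assumptions: `spark` is a SparkSession with the Iceberg
# runtime and SQL extensions; catalog and table names are placeholders).

# Hidden partitioning: partition by a transform of a column. Readers and
# writers use `event_ts` directly; Iceberg maintains the day buckets.
spark.sql("""
    CREATE TABLE lakehouse.db.events (
        id BIGINT,
        event_ts TIMESTAMP,
        payload STRING
    ) USING iceberg
    PARTITIONED BY (days(event_ts))
""")

# Filters on the raw column are pruned to matching partitions automatically.
spark.sql("""
    SELECT COUNT(*) FROM lakehouse.db.events
    WHERE event_ts >= TIMESTAMP '2024-06-01 00:00:00'
""").show()

# Partition evolution: change the layout without rewriting existing data;
# old files keep the old spec, new writes use the new one.
spark.sql("ALTER TABLE lakehouse.db.events ADD PARTITION FIELD bucket(16, id)")
spark.sql("ALTER TABLE lakehouse.db.events DROP PARTITION FIELD days(event_ts)")
```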
Mastering Semantic Layers: The Key to Data-Driven Innovation
A semantic layer in data analytics acts as a bridge between complex data and business users, simplifying data access and interpretation. It enables self-service analytics, ensures data consistency, and accelerates time-to-insight, transforming your data strategy for faster, data-driven decision-making.
An Apache Iceberg Lakehouse Crash Course – The Read and Write Process for Apache Iceberg Tables
"An Apache Iceberg Lakehouse Crash Course," a comprehensive webinar series designed to deepen your understanding of Apache Iceberg and its role in modern data lakehouse architectures. Over ten sessions, we'll cover everything from the basics of data lakehouses and table formats to advanced topics like partitioning, optimization, and real-time streaming with Apache Iceberg. Don't miss this opportunity to enhance your data platform skills and learn from industry experts
An Apache Iceberg Lakehouse Crash Course – The Architecture of Apache Iceberg, Apache Hudi, and Delta Lake
Dive into the architectural intricacies of the leading table formats: Apache Iceberg, Apache Hudi, and Delta Lake. This session will cover:
- Core components and design principles of each table format
- Comparison of features and use cases
- How to choose the right table format for your needs
- Practical scenarios and best practices
An Apache Iceberg Lakehouse Crash Course – What is a Data Lakehouse and What is a Table Format?
"An Apache Iceberg Lakehouse Crash Course," a comprehensive webinar series designed to deepen your understanding of Apache Iceberg and its role in modern data lakehouse architectures. Over ten sessions, we'll cover everything from the basics of data lakehouses and table formats to advanced topics like partitioning, optimization, and real-time streaming with Apache Iceberg. Don't miss this opportunity to enhance your data platform skills and learn from industry experts.
Build the next-generation Iceberg lakehouse with Dremio and NetApp
Join our upcoming webinar to explore the future of data lakes and discover how NetApp and Dremio can revolutionize your analytics by delivering the next-generation of lakehouse with Apache Iceberg.
The "Best of Subsurface 2024" webinar offers a comprehensive recap of the top moments from the Subsurface conference. Attendees will gain insights from industry leaders on data lakehouse implementations, open source advancements, and the future of data engineering.
Scania’s Journey in Navigating and Implementing Data Mesh
Learn about Scania’s journey implementing a data mesh to improve delivery and business outcomes, with highlights on architecture, prioritization, platform selection, and data culture transformation. Dremio’s Field CDO and Agile Lab’s co-founder will also share lessons learned from other global enterprises on similar journeys.
Optimize Analytics Workloads with Dremio + Snowflake
You need to put all of your data to work. When your data is distributed across Snowflake and other sources, it can be complex to unify data access. Learn how Dremio helps you drive analytic insight across all your data, gaining the fastest insight from Snowflake, the data lake, and everywhere else.
What’s New in Dremio: New Capabilities for the Best Apache Iceberg Lakehouse
Join our upcoming webinar to discover the new Dremio capabilities designed to make your Apache Iceberg data lakehouse the most efficient, scalable, and manageable platform for analytics and AI. We’ll cover enhancements for data ingestion, data processing, and data optimization aimed at helping you achieve the fastest, most scalable, and easiest-to-use lakehouse for all of your data.
Join this session for a comprehensive talk that delves into the evolution of data analytics, culminating in the data lakehouse model with Dremio at its core. The session will feature a live demonstration of Dremio’s capabilities, showcasing how it streamlines the journey from data storage to insightful analysis, a win-win for data engineers and analysts alike.
Learn how to reduce your Snowflake costs by 50%+ with a lakehouse
Join Alex Merced, Developer Advocate at Dremio to explore the future of data management and discover how Dremio can revolutionize your analytics TCO, enabling you to do more with less.
Dremio’s unified lakehouse platform for self-service analytics enables data consumers to move fast while also reducing manual repetitive tasks and ticket overload for data engineers.
Next-Gen Data Pipelines are Virtual: Simplify Data Pipelines with dbt, Dremio, and Iceberg
Learn how to streamline, simplify, and fortify your data pipelines with Dremio's next-gen DataOps, saving time and reducing costs. Gain valuable insights into managing virtual data pipelines, mastering data ingestion, optimizing orchestration with dbt, and elevating data quality.
How S&P Global is Building an Azure Data Lakehouse with Dremio
Join us in this webinar and learn how S&P Global built an Azure data lakehouse with Dremio Cloud for FinOps analysis. If you are looking for ways to eliminate expensive data extracts in BI cubes, then this will be a great episode to check out.
Empowering Analytics: Unleashing the Power of Dremio Cloud on Microsoft Azure
Companies are struggling with the complex, brittle, and expensive nature of the data lifecycle in existing analytical environments. Dremio is announcing the availability of Dremio Cloud on Microsoft Azure, providing companies the ability to simplify and optimize their analytical environment.
What’s new in Dremio: New Gen-AI capabilities, advances for 100% query success, plus now on Azure
Learn what’s new in Dremio - and how you can accelerate self-service analytics at scale - including new Gen AI capabilities, Dremio Cloud SaaS on Microsoft Azure, advances to ensure 100% query reliability, and expanded Apache Iceberg capabilities to streamline Iceberg adoption and improve performance.
ZeroETL & Virtual Data Marts: The Cutting Edge of Lakehouse Architecture
In this session, we’ll explore the pains of data engineering, unveil innovative solutions, and guide you through practical implementation with tools like Dremio and dbt. Transform your data landscape!
Hands-on Workshop: Build an Iceberg Lakehouse in 60 Minutes with Dremio Cloud
Led by Mark Hoerth, Escalations Engineer, this workshop will guide you through the process of creating tables in your Iceberg catalog, ingesting Iceberg Tables into Amazon S3, creating a clean data product, enabling governed self-service for your organization, and ultimately querying the data through our SQL Runner and a BI Tool.
How Dremio Provides Fast and Easy Data Access While Saving You Money
Discover how Dremio revolutionizes data access, offering speed, simplicity, and cost savings. Join us to explore real-world use cases and optimize your data infrastructure!
Building a Data Science Platform on Apache Iceberg and Nessie
Discover the future of data science and machine learning pipelines with Jacopo Tagliabue of Bauplan Labs in this webinar. Learn why modern data platforms are embracing Apache Iceberg and Nessie, and explore the transformative benefits of Nessie's git-like features for data management.
Build an Iceberg Lakehouse Workshop: 60-Minute Challenge
Led by Isha Sharma, Senior Director of Product Management, this workshop will guide you through the process of ingesting Iceberg Tables into Amazon S3, creating a clean data product, enabling governed self-service for your organization, and ultimately querying the data through our SQL Runner and a BI Tool.
Simplify Lakehouse Operations with Zero-Copy Clones and Multi-Table Transactions
Join this session to learn how Dremio Arctic, a lakehouse management service, enables data teams to deliver a consistent and accurate view of their data lake with zero-copy clones of production data and multi-table transactions.
Your Lakehouse Just Got Gnarlier: What’s New in Dremio, including Next Gen Reflections
Learn what’s new in Dremio - and how you can accelerate self-service analytics at scale - including Generative AI text-to-SQL capabilities, even faster analytics and better query performance, our new native Apache Iceberg catalog, Arctic, and more.
Unravel the intricacies of materialized views and Dremio's Data Reflections in our upcoming webinar. Delve into their distinct features, advantages, and how they shape modern data acceleration.
The Who, What and Why of Data Lakehouse Table Formats
Dive into the transformative world of Data Lakehouse table formats, exploring Apache Iceberg, Delta Lake, and Apache Hudi. Learn their pivotal roles in reshaping data storage, analytics, and the unparalleled advantages they offer.
Introduction to Dremio Arctic: Catalog Versioning and Iceberg Table Optimization
Join this webinar for an introduction to Dremio Arctic, a data lakehouse management service that features easy catalog versioning with data as code and automatic optimization for your Apache Iceberg tables. Learn how Arctic helps data teams deliver a consistent, accurate, and high quality view of their data to all of their data consumers with a no-copy architecture.
Unlock the potential of data engineering in our "ELT, ETL & the Dremio Data Lakehouse" webinar! Discover how Dremio's no-copy architecture revolutionizes ETL & ELT patterns, optimizing data processing and cutting costs.
For analytical workloads, data teams today can choose from a variety of data warehouses and lakehouse query engines. To enable self-service, they provide a semantic layer for end users, usually with materialized views, BI extracts, or OLAP cubes. The problem is that this process creates data copies and requires end users to understand the underlying physical data model.
Workshop: Build an Iceberg Lakehouse in 60 Minutes
In this hands-on workshop you’ll work directly with Dremio Cloud and rapidly build an Iceberg Lakehouse. Isha Sharma, Senior Director of Product Management, will walk you through ingesting Iceberg Tables into Amazon S3, creating a clean data product, enabling governed self-service for the business, and ultimately querying the data from our SQL Runner and a BI Tool.
Data as Code with Dremio Arctic: ML Experimentation & Reproducibility on the Lakehouse
In this episode of Gnarly Data Waves, we will discuss how Dremio Arctic and data as code enable data science use cases like machine learning experimentation and reproducibility on a consistent view of your data in a no-copy architecture.
What’s New in the Apache Iceberg Project: Version 1.2.0 Updates, PyIceberg, Compute Engines
In this episode of Gnarly Data Waves, Dremio’s developer advocate Dipankar Mazumdar will highlight some of the key capabilities added to the Apache Iceberg project in version 1.2.0, along with discussions around compute engines and the PyIceberg Python library.
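For context, here is a minimal sketch of what reading an Iceberg table with PyIceberg looks like, with no JVM or Spark cluster required. The catalog name, table, filter column, and projected fields are placeholders, and the catalog connection details are assumed to live in a local pyiceberg configuration.

```python
# Minimal sketch (assumptions: catalog connection details live in a local
# pyiceberg configuration file; table, filter, and columns are placeholders).
from pyiceberg.catalog import load_catalog
from pyiceberg.expressions import GreaterThanOrEqual

catalog = load_catalog("lakehouse")
table = catalog.load_table("db.orders")

# Plan a scan with a row filter and column projection, then materialize
# the matching data as an Arrow table entirely from Python.
arrow_table = table.scan(
    row_filter=GreaterThanOrEqual("amount", 100.0),
    selected_fields=("order_id", "amount"),
).to_arrow()
print(arrow_table.num_rows)
```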
Data Mesh In Practice: Accelerating Cancer Research with Dremio’s Data Lakehouse
Memorial Sloan Kettering Cancer Center (MSK) is the largest private cancer center in the world and has devoted more than 135 years to exceptional patient care, innovative research, and outstanding educational programs. Today, MSK is one of 52 National Cancer Institute-designated Comprehensive Cancer Centers, with state-of-the-art science flourishing side by side with clinical studies and treatment.
Best Practices for Modernizing Your Hadoop Workloads to AWS with Dremio
Many companies turned to HDFS to address the challenge of storing growing volumes of semistructured and unstructured data, and find themselves with a two-tiered data architecture and siloed data. In this session, learn how to easily and seamlessly modernize with a data lakehouse architecture on AWS while maintaining business continuity for critical analytic workloads.
Unified Access for Your Data Mesh: Self-Service Data with Dremio’s Semantic Layer
Data silos and a lack of collaboration between teams have been long-standing challenges in data management. This is where data mesh comes into play as an architectural and organizational paradigm, providing a solution that enables decentralized teams to work collaboratively and share data in a governed manner across the enterprise.
Easy Data Lakehouse Management with Dremio Arctic’s Automatic Data Optimization
Join this episode of Gnarly Data Waves to learn how Dremio Arctic makes data lakehouse management easy with automatic data optimization. See how these features ensure high performance analytics and optimal resource consumption for enterprise data volumes.
As organizations strive to deliver value faster to end users, data silos make it difficult to provide insights on time. Learn how Dremio’s data lakehouse accelerates data delivery and discovery, without copies.
Enabling data mesh with Dremio Arctic and Data as Code
This webinar will explore how businesses adopting a data mesh architecture can leverage the data as code capabilities of Dremio Arctic to easily build, manage, and share data products across their organizations.
Making the Move: Five Factors to Consider When Migrating from Hadoop to the Data Lakehouse
Join Donald Farmer, Principal at TreeHive Strategy, and Tony Truong, Senior Product Marketing Manager at Dremio, as they discuss the five key considerations that organizations should take into account when migrating from Hadoop to the data lakehouse.
How to Modernize Hive to the Data Lakehouse with Dremio and Apache Iceberg
We all want to overcome the many challenges of data drift, infrastructure costs, and performance. In this talk we’ll discuss the path to taking your Hadoop-based data lake and using Dremio and Apache Iceberg to modernize it into a full-blown data lakehouse that will simplify workflows, increase performance, and lower costs.
Optimizing Data Files in Apache Iceberg: Performance strategies
Optimized query speed is a must when processing hundreds of petabytes of data on the data lake, especially as data grows over time. Join Dremio’s developer advocate Dipankar Mazumdar as he walks through the various performance strategies available in Apache Iceberg.
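A standard data-file optimization in Apache Iceberg is compacting small files. Here is a minimal sketch using Iceberg’s Spark stored procedures; it assumes a SparkSession `spark` with the Iceberg SQL extensions enabled, and the catalog, table, 128 MB target size, and cutoff timestamp are placeholders.

```python
# Minimal sketch (assumptions: `spark` is a SparkSession with the Iceberg
# SQL extensions; catalog, table, the 128 MB target size, and the cutoff
# timestamp are placeholders).

# Compact small files toward a target size to cut per-file open/plan cost.
spark.sql("""
    CALL lakehouse.system.rewrite_data_files(
        table => 'db.events',
        options => map('target-file-size-bytes', '134217728')
    )
""").show()

# Expire old snapshots afterwards so storage held by rewritten files is
# eventually reclaimed.
spark.sql("""
    CALL lakehouse.system.expire_snapshots(
        table => 'db.events',
        older_than => TIMESTAMP '2024-06-01 00:00:00'
    )
""").show()
```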
Build your open data lakehouse on Iceberg with Fivetran and Dremio
Join Fivetran’s Coral Trivedi and Dremio’s Brett Roberts to learn how you can use Fivetran and Dremio to build and query your data lake with Apache Iceberg Tables.
Join Dremio’s Anushka Anand and Alex Merced as they discuss the emergence of data as code, its impact on data management, and how it can be used to deliver a consistent and accurate view of data in the data lake.
Getting Started with Hadoop Migration and Modernization
Data teams inheriting on-prem Hadoop face bottlenecks with the high cost of infrastructure and operational overhead. Join the latest episode of Gnarly Data Waves and learn how modernizing Hadoop to the data lakehouse with Dremio solves these challenges.
As enterprise data platforms look to operate at a more efficient level, they face the pressure to pivot their data management strategies. Join Dremio and Forrester, as we talk about the three-year Total Economic Impact™ of the data lakehouse and quantifiable benefits to productivity across all teams.
Best Practices for Optimizing Tableau Dashboards with Dremio
Join Nick Brisoux, Senior Director of Product Management at Tableau and Brett Roberts, Principal Alliances Solutions Architect at Dremio, to learn how Dremio helps Tableau users accelerate access to data, including cloud data lakes, and how Dremio can dramatically improve query performance, delivering analytics for every data consumer at interactive speed.
Iceberg has been gaining wide adoption in the industry as the de facto open standard for data lakehouse table formats. Join Dremio developer advocate Alex Merced to learn the options and strategies you can employ when migrating tables from Delta Lake to Apache Iceberg.
Migrating a BI Dashboard to your Data Lakehouse with Apache Superset and Dremio
Dashboards are the backbone of an organization’s decision-making process. Join Dremio Developer Advocate Dipankar Mazumdar to learn how to easily migrate a BI dashboard (Apache Superset) to your data lakehouse for faster insights.
Every organization is working to empower their business users with data and insights, but data is siloed, hard to discover, and slow to access. With Dremio, data teams can easily connect to all of their data sources, define and expose the data through a business-friendly user experience, and deliver sub-second queries with our query acceleration technologies.
Enable the business to create and consume data products powered by Apache Iceberg, accelerating AI and analytics initiatives and dramatically reducing costs.