Alex Merced

Senior Tech Evangelist, Dremio

Alex Merced is a Senior Tech Evangelist for Dremio, a developer, and a seasoned instructor with a rich professional background, having worked with companies like GenEd Systems, Crossfield Digital, CampusGuard, and General Assembly.

Alex is a co-author of the O’Reilly book “Apache Iceberg: The Definitive Guide.” With a deep understanding of the subject matter, Alex has shared his insights as a speaker at events including Data Day Texas, OSA Con, P99Conf, and Data Council.

Driven by a profound passion for technology, Alex has been instrumental in disseminating his knowledge through various platforms. His tech content can be found in blogs, videos, and his podcasts, Datanation and Web Dev 101.

Moreover, Alex Merced has made contributions to the JavaScript and Python communities by developing a range of libraries, including SencilloDB, CoquitoJS, and dremio-simple-query.

Alex Merced's Articles and Resources

Blog Post

Experience the Dremio Lakehouse: Hands-on with Dremio, Nessie, Iceberg, Data-as-Code and dbt

Welcome to the cutting-edge world of the Dremio Lakehouse, where the convergence of data lakes and data warehouses forms a powerful platform for data management and analytics. In this blog, we’ll dive into how Dremio, in collaboration with Nessie, Apache Iceberg, and tools like dbt, revolutionizes data handling by providing a cohesive environment that supports […]

Read more ->

Blog Post

What’s New in Dremio, Delivering Market Leading Performance for Apache Iceberg Data Lakehouses

Version 25 of Dremio’s Intelligent SQL Query Engine heralds a new era in lakehouse analytics, promising market-leading performance coupled with the lowest total cost of ownership (TCO) and exceptional price performance. Unmatched Query Performance for Business Insights Dremio’s bold claim isn’t just talk; tangible technological advancements back it. The platform’s intelligent query engine is engineered […]

Read more ->

Blog Post

What’s New in Dremio, Improved Administration and Monitoring with Integrated Observability

Dremio’s version 25 sets a new standard in the ease of administration and monitoring for analytical platforms. This release underscores Dremio’s commitment to providing a seamless and comprehensive monitoring experience, positioning it as one of the market’s most user-friendly lakehouse analytics platforms. Enhanced Observability for Efficient Management Dremio’s Unified Lakehouse Platform introduces an intuitive, self-service […]

Read more ->

Blog Post

What’s New in Dremio, Setting New Standards in Query Stability and Durability

The Version 25 release underscores Dremio’s commitment to providing an SQL query engine that excels in performance and sets new benchmarks in stability and durability. A New Standard in Query Performance Dremio’s bold assertion that it offers market-leading stability and durability isn’t just a claim—it’s a commitment backed by substantial advancements in technology. The platform […]

Read more ->

Blog Post

What’s New in Dremio, Improved Data Ingestion and Migration into Apache Iceberg

Dremio’s version 25 marks a significant milestone in data lakehouse management, particularly with its native support for Apache Iceberg, an open table format gaining momentum in the data community. This release cements Dremio’s position as the foremost analytics engine tailored for Apache Iceberg, delivering unparalleled ease-of-management and performance. A Unified Apache Iceberg Experience Dremio’s latest […]

Read more ->

Blog Post

Streaming and Batch Data Lakehouses with Apache Iceberg, Dremio and Upsolver

The quest for a unified platform that seamlessly integrates streaming and batch data processing has led to the emergence of robust solutions like Apache Iceberg, Dremio, and Upsolver. These technologies are at the forefront of a new wave of data architecture, enabling businesses to build robust data lakehouses that cater to diverse analytical and operational […]

Read more ->

Blog Post

Dremio’s Commitment to being the Ideal Platform for Apache Iceberg Data Lakehouses

The data lake and data warehousing space is facing major disruption spearheaded by innovative table formats like Apache Iceberg. Iceberg has now become the cornerstone of modern data architecture. In the Apache Iceberg ecosystem, Dremio has emerged as the frontrunner, championing the use of Apache Iceberg to redefine the potential of data lakes. Dremio has […]

Read more ->

Blog Post

From MongoDB to Dashboards with Dremio and Apache Iceberg

Moving data from source systems like MongoDB to a dashboard traditionally involves a multi-step process: transferring data to a data lake, moving it into a data warehouse, and then building BI extracts and cubes for acceleration. This process can be tedious and costly. However, this entire workflow is simplified with Dremio, the Data Lakehouse Platform. […]

Read more ->

Blog Post

From SQLServer to Dashboards with Dremio and Apache Iceberg

Moving data from source systems like SQLServer to a dashboard traditionally involves a multi-step process: transferring data to a data lake, moving it into a data warehouse, and then building BI extracts and cubes for acceleration. This process can be tedious and costly. However, this entire workflow is simplified with Dremio, the Data Lakehouse Platform. […]

Read more ->

Blog Post

BI Dashboards with Apache Iceberg Using AWS Glue and Apache Superset

Business Intelligence (BI) dashboards are invaluable tools that aggregate, visualize, and analyze data to provide actionable insights and support data-driven decision-making. Serving these dashboards directly from the data lake, especially with technologies like Apache Iceberg, offers immense benefits, including real-time data access, cost-efficiency, and the elimination of data silos. Dremio as a data lakehouse platform, […]

Read more ->

Blog Post

From Postgres to Dashboards with Dremio and Apache Iceberg

Moving data from source systems like Postgres to a dashboard traditionally involves a multi-step process: transferring data to a data lake, moving it into a data warehouse, and then building BI extracts and cubes for acceleration. This process can be tedious and costly. However, this entire workflow is simplified with Dremio, the Data Lakehouse Platform. […]

Read more ->

Blog Post

Run Graph Queries on Apache Iceberg Tables with Dremio & Puppygraph

The allure of the data lakehouse architecture, particularly with the Apache Iceberg table format, lies in its ability to be utilized across various systems, eliminating the need for expensive data movement and migration planning. In this article, we will explore how Apache Iceberg tables are employed within Dremio—a data lakehouse platform that serves as a […]

Read more ->

Blog Post

Top Reasons to Attend the Subsurface Conference for Apache Iceberg Fans

If you’re a data engineer, data scientist, or data analyst, the Subsurface conference is an unmissable event, held on May 2nd and 3rd, live online and in person in New York City. This premier gathering shines a spotlight on the innovative world of data lakehouses, offering a deep dive into the latest […]

Read more ->

Blog Post

BI Dashboards 101 with Dremio and Superset

Business Intelligence (BI) Dashboards are dynamic, data visualization tools used to display the current status of metrics and key performance indicators (KPIs) for an organization. Essentially, they provide a visual and interactive representation of data, enabling users to make informed decisions based on the latest information. BI dashboards pull data in from various sources, including […]

Read more ->

Blog Post

Data Lakehouse Versioning Comparison: (Nessie, Apache Iceberg, LakeFS)

DataOps, a collaborative data management practice focused on improving the communication, integration, and automation of data flows between data managers and data consumers across an organization, has emerged as a focal point for data-driven cultures. At the core of effective DataOps is versioning — creating, managing, and tracking different versions of data sets. Versioning is […]

Read more ->

Gnarly Data Waves Episode

Getting Started with Dremio: Build a Data Lakehouse on your Laptop

Want to experience Data Lakehouse architecture? Join us and build a data lakehouse on your laptop in this exciting workshop.

Read more ->

Gnarly Data Waves Episode

Learn how to reduce your Snowflake cost by 50%+ with a lakehouse

Transcript Note: This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Opening Alex Merced: Hey, everybody! This is Alex Merced, and welcome to another episode of Gnarly Data Waves presented by Dremio. In this episode, we’re going to have an exciting topic about learning […]

Join Alex Merced, Developer Advocate at Dremio, to explore the future of data management and discover how Dremio can revolutionize your analytics TCO, enabling you to do more with less.

Read more ->

Blog Post

Git for Data with Dremio’s Lakehouse Catalog: Easily Ensure Data Quality in Your Data Lakehouse

When it comes to data, there are several challenges that may impact the quality of the data you provide consumers, resulting in complex and fragile pipelines and reduced visibility into issues. Luckily, the data lakehouse comes to the rescue. The combination of Dremio and Apache Iceberg allows you to simplify many […]

Read more ->

Blog Post

What is Lakehouse Management?: Git-for-Data, Automated Apache Iceberg Table Maintenance and more

The concept of a “data lakehouse” has emerged as a beacon of efficiency and flexibility, promising to deliver the best of both data lakes and data warehouses. However, as organizations rush to adopt this promising architecture, they often encounter a complex landscape of data management challenges. Enter the realm of lakehouse management, a market for […]

Read more ->

Blog Post

What is DataOps? Automating Data Management on the Apache Iceberg Lakehouse

The ability to manage and manipulate vast amounts of data efficiently is not just an advantage; it’s a necessity. As organizations strive to become more agile and data-centric, a new discipline has emerged at the intersection of data management and operations: DataOps. This article delves into the essence of DataOps, its goals, and why […]

Read more ->

Blog Post

What is Nessie, Catalog Versioning and Git-for-Data?

Data is not just an asset, but the backbone of innovation and strategic decision-making; managing this data efficiently becomes paramount. Traditional data systems have struggled to keep pace with the explosion of data, evolving data formats, and the accelerating shift towards data lakes and cloud-based storage solutions. Enter Project Nessie, a new approach to lakehouse […]

Read more ->

Gnarly Data Waves Episode

Getting Started with Dremio

Dremio’s unified lakehouse platform for self-service analytics enables data consumers to move fast while also reducing manual repetitive tasks and ticket overload for data engineers.

Read more ->

Blog Post

Ingesting Data Into Apache Iceberg Tables with Dremio: A Unified Path to Iceberg

Apache Iceberg and Dremio have emerged as significant players in the data lakehouse space, offering robust data management and analytics solutions. Apache Iceberg, an open source table format, provides a high-performance platform for large-scale data analytics, enabling better data management through hidden partitioning, schema evolution, and efficient data file management. Likewise, Dremio, a cutting-edge data […]

Read more ->

Blog Post

Trends in Data Decentralization: Mesh, Lakehouse, and Virtualization

The scale, speed, and variety of data are growing exponentially, presenting new challenges for traditional data architectures. Conventional systems, relying on extensive data pipelines from source systems to data lakes and warehouses, are increasingly seen as too slow, rigid, and costly. In response, a transformative approach is emerging: data decentralization. This blog post delves into […]

Read more ->

Blog Post

What Is a Data Lakehouse Platform?

The concept of a data lakehouse is gaining significant traction. This innovative approach represents a paradigm shift from the traditional data warehouses many businesses have relied upon for years. At its core, a data lakehouse is a hybrid that combines the flexibility and scalability of a data lake with the structured organization and management features […]

Read more ->

Blog Post

How Dremio delivers fast Queries on Object Storage: Apache Arrow, Reflections, and the Columnar Cloud Cache

Dremio is a pioneering data lakehouse platform, renowned for its high-speed query engine. What sets Dremio apart is its ability to execute queries directly on data lake storage, eliminating the need to transfer data to other systems. This capability is powered by cutting-edge technologies like Apache Arrow, reflections, and the Columnar Cloud Cache (C3). Dremio’s […]

Read more ->

Blog Post

Open Source and the Data Lakehouse: Apache Arrow, Apache Iceberg, Nessie and Dremio

The “open lakehouse” concept is gaining prominence as the apex of the evolution of data lakehouse architecture. This approach leverages open source components to create a robust data management ecosystem in terms of tool interoperability, performance, and resilience by design. This article aims to delve into the critical open source components that form the backbone […]

Read more ->

Blog Post

Why Lakehouse, Why Now?: What is a data lakehouse, and How to Get Started

The story of the data lakehouse is a tale of evolution, responding to the growing demands for more adept data processing. In this article, we delve into this journey and explore how each phase in data management’s evolution contributed to the data lakehouse’s rise. This solution promises to harmonize the strengths of its predecessors while […]

Read more ->

Blog Post

ZeroETL: Where Virtualization and Lakehouse Patterns Unite

Organizations continually strive to harness the full potential of their data. The traditional approach involves moving data from various sources into a data lake and then into a data warehouse. This process is facilitated by layers of extract, transform, load (ETL) pipelines. While ETL has been a cornerstone of data management strategies, it presents several […]

Read more ->

Gnarly Data Waves Episode

Next-Gen Data Pipelines are Virtual: Simplify Data Pipelines with dbt, Dremio, and Iceberg

Join our upcoming Gnarly Data Waves Webinar, 'Next-Gen Data Pipelines are Virtual: Simplify Data Pipelines with dbt, Dremio, and Iceberg' to learn how to streamline, simplify, and fortify your data pipelines with Dremio's next-gen DataOps, saving time and reducing costs.…

Read more ->