Alex Merced

Senior Tech Evangelist, Dremio

Alex Merced is a developer advocate for Dremio, a developer, and a seasoned instructor who has worked with companies such as GenEd Systems, Crossfield Digital, CampusGuard, and General Assembly.

Alex is a co-author of the O’Reilly book “Apache Iceberg: The Definitive Guide.” With a deep understanding of the subject matter, he has shared his insights as a speaker at events including Data Day Texas, OSA Con, P99Conf, and Data Council.

Driven by a passion for technology, Alex shares his knowledge across a variety of platforms. His tech content can be found in blogs, videos, and his podcasts, Datanation and Web Dev 101.

Alex has also contributed to the JavaScript and Python communities by developing a range of libraries, including SencilloDB, CoquitoJS, and dremio-simple-query.

Alex Merced's Articles and Resources

Blog Post

Dremio’s Commitment to being the Ideal Platform for Apache Iceberg Data Lakehouses

The data lake and data warehousing space is facing major disruption spearheaded by innovative table formats like Apache Iceberg. Iceberg has now become the cornerstone of modern data architecture. In the Apache Iceberg ecosystem, Dremio has emerged as the frontrunner, championing the use of Apache Iceberg to redefine the potential of data lakes. Dremio has […]

Read more ->

Blog Post

From MongoDB to Dashboards with Dremio and Apache Iceberg

Moving data from source systems like MongoDB to a dashboard traditionally involves a multi-step process: transferring data to a data lake, moving it into a data warehouse, and then building BI extracts and cubes for acceleration. This process can be tedious and costly. However, this entire workflow is simplified with Dremio, the Data Lakehouse Platform. […]

Read more ->

Blog Post

From SQL Server to Dashboards with Dremio and Apache Iceberg

Moving data from source systems like SQL Server to a dashboard traditionally involves a multi-step process: transferring data to a data lake, moving it into a data warehouse, and then building BI extracts and cubes for acceleration. This process can be tedious and costly. However, this entire workflow is simplified with Dremio, the Data Lakehouse Platform. […]

Read more ->

Blog Post

BI Dashboards with Apache Iceberg Using AWS Glue and Apache Superset

Business Intelligence (BI) dashboards are invaluable tools that aggregate, visualize, and analyze data to provide actionable insights and support data-driven decision-making. Serving these dashboards directly from the data lake, especially with technologies like Apache Iceberg, offers immense benefits, including real-time data access, cost-efficiency, and the elimination of data silos. Dremio as a data lakehouse platform, […]

Read more ->

Blog Post

From Postgres to Dashboards with Dremio and Apache Iceberg

Moving data from source systems like Postgres to a dashboard traditionally involves a multi-step process: transferring data to a data lake, moving it into a data warehouse, and then building BI extracts and cubes for acceleration. This process can be tedious and costly. However, this entire workflow is simplified with Dremio, the Data Lakehouse Platform. […]

Read more ->

Blog Post

Run Graph Queries on Apache Iceberg Tables with Dremio & PuppyGraph

The allure of the data lakehouse architecture, particularly with the Apache Iceberg table format, lies in its ability to be utilized across various systems, eliminating the need for expensive data movement and migration planning. In this article, we will explore how Apache Iceberg tables are employed within Dremio—a data lakehouse platform that serves as a […]

Read more ->

Blog Post

Top Reasons to Attend the Subsurface Conference for Apache Iceberg Fans

If you’re a data engineer, data scientist, or data analyst, the Subsurface conference is an unmissable event, held May 2nd and 3rd, live online and in person in New York City. This premier gathering shines a spotlight on the innovative world of data lakehouses, offering a deep dive into the latest […]

Read more ->

Blog Post

BI Dashboards 101 with Dremio and Superset

Business Intelligence (BI) Dashboards are dynamic, data visualization tools used to display the current status of metrics and key performance indicators (KPIs) for an organization. Essentially, they provide a visual and interactive representation of data, enabling users to make informed decisions based on the latest information. BI dashboards pull data in from various sources, including […]

Read more ->

Blog Post

Data Lakehouse Versioning Comparison: Nessie, Apache Iceberg, LakeFS

DataOps, a collaborative data management practice focused on improving the communication, integration, and automation of data flows between data managers and data consumers across an organization, has emerged as a focal point for data-driven cultures. At the core of effective DataOps is versioning — creating, managing, and tracking different versions of data sets. Versioning is […]

Read more ->

Gnarly Data Waves Episode

Getting Started with Dremio: Build a Data Lakehouse on your Laptop

Want to experience data lakehouse architecture? Join us and build a data lakehouse on your laptop in this exciting workshop.

Read more ->

Gnarly Data Waves Episode

Learn How to Reduce Your Snowflake Cost by 50%+ with a Lakehouse

Transcript Note: This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Opening Alex Merced: Hey, everybody! This is Alex Merced, and welcome to another episode of Gnarly Data Waves presented by Dremio. In this episode, we’re going to have an exciting topic about learning […]

Join Alex Merced, Developer Advocate at Dremio, to explore the future of data management and discover how Dremio can revolutionize your analytics TCO, enabling you to do more with less.

Read more ->

Blog Post

Git for Data with Dremio’s Lakehouse Catalog: Easily Ensure Data Quality in Your Data Lakehouse

When it comes to data, there are several challenges that may impact the quality of data you provide consumers, which can result in complex and fragile pipelines and sometimes make the visibility of issues worse. Luckily, the data lakehouse comes to the rescue. The combination of Dremio and Apache Iceberg allows you to simplify many […]

Read more ->

Blog Post

What Is Lakehouse Management? Git-for-Data, Automated Apache Iceberg Table Maintenance, and More

The concept of a “data lakehouse” has emerged as a beacon of efficiency and flexibility, promising to deliver the best of both data lakes and data warehouses. However, as organizations rush to adopt this promising architecture, they often encounter a complex landscape of data management challenges. Enter the realm of lakehouse management, a market for […]

Read more ->

Blog Post

What is DataOps? Automating Data Management on the Apache Iceberg Lakehouse

The ability to manage and manipulate vast amounts of data efficiently is not just an advantage; it’s a necessity. As organizations strive to become more agile and data-centric, a new discipline has emerged at the intersection of data management and operations: DataOps. This article delves into the essence of DataOps, its goals, and why […]

Read more ->

Blog Post

What is Nessie, Catalog Versioning and Git-for-Data?

Data is not just an asset, but the backbone of innovation and strategic decision-making; managing this data efficiently becomes paramount. Traditional data systems have struggled to keep pace with the explosion of data, evolving data formats, and the accelerating shift towards data lakes and cloud-based storage solutions. Enter Project Nessie, a new approach to lakehouse […]

Read more ->

Gnarly Data Waves Episode

Getting Started with Dremio

Dremio’s unified lakehouse platform for self-service analytics enables data consumers to move fast while also reducing manual repetitive tasks and ticket overload for data engineers.

Read more ->

Blog Post

Ingesting Data Into Apache Iceberg Tables with Dremio: A Unified Path to Iceberg

Apache Iceberg and Dremio have emerged as significant players in the data lakehouse space, offering robust data management and analytics solutions. Apache Iceberg, an open source table format, provides a high-performance platform for large-scale data analytics, enabling better data management through hidden partitioning, schema evolution, and efficient data file management. Likewise, Dremio, a cutting-edge data […]

Read more ->

Blog Post

Trends in Data Decentralization: Mesh, Lakehouse, and Virtualization

The scale, speed, and variety of data are growing exponentially, presenting new challenges for traditional data architectures. Conventional systems, relying on extensive data pipelines from source systems to data lakes and warehouses, are increasingly seen as too slow, rigid, and costly. In response, a transformative approach is emerging: data decentralization. This blog post delves into […]

Read more ->

Blog Post

What Is a Data Lakehouse Platform?

The concept of a data lakehouse is gaining significant traction. This innovative approach represents a paradigm shift from the traditional data warehouses many businesses have relied upon for years. At its core, a data lakehouse is a hybrid that combines the flexibility and scalability of a data lake with the structured organization and management features […]

Read more ->

Blog Post

How Dremio Delivers Fast Queries on Object Storage: Apache Arrow, Reflections, and the Columnar Cloud Cache

Dremio is a pioneering data lakehouse platform, renowned for its high-speed query engine. What sets Dremio apart is its ability to execute queries directly on data lake storage, eliminating the need to transfer data to other systems. This capability is powered by cutting-edge technologies like Apache Arrow, reflections, and the Columnar Cloud Cache (C3). Dremio’s […]

Read more ->

Blog Post

Open Source and the Data Lakehouse: Apache Arrow, Apache Iceberg, Nessie and Dremio

The “open lakehouse” concept is gaining prominence as the apex of the evolution of data lakehouse architecture. This approach leverages open source components to create a robust data management ecosystem in terms of tool interoperability, performance, and resilience by design. This article aims to delve into the critical open source components that form the backbone […]

Read more ->

Blog Post

Why Lakehouse, Why Now? What Is a Data Lakehouse, and How to Get Started

The story of the data lakehouse is a tale of evolution, responding to the growing demands for more adept data processing. In this article, we delve into this journey and explore how each phase in data management’s evolution contributed to the data lakehouse’s rise. This solution promises to harmonize the strengths of its predecessors while […]

Read more ->

Blog Post

ZeroETL: Where Virtualization and Lakehouse Patterns Unite

Organizations continually strive to harness the full potential of their data. The traditional approach involves moving data from various sources into a data lake and then into a data warehouse. This process is facilitated by layers of extract, transform, load (ETL) pipelines. While ETL has been a cornerstone of data management strategies, it presents several […]

Read more ->

Gnarly Data Waves Episode

Next-Gen Data Pipelines are Virtual: Simplify Data Pipelines with dbt, Dremio, and Iceberg

Join our upcoming Gnarly Data Waves webinar, 'Next-Gen Data Pipelines are Virtual: Simplify Data Pipelines with dbt, Dremio, and Iceberg', to learn how to streamline, simplify, and fortify your data pipelines with Dremio's next-gen DataOps, saving time and reducing costs…

Read more ->

Blog Post

Why Use Dremio to Implement a Data Mesh?

Organizations continuously seek architectures that can effectively handle modern data ecosystems’ complexities. Enter the concept of a data mesh, an architectural paradigm that alters the division of labor in how data is handled, processed, and delivered. This article explores why you should implement a data mesh with Dremio, a cutting-edge data lakehouse platform. Data mesh […]

Read more ->

Blog Post

Using dbt to Manage Your Dremio Semantic Layer

Accessing, analyzing, and managing vast amounts of information efficiently is crucial for thriving businesses. Dremio, as the premier data lakehouse platform, enables efficient access to data across multiple sources, teams, and users. One of its core strengths lies in providing robust, unified, self-service access to data through a single, cohesive platform. Dremio’s semantic layer is […]

Read more ->

Blog Post

The Who, What, and Why of Data Products

The term “data products” has become increasingly prevalent today, especially concerning the growing trend of data mesh. Data products are often associated with cutting-edge, data-driven strategies that break up the massive, centralized curation of data, shifting from treating data as a byproduct of operations to curating it as a series of individual […]

Read more ->

Blog Post

Overcoming Data Silos: How Dremio Unifies Disparate Data Sources for Seamless Analytics

Effective management and utilization of data are crucial for the success of any business. However, one significant hurdle that many organizations face in their quest to become data-driven is the prevalence of data silos. Data silos occur when information is isolated in separate departments or systems within an organization, making it inaccessible or invisible to […]

Read more ->

Blog Post

Connecting to Dremio Using Apache Arrow Flight in Python

The quest for efficient and powerful data management and retrieval solutions is perpetual. Dremio and Apache Arrow Flight, when combined, simplify and speed up the way we interact with large datasets. This blog delves into the synergy of these technologies, particularly through the lens of Python, a language synonymous with data. Dremio is a data […]

Read more ->

Blog Post

Using Dremio to Reduce Your Snowflake Data Warehouse Costs

DOWNLOAD WHITEPAPER: Reduce Analytics TCO by 50% with a Data Lakehouse

Snowflake has emerged as a powerful and flexible solution for organizations seeking to manage and analyze their data efficiently. However, as the volume and complexity of data grow, so do the associated costs. While flexible, Snowflake’s pay-as-you-go pricing model can sometimes lead to unforeseen […]

Read more ->