Apache Hive

What is Apache Hive?

Apache Hive is a data warehouse technology that facilitates querying and managing large datasets stored in distributed storage systems like Hadoop. Hive is built on top of Hadoop and provides a SQL-like language called HiveQL (HQL) to query data stored in the Hadoop Distributed File System (HDFS) or other data sources like Apache HBase.

How does Apache Hive work?

Hive translates queries written in HiveQL into MapReduce or Apache Tez jobs that can be executed on a Hadoop cluster. By doing so, Hive enables data analysts and scientists to perform complex analysis and data processing tasks on large datasets with familiar SQL-based tools and techniques. Hive also uses a schema-on-read approach, meaning data schemas are applied when data is queried rather than when it is ingested, offering more flexibility and agility than traditional schema-on-write data warehousing techniques.
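As a sketch of what schema-on-read looks like in practice, the HiveQL below defines an external table over raw files already sitting in HDFS; the table name, columns, and path are hypothetical, not from the original text:

```sql
-- Define a schema over raw tab-delimited files in HDFS;
-- no data is moved or validated at CREATE time (schema-on-read).
CREATE EXTERNAL TABLE web_logs (
  ip     STRING,
  ts     TIMESTAMP,
  url    STRING,
  status INT
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
LOCATION '/data/raw/web_logs';

-- The schema is applied only when the query runs; Hive compiles
-- this statement into MapReduce or Tez jobs behind the scenes.
SELECT status, COUNT(*) AS hits
FROM web_logs
GROUP BY status;
```

Because the table is `EXTERNAL`, dropping it removes only the metastore entry, not the underlying files.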

Why use Apache Hive?

Apache Hive offers the following benefits:

  • Ability to process and analyze large datasets on a Hadoop cluster
  • SQL-like interface that is familiar to data analysts and scientists
  • Schema-on-read approach that offers more flexibility and agility than traditional schema-on-write data warehousing techniques
  • Integration with other Hadoop ecosystem technologies like Pig, HBase, and Spark

Hive architecture

Hive consists of the following components:

  • Metastore: A database that stores metadata about tables, columns, partitions, and other entities in the Hive system
  • HiveQL: A query language that allows users to express queries in SQL-like syntax
  • Driver: Coordinates activities among the metastore and Hadoop Distributed File System (HDFS); accepts queries and executes them by converting them into a series of MapReduce or Tez jobs
  • Execution engine: Executes the series of MapReduce or Tez jobs generated by the driver
  • Storage handler: Allows users to write custom code for interfacing with external storage systems
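To illustrate the metastore's role, the HiveQL below creates a partitioned table and then asks the metastore (not the data files) about it; the table and column names are illustrative:

```sql
-- Partition layout is recorded in the metastore, not inferred
-- from the files themselves.
CREATE TABLE sales (
  order_id BIGINT,
  amount   DECIMAL(10,2)
)
PARTITIONED BY (sale_date STRING);

-- Metadata commands like these are answered from the metastore
-- without scanning any data.
SHOW PARTITIONS sales;
DESCRIBE FORMATTED sales;
```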


Hive limitations

While Hive offers several advantages, it also has some limitations:

  • Query latency can be high because HiveQL queries have to be translated into MapReduce or Tez jobs, which can take time to execute
  • Table-level locks can lead to contention in heavily concurrent environments
  • Hive's schema-on-read approach can lead to data inconsistency if schemas are not properly defined


Conclusion

Apache Hive is a powerful tool for data warehousing and analysis on a Hadoop cluster. Its SQL-like interface and schema-on-read approach make it easy for data analysts and scientists to query and process large datasets using familiar tools and techniques. While Hive has some limitations, its benefits far outweigh its drawbacks for many businesses and organizations.

Why Dremio users should know about Apache Hive

Dremio enables users to run federated queries across multiple data sources, including Apache Hive, HDFS, AWS S3, and many others. By knowing how to query data with Apache Hive, Dremio users can bring Hive-managed datasets into their federated queries, extending what they can accomplish with Dremio.
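A federated query in Dremio is expressed in ordinary SQL, with each connected source appearing as a top-level namespace. The sketch below joins a Hive table with a file in an S3 source; the source, schema, table, and column names are all hypothetical:

```sql
-- Hypothetical federated join across a Hive source and an S3 source.
SELECT c.customer_name,
       SUM(o.amount) AS total_spend
FROM   hive.sales.orders AS o              -- table managed by the Hive metastore
JOIN   s3.exports."customers.parquet" AS c -- Parquet file in an S3 source
  ON   o.customer_id = c.customer_id
GROUP  BY c.customer_name;
```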
