36 minute read · April 29, 2025

What is Apache Polaris


Alex Merced · Head of DevRel, Dremio

Download a Free Copy of "Apache Polaris: The Definitive Guide" from O'Reilly

Modern data architectures demand open standards, multi-engine interoperability, and a robust foundation for governance. As data lakehouses gain popularity, the need for a centralized, cloud-native, and open metadata catalog has become increasingly urgent. Apache Polaris (Incubating) addresses this challenge directly by providing an open, Iceberg REST-based catalog for managing Apache Iceberg tables across diverse engines and clouds.

Built specifically for the open data ecosystem, Polaris is a next-generation catalog that enables organizations to securely organize, govern, and access Iceberg tables at scale. Unlike legacy solutions, Polaris is designed from the ground up to support an open table format and work seamlessly with engines like Spark, Flink, Dremio, and Snowflake. It separates metadata from storage and provides strong access controls, offering the flexibility and security that modern data teams require.

In this guide, we will take a deep dive into what Apache Polaris is, why it matters, its architecture, and key capabilities, as well as how to get started leveraging Polaris to unify your metadata and governance strategy for the data lakehouse.

What Is Apache Polaris?

Overview and Purpose

Apache Polaris is an open-source catalog service purpose-built for managing Apache Iceberg tables in a distributed, multi-engine environment. At its core, Polaris implements the Iceberg REST Catalog API, providing a standardized, cloud-native method for connecting query engines with Iceberg metadata, without requiring tight coupling to storage systems.

The primary purpose of Polaris is to centralize metadata management, governance, and access control for Iceberg tables. Traditionally, each compute engine managed its own metadata or relied on legacy systems, such as Hive Metastore. This fragmentation created challenges in consistency, security, and scalability. Polaris addresses these issues by offering a unified, open catalog layer that engines can interact with consistently, regardless of the underlying cloud provider or storage format.

Polaris not only manages metadata but also enforces governance policies through an integrated role-based access control (RBAC) system. It supports credential vending, issuing temporary, fine-grained storage credentials to query engines during query execution. This significantly enhances security while reducing the overhead of manual credential management.

In practical terms, Polaris enables organizations to easily create and manage multiple catalogs, logically organize tables into namespaces, and apply governance policies centrally, all while maintaining full compatibility with the growing Iceberg ecosystem.

The Role of the Polaris Catalog in the Open Data Lakehouse

The emergence of the open data lakehouse model demands that metadata catalogs be open, interoperable, and decoupled from specific compute or storage providers. Polaris was built precisely for this vision.

In a lakehouse architecture, metadata acts as the bridge between raw object storage and sophisticated analytics engines. Without a robust catalog, operations like table evolution, time travel, schema enforcement, and transactional consistency become difficult to maintain. Polaris fulfills this need by serving as a metadata operating system for Iceberg tables.

Key roles Polaris plays in the open data lakehouse include:

  • Centralized Metadata Source: Polaris acts as the single source of truth for Iceberg table metadata, ensuring that all engines see the same consistent view of data.
  • Engine Interoperability: By adhering to the Iceberg REST Catalog specification, Polaris supports Spark, Flink, Dremio, Snowflake, and any other compatible engine without vendor lock-in.
  • Multi-Cloud Readiness: Polaris works across Amazon S3, Azure Blob Storage, and Google Cloud Storage, allowing organizations to span multiple clouds with a consistent metadata layer.
  • Governance Enforcement: Through Polaris' built-in RBAC model, administrators can enforce granular access control policies over catalogs, namespaces, tables, and views.
  • Open Ecosystem Compatibility: Polaris is designed to complement and enhance the open table format movement, making it easy for organizations to adopt open standards without sacrificing manageability or security.

In an era where data architectures are becoming increasingly decentralized and multi-modal, Apache Polaris offers a critical foundation for building resilient, open, and scalable lakehouses.

Why the Polaris Catalog Matters

Understanding why Polaris matters requires first examining the pain points with legacy catalogs and the growing need for an open, interoperable metadata solution.

Challenges with Legacy Catalogs (Hive Metastore, AWS Glue)

Many data engineering teams today are still reliant on catalogs like the Hive Metastore or cloud-native offerings like AWS Glue. While these systems served important roles during the early phases of big data adoption, they present significant challenges for modern lakehouse architectures:

  • Tight Coupling to Specific Compute Engines: Hive Metastore was designed with Hadoop and Hive in mind. Integrating it with newer engines like Spark, Flink, or Trino often requires complex workarounds that introduce operational overhead and risk.
  • Limited Transactional Guarantees: Legacy catalogs typically do not support atomic table operations, which leads to consistency issues when multiple engines simultaneously read from or write to the same table.
  • Vendor Lock-In: Proprietary cloud services like AWS Glue often limit portability, making it harder to migrate workloads across cloud and on-prem environments or adopt open standards fully.
  • Governance Complexity: Implementing consistent access control and audit policies across different catalogs and environments becomes difficult, forcing teams to stitch together siloed governance solutions.
  • Scale Limitations: As data volumes grow, legacy catalogs can become performance bottlenecks, impacting query latency, metadata refresh cycles, and overall system reliability.

These limitations not only increase operational complexity but also hinder an organization’s ability to innovate with new analytics, machine learning, and AI initiatives.

The Need for Open, Multi-Engine Metadata Coordination

The future of analytics is increasingly open, distributed, and multi-engine. Organizations no longer want to be locked into a single vendor's ecosystem, nor do they want to build and maintain bespoke integrations across different query engines. Instead, they seek a metadata layer that:

  • Works uniformly across engines: Whether an organization is running batch ETL jobs with Apache Spark, streaming pipelines with Apache Flink, ad-hoc analytics with Trino, or machine learning workloads in Snowflake, the metadata catalog must support all these engines seamlessly.
  • Supports cloud flexibility: As hybrid and multi-cloud architectures become more common, teams need a metadata system that can operate across AWS, Azure, and Google Cloud without imposing constraints.
  • Embraces open standards: Adopting open table formats like Apache Iceberg is not enough if the metadata layer itself is proprietary or closed. Truly open interoperability demands a metadata service that is standards-compliant and transparent.
  • Simplifies governance at scale: Managing who can see, modify, or query data should not require per-engine configuration. A centralized, policy-driven governance model is critical for security, compliance, and ease of operations.

Apache Polaris was designed to fulfill these needs. By building on the Iceberg REST Catalog API and introducing a robust, cloud-native architecture with integrated governance, Polaris enables organizations to realize the true potential of the open data lakehouse.

It provides a single point of control for managing all Iceberg tables, allowing organizations to focus more on delivering value from their data and less on stitching together brittle infrastructure components.

Key Capabilities of Polaris

Apache Polaris is more than just a metadata catalog. It is a purpose-built platform for modern metadata management that emphasizes interoperability, governance, scalability, and simplicity. Polaris offers a range of capabilities designed to address the core technical and operational challenges that organizations face when managing data across a diverse set of engines and storage platforms.

This section explores the key features that make Polaris a foundational layer for the open data lakehouse.

Centralized Metadata and Namespace Management

At its core, Polaris provides a centralized source of truth for all metadata associated with Apache Iceberg tables. Instead of maintaining scattered catalogs across different systems or engines, Polaris enables organizations to manage all of their tables, namespaces, and views within a single, coherent framework.

Key functions include:

  • Catalogs: Logical groupings of Iceberg tables, which can be internally managed or externally synced from other sources like Snowflake.
  • Namespaces: Organizational structures within catalogs that allow for nested, hierarchical grouping of tables, similar to schemas or databases.
  • Tables and Views: Full support for Iceberg tables and Iceberg views, with transactional metadata updates and schema evolution capabilities.

By consolidating metadata management, Polaris reduces operational complexity and ensures that all connected engines operate against a consistent view of the data environment.
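The catalog → namespace → table hierarchy described above can be sketched as a simple tree. This is an illustrative model only, not Polaris's actual implementation; the `Catalog` and `Namespace` classes here are hypothetical:

```python
# Illustrative sketch of the Polaris object hierarchy (not the real implementation):
# a catalog contains nested namespaces, which in turn contain tables.

class Namespace:
    def __init__(self, name):
        self.name = name
        self.children = {}   # nested namespaces by name
        self.tables = {}     # table name -> metadata location

    def child(self, name):
        # Create-or-get a nested namespace, mirroring hierarchical grouping.
        return self.children.setdefault(name, Namespace(name))

class Catalog:
    def __init__(self, name):
        self.name = name
        self.root = Namespace("")

    def create_namespace(self, path):
        # A path like ("sales", "emea") nests namespaces, similar to schemas/databases.
        ns = self.root
        for part in path:
            ns = ns.child(part)
        return ns

    def register_table(self, path, table, metadata_location):
        # Polaris tracks a pointer to each table's current metadata file.
        self.create_namespace(path).tables[table] = metadata_location

cat = Catalog("quickstart_catalog")
cat.register_table(("sales", "emea"), "orders",
                   "s3://bucket/orders/metadata/v3.metadata.json")
print(cat.root.children["sales"].children["emea"].tables["orders"])
```

Because every engine resolves tables through the same tree, they all see an identical organizational structure.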

Built-in Data Governance and Access Controls

Governance is a first-class citizen in Polaris, not an afterthought. Polaris implements a role-based access control (RBAC) system that allows fine-grained management of permissions across catalogs, namespaces, tables, and views.

The governance model in Polaris is structured around three core constructs:

  • Principal Roles: Logical groupings of service principals (users or services) based on their responsibilities.
  • Catalog Roles: Resource-specific roles that define privileges on specific catalogs and their contents.
  • Privileges: Specific actions like creating tables, reading data, or altering schemas that can be granted to catalog roles.

Credential vending ensures that query engines receive short-lived, temporary credentials during query execution, minimizing the risk of long-lived storage credentials being compromised.

With this model, Polaris enables secure, compliant, and auditable data access across a distributed environment, even as organizations scale their data operations across multiple teams and projects.
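The three-construct grant chain above (privileges → catalog roles → principal roles → principals) can be modeled in a few lines. The data structures and role names here are invented for illustration; real Polaris enforces this chain server-side:

```python
# Hedged sketch of the Polaris RBAC chain:
# privileges are granted to catalog roles, catalog roles are assigned to
# principal roles, and principals belong to principal roles.

grants = {
    # catalog role -> set of privileges
    "data_reader": {"TABLE_READ_DATA", "TABLE_LIST"},
    "data_writer": {"TABLE_READ_DATA", "TABLE_WRITE_DATA", "TABLE_CREATE"},
}
catalog_role_assignments = {
    # principal role -> catalog roles it holds
    "analysts": {"data_reader"},
    "etl_pipelines": {"data_writer"},
}
principal_roles = {
    # service principal -> principal roles
    "spark_job": {"etl_pipelines"},
    "dashboard": {"analysts"},
}

def allowed(principal, privilege):
    # Walk the grant chain to decide whether a principal holds a privilege.
    for prole in principal_roles.get(principal, ()):
        for crole in catalog_role_assignments.get(prole, ()):
            if privilege in grants.get(crole, ()):
                return True
    return False

print(allowed("dashboard", "TABLE_READ_DATA"))   # True
print(allowed("dashboard", "TABLE_WRITE_DATA"))  # False
```

The indirection through roles is what lets administrators change a team's access by editing one assignment rather than touching every principal.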

Multi-Engine Interoperability

One of the defining features of Polaris is its ability to support multiple query engines natively and without friction. Polaris is fully compatible with any engine that supports the Apache Iceberg REST Catalog API, including:

  • Apache Spark
  • Apache Flink
  • Trino
  • Snowflake
  • Dremio

This interoperability allows different teams within an organization to work with their preferred engines while sharing the same underlying datasets. A table created by a streaming job in Flink can be immediately queried by a machine learning job in Spark or a business intelligence dashboard powered by Trino.

Polaris effectively decouples compute from metadata management, enabling true flexibility and choice in the data lakehouse stack.

Polaris Catalog for Apache Iceberg: Native Support for Open Table Formats

Unlike traditional or proprietary catalogs, Polaris was built specifically to embrace open table formats, with deep, native support for Apache Iceberg.

Key capabilities include:

  • Schema Evolution: Polaris tracks schema changes, allowing tables to evolve without breaking downstream applications.
  • Partition Evolution: Supports changes to partitioning strategies over time, without requiring disruptive data rewrites.
  • Time Travel and Snapshot Isolation: Enables users to query historical versions of a table or rollback to previous states.
  • Atomic Table Operations: Ensures that operations like inserts, deletes, and updates are transactionally consistent across engines.

By tightly aligning with the Iceberg table specification, Polaris empowers organizations to confidently build lakehouse architectures that are open, flexible, and future-proof.
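The time-travel capability listed above rests on immutable snapshots: each commit appends a snapshot, and a historical query resolves to whichever snapshot was current at the requested time. A minimal sketch, heavily simplified relative to what Iceberg actually stores:

```python
# Illustrative sketch of snapshot-based time travel: commits append immutable
# snapshots, and "as of" queries resolve to the latest snapshot at that time.

class Table:
    def __init__(self):
        self.snapshots = []  # list of (timestamp, rows) in commit order

    def commit(self, timestamp, rows):
        self.snapshots.append((timestamp, rows))

    def current(self):
        return self.snapshots[-1][1]

    def as_of(self, timestamp):
        # Find the last snapshot committed at or before the given time.
        result = None
        for ts, rows in self.snapshots:
            if ts <= timestamp:
                result = rows
        return result

t = Table()
t.commit(100, ["a"])
t.commit(200, ["a", "b"])
print(t.current())   # ['a', 'b']
print(t.as_of(150))  # ['a'] -- time travel to before the second commit
```

Because old snapshots are never mutated, rollback is simply re-pointing the table at an earlier snapshot.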

Polaris Architecture and Integration

Apache Polaris was architected from the ground up to serve as a cloud-native, engine-agnostic metadata platform for the open data lakehouse. Its design reflects a deep understanding of modern distributed systems requirements, focusing on scalability, security, and interoperability.

In this section, we explore the architecture of Polaris, how it separates metadata from storage, and how it integrates securely across engines and cloud environments.

How Polaris Separates Metadata from Storage

One of the core architectural principles of Polaris is the separation of metadata from data storage. In Polaris:

  • Metadata files such as metadata.json are tracked by Polaris, allowing engines to discover an Iceberg table's entry point.
  • Data remains stored independently in external cloud object storage, such as Amazon S3, Azure Blob Storage, or Google Cloud Storage. Polaris provides authorized engines with temporary credentials to access this storage.

This separation delivers several important benefits:

  • Portability: Organizations are not locked into a specific cloud vendor or storage system.
  • Consistency: Multiple engines can interact with the same tables without conflicting metadata operations.
  • Flexibility: Metadata can evolve independently from the underlying data files, supporting use cases like schema evolution, partition evolution, and table versioning.

Polaris manages the metadata lifecycle by maintaining pointers to the latest metadata snapshots of each Iceberg table, enabling atomic updates and rollback capabilities. This model ensures that updates to tables are isolated, consistent, and immediately visible across all connected engines.
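The "pointer to the latest metadata snapshot" model above is what makes commits atomic: an update succeeds only if the pointer still references the snapshot the writer based its change on, a compare-and-swap. A hedged sketch (Polaris performs this atomically in its backing store; this in-memory version only illustrates the logic):

```python
# Sketch of the metadata-pointer model: commits use a compare-and-swap so
# concurrent writers cannot silently overwrite each other's updates.

class MetadataPointer:
    def __init__(self, location):
        self.location = location

    def swap(self, expected, new):
        # Succeed only if the pointer still references the snapshot the
        # writer based its commit on; otherwise the writer must retry.
        if self.location != expected:
            return False
        self.location = new
        return True

ptr = MetadataPointer("s3://bucket/t/metadata/v1.metadata.json")
# Writer A commits against v1 and wins:
print(ptr.swap("s3://bucket/t/metadata/v1.metadata.json",
               "s3://bucket/t/metadata/v2.metadata.json"))  # True
# Writer B, also based on v1, loses and must rebase on v2:
print(ptr.swap("s3://bucket/t/metadata/v1.metadata.json",
               "s3://bucket/t/metadata/v3.metadata.json"))  # False
```

The losing writer re-reads the current pointer, reapplies its change, and retries, which is how multiple engines write to the same table without corrupting it.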

Unified View of Data Across Engines and Clouds

Polaris offers a global, unified view of metadata across multiple engines and storage backends. Whether data resides in AWS, Azure, or Google Cloud, Polaris catalogs present a consistent organizational structure to clients.

Key aspects of this unified integration include:

  • Multi-Cloud Storage Support: Polaris can connect to different cloud storage systems using customizable storage configurations. Each catalog can be independently configured with its storage credentials and settings.
  • Service Connections for Engines: Query engines like Spark, Flink, Dremio, and Snowflake connect to Polaris via standard REST APIs. Service principals authenticate and assume roles to interact with metadata securely.
  • Catalog Synchronization: Polaris can manage both internal catalogs (fully managed within Polaris) and external catalogs (read-only synced from external systems like Nessie, Gravitino, Lakekeeper).

This architecture enables true multi-cloud, multi-engine interoperability, allowing data engineers and analysts to work without worrying about underlying infrastructure details or access inconsistencies.

Security and Role-Based Access Control (RBAC)

Security and governance are first-class concerns in Polaris, baked directly into the platform's architecture through a Role-Based Access Control (RBAC) model.

In Polaris:

  • Service Principals represent users, services, or applications connecting to the catalog.
  • Principal Roles group service principals according to organizational policies or responsibilities.
  • Catalog Roles define specific sets of privileges (such as table creation, data read/write, namespace management) scoped to a particular catalog.
  • Privileges are granted to catalog roles and, through them, to principal roles and their associated principals.

This design allows administrators to enforce fine-grained, consistent policies across:

  • Catalogs
  • Namespaces
  • Tables
  • Views

Additionally, Polaris uses credential vending to issue temporary credentials to query engines at runtime. Rather than requiring engines to have permanent, broad access to cloud storage, Polaris securely vends scoped credentials that expire after the query completes. This dramatically improves security by reducing the attack surface and enabling tighter control over data access.

Together, the RBAC model and credential vending ensure that every interaction with Polaris-managed metadata and storage is governed, auditable, and secure by default.
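Credential vending, as described above, amounts to issuing a credential that is both scoped to one table's storage prefix and short-lived. The shape of the credential and the 15-minute TTL below are invented for illustration; real vended credentials are cloud-provider tokens:

```python
import time

# Hedged sketch of credential vending: the catalog hands engines short-lived,
# narrowly scoped credentials instead of long-lived storage keys.

def vend_credential(principal, table_location, ttl_seconds=900):
    # Scope the credential to one table's storage prefix and stamp an expiry.
    return {
        "principal": principal,
        "scope": table_location,
        "expires_at": time.time() + ttl_seconds,
    }

def is_valid(cred, path):
    # A credential is honored only for its scoped prefix and before expiry.
    return path.startswith(cred["scope"]) and time.time() < cred["expires_at"]

cred = vend_credential("spark_job", "s3://bucket/warehouse/orders/")
print(is_valid(cred, "s3://bucket/warehouse/orders/data/f.parquet"))   # True
print(is_valid(cred, "s3://bucket/warehouse/other_table/f.parquet"))   # False
```

Even if such a credential leaks, it grants access to a single table's files for minutes rather than to the whole bucket indefinitely.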

Benefits for Data Teams and Organizations

Apache Polaris delivers more than just a modern metadata catalog; it fundamentally transforms how data teams manage, govern, and access data in the open data lakehouse. By addressing long-standing operational challenges and enabling true openness and interoperability, Polaris empowers organizations to work more efficiently, securely, and at greater scale.

This section explores the key benefits that Polaris brings to technical teams and enterprises building data-driven platforms.

Simplified Governance and Compliance

Governance has traditionally been a major pain point in distributed data systems. Polaris simplifies governance by offering a centralized, role-based framework that spans multiple engines and cloud environments.

Key advantages include:

  • Unified Access Control: Administrators can define access policies once and apply them consistently across catalogs, namespaces, tables, and views.
  • Credential Vending: Short-lived, dynamically issued credentials ensure that query engines only have the minimum necessary access, reducing exposure risks.
  • Auditable Operations: All interactions with the catalog are governed through clear, enforceable policies, making it easier to satisfy regulatory compliance and internal audits.

Instead of managing fragmented permissions across different systems, Polaris enables a single, coherent model for securing data access at scale.

Reliable Multi-Engine Collaboration

In a modern analytics environment, different teams often prefer different engines for different workloads. Data engineers might run ETL pipelines in Apache Spark, data scientists might perform feature engineering in Snowflake, and analysts might use Trino for interactive queries.

Polaris supports this diversity natively:

  • REST-based Integration: Any engine that supports the Iceberg REST Catalog API can easily connect to Polaris without custom adapters.
  • Consistency Across Engines: All engines interacting with Polaris see the same metadata view, eliminating synchronization problems and stale reads.
  • Transactionally Safe Metadata Updates: Schema changes, partition updates, and snapshot management are atomic and immediately visible to all engines.

By acting as a neutral metadata layer, Polaris fosters collaboration across organizational boundaries without forcing teams to standardize on a single technology stack.

Self-Service Discovery and Analytics Enablement

One of the goals of the data lakehouse is to empower more users to independently find, understand, and use data. Polaris plays a critical role in enabling this vision by making metadata easily accessible and consistent.

Benefits for self-service analytics include:

  • Logical Organization of Data: Through catalogs and namespaces, users can browse datasets based on business domains, projects, or organizational units.
  • Rich Metadata Visibility: Polaris exposes table schemas, properties, and version histories, making it easier for users to understand the structure and evolution of data assets.
  • Standardized Discovery Across Engines: Whether users connect through Spark, Trino, or another tool, they have a unified and consistent view of the available datasets.

This democratization of data access allows organizations to accelerate innovation, improve time-to-insight, and reduce the dependency on central data engineering teams for everyday analytics needs.

Polaris vs Traditional and Vendor-Locked Catalogs

As organizations modernize their data platforms, they are increasingly encountering the limitations of traditional metadata catalogs and the risks associated with vendor-locked metadata services. Apache Polaris offers a fundamentally different approach, purpose-built for openness, flexibility, and multi-cloud interoperability.

In this section, we will compare Polaris against legacy catalogs like Hive Metastore and AWS Glue, and explain how Polaris complements and enhances the broader Apache Iceberg and open data ecosystem.

Polaris vs Hive Metastore and AWS Glue

Traditional metadata catalogs such as Hive Metastore and AWS Glue were created for earlier generations of data architectures. While they provided important building blocks for the big data era, they present critical limitations when applied to modern, cloud-native, and multi-engine environments.

| Feature | Hive Metastore | AWS Glue | Apache Polaris |
|---|---|---|---|
| Open Standards | No (Hive-specific) | Yes (Glue Catalog APIs and Iceberg REST) | Yes (Iceberg REST Catalog API) |
| Multi-Engine Support | Full (Spark, Flink, Trino, Dremio) | Full (Spark, Flink, Trino, Snowflake, Dremio) | Full (Spark, Flink, Trino, Snowflake, Dremio) |
| Transactional Metadata Updates | No | Yes (atomic operations via Open Table Formats) | Yes (atomic operations via Iceberg) |
| Role-Based Access Control (RBAC) | Basic/External only | Proprietary IAM | Native, open RBAC with credential vending |
| Cloud Provider Lock-In | No (but Hadoop-centric) | Yes (AWS-specific) | No (multi-cloud: AWS, Azure, GCP) |
| Credential Vending | No | If using Iceberg REST API | Yes |
| Schema Evolution and Time Travel | Limited support | Full support via Open Table Formats | Full support via Iceberg features |

Key Takeaways:

  • Openness: Polaris was built from the ground up to support the Iceberg open standard, ensuring interoperability and long-term data portability.
  • Engine Flexibility: Unlike legacy catalogs that tie metadata management to a specific engine or platform, Polaris supports a wide variety of modern query engines.
  • Cloud Neutrality: Polaris is designed for true multi-cloud deployments, enabling organizations to span AWS, Azure, and Google Cloud with consistent metadata governance.
  • Security First: With integrated RBAC and credential vending, Polaris offers a more secure and manageable approach to data access compared to legacy catalogs that rely heavily on static credentials.

How Polaris Complements Apache Iceberg and the Open Data Stack

Apache Iceberg introduced a major evolution in table format design, solving critical challenges such as hidden partitioning, schema evolution, and transactional consistency. However, a powerful table format needs an equally capable metadata service to unlock its full potential.

Polaris enhances the Apache Iceberg ecosystem by providing:

  • REST-Based Catalog Interactions: Polaris supports Iceberg’s REST Catalog API, enabling truly decoupled, cloud-native metadata operations.
  • Metadata Consistency Across Engines: All Iceberg-compatible engines can interact with the same tables without risk of metadata divergence or operational inconsistencies.
  • Scalable Governance: Polaris overlays Iceberg’s technical capabilities with enterprise-grade security, access control, and auditing mechanisms.
  • Future-Proof Architecture: By adhering to open standards and avoiding vendor-specific dependencies, Polaris ensures that organizations maintain control over their data as technologies and requirements evolve.

In an open data stack built on technologies like Apache Iceberg, Apache Spark, Trino, and others, Polaris acts as the metadata backbone. It provides the consistency, security, and flexibility necessary for building resilient lakehouse platforms that are open by design and ready for the future.

Getting Started with Apache Polaris

Apache Polaris is designed to be approachable for data engineers, architects, and platform teams who want to integrate a centralized, open metadata catalog into their data lakehouse architecture. Whether you are exploring Polaris for the first time or planning to deploy it into production, the process is straightforward and well-documented.

This section walks through the basic steps for getting up and running with Polaris, including deployment options, integration best practices, and enabling the Polaris Catalog for Apache Iceberg.

Hands-on with Apache Polaris OSS Walkthrough Tutorial

Deployment Options and Integration Best Practices

Polaris provides flexibility in how you deploy and manage the catalog, depending on your environment and operational requirements.

Local Deployment for Evaluation:

  • Polaris can be deployed quickly for development or evaluation purposes using Docker.
  • You can also build Polaris from source using Gradle if you prefer direct source control.

To deploy using Docker:

  1. Clone the Polaris repository:
     git clone https://github.com/apache/polaris.git
     cd polaris
  2. Start the service with Docker Compose:
     docker compose -f docker-compose.yml up --build
  3. Polaris will become available at localhost:8181.

This lightweight approach is ideal for initial experimentation, integration testing, and demonstrations.

Production Deployment:

  • For production environments, you can deploy Polaris as a standalone Java application.
  • You will need Java 21 and Gradle to build and run the application manually.
  • Production deployments should be configured with persistent metadata storage, external cloud storage (S3, Azure, GCS), and properly secured service principals.

Integration Best Practices:

  • Secure service credentials using Polaris' built-in credential vending mechanisms.
  • Isolate environments (development, staging, production) by creating separate Polaris catalogs or namespaces.
  • Monitor metadata health and refresh catalog pointers periodically to maintain consistency.
  • Integrate with Iceberg clients by using the Iceberg REST Catalog protocol for seamless interoperability.

By following these best practices, organizations can ensure that Polaris becomes a stable, scalable, and secure foundation for metadata management.

How to Enable the Polaris Catalog for Apache Iceberg

Connecting Apache Iceberg tables to Polaris is a straightforward process that leverages the Iceberg REST Catalog API. The general steps are:

1. Bootstrap Polaris and Create a Catalog:

  • After deploying Polaris, create a new catalog with your desired storage backend (e.g., S3, Azure, GCS).
  • Example CLI command to create a catalog:
    ./polaris \
      --client-id ${CLIENT_ID} \
      --client-secret ${CLIENT_SECRET} \
      catalogs create \
      --storage-type s3 \
      --default-base-location ${DEFAULT_BASE_LOCATION} \
      --role-arn ${ROLE_ARN} \
      quickstart_catalog

2. Define Service Principals and Roles:

  • Create service principals representing the query engines or applications that will interact with Polaris.
  • Assign appropriate principal roles and catalog roles to control access rights (for example, granting TABLE_READ_DATA or TABLE_WRITE_DATA privileges).

3. Configure Your Iceberg Client or Query Engine:

  • Connect engines like Apache Spark to Polaris using the REST Catalog configuration.
  • Example Spark configuration:
    --conf spark.sql.catalog.quickstart_catalog=org.apache.iceberg.spark.SparkCatalog \
    --conf spark.sql.catalog.quickstart_catalog.catalog-impl=org.apache.iceberg.rest.RESTCatalog \
    --conf spark.sql.catalog.quickstart_catalog.uri=http://localhost:8181/api/catalog \
    --conf spark.sql.catalog.quickstart_catalog.credential='CLIENT_ID:CLIENT_SECRET' \
    --conf spark.sql.catalog.quickstart_catalog.scope='PRINCIPAL_ROLE:ALL'

4. Begin Managing Tables:

  • Once connected, you can create namespaces, register tables, perform schema evolution, and take advantage of Iceberg’s features like time travel and partition evolution.

Example: Create a Table in Spark:

CREATE TABLE quickstart_namespace.quickstart_table (
  id BIGINT,
  data STRING
) USING ICEBERG;

5. Expand and Secure:

  • As you scale, configure multiple catalogs for different environments or data domains.
  • Continuously monitor and audit access using Polaris' built-in governance features.

Apache Polaris makes it simple to bring open, governed, and scalable metadata management to your Iceberg-based data lakehouse. With just a few steps, you can enable centralized control over your data assets while maintaining the flexibility to innovate across clouds and engines.

Conclusion: Unify and Govern Your Open Lakehouse with Apache Polaris

The move toward open data lakehouses demands more than just scalable storage and flexible compute. It requires a modern, open metadata catalog that can unify data assets across multiple engines, clouds, and environments while enforcing strong governance and access controls. Apache Polaris rises to meet this challenge.

By delivering centralized metadata management, native support for Apache Iceberg, multi-engine interoperability, and an integrated role-based access control model, Polaris enables organizations to:

  • Build truly open and flexible lakehouse architectures
  • Simplify governance and compliance across distributed environments
  • Empower data teams with self-service analytics and discovery
  • Securely manage and evolve metadata at scale

Unlike legacy catalogs and proprietary metadata services, Polaris is built on open standards and designed for the future of multi-cloud, multi-engine data ecosystems. It offers organizations a clear path to modernize their metadata infrastructure while maintaining complete control over their data strategy.

Learn more about the Dremio Catalog powered by Polaris for Apache Iceberg — discover how to unify metadata, streamline governance, and power open lakehouse analytics for your organization.

Explore the Dremio platform for Apache Iceberg, get a glimpse of the future of Apache Polaris, and dive into a hands-on Polaris setup to start your journey.
