Dremio Blog

8 minute read · May 12, 2026

Sub-Microsecond Timestamps: Dremio’s Iceberg v3 Precision Support

Will Martin, Technical Evangelist

Most analytics workloads are fine with millisecond timestamps. A daily sales report doesn't care whether a transaction landed at 14:32:07.001 or 14:32:07.001423. But for the teams that do care about sub-millisecond precision, the lakehouse has historically been a weak link. Timestamps either got truncated on ingestion, stored as strings to preserve precision, or shoehorned into integer columns representing epoch time. None of those approaches work cleanly with standard SQL time functions.

Dremio's support for the Iceberg v3 timestamp specification changes this. Timestamps in Dremio now support precision from 0 to 9 fractional second digits, covering everything from whole seconds through microseconds to nanoseconds. For workloads that genuinely need that resolution, the data is stored correctly and queryable with standard SQL from day one.

How Timestamp Precision Works in Dremio

The Iceberg v3 specification defines four timestamp primitive types: 

  • timestamp and timestamptz store values at microsecond precision (6 fractional digits). 
  • timestamp_ns and timestamptz_ns store values at nanosecond precision (9 fractional digits).

The tz variants include timezone offset information; the others represent local time without timezone context.

In Dremio's SQL syntax, you specify precision directly in the column definition using TIMESTAMP(n) where n is the number of fractional second digits:

-- Microsecond precision (Iceberg v2 table)
CREATE TABLE sensor_events (
  event_id BIGINT,
  event_time TIMESTAMP(6)
);

-- Nanosecond precision (automatically creates Iceberg v3 table)
CREATE TABLE hft_ticks (
  tick_id BIGINT,
  event_time TIMESTAMP(9)
);

The precision level is also what determines which format version Dremio uses. Specify precision 6 or omit it entirely and you get an Iceberg v2 table with microsecond timestamps. Specify precision 9 and Dremio automatically creates an Iceberg v3 table. You don't manage the format version directly; it follows from the precision you need.

NOTE: you cannot alter a column's timestamp precision after the table is created. If you realise later that a microsecond column should have been nanoseconds, the path is to create a new table and migrate the data. This is a one-time decision at schema design time, not something that can be patched retroactively.
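If you do need to migrate, a CTAS with an explicit cast is the usual path. A minimal sketch, assuming a microsecond table like the `sensor_events` example above and that your Dremio version supports casting to `TIMESTAMP(9)` in a CTAS:

```sql
-- Create a nanosecond-precision copy; the TIMESTAMP(9) column
-- makes the new table an Iceberg v3 table
CREATE TABLE sensor_events_ns AS
SELECT
  event_id,
  CAST(event_time AS TIMESTAMP(9)) AS event_time
FROM sensor_events;

-- Once the copy is validated, retire the original
DROP TABLE sensor_events;
-- (rename or re-point views as appropriate for your catalog)
```

Existing microsecond values gain no extra information from the cast; the widened column simply leaves room for nanosecond data going forward.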

When queries combine timestamps of different precisions, such as in a JOIN or UNION, Dremio promotes to the higher precision automatically. A join between a microsecond column and a nanosecond column produces nanosecond output. This behavior prevents silent precision loss in queries that span tables with different timestamp definitions.
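Using the two tables defined earlier, the promotion rule shows up directly in a join (the equality condition here is illustrative):

```sql
-- sensor_events.event_time is TIMESTAMP(6); hft_ticks.event_time is TIMESTAMP(9).
-- The comparison is evaluated at nanosecond precision, and any timestamp
-- expressions in the output are promoted to TIMESTAMP(9), so no fractional
-- digits are silently dropped from the nanosecond side.
SELECT s.event_id, t.tick_id, t.event_time
FROM sensor_events s
JOIN hft_ticks t
  ON s.event_time = t.event_time;
```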


Why Sub-Microsecond Precision Matters

The jump from milliseconds to microseconds matters more than it might seem. For the teams that need it, this upgrade is the difference between usable and unusable data.

At the microsecond level, you can correctly sequence events from distributed systems where multiple operations complete within the same millisecond. That's standard territory for microservices tracing, database audit logs, and real-time event pipelines. Most OLTP (online transaction processing) databases already record at microsecond precision, so ingesting into a lakehouse that preserves that precision means you're not silently degrading your data.

At the nanosecond level, the use cases narrow but become highly specific. Financial exchanges publish market data at nanosecond resolution because, at modern trading speeds, two events within the same microsecond are not the same event. The same applies for industrial sensors on high-speed manufacturing lines or scientific instruments in physics experiments. For these workloads, storing timestamps at lower precision doesn't just lose accuracy. It loses critical ordering information the data is meant to capture.

Real-World Uses for High-Precision Timestamps

High-frequency trading and market data analytics are the clearest fit for nanosecond precision. Order book events, trade executions, and quote updates all arrive with nanosecond timestamps from exchange feeds. A lakehouse that preserves this resolution lets quant teams run latency analysis, reconstruct market microstructure, and audit execution quality against actual event sequences rather than approximations.

Distributed systems observability is a strong microsecond use case that Dremio can now support. OpenTelemetry traces, service mesh logs, and database query timings are all generated at microsecond resolution. Landing this data in a Dremio Iceberg table at TIMESTAMP(6) means you can query span durations, identify contention between concurrent operations, and correctly attribute latency to specific services without the rounding errors that come from millisecond truncation.
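As a sketch of what that looks like in practice (table and column names are illustrative, and the availability of a MICROSECOND unit for TIMESTAMPDIFF may vary by Dremio version):

```sql
CREATE TABLE otel_spans (
  trace_id VARCHAR,
  span_id VARCHAR,
  service_name VARCHAR,
  start_time TIMESTAMP(6),
  end_time TIMESTAMP(6)
);

-- Average span duration in microseconds, attributed per service
SELECT
  service_name,
  AVG(TIMESTAMPDIFF(MICROSECOND, start_time, end_time)) AS avg_duration_us
FROM otel_spans
GROUP BY service_name
ORDER BY avg_duration_us DESC;
```

With millisecond timestamps, sub-millisecond spans would all round to zero-length and this aggregation would be meaningless.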

IoT and industrial telemetry covers both ends of the spectrum. Consumer IoT devices typically operate at millisecond precision, but industrial sensors in manufacturing, energy, and process control regularly generate data at microsecond or nanosecond rates. A lakehouse that can ingest and correctly store both makes it practical to run unified analytics across device classes without normalising away the precision your high-fidelity sensors provide.

Scientific and research data pipelines are another natural fit. Genomics sequencing equipment, particle physics detectors, and astronomical observation instruments all produce time-series data where sub-microsecond ordering is meaningful. Storing this data in a standard lakehouse format with full precision preserved means it can be queried, joined, and analysed with the same SQL tooling used for business analytics, without custom data handling.

Getting Started

Timestamp precision is available on Iceberg tables in Dremio Cloud today. Use TIMESTAMP(6) for microseconds (or omit precision entirely for the default). Use TIMESTAMP(9) for nanoseconds and Dremio creates the v3 table automatically.

To try this against your own data, spin up a free Dremio Cloud environment at dremio.com/get-started. Create a TIMESTAMP(9) table, load nanosecond-resolution data, and query fractional seconds with standard SQL. No custom types, no integer workarounds.
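A quick smoke test might look like the following (assuming timestamp literals with nine fractional digits are accepted for a TIMESTAMP(9) column):

```sql
CREATE TABLE hft_ticks (
  tick_id BIGINT,
  event_time TIMESTAMP(9)
);

-- Two events inside the same microsecond: distinguishable
-- and orderable only at nanosecond precision
INSERT INTO hft_ticks VALUES
  (1, TIMESTAMP '2026-05-12 14:32:07.001423001'),
  (2, TIMESTAMP '2026-05-12 14:32:07.001423999');

SELECT tick_id, event_time
FROM hft_ticks
ORDER BY event_time;
```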
