Apache Iceberg is an open table format for large analytic datasets that can be used with compute engines such as Spark, Trino, PrestoDB, Flink, and Hive.
It includes a number of safeguards to ensure that users don’t accidentally corrupt a table with a mistaken command.
Its schema evolution supports adding, dropping, updating, and renaming columns without inadvertently un-deleting data. It also offers hidden partitioning, which helps prevent the silently incorrect results or slow queries that can follow from user mistakes in specifying partitions.
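As a rough illustration of those two features (not taken from the article), the sketch below uses PySpark with Iceberg’s Spark SQL extensions; the catalog name "demo", the table "demo.db.events", and the column names are all hypothetical, and it assumes a Spark session already configured with an Iceberg catalog and the Iceberg SQL extensions.

```python
# Minimal sketch: Iceberg hidden partitioning and schema evolution via Spark SQL.
# Assumes spark.sql.catalog.demo is configured as an Iceberg catalog and the
# Iceberg Spark SQL extensions are enabled; names here are illustrative only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iceberg-sketch").getOrCreate()

# Hidden partitioning: the table is partitioned by a transform of the timestamp
# column. Queries simply filter on "ts"; Iceberg prunes partitions automatically,
# so users never have to know (or get wrong) the physical partition layout.
spark.sql("""
    CREATE TABLE demo.db.events (
        id INT,
        ts TIMESTAMP,
        level STRING
    ) USING iceberg
    PARTITIONED BY (days(ts))
""")

# Schema evolution: add, rename, update, and drop are metadata-only changes
# that never rewrite or resurface old data files.
spark.sql("ALTER TABLE demo.db.events ADD COLUMN message STRING")
spark.sql("ALTER TABLE demo.db.events RENAME COLUMN level TO severity")
spark.sql("ALTER TABLE demo.db.events ALTER COLUMN id TYPE BIGINT")  # safe int -> bigint widening
spark.sql("ALTER TABLE demo.db.events DROP COLUMN message")
```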
“It’s quickly becoming the industry standard for how tables are represented in systems like S3 and object storage,” said Tomer Shiran, founder and chief product officer at cloud data lake company Dremio, which is a contributor to the project.
Read the full article here on Software Development Times.