
What are Open Table Formats and Why Should You Care?

Matthew Mullins
January 7, 2025

In the world of data engineering and analytics, you frequently hear about the benefits of “data lakes” and the challenges of storing and querying massive, ever-changing datasets. However, you often run into file-level complexities when you try to build production-grade data pipelines. For instance, how do you handle schema evolution? How do you ensure your data is consistent? How do you safely modify data in multiple steps without corrupting your datasets?

Enter open table formats. They are essentially data management layers that sit on top of files in your data lake (e.g., Parquet files). They coordinate metadata, track changes, support ACID transactions, optimize queries, and more. Instead of just dealing with random files in a data lake, open table formats give you the “feel” of working with tables in a database—without the proprietary vendor lock-in.
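To make that concrete, here is a minimal sketch (assuming Spark with the delta-spark package installed; the paths are made up) of the difference between writing raw Parquet files and writing the same data through a table format:

```python
from delta import configure_spark_with_delta_pip
from pyspark.sql import SparkSession

# Assumes `pip install pyspark delta-spark`; all paths are illustrative.
builder = (
    SparkSession.builder.appName("otf-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])

# A plain data lake write: just Parquet files, no transaction log.
df.write.mode("overwrite").parquet("/tmp/users_parquet")

# The same data through a table format: files plus a transaction log,
# which is what buys you ACID commits, schema checks, and time travel.
df.write.format("delta").mode("overwrite").save("/tmp/users_delta")
```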

Some of the most popular open table formats include:

Apache Iceberg

  • Official website: iceberg.apache.org
  • Key features: Versioned snapshots, schema evolution, hidden partitioning and partition evolution, compatibility with multiple engines (Spark, Trino, Snowflake, etc.).
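If you want to poke at Iceberg locally, a rough sketch of a Spark setup looks like this (the catalog name, warehouse path, and runtime package version are assumptions you would adjust to your environment):

```python
from pyspark.sql import SparkSession

# Illustrative local setup: a Hadoop-type Iceberg catalog named "local"
# backed by a directory on disk. Match the runtime package to your
# Spark and Scala versions.
spark = (
    SparkSession.builder.appName("iceberg-demo")
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.7.1")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.local.type", "hadoop")
    .config("spark.sql.catalog.local.warehouse", "/tmp/iceberg_warehouse")
    .getOrCreate()
)

spark.sql("CREATE TABLE IF NOT EXISTS local.db.events (id BIGINT, ts TIMESTAMP) USING iceberg")
spark.sql("INSERT INTO local.db.events VALUES (1, current_timestamp())")
```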

Apache Hudi

  • Official website: hudi.apache.org
  • Key features: Fast upserts, incremental queries, built-in rollback, near real-time data ingestion.
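Here is a hedged sketch of what an upsert looks like with Hudi's Spark datasource (assuming a SparkSession launched with the Hudi bundle on the classpath; the table name, key fields, and path are made up):

```python
# Assumes Spark was started with the Hudi bundle, e.g.
# --packages org.apache.hudi:hudi-spark3.5-bundle_2.12:<version>
hudi_options = {
    "hoodie.table.name": "users",
    "hoodie.datasource.write.recordkey.field": "id",            # record key
    "hoodie.datasource.write.precombine.field": "updated_at",   # newest wins on conflict
    "hoodie.datasource.write.operation": "upsert",
}

updates = spark.createDataFrame(
    [(2, "bob-updated", "2025-01-07 12:00:00")],
    ["id", "name", "updated_at"],
)

# Existing record keys are updated in place, new keys are inserted --
# one atomic commit either way.
(updates.write.format("hudi")
    .options(**hudi_options)
    .mode("append")
    .save("/tmp/hudi/users"))
```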

Delta Lake

  • Official website: delta.io
  • Key features: ACID transactions, schema enforcement, time travel for querying historical data versions, great ecosystem integration (especially with Spark).
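As a quick illustration of schema enforcement, appending a DataFrame whose columns don't match the table fails loudly instead of silently corrupting your data (this reuses the made-up /tmp/users_delta table from earlier):

```python
# An append whose schema doesn't match the table is rejected with an
# AnalysisException rather than being written.
bad = spark.createDataFrame([(3, "carol", "oops")], ["id", "name", "surprise"])
try:
    bad.write.format("delta").mode("append").save("/tmp/users_delta")
except Exception as err:
    print("append rejected:", err)
```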

Why should you care?

ACID Transactions at Scale

  • With an open table format, you can safely update, insert, or delete data in massive datasets. This is critical for late-arriving facts, corrections, GDPR compliance (right to be forgotten), and more.
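For example, a right-to-be-forgotten delete on a Delta table is a single ACID commit (a minimal sketch, reusing the illustrative /tmp/users_delta path; the predicate is made up):

```python
from delta.tables import DeltaTable

users = DeltaTable.forPath(spark, "/tmp/users_delta")

# Delete one user's rows in a single atomic commit. Concurrent readers
# see either the old snapshot or the new one, never a half-deleted state.
users.delete("id = 2")
```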

Time Travel & Versioning

  • You can query older snapshots of your table without having to copy data or rely on backup files. This is useful for root-cause analysis, auditing, or simply debugging.
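With Delta, for instance, an older snapshot is just a reader option away (versionAsOf / timestampAsOf; Iceberg and Hudi expose similar point-in-time reads):

```python
# Read the table as of its first commit -- handy for auditing, or for
# diffing against the current state after a bad write.
before = (
    spark.read.format("delta")
    .option("versionAsOf", 0)  # or .option("timestampAsOf", "2025-01-01")
    .load("/tmp/users_delta")
)
before.show()
```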

Schema Evolution

  • Add, rename, or remove columns as your business logic evolves. Open table formats eliminate the headache of managing brittle schemas.
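A minimal sketch with Delta's mergeSchema option (the new country column is illustrative; existing rows simply read it back as NULL):

```python
# Append data that carries a new column; mergeSchema evolves the table
# schema as part of the same commit.
extended = spark.createDataFrame([(4, "dave", "US")], ["id", "name", "country"])
(extended.write.format("delta")
    .option("mergeSchema", "true")
    .mode("append")
    .save("/tmp/users_delta"))

# The explicit route works too, e.g. on the Iceberg table from earlier:
# spark.sql("ALTER TABLE local.db.events ADD COLUMN country STRING")
```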

Performance Optimization

  • Features like advanced partitioning, compaction, and clustering can speed up queries while reducing storage costs. You often get pushdown filters, partition pruning, and data skipping for free.
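For instance, both Iceberg and Delta ship compaction as a one-liner (a sketch, assuming the sessions and table names from the earlier examples):

```python
# Iceberg: a built-in Spark procedure that rewrites many small files
# into fewer large ones (requires the Iceberg SQL extensions).
spark.sql("CALL local.system.rewrite_data_files(table => 'db.events')")

# Delta Lake (OSS 2.0+): OPTIMIZE compacts files; ZORDER BY clusters
# rows on a column so queries filtering on it can skip more data.
spark.sql("OPTIMIZE delta.`/tmp/users_delta` ZORDER BY (id)")
```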

Ecosystem Integration

  • They are engine-agnostic by design, so you can use Spark, Hive, Trino, Presto, Flink, or your favorite query engine without fear of vendor lock-in. They also integrate nicely with orchestrators like Dagster or CoginitiScript.

Open and Extensible

  • Being open-source ensures a vibrant community, a transparent development model, and compatibility with the broader big data ecosystem. Plus, you can avoid the “Hotel California” effect where your data can check in but never leave!

More Resources

  • Apache Iceberg: iceberg.apache.org
  • Apache Hudi: hudi.apache.org
  • Delta Lake: delta.io
  • Databricks Unified Analytics: databricks.com

Takeaways

  • Open table formats combine the reliability of data warehouses with the flexibility and cost-effectiveness of data lakes.
  • They simplify data governance, improve performance, and help you build resilient data pipelines.
  • If you’re a data engineer, analyst, or scientist who needs consistent and evolving data without getting locked into proprietary solutions, open table formats are absolutely worth your attention.

Stay tuned for more topics in this series, where we’ll continue to tackle emerging data technologies and explain why you should care!