Glossary/File Formats & Data Exchange

ORC

ORC (Optimized Row Columnar) is an open-source columnar file format that stores data in compressed columns, optimized for fast analytical queries and efficient storage in data lakes.

ORC is a columnar format similar to Parquet, developed within the Apache Hadoop ecosystem. Like Parquet, ORC stores data by column, enabling compression, selective column reading, and fast analytical performance. ORC emphasizes heavy compression through techniques such as run-length encoding and dictionary encoding, often achieving higher compression ratios than Parquet at the cost of slightly slower read performance. ORC files contain extensive metadata, including column statistics (min/max values, null counts), enabling query optimizers to skip unnecessary data and filter at the file level.
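The two compression techniques mentioned above are easy to see in miniature. The sketch below is purely illustrative (ORC's actual encodings operate on binary column streams and are more sophisticated), but it shows why repetitive or low-cardinality columns, common in log and event data, compress so well:

```python
def rle_encode(values):
    """Run-length encode a sequence into (value, run_length) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1  # extend the current run
        else:
            runs.append([v, 1])  # start a new run
    return [(v, n) for v, n in runs]

def dict_encode(values):
    """Dictionary-encode: map each distinct value to a small integer code."""
    dictionary = {}
    codes = [dictionary.setdefault(v, len(dictionary)) for v in values]
    return dictionary, codes

statuses = ["OK"] * 5 + ["ERROR"] * 2 + ["OK"] * 3
print(rle_encode(statuses))   # [('OK', 5), ('ERROR', 2), ('OK', 3)]
dictionary, codes = dict_encode(statuses)
print(dictionary, codes)      # {'OK': 0, 'ERROR': 1} [0, 0, 0, 0, 0, 1, 1, 0, 0, 0]
```

Ten string values collapse into three run pairs, or a two-entry dictionary plus small integer codes; at data-lake scale the same effect produces the high compression ratios described above.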

ORC is particularly optimized for Hive and Spark workloads; it originated in the Hortonworks/Hive ecosystem and remains the dominant format in mature Hadoop-based data lakes, while Parquet is more widely adopted across cloud platforms and newer architectures. The choice between ORC and Parquet often depends on existing tooling: Hadoop environments favor ORC, while cloud-native systems typically favor Parquet. ORC's focus on compression and metadata makes it excellent for large, infrequently-accessed data lakes, while Parquet's balanced approach works well across diverse platforms.

Key Characteristics

  • Columnar format optimized for analytical queries
  • Achieves high compression ratios through aggressive compression techniques
  • Contains extensive metadata for query optimization
  • Includes file-level statistics for data pruning
  • Highly integrated with Hive and Hadoop ecosystems
  • Supports schema evolution and complex data types
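The first characteristic above, columnar storage enabling selective column reads, can be illustrated with a toy in-memory example (a simplification; ORC stores columns as compressed binary streams inside stripes, not Python lists):

```python
# Row-oriented layout: fetching one field still walks every record.
rows = [
    {"ts": 1, "level": "INFO",  "msg": "start"},
    {"ts": 2, "level": "ERROR", "msg": "disk full"},
    {"ts": 3, "level": "INFO",  "msg": "done"},
]
levels_from_rows = [r["level"] for r in rows]

# Columnar layout: each column is a contiguous array, so a query
# that needs only "level" touches just that array and skips the rest.
columns = {key: [r[key] for r in rows] for key in rows[0]}
levels_from_columns = columns["level"]

assert levels_from_rows == levels_from_columns == ["INFO", "ERROR", "INFO"]
```

In practice, libraries such as PyArrow expose this directly: `pyarrow.orc.read_table(path, columns=["level"])` deserializes only the requested column, which is where the I/O and bandwidth savings listed below come from.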

Why It Matters

  • Achieves very high compression ratios, reducing storage costs significantly
  • Reduces query execution time by reading only necessary columns
  • Extensive metadata enables smart data pruning
  • Excellent choice for large data lakes with infrequent access
  • Strong integration with Hadoop-based analytics workflows
  • Reduces network bandwidth for distributed analytics systems

Example

A data lake stores two years of application logs in ORC format: 500 GB uncompressed, reduced to 25 GB with ORC compression, a 20:1 ratio. The ORC metadata records column statistics such as min/max timestamps and null counts. When a query requests logs from a specific date range, the engine uses these statistics to skip file stripes whose timestamp ranges fall outside the query, avoiding unnecessary I/O and dramatically improving performance.
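The stripe-skipping step in this example can be sketched as follows. The field names and stripe layout here are illustrative, not the ORC specification's, but the pruning logic is the same: a stripe can match a date-range predicate only if its [min, max] statistics overlap the query range, so non-overlapping stripes are eliminated without reading any data:

```python
# Simulated per-stripe footer statistics for a "ts" column.
stripes = [
    {"min_ts": "2023-01-01", "max_ts": "2023-03-31", "rows": 1_000_000},
    {"min_ts": "2023-04-01", "max_ts": "2023-06-30", "rows": 1_000_000},
    {"min_ts": "2023-07-01", "max_ts": "2023-09-30", "rows": 1_000_000},
]

def stripes_to_read(stripes, lo, hi):
    """Keep only stripes whose [min_ts, max_ts] range overlaps [lo, hi]."""
    return [s for s in stripes if s["max_ts"] >= lo and s["min_ts"] <= hi]

hits = stripes_to_read(stripes, "2023-05-01", "2023-05-31")
print(len(hits))  # 1 -- two of the three stripes are skipped without any I/O
```

ISO-8601 date strings compare correctly as plain strings, which keeps the sketch simple; real engines apply the same overlap test to typed min/max statistics read from the ORC footer.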

Coginiti Perspective

Coginiti supports ORC materialization for practitioners in Hadoop-based environments, particularly when integrating with Spark or Hive infrastructure; the format's aggressive compression and extensive metadata align with cost-optimization goals in large data lakes. For cloud-native architectures or cross-platform workflows, Coginiti typically materializes to Parquet or Iceberg, which offer broader platform support and more consistent performance across diverse SQL engines and cloud data warehouses.
