
Avro

Apache Avro is an open-source data serialization format that compactly encodes structured data with a defined schema, supporting fast serialization and deserialization across programming languages and systems.

Avro stores data in a compact binary format alongside a schema that describes the record structure, field names, and types. Unlike row-based text formats such as CSV, which require parsing to interpret values, Avro's binary encoding is compact and fast to deserialize. Avro schemas are defined in JSON and can evolve: older versions of applications can read data written by newer versions and vice versa, as long as schema changes follow compatibility rules (for example, a newly added field must declare a default value). This forward and backward compatibility makes Avro well suited to long-lived streaming systems where producers and consumers may not update simultaneously.
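To make the compatibility rules concrete, here is a sketch in Python. Avro schemas are JSON documents; the dicts, the `ClickEvent` record, and the `backward_compatible` helper below are illustrative inventions, and the helper implements only one simplified rule (a reader field missing from the writer's schema must carry a default), not Avro's full schema-resolution algorithm.

```python
# Version 1 of a hypothetical event schema (Avro schemas are JSON; shown as dicts).
schema_v1 = {
    "type": "record",
    "name": "ClickEvent",
    "fields": [
        {"name": "user_id", "type": "long"},
        {"name": "event_type", "type": "string"},
    ],
}

# Version 2 adds an optional field WITH a default, so consumers upgraded to v2
# can still read records that v1 producers wrote.
schema_v2 = {
    "type": "record",
    "name": "ClickEvent",
    "fields": schema_v1["fields"] + [
        {"name": "referrer", "type": ["null", "string"], "default": None},
    ],
}

def backward_compatible(reader: dict, writer: dict) -> bool:
    """Simplified check: every reader field absent from the writer needs a default."""
    writer_fields = {f["name"] for f in writer["fields"]}
    return all(
        f["name"] in writer_fields or "default" in f
        for f in reader["fields"]
    )
```

Dropping the `"default"` key from the new field would make `backward_compatible(schema_v2, schema_v1)` return `False`: a v2 reader would have no value to supply for `referrer` when decoding v1 records.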

Avro is particularly popular in streaming and event-driven architectures. It is used extensively with Kafka, where producers send Avro-encoded events to topics and consumers read and deserialize them, typically coordinating schema versions through a schema registry. Avro supports complex nested types and offers far more robust schema evolution than schemaless formats like JSON or CSV. While Parquet and ORC optimize for analytical queries through columnar storage, Avro optimizes for serialization efficiency and schema flexibility in streaming contexts. Many data architectures therefore use Avro for streaming ingestion, then convert to Parquet or ORC for long-term storage and analytics.

Key Characteristics

  • Binary serialization format with embedded schema
  • Compact encoding significantly smaller than text formats
  • Supports schema evolution with backward and forward compatibility
  • Designed for fast serialization and deserialization
  • Widely used in streaming and event-driven systems
  • Supports complex nested data types and multiple programming languages
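Much of the compact encoding noted above comes from how Avro writes integers: values are zigzag-mapped (so small negative numbers become small unsigned ones) and then emitted as base-128 varints, so small values take a single byte. A minimal stdlib sketch of that encoding, following the Avro binary specification:

```python
def encode_long(n: int) -> bytes:
    """Encode a long per the Avro binary spec: zigzag map, then base-128 varint."""
    n = (n << 1) ^ (n >> 63)      # zigzag: 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ...
    out = bytearray()
    while True:
        byte = n & 0x7F           # low 7 bits
        n >>= 7
        if n:
            out.append(byte | 0x80)  # set continuation bit: more bytes follow
        else:
            out.append(byte)
            return bytes(out)
```

With this scheme `0`, `1`, and `-1` each fit in one byte, while a millisecond epoch timestamp needs about six, versus the fixed 13 characters the same timestamp occupies as JSON text.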

Why It Matters

  • Can reduce data transmission size severalfold compared to JSON or CSV
  • Enables schema evolution without breaking applications
  • Supports fast, language-agnostic serialization
  • Critical for streaming systems where schema changes occur
  • Reduces infrastructure costs through smaller message sizes
  • Prevents data incompatibility issues in distributed systems

Example

A streaming pipeline produces events from web applications: user_id, timestamp, event_type, and properties (nested). Events are serialized as Avro using a schema defining these fields and their types. A JSON equivalent of the same event is 300 bytes, while Avro encoding is 50 bytes. Across millions of daily events, Avro reduces transmission and storage by 6x. When the application adds a new optional field, the Avro schema evolves, and existing consumers continue working automatically through compatibility rules.

Coginiti Perspective

Coginiti integrates with Avro-based streaming systems to ingest event data from Kafka and other sources, supporting schema evolution across producer and consumer applications. When building analytical transformations over streaming data, practitioners use CoginitiScript to decode Avro messages, apply transformations, and materialize results to Parquet or Iceberg for downstream analytics, following an ELT pattern that preserves event data flexibility while optimizing storage for analytical query patterns.
