Data Deployment vs Release
Data deployment is the technical action of moving code to an environment (staging, production), while a release is the business decision to make changes available to users.
Deployment and release are distinct concepts, though often confused. A deployment is technical: moving code from one environment to another. A release is a business decision: making functionality available to users. For example, code is deployed to production at 2 AM (technical action), but the feature is released at 9 AM (business action, when users are notified and can access it). Organizations use this distinction to decouple technical and business timing, enabling safe deployments and controlled releases.
The separation of deployment and release emerged because businesses needed control over when features reach users. Without this separation, every technical deployment forced an immediate release, which was risky. Decoupling enables deploying code off-hours to minimize disruption, then releasing when the business is ready. It also enables two distinct rollback paths: if a deployment causes problems, undeploy it; if a release has issues, disable the feature without re-deploying.
In data contexts, deployment means loading new transformation code, updating metric definitions, or changing database schemas. Release means making new metrics available to users, switching dashboards to new metric versions, or communicating to business stakeholders that new data is available. Advanced organizations use feature flags: deploying code and metrics to production but hiding them until release time. This enables safe deployment and controlled release.
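The feature-flag pattern described above can be sketched in a few lines. This is a minimal illustration, not a Coginiti API: the flag store, metric names, and queries are all hypothetical.

```python
# Minimal sketch of a feature flag separating deployment from release.
# The flag store and metric names below are illustrative assumptions.

RELEASE_FLAGS = {
    "revenue_v2": False,  # v2 is deployed to production but not yet released
}

METRIC_QUERIES = {
    "revenue_v1": "SELECT SUM(amount) FROM orders",
    "revenue_v2": "SELECT SUM(amount) FROM orders WHERE status = 'settled'",
}

def metric_for_dashboard(name: str) -> str:
    """Serve the v2 query only if its flag is on; otherwise fall back to v1."""
    if RELEASE_FLAGS.get(f"{name}_v2", False):
        return METRIC_QUERIES[f"{name}_v2"]
    return METRIC_QUERIES[f"{name}_v1"]

# Before release: dashboards still see v1, even though v2 is deployed.
assert metric_for_dashboard("revenue") == METRIC_QUERIES["revenue_v1"]

# Release time: flip the flag; no re-deployment is needed.
RELEASE_FLAGS["revenue_v2"] = True
assert metric_for_dashboard("revenue") == METRIC_QUERIES["revenue_v2"]
```

Flipping the flag back un-releases the metric while leaving the deployed code untouched, which is exactly the decoupling the pattern is designed for.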
Key Characteristics
- Deployment: technical promotion of code to environments
- Release: business decision to make changes available to users
- Decoupling enables independent timing and control
- Feature flags enable deployment without release
- Rollback can be technical (undeploy) or business (un-release)
- Improves safety and control
Why It Matters
- Control: Separate business and technical timing
- Safety: Deploy off-hours, release during business hours
- Rollback: Disable features without re-deploying code
- Coordination: Release timing aligns with communication
- Flexibility: Enable rapid deployments with controlled releases
Example
A new revenue metric is developed and deployed to production at 2 AM (test queries confirm it works), but not released to dashboards. The following morning, the analytics team validates the metric and communicates with stakeholders, and at 9 AM the metric is released: dashboards are updated to use it and emails are sent to users. If issues arise before 9 AM, the metric is un-released (dashboards reverted) but remains deployed.
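The two rollback paths in this example can be made concrete with a small sketch. The state sets and function names here are hypothetical, not part of any real deployment tool: the point is only that un-releasing changes what users see without touching what is deployed.

```python
# Illustrative model of the two rollback paths: un-release (business)
# hides the metric but leaves it deployed; undeploy (technical) removes it.
# All names here are assumptions for illustration.

deployed = {"revenue_v2"}   # artifacts present in production
released = {"revenue_v2"}   # artifacts exposed on dashboards

def unrelease(metric: str) -> None:
    """Business rollback: hide the metric; the deployed code stays in place."""
    released.discard(metric)

def undeploy(metric: str) -> None:
    """Technical rollback: remove the artifact entirely (also un-releases it)."""
    released.discard(metric)
    deployed.discard(metric)

# Issues found before 9 AM: revert dashboards, keep the deployment for debugging.
unrelease("revenue_v2")
assert "revenue_v2" in deployed
assert "revenue_v2" not in released
```

Because the deployed artifact survives an un-release, the team can investigate and fix the metric in production conditions, then re-release by reversing the same step.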
Coginiti Perspective
Coginiti decouples deployment from release through the Analytics Catalog's promotion workflow and Coginiti Actions scheduling. Code can be deployed to production workspaces and materialized to tables/views without immediately exposing changes to users; SMDL semantic definitions can be version-controlled separately from their publication. Coginiti Actions' cron scheduling and job dependencies enable controlled deployment timing, while lifecycle hooks (beforeAll, beforeEach, afterEach, afterAll) provide automation points for deployment activities distinct from user-facing release events.
Related Concepts
More in Collaboration & DataOps
Analytics Engineering
Analytics engineering is a discipline combining data engineering and analytics that focuses on building maintainable, tested, and documented data transformations and metrics using software engineering practices.
Code Review (SQL)
Code review for SQL involves peer evaluation of SQL code changes to ensure correctness, quality, and adherence to standards before deployment.
Continuous Delivery
Continuous Delivery is the practice of automating data code changes to a state ready for production deployment, requiring explicit approval for the final production promotion.
Continuous Deployment (CD)
Continuous Deployment is the automated promotion of code changes to production immediately after passing all tests, enabling rapid delivery with minimal manual intervention.
Continuous Integration (CI)
Continuous Integration is the practice of automatically testing and validating data code changes immediately after commit, enabling rapid feedback and early error detection.
Data Collaboration
Data collaboration is the practice of multiple stakeholders working together on shared data work through version control, documentation, review processes, and communication tools.