
Model Context Protocol (MCP)

The Model Context Protocol (MCP) is an open standard for how AI systems and applications communicate about context, resources, and capabilities, enabling LLMs to discover and use external tools and data sources dynamically.

As AI systems become more sophisticated, they need access to diverse tools and data sources (APIs, databases, document repositories). Without standardization, every pairing of AI application and data source would require a custom integration. The Model Context Protocol defines a standard interface that allows LLMs to discover available resources, understand their capabilities, and invoke them. This enables Tool-Using AI at scale.

MCP establishes a client–server protocol: the server (exposing resources such as database access or APIs) advertises what it can do, and the client (the LLM or AI application) queries the available capabilities and invokes them, much as REST conventions standardized communication between web services. Messages are exchanged as JSON-RPC over transports such as stdio or HTTP. By standardizing the context protocol, MCP lets vendors and organizations build an integration once and have it work with multiple LLM-based applications.
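Concretely, MCP messages are JSON-RPC 2.0, and discovery and invocation use the protocol's tools/list and tools/call methods. The following is a minimal in-process sketch of that exchange; the tool name `run_query` and its schema are illustrative stand-ins, and a plain dispatch function stands in for a real server speaking over stdio or HTTP:

```python
# In-process sketch of an MCP-style discovery-and-invocation exchange.
# Real MCP clients and servers exchange these JSON-RPC 2.0 messages
# over a transport (stdio, HTTP); the dict of tools below is the
# server's advertised capability catalog.

TOOLS = {
    "run_query": {  # illustrative tool name, not defined by the spec
        "description": "Execute a read-only SQL query",
        "inputSchema": {
            "type": "object",
            "properties": {"sql": {"type": "string"}},
            "required": ["sql"],
        },
    }
}

def handle(request: dict) -> dict:
    """Dispatch a JSON-RPC request the way an MCP server would."""
    method, params = request["method"], request.get("params", {})
    if method == "tools/list":          # capability discovery
        result = {"tools": [{"name": n, **meta} for n, meta in TOOLS.items()]}
    elif method == "tools/call":        # capability invocation
        result = {"content": [{"type": "text",
                               "text": f"ran: {params['arguments']['sql']}"}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

# Client side: discover the available tools, then invoke one.
listing = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
names = [t["name"] for t in listing["result"]["tools"]]
call = handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
               "params": {"name": "run_query",
                          "arguments": {"sql": "SELECT 1"}}})
print(names, call["result"]["content"][0]["text"])  # ['run_query'] ran: SELECT 1
```

The key point is that the client hardcodes nothing about the server: it learns what exists from tools/list and supplies arguments matching the advertised input schema.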

MCP is particularly relevant for analytics because data systems need standardized ways to present themselves to AI agents. Rather than building custom Text-to-SQL or Data Copilot integrations for each system, MCP enables a single integration that works with any MCP-aware AI system. This dramatically reduces integration burden and accelerates adoption of AI analytics.

Key Characteristics

  • Standardizes how servers advertise resources and capabilities to AI systems
  • Enables LLMs to discover available tools, data sources, and their parameters
  • Provides a protocol for invoking resources and returning results
  • Supports resource authentication and authorization
  • Enables dynamic capability discovery so clients adapt to available resources
  • Designed to work across diverse systems (databases, APIs, file stores, etc.)

Why It Matters

  • Standardizes AI-to-tool communication, reducing custom integration burden
  • Enables Tool-Using AI and AI Agents to work across diverse systems seamlessly
  • Accelerates deployment of AI analytics by providing standard integration patterns
  • Allows organizations to standardize on MCP-compatible tools
  • Reduces vendor lock-in by enabling interoperability between LLM platforms and data systems
  • Facilitates governance: standard protocol makes authentication and audit logging consistent

Example

A database system implements MCP and advertises a query capability with stated constraints: it can execute SQL, but SELECT statements only, with result sets capped at one million rows and authentication via OAuth. An AI Agent discovers this capability via MCP, understands the constraints, and can independently execute SQL queries against the database using the standard protocol.
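A server that advertises constraints should also enforce them. The sketch below, using an in-memory SQLite database, shows how a hypothetical tool handler for the example above might reject non-SELECT statements and cap result sizes; the function name, row limit, and validation approach are illustrative, not part of the MCP spec:

```python
import sqlite3

MAX_ROWS = 1_000_000  # cap matching the advertised capability (illustrative)

def execute_sql(conn: sqlite3.Connection, sql: str) -> list:
    """Hypothetical MCP tool handler that enforces the advertised
    constraints: SELECT statements only, result set capped at MAX_ROWS."""
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("only SELECT queries are permitted")
    # Fetch one row past the cap so an oversized result is detectable.
    rows = conn.execute(sql).fetchmany(MAX_ROWS + 1)
    if len(rows) > MAX_ROWS:
        raise ValueError(f"result set exceeds {MAX_ROWS} rows")
    return rows

# Demo against a throwaway in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])
print(execute_sql(conn, "SELECT x FROM t"))  # [(1,), (2,), (3,)]
try:
    execute_sql(conn, "DROP TABLE t")        # rejected: not a SELECT
except ValueError as e:
    print(e)
```

A production handler would validate statements with a real SQL parser rather than a prefix check, but the principle is the same: the advertised contract and the enforced behavior must match so agents can rely on what discovery tells them.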

Coginiti Perspective

Coginiti's semantic intelligence (SMDL definitions, query capabilities, platform connectors) aligns with MCP principles: the platform can advertise available dimensions, measures, relationships, and query capabilities to AI systems through standard protocols. Coginiti Actions enables automation of MCP-invoked operations (scheduled queries, scheduled publications), while the ODBC driver and Semantic SQL provide standard query interfaces that MCP clients can discover and invoke. By supporting MCP-compatible integrations, organizations can connect diverse AI agents and copilots to Coginiti's governed analytics without custom engineering.
