We first released the Coginiti AI Assistant nine months ago with support for OpenAI’s GPT models. That initial release enabled data analysts and engineers using the Coginiti platform to ask the large language model any data-related question, such as how to generate a query to fetch data or how to create a table. In some regards, this implementation merely moved the chat interface into the application so that users didn’t have to switch context to interact with the model. We improved the experience by preloading the user’s context with information about the connected data platform, so the language model would generate the correct platform syntax. We also injected the user’s database schema, table, view, and column names, along with any associated relationships, so the language model could generate the correct semantics.
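As a rough illustration of the idea (not Coginiti’s actual implementation), context preloading of this kind amounts to rendering the connected platform and the user’s schema metadata into the model’s system prompt before any question is asked. The platform name, table names, and column names below are made-up examples:

```python
# Illustrative sketch of schema-aware context preloading for a language
# model prompt. All names here are hypothetical examples.

def build_context(platform: str, schema: dict) -> str:
    """Render the data platform and schema metadata as a prompt preamble.

    `schema` maps table names to lists of column names.
    """
    lines = [
        f"You are assisting a user connected to {platform}.",
        f"Generate SQL using {platform} syntax.",
        "Available tables and columns:",
    ]
    for table, columns in schema.items():
        lines.append(f"  {table}({', '.join(columns)})")
    return "\n".join(lines)


context = build_context(
    "Snowflake",
    {
        "dim_customer": ["customer_id", "customer_name", "region"],
        "fact_orders": ["order_id", "customer_id", "order_total"],
    },
)
print(context)
```

With a preamble like this prepended to the conversation, the model can resolve a question such as “total orders by region” against the actual table and column names rather than guessing at them.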
The AI Assistant was completely optional for users, but among those who adopted it the initial response was generally positive. The large language models proved very capable of translating natural language questions into functional code, especially when organizations follow good semantic naming practices in their data platform (e.g., dim_customer over dim_cust). These models weren’t just good at generating code; they were good at answering all kinds of data-related questions, from explaining errors to helping analysts reason through a problem.
Six months ago, it was clear that large language models were going to be mostly a commodity service: widely used, but not meaningfully differentiated from one another in their capabilities. We felt organizations should have the option to pick the large language models that best fit their needs. To enable this, we added support for Anthropic’s Claude models in Coginiti Pro, along with model services such as AWS Bedrock and Azure OpenAI. We want to support a wide array of models the same way we support a wide array of databases.
We also expanded the AI Assistant integration beyond the chat interface. Users can select code in the editor and ask for it to be optimized or explained, which is especially useful when working with code originally written by someone else. We also integrated the AI Assistant into Coginiti’s error handling: database error codes, especially for older database systems, can be cryptic and difficult to understand, so we send the user’s code and the resulting errors to the assistant for analysis. Finally, we integrated the AI Assistant into Coginiti’s visual explain plan, enabling users to gain a deeper understanding of database operations and optimization options. These improvements all smoothed the interaction with the AI Assistant.
Coginiti customers that have enabled support for large language models report 15-20% performance improvements for their teams. Far from replacing analysts or data engineers, large language models are helping them be better and more productive in their daily work. The AI Assistant is just that: an assistant that is there when the user needs it and disappears when they don’t. You might wonder, then: what’s next?
For the last couple of months we’ve been working on an implementation of Retrieval-Augmented Generation (RAG) for Coginiti Team and Enterprise customers. RAG combines the retrieval capabilities of a search engine with the generative power of a large language model. This augmentation increases the accuracy and relevance of AI Assistant interactions by injecting more relevant domain information into the language model’s context. Where previously the large language model had access only to a user’s schema information, it will now have relevant code samples to work with as well. RAG is enabled in Coginiti by the fact that the product ships with a repository of domain-specific data: our Analytics Catalog. Each catalog asset consists of code samples along with comments, tags, and documentation among its metadata. (Hint: if you are not enhancing your catalog assets with metadata today, you will want to start, so that you can fully leverage the RAG capabilities.)
The integration of RAG into Coginiti’s AI Assistant represents a significant leap forward, expanding the large language model’s response generation capabilities by sourcing accurate and relevant information. The potential to transform data into actionable insights has never been more accessible. You will be able to start using these new capabilities in the next release of Coginiti Team and Enterprise (24.03). Learn more about how you can experience the Coginiti AI Assistant today by scheduling a demo.