By implementing an AI-powered knowledge assistant connected to the client’s databases, we made data retrieval smarter and more accessible for teams across the company.
A leading energy management technology provider treats information as a strategic pillar of their business model and sustainable growth, so they set out to bring their information retrieval process to peak performance.
With massive amounts of structured and unstructured operational and business data accumulated over the years, they wanted to give teams across the company smart, easy access so anyone could extract information at will.
The best part? All of this through intuitive, human-like digital interactions.
Such a solution would decentralize data access and remove the reliance on specialized in-house teams that used to retrieve text-based information manually.
To address all efficiency-related bottlenecks, we implemented an AI-powered knowledge assistant. In simpler terms, it’s a chatbot that can sift through loads of data to answer specific questions/requests in text form.
It uses an advanced Large Language Model (LLM) to query extensive corporate knowledge bases, understand many types of questions, and deliver precise contextual responses, all while maintaining a natural conversation flow.
Here’s how it works:
We connected the solution to the client’s databases to pull the information relevant to each question or request. Then, we built a knowledge base by ingesting and indexing the client’s key structured data, regulatory documents, and business documents such as contracts, financial agreements, SLAs, and equipment installation sheets. This way, the solution can quickly retrieve and surface relevant information when prompted.
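The ingest-and-index step can be pictured with a minimal sketch. The production system indexes documents in a search engine; here, a toy inverted index built with the standard library (the document IDs and chunk size are illustrative) shows the same mechanic of splitting documents into chunks and mapping terms to where they occur.

```python
from collections import defaultdict

def chunk_text(text: str, size: int = 50) -> list[str]:
    """Split a document into fixed-size word chunks for indexing."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def build_index(docs: dict[str, str]) -> dict[str, set[tuple[str, int]]]:
    """Map each term to the (doc_id, chunk_no) pairs where it appears."""
    index: dict[str, set[tuple[str, int]]] = defaultdict(set)
    for doc_id, text in docs.items():
        for n, chunk in enumerate(chunk_text(text)):
            for term in chunk.lower().split():
                index[term].add((doc_id, n))
    return index

def lookup(index: dict, query: str) -> set[tuple[str, int]]:
    """Return the chunks that contain every term of the query."""
    hits = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*hits) if hits else set()
```

A query like `lookup(index, "uptime guarantee")` then points straight at the SLA chunks that mention both terms, without anyone opening the documents by hand.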
To extract and process text from physical documents, we integrated Optical Character Recognition (OCR). This technology takes a scanned image of printed text and converts it to machine-encoded text. Consequently, both digital and now-converted physical documents are stored and indexed in one extensive library.
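As a rough sketch of that OCR step, the snippet below uses the open-source `pytesseract` wrapper as a stand-in for the engine actually deployed (an assumption, not the production choice), plus a small post-processing pass that repairs common scan artifacts so converted pages index cleanly alongside born-digital documents.

```python
import re

def clean_ocr_text(raw: str) -> str:
    """Normalize raw OCR output before indexing: re-join words hyphenated
    across line breaks, then collapse runs of whitespace."""
    text = re.sub(r"(\w)-\s*\n\s*(\w)", r"\1\2", raw)  # "instal-\nlation" -> "installation"
    text = re.sub(r"\s+", " ", text)                   # collapse whitespace runs
    return text.strip()

def ocr_page(image_path: str) -> str:
    """Convert one scanned page to machine-encoded text.
    pytesseract is an illustrative stand-in for the deployed OCR engine."""
    import pytesseract          # third-party wrapper around the Tesseract engine
    from PIL import Image
    return clean_ocr_text(pytesseract.image_to_string(Image.open(image_path)))
```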
Powered by the LangGraph orchestration framework, the chatbot can handle complex multi-turn (multi-step) conversations and tasks with high accuracy. It can process general knowledge queries as well as internal information.
With natural human-like and context-aware engagement, the chatbot manages any business query without losing track of previous inputs and outputs.
Another technology that turns the gears of the chatbot is a multi-agent Retrieval-Augmented Generation (RAG) pipeline.
Built on Amazon OpenSearch and Amazon Bedrock, it combines retrieval-based techniques with generative AI to optimize outputs. For sharper context-aware reasoning, we integrated the LangChain framework, which also serves as an interface for chaining LLM calls with other data sources.
In a multilayered approach, RAG processes and understands many document formats. It then extracts relevant insights from large repositories on demand.
By instantly accessing and synthesizing data from thousands of documents, it can pinpoint specific details within archived materials such as legal agreements, regulations, guidelines, etc.
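The core RAG loop can be sketched in a few lines. In production, retrieval runs on OpenSearch and generation on a Bedrock-hosted LLM; here a crude term-overlap score stands in for real search scoring, just to show the retrieve-then-augment mechanic:

```python
def score(query: str, chunk: str) -> int:
    """Crude relevance score: how many query terms appear in the chunk.
    (Production retrieval would use OpenSearch / vector similarity instead.)"""
    terms = set(query.lower().split())
    return sum(t in chunk.lower() for t in terms)

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most relevant to the query."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    """Augment the user question with retrieved context before the LLM call."""
    context = "\n".join(f"- {c}" for c in retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The LLM then answers from the retrieved context rather than from its general training data, which is what keeps responses grounded in the client’s own documents.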
Use Case
A user can request a summary of a specific aspect of a contractual relationship with a customer. The chatbot retrieves the information from relevant documents, analyzes it, and fulfills the request. When the user asks for additional details about the same customer, like their installed equipment or financial history, they won’t need to provide further clarifications; the chatbot already understands the context and the aspects being referred to.
When it comes to data, the chatbot is as versatile as they come. It includes a built-in analytics agent that interfaces with all types of business databases and generates insights from retrieved data.
It also executes precise data queries from natural-language requests and monitors user interactions.
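To illustrate the natural-language-to-query step, here is a deliberately simplified sketch over an in-memory SQLite table. The schema and the recognized phrasing are invented for illustration; the real agent lets the LLM generate the query against the client’s business databases rather than matching a fixed pattern.

```python
import sqlite3

# Hypothetical schema for illustration only; the client's databases differ.
SETUP = """
CREATE TABLE equipment (customer TEXT, model TEXT, installed_year INT);
INSERT INTO equipment VALUES
  ('Acme', 'EM-200', 2021),
  ('Acme', 'EM-450', 2023),
  ('Globex', 'EM-200', 2022);
"""

def answer(conn: sqlite3.Connection, request: str) -> list[tuple]:
    """Translate one recognized phrasing into a parameterized SQL query.
    A production agent would have the LLM generate the SQL instead."""
    prefix = "equipment installed for "
    if request.lower().startswith(prefix):
        customer = request[len(prefix):].strip()
        return conn.execute(
            "SELECT model, installed_year FROM equipment WHERE customer = ?",
            (customer,),
        ).fetchall()
    raise ValueError("request pattern not recognized")
```

The parameterized query keeps user-supplied values out of the SQL text itself, which matters once non-technical users are typing free-form requests.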
Previously, analytics were centralized within a single team, but with this new capability in hand, users can retrieve accurate real-time business data instantly without any technical expertise, and export it in multiple formats (PDF, Excel, Word, or plain text).
To round things out, we added a validation layer. It ensures that generated responses are accurate, reliable, consistent, and compliant.
Validation is divided into two phases.
On the user’s side, response accuracy and relevance are evaluated using a 5-point scale and additional text feedback.
The second pass is a filter run by a reranker ML model. Its main job is to improve the chatbot’s responses by prioritizing the most relevant, highest-quality answers. It evaluates multiple response options on factors like context relevance and overall quality, then reorders them accordingly.
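Mechanically, reranking is just re-sorting candidates by a learned score. In this sketch a simple length-normalized term-overlap heuristic stands in for the trained reranker model (an assumption for illustration, not the scoring actually used):

```python
def rerank(query: str, candidates: list[str]) -> list[str]:
    """Reorder candidate responses so the most relevant comes first.
    The overlap heuristic below stands in for the reranker model's score."""
    q_terms = set(query.lower().split())

    def relevance(text: str) -> float:
        t_terms = set(text.lower().split())
        # shared terms, normalized by candidate length to avoid favoring long text
        return len(q_terms & t_terms) / (len(t_terms) or 1)

    return sorted(candidates, key=relevance, reverse=True)
```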
The outcome: better answers on the first try.
The solution has access to and processes over 10,000 legal documents and around 3,000 data sheets to date. Its benefits can be summed up in two words: efficiency and autonomy.
Efficiency in eliminating hours of manual research and accelerating decision-making. Autonomy in reducing dependency on specialized teams for information access and providing seamless access to interconnected data.
No similar solution is currently available on the market. When tested against existing AI agents, this custom solution has a higher output accuracy rate. It addresses needs that existing tools cannot fulfill while offering greater flexibility and cost-efficiency.
Whether you need help with day-to-day operations, professional support in a specific area of technology, a spearhead on a new innovative project, or anything in between, our technology services have you covered for your near and long-term goals. For more information, explore our tech services overview below.