Exploring AI Agent-Driven Auto Insurance Claims RAG Pipeline
September 9, 2024

Introduction:

In this article, we look at a recent experiment in building a RAG pipeline tailored to the insurance industry for processing auto insurance claims, with the goal of reducing processing times. We also demonstrate the implementation of Autogen AI agents that enhance retrieval results through agent interactions and function calls over sample auto insurance claim documents, walk through a Q&A use case, and show how this workflow can significantly reduce the time required to process claims.

In my opinion, the RAG workflow represents a new data stack that differs from the traditional ETL process. As in data engineering, it involves data ingestion and processing similar to traditional ETL, but it introduces additional pipeline steps such as chunking, embedding, and loading data into vector databases, breaking away from the standard lakehouse or data warehouse pipeline. Each step in the RAG workflow is critical to the accuracy and relevance of the downstream LLM application. One of these steps is the chunking method, and for this proof of concept we decided to test a page-based chunking technique that leverages the layout of the document without relying on a third-party package.

Key services and features:

By leveraging the enterprise-grade capabilities of Azure AI services, you can securely integrate Azure AI Document Intelligence, Azure AI Search, and Azure OpenAI through private endpoints. This integration ensures that the solution adheres to cybersecurity best practices and provides secure network isolation and private connectivity to your virtual network and related Azure services. The services and features used include:

- Azure AI Document Intelligence and the prebuilt layout model.
- Azure AI Search index and vector database, using the HNSW search algorithm.
- Azure OpenAI GPT-4o model.
- Page-based chunking technique.
- Autogen AI agents.
- Azure OpenAI embedding model: text-ada-003.
- Azure Key Vault.
- Private Endpoint integration for all services.
- Azure Blob Storage.
- Azure Function App (this serverless compute platform can be replaced with Microsoft Fabric or Azure Databricks).

Document extraction and chunking:

The sample claim documents, provided via LlamaIndex, are based on form templates that include details about the accident location, a description of the incident, vehicle information for the parties involved, and any injuries sustained. Below is a sample form template.

[Sample claim form]

The claim documents are PDF files stored in Azure Blob Storage. Data ingestion starts from the container URL in Blob Storage using the Azure AI Document Intelligence Python SDK. This implementation of the page-based chunking method leverages the Markdown output of the Azure AI Document Intelligence SDK. Configured with the prebuilt layout extraction model, the SDK extracts the content of each page, including forms and text, as Markdown, preserving the specific structure and context of the document, such as paragraphs and sections. The SDK exposes the document as a collection of pages, which allows the Markdown output to be organized sequentially, page by page. Each page is preserved as an element in a page list, which makes it straightforward to record the page number for each chunk.
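As a rough illustration of the approach described above, here is a minimal sketch of page-based extraction with the azure-ai-documentintelligence Python package. It is not the article's "Extract Pages" snippet: the endpoint, key, and blob URL are placeholders, and exact parameter names can vary slightly between SDK versions.

```python
# Minimal, hypothetical sketch of page-based chunking with Azure AI Document Intelligence.
# Endpoint, key, and blob URL are placeholders; in the described setup the key would come
# from Azure Key Vault and the document from a Blob Storage container.
from azure.core.credentials import AzureKeyCredential
from azure.ai.documentintelligence import DocumentIntelligenceClient
from azure.ai.documentintelligence.models import AnalyzeDocumentRequest

endpoint = "https://<your-doc-intelligence>.cognitiveservices.azure.com/"
key = "<document-intelligence-key>"
blob_url = "https://<storage-account>.blob.core.windows.net/claims/sample_claim_form.pdf"

client = DocumentIntelligenceClient(endpoint=endpoint, credential=AzureKeyCredential(key))

# Analyze the claim form with the prebuilt layout model and request Markdown output,
# which preserves sections, paragraphs, and form structure.
poller = client.begin_analyze_document(
    "prebuilt-layout",
    AnalyzeDocumentRequest(url_source=blob_url),
    output_content_format="markdown",
)
result = poller.result()

# Slice the Markdown content page by page using each page's span offsets, so that
# every page becomes its own chunk and keeps its page number as metadata.
page_chunks = []
for page in result.pages:
    span = page.spans[0]
    page_chunks.append(
        {
            "page_number": page.page_number,
            "content": result.content[span.offset : span.offset + span.length].strip(),
        }
    )
```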
More information about the Document Intelligence service and layout model can be found here. The process of page-based extraction, preprocessing the page elements, and assigning them to a Python list is demonstrated in the "Extract Pages" snippet of the accompanying notebook.

Each page's content is used as the value of the content field of the vector index, along with the other metadata fields. Each page is its own chunk and is embedded before being loaded into the vector database; this step is likewise shown in the notebook.

Autogen AI Agent and Agent Tools/Functions Definition:

The concept of AI agents models human reasoning and question-answering processes. Agents are driven by large language models (their "brains") that help them determine whether additional information is needed to answer a question or whether tools must be executed to complete a task. In contrast, an agentless RAG pipeline relies on carefully designed prompts that incorporate contextual information retrieved from a vector store (typically via context variables within the prompt) before a response request is sent to the LLM. An AI agent, by contrast, has the autonomy to decide the "best" way to complete a task or provide an answer. This experiment presents a simple agentic RAG workflow; we will explore more complex agent-centric RAG solutions in upcoming posts. More information on Autogen agents can be found here.

I set up two Autogen agents designed to engage in a question-and-answer chat conversation with each other and perform search tasks based on the input message. I wrote Python functions, registered with these agents, that allow them to retrieve query results from the Azure AI Search vector store via function calls. Both the AssistantAgent, which is configured to suggest the function call, and the UserProxyAgent, which is responsible for executing the function, are subclasses of Autogen's ConversableAgent class. The user proxy agent initiates the conversation by asking the assistant agent a question about the search documents. The assistant agent then collects and synthesizes a response based on the instructions in its system message prompt and the context data retrieved from the vector store. The agent definitions and the chat conversation between the agents are included in the full notebook implementation, which is available in the linked GitHub repository.
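The sketch below illustrates this kind of two-agent setup with a search tool backed by Azure AI Search. It is a hypothetical example rather than the notebook's actual code: the index name, vector field, deployment names, and the sample question are assumptions, and in the described architecture the keys would be retrieved from Azure Key Vault over private endpoints.

```python
# Hypothetical two-agent RAG sketch; "claims-index", "content_vector", the deployment
# names, and the sample question are illustrative assumptions, not the author's values.
from autogen import AssistantAgent, UserProxyAgent, register_function
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery
from openai import AzureOpenAI

search_client = SearchClient(
    endpoint="https://<your-search>.search.windows.net",
    index_name="claims-index",
    credential=AzureKeyCredential("<search-key>"),
)
aoai = AzureOpenAI(
    azure_endpoint="https://<your-openai>.openai.azure.com",
    api_key="<aoai-key>",
    api_version="2024-02-15-preview",
)

def search_claims(query: str) -> str:
    """Hybrid (keyword + vector) search over the claim document page chunks."""
    embedding = aoai.embeddings.create(input=query, model="<embedding-deployment>").data[0].embedding
    vector_query = VectorizedQuery(vector=embedding, k_nearest_neighbors=3, fields="content_vector")
    results = search_client.search(search_text=query, vector_queries=[vector_query], top=3)
    # Concatenate the retrieved page chunks into a single context string for the agent.
    return "\n\n".join(doc["content"] for doc in results)

llm_config = {
    "config_list": [
        {
            "model": "<gpt-4o-deployment>",
            "api_type": "azure",
            "base_url": "https://<your-openai>.openai.azure.com",
            "api_key": "<aoai-key>",
            "api_version": "2024-02-15-preview",
        }
    ]
}

# The assistant suggests tool calls and synthesizes answers from the retrieved context.
assistant = AssistantAgent(
    name="claims_assistant",
    system_message=(
        "You answer questions about auto insurance claim documents. "
        "Use the search_claims tool to retrieve context before answering. "
        "Reply TERMINATE when the question has been answered."
    ),
    llm_config=llm_config,
)

# The user proxy executes the suggested tool calls and relays the results back.
user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
    is_termination_msg=lambda msg: "TERMINATE" in (msg.get("content") or ""),
)

# Register the search function: the assistant proposes it, the user proxy runs it.
register_function(
    search_claims,
    caller=assistant,
    executor=user_proxy,
    name="search_claims",
    description="Retrieve relevant auto insurance claim passages from Azure AI Search.",
)

# Kick off the Q&A conversation with a question about the claim documents.
user_proxy.initiate_chat(assistant, message="What was the location of the accident in claim 1?")
```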
Final thoughts:

The assistant agent answered all six questions correctly, which is consistent with my own review of the document information and the facts. This proof of concept demonstrates how to develop an LLM application by integrating the relevant services into a RAG workflow, with the aim of significantly reducing claims processing time in an automobile insurance scenario. As previously mentioned, each step in the RAG workflow is critical to the quality of the response. The assistant agent's system message prompt must be written precisely, because the instructions it contains can change the outcome of the response. Likewise, the logic of the custom search function plays a critical role in the agent's ability to find and synthesize responses to messages.

The accuracy of the responses was assessed manually; ideally, this process should be automated. In an upcoming post, we will look at automated evaluation of RAG workflows. What methods can be used to accurately evaluate and subsequently improve RAG pipelines? Both the retrieval and generation phases of the RAG process require thorough evaluation. What tools can be used to accurately evaluate the end-to-end steps of a RAG workflow, including the extraction, processing, and chunking strategies? How can we compare different chunking methods, such as page-based chunking and recursive character text splitting, as described in this paper? How do the results of the HNSW vector search algorithm compare with those of the exhaustive KNN algorithm? What kinds of evaluation tools are available, and what metrics can be collected from agent-based systems? Is there a single tool that can address all of these questions? We will explore the answers to these questions in upcoming posts.