Enhance the reliability of your generative AI with the new hallucination correction capability

September 24, 2024

Today, we are excited to announce a preview of "correction," a new capability in Azure AI Content Safety's groundedness detection feature. With this enhancement, groundedness detection not only identifies inaccuracies in AI output but also corrects them, further strengthening trust in generative AI technology.

What is groundedness detection?

Groundedness detection identifies ungrounded, or hallucinated, content in AI output, helping developers improve generative AI applications by pinpointing responses that are not anchored in the connected data sources. Since we introduced groundedness detection in March of this year, customers have asked, "What can I do with the detected information other than blocking it?" This highlights a significant challenge in the rapidly evolving generative AI landscape, where traditional content filters are often inadequate for the unique risks posed by generative AI hallucinations.

Introducing the correction capability

That's why we're introducing correction. Empowering customers to understand and act on ungrounded content and misinformation is essential, especially as demand for reliable, accurate AI-generated content continues to grow. Building on the existing groundedness detection capability, this groundbreaking feature enables Azure AI Content Safety to identify and correct hallucinations in real time, before users of generative AI applications ever encounter them.

How correction works

To use groundedness detection, the generative AI application must be connected to the grounding documents used in document summarization and RAG-based Q&A scenarios. Application developers then enable the correction feature. When an ungrounded sentence is detected, a new request is sent to the generative AI model for correction. The LLM evaluates the ungrounded sentence against the grounding document. If the sentence contains nothing relevant to the grounding document, it may be filtered out entirely. However, if the sentence draws on material from the grounding document, the model rewrites the ungrounded sentence so that it aligns with the document.

Step-by-step guide to groundedness detection with correction

Detection: First, Azure AI Content Safety detects ungrounded material in AI-generated content. Hallucination is not an all-or-nothing problem: most ungrounded outputs actually contain some grounded content. That is why groundedness detection pinpoints the specific segments that are ungrounded. Once ungrounded content is identified, the model highlights the specific text that is inaccurate, irrelevant, or fabricated.

Reasoning: Users can enable reasoning. After identifying ungrounded segments, the model generates an explanation of why a particular piece of text was flagged. This transparency is essential because it allows Azure AI Content Safety users to isolate the ungrounded points and assess the severity of the ungroundedness.

Correction: Users can enable correction. When content is flagged as ungrounded, the system starts a rewriting process in real time. Inaccuracies are revised to align with the connected data sources, and the correction happens before users ever see the initial ungrounded content.

Output: Finally, the corrected content is returned to the user.
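To make the flow concrete, here is a minimal sketch of calling the groundedness detection REST API with correction enabled. The preview API version, request field names (such as groundingSources, correction), and response field names (such as ungroundedDetails, correctionText) are assumptions based on the public preview announcement and may differ from the final API; consult the current Azure AI Content Safety reference for exact names. Note that the reasoning and correction options may also require connecting an Azure OpenAI resource.

```python
import os
import requests

# Assumed environment variables for an Azure AI Content Safety resource.
ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
KEY = os.environ["CONTENT_SAFETY_KEY"]

# Preview API version and field names below are assumptions, not guaranteed.
url = f"{ENDPOINT}/contentsafety/text:detectGroundedness?api-version=2024-09-15-preview"

body = {
    "domain": "Generic",          # or "Medical" for medical-domain text
    "task": "Summarization",      # or "QnA" for RAG-based Q&A scenarios
    "text": "The patient was prescribed 500mg of ibuprofen twice daily.",
    "groundingSources": [
        # The grounding document(s) the output should be checked against.
        "The patient was prescribed 200mg of ibuprofen twice daily."
    ],
    "correction": True,           # ask the service to rewrite ungrounded sentences
}

resp = requests.post(url, json=body, headers={"Ocp-Apim-Subscription-Key": KEY})
resp.raise_for_status()
result = resp.json()

if result.get("ungroundedDetected"):
    # Each detail pinpoints one ungrounded segment of the input text.
    for detail in result.get("ungroundedDetails", []):
        print("ungrounded segment:", detail.get("text"))
    # The corrected, grounded rewrite of the input text.
    print("corrected text:", result.get("correctionText"))
```

In this sketch, the dosage in the candidate text contradicts the grounding source, so the service would flag the sentence as ungrounded and return a rewrite aligned with the source.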
What is a generative AI hallucination?

Hallucination refers to the generation of content that lacks support in the underlying data. The phenomenon is particularly relevant for large language models (LLMs), which can unintentionally produce misleading information. The problem is aggravated in high-stakes fields such as medicine, where accurate information is essential. While AI has the potential to improve access to important information, hallucinations can lead to misunderstandings and misrepresentations, posing real risks in these critical areas.

Why correction matters

The introduction of the correction capability is important for several reasons. First, filtering is not always a reasonable mitigation: simply removing flagged content can leave disjointed text and a poor user experience. Correction is the first capability of its kind to go beyond blocking. Second, concerns about hallucination have significantly slowed the deployment of generative AI in high-stakes fields such as medicine; correction helps unblock these applications. Third, concerns about hallucination have delayed the broad deployment of copilots to the public. With correction, organizations can more confidently offer conversational interfaces to their customers.

Other generative AI grounding strategies

Beyond groundedness detection and the new correction capability, there are several steps you can take to strengthen the grounding of your generative AI applications. Key practices include tuning your system message and connecting your generative application to trusted data sources; a minimal system-message sketch appears at the end of this post.

To get started, explore groundedness detection in Azure AI Content Safety.
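As an illustration of the system-message tuning mentioned above, the sketch below passes a grounding-oriented instruction to an Azure OpenAI chat model. It is a hypothetical example: the endpoint, deployment name, and the exact wording of the instruction are placeholders, not prescribed values.

```python
from openai import AzureOpenAI  # assumes the openai Python package (v1+) with Azure support

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<your-key>",                                       # placeholder
    api_version="2024-06-01",
)

# A grounding-oriented system message: instruct the model to answer only
# from the supplied sources and to admit when the sources are silent.
system_message = (
    "Answer using ONLY the provided sources. If the sources do not contain "
    "the answer, say you don't know. Do not add facts that are not in the sources."
)

sources = "The patient was prescribed 200mg of ibuprofen twice daily."

response = client.chat.completions.create(
    model="gpt-4o",  # your Azure OpenAI deployment name (assumption)
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": f"Sources:\n{sources}\n\nQuestion: What dose was prescribed?"},
    ],
)
print(response.choices[0].message.content)
```

A system message like this reduces, but does not eliminate, ungrounded output, which is why pairing it with groundedness detection and correction is useful.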