
Enhance the reliability of your generative AI with new hallucination correction capability



Today, we are excited to announce the preview of “correction,” a new capability within Azure AI Content Safety’s groundedness detection feature. With this enhancement, groundedness detection not only identifies inaccuracies in AI outputs but also corrects them, further strengthening trust in generative AI technologies.


What is groundedness detection?

Groundedness detection is the capability to identify ungrounded, or hallucinated, content in AI outputs. It helps developers improve generative AI applications by pinpointing exactly which responses are not grounded in the connected data sources.

Since we introduced groundedness detection in March of this year, customers have asked, “What can we do with detected ungrounded content besides blocking it?” The question highlights a significant challenge in the rapidly evolving generative AI landscape: traditional content filters are often inadequate for addressing the unique risks posed by generative AI hallucinations.

Introducing the correction capability

That’s why we’re introducing the correction capability. As demand for reliable, accurate AI-generated content continues to grow, it is important to empower customers to understand ungrounded content and misinformation and to take action on it.

Building on the existing groundedness detection capability, this groundbreaking feature enables Azure AI Content Safety to both identify and correct hallucinations in real time, before users of generative AI applications ever encounter them.

How correction works

To use groundedness detection, your generative AI application must be connected to the grounding documents used in document summarization and RAG-based Q&A scenarios. A hedged request sketch follows the list below.

  • Application developers must enable the correction capability.
  • When an ungrounded sentence is detected, a new correction request is triggered to the generative AI model.
  • The LLM then evaluates the ungrounded sentence by comparing it against the grounding document.
  • If the sentence contains no content related to the grounding document at all, it may be filtered out entirely.
  • If it does contain content drawn from the grounding document, the foundation model rewrites the ungrounded sentence so that it aligns with the document.
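
To make this flow concrete, here is a minimal sketch of a groundedness detection request with correction enabled, sent to the Azure AI Content Safety REST API from Python. The endpoint path is the documented text:detectGroundedness operation; the api-version value, the "correction" flag, and the llmResource fields reflect the preview at the time of writing and may change, so treat them as assumptions and verify against the current documentation.

```python
import requests

# Placeholders and preview specifics below are assumptions; verify the
# api-version, "correction" flag, and llmResource fields against the
# current Azure AI Content Safety documentation.
ENDPOINT = "https://<your-content-safety-resource>.cognitiveservices.azure.com"
API_KEY = "<your-content-safety-key>"

response = requests.post(
    f"{ENDPOINT}/contentsafety/text:detectGroundedness",
    params={"api-version": "2024-09-15-preview"},  # preview; subject to change
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
    json={
        "domain": "Generic",            # or "Medical"
        "task": "Summarization",        # or "QnA" for RAG-based Q&A
        # The LLM output to check, and the grounding source(s) it
        # should be supported by:
        "text": "The patient was prescribed 500mg of ibuprofen twice daily.",
        "groundingSources": [
            "Visit notes: patient was prescribed 200mg of ibuprofen once daily."
        ],
        # Enable correction (preview). It rewrites ungrounded sentences
        # using an Azure OpenAI deployment that you supply:
        "correction": True,
        "llmResource": {
            "resourceType": "AzureOpenAI",
            "azureOpenAIEndpoint": "https://<your-aoai-resource>.openai.azure.com",
            "azureOpenAIDeploymentName": "<your-gpt-deployment>",
        },
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```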

Step-by-step guide to groundedness detection

  • Detection: First, Azure AI Content Safety detects ungrounded content in AI-generated output. Hallucination is not an all-or-nothing problem; most ungrounded outputs actually contain some grounded content, which is why groundedness detection pinpoints the specific segments of ungrounded material. Once ungrounded content is identified, the model highlights the specific text that is incorrect, irrelevant, or fabricated.
  • Reasoning: Users can enable reasoning. After identifying ungrounded segments, the model generates an explanation of why a particular piece of text was flagged. This transparency is essential because it allows Azure AI Content Safety users to isolate the ungrounded points and assess the severity of the ungroundedness.
  • Correction: Users can enable correction. When content is flagged as ungrounded, the system starts the rewriting process in real time, correcting the inaccuracies to ensure alignment with the connected data sources. This happens before the user ever sees the initial ungrounded content.
  • Output: Finally, the corrected content is returned to the user, as in the response-handling sketch below.
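
The API response mirrors these steps. The sketch below shows how an application might inspect the result of the request above; the ungroundedDetected, ungroundedPercentage, and ungroundedDetails fields come from the documented detection response, while the "reason" and "correctionText" field names are how the reasoning and correction previews surfaced their output at the time of writing, so check the current API reference before relying on them.

```python
def render_grounded_output(original_text: str, result: dict) -> str:
    """Choose what to show the user from a text:detectGroundedness result.

    Field names for the reasoning and correction previews ("reason",
    "correctionText") are assumptions; verify them against the current
    API reference.
    """
    # Step 1 (detection): nothing was flagged, pass the output through.
    if not result.get("ungroundedDetected"):
        return original_text

    # Detection pinpoints specific ungrounded segments rather than
    # giving a pass/fail verdict on the whole output.
    for detail in result.get("ungroundedDetails", []):
        flagged = detail.get("text", "")
        # Step 2 (reasoning): present only when reasoning was enabled.
        why = detail.get("reason", "reasoning not requested")
        print(f"Ungrounded segment: {flagged!r} ({why})")

    print(f"Ungrounded share: {result.get('ungroundedPercentage', 0.0):.0%}")

    # Steps 3 and 4 (correction and output): when correction was enabled,
    # return the rewritten text so the user never sees the ungrounded draft.
    return result.get("correctionText", original_text)
```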


What is a generative AI hallucination?

Hallucination refers to the generation of content that lacks support in the underlying data. The phenomenon is particularly relevant for large language models (LLMs), which can unintentionally produce misleading information.

The problem is aggravated in high-stakes fields such as medicine, where accurate information is essential. While AI has the potential to improve access to important information, hallucinations can lead to misunderstandings and misrepresentations, posing real risk in these critical areas.

Why correction matters

The introduction of this correction feature is important for several reasons.

  • First, filtering is not always a reasonable mitigation: simply blocking content can produce a poor user experience when the remaining text is incoherent without the removed passages. Correction is the first capability of its kind to go beyond blocking.
  • Second, concerns about hallucinations have significantly slowed the deployment of generative AI in high-stakes fields such as medicine. Correction can help unblock these applications.
  • Third, concerns about hallucinations have delayed the broad deployment of copilots to the public. With correction, organizations can more confidently put conversational interfaces in front of their customers.

Other generative AI grounding strategies

In addition to using groundedness detection and the new correction capability, there are several steps you can take to strengthen the grounding of your generative AI applications. Key practices include tuning your system message and connecting your generative application to trusted data sources; an illustrative system message appears below.
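
For example, a grounding-focused system message can instruct the model to answer only from the supplied sources. The wording below is an illustrative sketch of this practice, not official guidance, and the placeholders are assumptions:

```python
# Illustrative grounding-focused system message for a RAG-style chat
# request; the wording is an assumption, not official guidance.
messages = [
    {
        "role": "system",
        "content": (
            "You are an assistant that answers strictly from the provided "
            "sources. If the sources do not contain the answer, say you do "
            "not know. Never add facts that are not in the sources."
        ),
    },
    {
        "role": "user",
        "content": "SOURCES:\n<retrieved documents>\n\nQUESTION:\n<user question>",
    },
]
```

Pairing a system message like this with trusted, well-curated data sources reduces how often the model produces ungrounded content in the first place, leaving correction as a safety net rather than the primary control.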

Get started with groundedness detection

Groundedness detection with the correction capability is available in preview in Azure AI Content Safety. See the Azure AI Content Safety documentation to get started.




