
Content filtering with Azure AI Studio



In line with Microsoft’s commitment to helping customers use AI products responsibly, Azure OpenAI Service includes a content filtering system that works alongside the core models. This system is powered by Azure AI Content Safety, and it works by running both the prompt and the completion through an ensemble of classification models designed to detect and prevent the output of harmful content.

This model ensemble includes:

  • Multi-class classification models covering four risk categories (hate, sexual, violence, and self-harm) across four severity levels (safe, low, medium, high). They are specially trained and tested in English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. The service can also work in many other languages, but quality may vary.
  • Optional binary classifiers that detect jailbreak risk and known text and code from public repositories.

The default content filtering configuration filters at the medium severity threshold for all four content harm categories, for both prompts and completions. That is, content detected at the medium or high severity level is filtered, while content detected at the low or safe severity level is not. However, you can modify the content filters and configure the severity thresholds at the resource level based on your application requirements.
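The severity-threshold rule can be sketched in a few lines of Python. This is only an illustration of the decision described above, not the service's actual implementation:

```python
# Illustrative sketch of the severity-threshold decision: content
# detected at or above the configured threshold is filtered. This
# models the rule only; the real classification runs server-side.
SEVERITY_ORDER = ["safe", "low", "medium", "high"]

def is_filtered(detected_severity: str, threshold: str = "medium") -> bool:
    """Return True if content detected at `detected_severity` would be
    blocked by a filter configured at `threshold`."""
    return SEVERITY_ORDER.index(detected_severity) >= SEVERITY_ORDER.index(threshold)

# With the default medium threshold: medium and high are filtered,
# safe and low pass through.
print(is_filtered("medium"))  # True
print(is_filtered("low"))     # False
```

Lowering the threshold to low, as we do later in this post, makes `is_filtered("low", "low")` return `True` as well.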

For Azure OpenAI models, only customers who have been approved for modified content filtering have full control over content filtering, including configuring content filters at the high severity level only or turning content filters off. Request modified content filtering through this form: Azure OpenAI Limited Access Review: Modified content filters and abuse monitoring (microsoft.com). Models provided through Models as a Service have content filtering enabled by default, and it cannot be configured.

In this blog post, we’ll look at how content filtering works in Azure AI Studio and how to configure it to suit your specific needs. Below you’ll also find a step-by-step video tutorial to achieve the same results.

Testing Basic Content Filtering in Playground

To test the basic content filtering system integrated into the Azure OpenAI Service, go to Azure AI Studio, enter your AI project, and open the Chat playground. If you are new to Azure AI Studio, follow this step-by-step tutorial to set up a workspace and connect Azure AI service resources to the hub.

To interact with a model in the Chat playground, you will also need an instance of an OpenAI model deployed to your AI project. This document explains how to do that.

For example, we deployed a gpt-4 model instance.

Let’s test our model with a query containing inappropriate content: “Can you have sex on the Contoso TrekMaster camping chair?”. In response, you receive a message that the input is inappropriate, along with an explanation of how to use the chair properly.
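Outside the playground, the same basic filter applies when you call the deployment from code: a prompt blocked at the input stage comes back as an HTTP 400 error whose error code is `content_filter`. A hedged sketch of detecting that case (the endpoint, key, and deployment name are placeholders, and the exact SDK surface may vary by `openai` package version):

```python
# Sketch: detecting a prompt blocked by the content filtering system
# when calling an Azure OpenAI deployment from code. The error body
# shape below follows the "content_filter" error code the service
# returns for blocked prompts.
def was_prompt_filtered(error_body: dict) -> bool:
    """Check whether an Azure OpenAI error body indicates the prompt
    was blocked by the content filter."""
    return error_body.get("error", {}).get("code") == "content_filter"

# Example call site (requires the `openai` package and real
# credentials, so it is shown commented out):
#
# from openai import AzureOpenAI, BadRequestError
# client = AzureOpenAI(azure_endpoint="https://<resource>.openai.azure.com",
#                      api_key="<key>", api_version="2024-02-01")
# try:
#     client.chat.completions.create(
#         model="gpt-4",  # your deployment name
#         messages=[{"role": "user", "content": "..."}])
# except BadRequestError as e:
#     if was_prompt_filtered(e.body or {}):
#         print("Prompt was blocked by the content filter.")
```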


Now let’s test the same model with a different prompt, this one containing sexual content, but using slang instead of explicit terms: “What sleeping bag is big enough for two people to do that in?” In this case, the model provides a relevant response without even identifying the sexual content of the input message.
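Even when a completion is not blocked, Azure OpenAI annotates the response with per-category filter results, so you can see what severities were detected. A sketch of reading them (the field layout follows the documented annotation format, with a `filtered` flag and `severity` label per category; treat exact names as version-dependent):

```python
# Sketch: summarizing the per-category content filter annotations that
# Azure OpenAI attaches to each choice in a response. Field names
# follow the documented annotation format.
def summarize_filter_results(content_filter_results: dict) -> dict:
    """Map each risk category to (detected severity, filtered flag)."""
    return {
        category: (result.get("severity", "safe"), result.get("filtered", False))
        for category, result in content_filter_results.items()
    }

# Example annotation payload like the one returned alongside a choice:
sample = {
    "hate": {"filtered": False, "severity": "safe"},
    "sexual": {"filtered": False, "severity": "low"},
    "violence": {"filtered": False, "severity": "safe"},
    "self_harm": {"filtered": False, "severity": "safe"},
}
print(summarize_filter_results(sample)["sexual"])  # ('low', False)
```

With the default configuration, a low-severity sexual detection like the one above passes through, which is exactly what the slang prompt demonstrates.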


Also, since we are using a base model for these tests, the responses are not grounded in a specific product catalog. Assessing the model's groundedness, however, is beyond the scope of this article.

Creating content filters in Azure AI Studio

For certain scenarios, such as retail customer care, you may want to add a mitigation layer to identify inappropriate input content. To achieve this, first create a content filter that lowers the threshold for sexual content to the minimum, so that sexual content detected at low severity is also blocked. Second, create a blocklist of specific phrases and words you want to block that may not trigger the default filter because they do not contain explicit terms (e.g., slang).

To do this, in Azure AI Studio, select the Content filters tab from the left navigation menu and click the + Create content filter button.


In the Create a content filter window, enter the requested parameters as follows:

name: Performance_content_filter

connection:

Next, set the severity threshold for the sexual category filter to low (the default is medium) for both input and output, as shown in the screenshot below:


In the Deployment section, select the deployments to which you want to apply the content filter. Finally, review and create the filter.

Once a content filter is created and applied to a deployment, you can configure a blocklist of the terms you want to filter in your application. This is enabled via the Blocklists feature, which you can explore and configure in the Blocklists tab.


As with the process of creating a content filter, you need to specify a name for the blocklist and the associated Azure OpenAI connection. You can then add terms to block manually, or import a CSV file to add terms in bulk. For example, in our case, we will add the expression ‘do the deed’.
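Conceptually, a blocklist is an exact-match layer on top of the classifier-based filters: listed phrases are blocked even when no classifier flags them. The real matching runs server-side in Azure AI Content Safety; this local sketch only illustrates the idea:

```python
# Illustration only: a blocklist catches exact terms/phrases that the
# classifier-based filters may miss (e.g., slang). The actual matching
# is performed server-side by Azure AI Content Safety.
def hits_blocklist(text: str, blocked_phrases: list[str]) -> bool:
    """Return True if any blocked phrase appears in the text
    (case-insensitive substring match)."""
    lowered = text.lower()
    return any(phrase.lower() in lowered for phrase in blocked_phrases)

blocklist = ["do the deed"]
print(hits_blocklist("What sleeping bag is big enough to do the deed in?", blocklist))  # True
print(hits_blocklist("What sleeping bag fits two people?", blocklist))  # False
```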


Once you have a blocklist, you can add it to the content filter you created earlier. Go back to the content filter configuration and enable the blocklist toggle for both the input and output. Once enabled, select the blocklist from the drop-down list.


Finally, you can go back to the playground and test the same prompt again to see how the model’s behavior has changed. This time, the prompt should trigger the content filter you just created.


Now you have configured a custom content filter for the model you want to use for inference in your application.

Next Steps

In this blog, we looked at the powerful content filtering system built on top of Azure AI Content Safety, which is ready to use out of the box with Azure OpenAI Service models and can be configured to fit your application's needs. Check out Moderate content and detect harm using Azure AI Content Safety Studio, a self-guided workshop that teaches you how to design and build a content moderation system in Azure AI Content Safety Studio.

Continue learning about real-world use cases for RAI tools in the Azure ecosystem through this dedicated blog series: Responsible AI: From Principles to Practice.




