
Integrating Azure Content Safety with API Management for Azure OpenAI Endpoints



Ensuring the safety and integrity of AI-generated content is paramount in today’s digital environment. Azure Content Safety, combined with Azure API Management, provides a powerful solution for managing and securing Azure OpenAI endpoints. This blog walks you through the integration process, focusing on text analysis and Prompt Shields.

[Animation: content filtering in action]

What is Azure Content Safety?

Azure AI Content Safety provides APIs for detecting harmful or unsafe material in user-generated and AI-generated content. Currently available APIs include:

  • Prompt Shields: Scans user prompts and document text for input attacks against LLMs.
  • Groundedness detection: Verifies that LLM-generated responses are grounded in the source material provided.
  • Protected material text detection: Checks whether AI-generated responses contain copyrighted material.
  • Text/image analysis: Identifies and classifies content severity across the sexual, hate, violence, and self-harm categories.

Why should I integrate with Azure Content Safety?

Azure Content Safety provides advanced algorithms to detect and mitigate harmful content in both user prompts and AI-generated output. When integrated with Azure API Management, you can:

  • Enhance security: Protect your applications from malicious content.
  • Ensure compliance: Meet regulatory standards and guidelines.
  • Improve user experience: Provide safer, more reliable services to your users.

Onboarding the Azure Content Safety API to Azure API Management

As with other APIs, you can onboard the Azure Content Safety API to Azure API Management by importing its latest OpenAPI specification. API Management features then let you enable managed identity-based authentication for the Content Safety API and communicate with it privately over Private Endpoints.
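As a minimal sketch of the managed identity option (assuming your APIM instance has a system-assigned managed identity that has been granted an appropriate Cognitive Services role on the Content Safety resource), the inbound policy of the onboarded API could look like this:

    <inbound>
        <base />
        <!-- Acquire a Microsoft Entra token for the Content Safety resource using APIM's
             managed identity; with no output variable specified, the policy places the
             token in the Authorization header of the backend request automatically -->
        <authentication-managed-identity resource="https://cognitiveservices.azure.com" />
    </inbound>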

Onboarding Azure OpenAI to Azure API Management

There are several ways to onboard Azure OpenAI (AOAI) to API Management, and the benefits of doing so are widely discussed; my earlier blog post and the accompanying GitHub repo explain this in more detail.
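One common approach, sketched below (assuming the APIM managed identity holds the Cognitive Services OpenAI User role on the AOAI resource), is to acquire a token in the inbound policy and forward it as a bearer token:

    <inbound>
        <base />
        <!-- Obtain a Microsoft Entra token for Azure OpenAI with APIM's managed identity -->
        <authentication-managed-identity resource="https://cognitiveservices.azure.com" output-token-variable-name="msi-access-token" ignore-error="false" />
        <!-- Override any client-supplied Authorization header with the managed identity token -->
        <set-header name="Authorization" exists-action="override">
            <value>@("Bearer " + (string)context.Variables["msi-access-token"])</value>
        </set-header>
    </inbound>

This keeps AOAI keys out of client hands entirely: clients authenticate to APIM with subscription keys, while APIM authenticates to AOAI with its own identity.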

Integrating Azure OpenAI APIs with Content Safety in API Management

AI Gateway Labs is a repository that explores different gateway patterns through a series of hands-on labs. To demonstrate this integration, two content safety scenarios are included as labs.

The pattern for this integration leverages APIM’s send-request policy: the gateway first calls the Content Safety API and forwards the request to Azure OpenAI only if the content is deemed safe.

The snippet below concatenates all the prompts from a request bound for OpenAI and asks the Prompt Shields API whether an attack was detected.

        
    <send-request mode="new" response-variable-name="safetyResponse">
        <set-url>@("https://" + context.Request.Headers.GetValueOrDefault("Host") + "/contentsafety/text:shieldPrompt?api-version=2024-02-15-preview")</set-url>
        <set-method>POST</set-method>
        <set-header name="Ocp-Apim-Subscription-Key" exists-action="override">
            <value>@(context.Variables.GetValueOrDefault<string>("SubscriptionKey"))</value>
        </set-header>
        <set-header name="Content-Type" exists-action="override">
            <value>application/json</value>
        </set-header>
        <set-body>@{
            // Concatenate the content of every message into a single user prompt
            string[] documents = new string[] {};
            string[] messages = context.Request.Body.As<JObject>(preserveContent: true)["messages"].Select(m => m.Value<string>("content")).ToArray();
            JObject obj = new JObject();
            JProperty userProperty = new JProperty("userPrompt", string.Concat(messages));
            JProperty documentsProperty = new JProperty("documents", new JArray(documents));
            obj.Add(userProperty);
            obj.Add(documentsProperty);
            return obj.ToString();
        }</set-body>
    </send-request>
    <choose>
        <when condition="@(((IResponse)context.Variables["safetyResponse"]).StatusCode == 200)">
            <choose>
                <when condition="@((bool)((IResponse)context.Variables["safetyResponse"]).Body.As<JObject>()["userPromptAnalysis"]["attackDetected"] == true)">
                    <!-- Prompt Shields flagged the prompt as an attack: block the request -->
                    <return-response>
                        <set-status code="400" reason="Bad Request" />
                        <set-body>@{
                            var errorResponse = new
                            {
                                error = new
                                {
                                    message = "The prompt was identified as an attack by the Azure AI Content Safety service."
                                }
                            };
                            return JsonConvert.SerializeObject(errorResponse);
                        }</set-body>
                    </return-response>
                </when>
            </choose>
        </when>
        <otherwise>
            <return-response>
                <set-status code="500" reason="Internal Server Error" />
            </return-response>
        </otherwise>
    </choose>
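Note the fail-closed design in the otherwise branch above: if the call to the Content Safety service itself does not return a 200, the policy responds with a 500 rather than forwarding an unscreened prompt to OpenAI.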

The snippet below concatenates all the prompts from requests coming into OpenAI and checks whether they fall within acceptable severity limits for the hate, sexual, self-harm, and violence categories.

        
    <send-request mode="new" response-variable-name="safetyResponse">
        <set-url>@("https://" + context.Request.Headers.GetValueOrDefault("Host") + "/contentsafety/text:analyze?api-version=2023-10-01")</set-url>
        <set-method>POST</set-method>
        <set-header name="Ocp-Apim-Subscription-Key" exists-action="override">
            <value>@(context.Variables.GetValueOrDefault<string>("SubscriptionKey"))</value>
        </set-header>
        <set-header name="Content-Type" exists-action="override">
            <value>application/json</value>
        </set-header>
        <set-body>@{
            // Analyze the concatenated prompt across the four harm categories
            string[] categories = new string[] {"Hate","Sexual","SelfHarm","Violence"};
            JObject obj = new JObject();
            JProperty textProperty = new JProperty("text", string.Concat(context.Request.Body.As<JObject>(preserveContent: true)["messages"].Select(m => m.Value<string>("content")).ToArray()));
            JProperty categoriesProperty = new JProperty("categories", new JArray(categories));
            JProperty outputTypeProperty = new JProperty("outputType", "EightSeverityLevels");
            obj.Add(textProperty);
            obj.Add(categoriesProperty);
            obj.Add(outputTypeProperty);
            return obj.ToString();
        }</set-body>
    </send-request>
    <choose>
        <when condition="@(((IResponse)context.Variables["safetyResponse"]).StatusCode == 200)">
            <set-variable name="thresholdExceededCategory" value="@{
                var thresholdExceededCategory = "";

                // Maximum allowed severity per category
                var categoryThresholds = new Dictionary<string, int>()
                {
                    { "Hate", 0 },
                    { "Sexual", 0 },
                    { "SelfHarm", 0 },
                    { "Violence", 0 }
                };

                foreach (var category in categoryThresholds)
                {
                    var categoryAnalysis = ((JArray)((IResponse)context.Variables["safetyResponse"]).Body.As<JObject>(preserveContent: true)["categoriesAnalysis"]).FirstOrDefault(c => (string)c["category"] == category.Key);

                    if (categoryAnalysis != null && (int)categoryAnalysis["severity"] > category.Value)
                    {
                        // Threshold exceeded for the category
                        thresholdExceededCategory = category.Key;
                        break;
                    }
                }
                return thresholdExceededCategory;
            }" />
            <choose>
                <when condition="@((string)context.Variables["thresholdExceededCategory"] != "")">
                    <return-response>
                        <set-status code="400" reason="Bad Request" />
                        <set-body>@{
                            var errorResponse = new
                            {
                                error = new
                                {
                                    message = "The content was filtered by the Azure AI Content Safety service for the category: " + (string)context.Variables["thresholdExceededCategory"]
                                }
                            };
                            return JsonConvert.SerializeObject(errorResponse);
                        }</set-body>
                    </return-response>
                </when>
            </choose>
        </when>
        <otherwise>
            <return-response>
                <set-status code="500" reason="Internal Server Error" />
            </return-response>
        </otherwise>
    </choose>
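With outputType set to EightSeverityLevels, each category is scored on a 0–7 scale. The thresholds above are all 0, so any detected severity blocks the request. To relax filtering for a given category, raise its value in the dictionary; for example (an illustrative adjustment, not a recommendation):

        var categoryThresholds = new Dictionary<string, int>()
        {
            { "Hate", 2 },
            { "Sexual", 2 },
            { "SelfHarm", 0 },
            { "Violence", 2 }
        };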

Conclusion

Integrating API Management and Azure Content Safety for your Azure OpenAI endpoints is a powerful way to enhance the security and reliability of your AI applications. Following these steps will help ensure that AI-generated content is safe, compliant, and user-friendly.

For more information, see the Azure Content Safety documentation and the Azure API Management documentation.




