
New Responsible AI Features for Building Trustworthy AI



Last week, in the opening hour of the Microsoft AI Tour in Mexico City, we announced Trustworthy AI, Microsoft's unified approach to delivering on our privacy, safety, and security promises and capabilities.

All of Microsoft's AI innovations are grounded in a comprehensive set of AI principles, policies, and standards, including foundational commitments such as the Secure Future Initiative, our AI Principles, and our Privacy Principles. These commitments put you in control of your data and give you confidence that it is protected, whether at rest or in transit. We are transparent about where data is located and how it is used, and we are committed to ensuring that AI systems are developed responsibly, with privacy, safety, and security in mind from the start. We apply our own best practices and learnings to provide features and tools that help you build AI applications that meet the same high standards we hold ourselves to. Whether you are a business leader, an AI developer, or a Copilot enthusiast, Microsoft provides the foundation you need to build and use trustworthy generative AI.

With new announcements come new products and features! In his blog post, Takeshi Numoto, Executive Vice President and Chief Marketing Officer, details Trustworthy AI and all the product announcements shared from Mexico City. Here in our Responsible AI Blog Series, though, we'd love to take the time to dig deeper into the Responsible AI announcements!

Evaluation

Evaluating generative AI applications is a key part of the measurement stage of the generative AI development lifecycle. While you might be tempted to rely on intuition, or to apply mitigations based on sporadic feedback about your app's output, running evaluations with a systematic, methodical approach provides signals that can inform targeted mitigation steps.

We’ve announced four new features in public preview that will help you more easily evaluate and improve the output of your applications.

  • Risk and safety evaluations for indirect prompt injection attacks
  • Risk and safety evaluations for protected material (text)
  • Math-based metrics: ROUGE, BLEU, METEOR, and GLEU
  • Synthetic data generation and simulator for non-adversarial tasks
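The math-based metrics in the list above all score generated text by its n-gram overlap with a reference answer. As a rough illustration of the idea behind them (a self-contained sketch, not the SDK's implementation), clipped unigram precision, the core building block of BLEU, could look like this:

```python
from collections import Counter

def unigram_precision(candidate: str, reference: str) -> float:
    """Clipped unigram precision: the fraction of candidate tokens that
    appear in the reference, with each token credited at most as many
    times as it occurs in the reference.

    Illustrative sketch only; the real BLEU metric also combines higher
    n-gram orders and a brevity penalty.
    """
    cand_tokens = candidate.lower().split()
    if not cand_tokens:
        return 0.0
    ref_counts = Counter(reference.lower().split())
    # Clip each candidate token's credit to its frequency in the reference.
    matched = sum(min(count, ref_counts[token])
                  for token, count in Counter(cand_tokens).items())
    return matched / len(cand_tokens)

score = unigram_precision("the cat sat on the mat", "the cat is on the mat")
# 5 of 6 candidate tokens are covered by the reference.
```

The SDK's evaluators wrap established implementations of these metrics, so in practice you would call them rather than hand-roll the math; the sketch just shows why a reference dataset is required for this family of metrics.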

In her blog post, my colleague Minsoo explains these new features in detail and provides a step-by-step tutorial so you can try them out today!

One of the key changes you'll notice is that we've migrated the evaluators from the promptflow-evals package to the new Azure AI Evaluation SDK. We recommend developing a migration plan to move your existing evaluations to the new SDK. If you continue to use the promptflow-evals package in an existing evaluation, you may encounter errors about missing inputs: some attribute names have changed, so your existing dataset may be using the old names.
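If your dataset still uses the old attribute names, a small preprocessing step can bring each row in line with the new SDK's expected inputs. The rename map below is an assumption for illustration (check your evaluators' documented input names for the actual renames that apply to you):

```python
# Hypothetical rename map from legacy promptflow-evals attribute names to
# new ones; verify against the input names your evaluators actually expect.
RENAMES = {"question": "query", "answer": "response"}

def migrate_row(row: dict) -> dict:
    """Return a copy of a dataset row with legacy keys renamed.

    Keys not present in RENAMES are passed through unchanged.
    """
    return {RENAMES.get(key, key): value for key, value in row.items()}

legacy = {"question": "What is RAI?", "answer": "Responsible AI.", "context": "docs"}
migrated = migrate_row(legacy)
# migrated uses the new key names expected by the new SDK's evaluators.
```

Running a pass like this over your evaluation dataset before invoking the new evaluators avoids the missing-input errors mentioned above.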

Looking for reference documentation? Don't worry, we've got you covered! You can explore the new evaluation package here: azure.ai.evaluation package | Microsoft Learn.

Azure AI Content Safety

Azure AI Content Safety provides a robust set of guardrails for generative AI, with a growing list of features and capabilities for you to explore in our RAI Playlist. We also have a new Operate AI responsibly with Azure AI Studio learning path, which offers guided instruction on applying these features with a UI-based or code-first approach. And there are some exciting new additions to the content safety feature set.
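Content Safety analyses assign each harm category (Hate, SelfHarm, Sexual, Violence) a severity score, and your application decides what to do at each severity level. As a minimal sketch of that decision step, using a simplified result shape rather than the SDK's actual response objects, per-category thresholds could be applied like this:

```python
# Assumed simplified shape of an analysis result: category name -> severity.
# The real Azure AI Content Safety SDK returns richer response objects;
# this sketch only illustrates the threshold-check pattern.
DEFAULT_THRESHOLDS = {"Hate": 2, "SelfHarm": 2, "Sexual": 2, "Violence": 2}

def is_blocked(severities: dict, thresholds: dict = DEFAULT_THRESHOLDS) -> bool:
    """Block content whose severity meets or exceeds any category threshold.

    Categories missing from the analysis result are treated as severity 0.
    """
    return any(severities.get(category, 0) >= limit
               for category, limit in thresholds.items())

result = {"Hate": 0, "SelfHarm": 0, "Sexual": 0, "Violence": 4}
blocked = is_blocked(result)  # blocked: Violence severity 4 meets the threshold
```

The right thresholds depend on your scenario; a gaming chat and a medical assistant will reasonably draw the line at different severity levels per category.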

As a former XR developer, I was thrilled to learn that Unity uses a content filtering model for Muse Chat! Carlotta recently shared more about content filtering in her Content filtering with Azure AI Studio post. Read the article to find out more!

Data protection

When developing AI solutions in highly regulated sectors, data protection is a paramount consideration. Data privacy is a common concern, but ensuring that sensitive or regulated data remains encrypted at all times, including while it is being processed in the cloud, is a challenge we are actively addressing. Take a look at our latest features designed to solve these problems.

As a heads up, we're starting with a limited preview of Azure AI confidential inference. Have a use case in mind? We'd love to hear from you! Complete this form to sign up for the preview: Sign up for Azure AI Confidential Inference Preview (office.com).

Next steps

Many of our existing products and Azure services support our approach to Trustworthy AI. If you're wondering where best to start as a developer, it's a good idea to assess your existing generative AI solutions and determine exactly how they integrate features that support security, privacy, and safety. Maybe you'll spot an opportunity to take advantage of one of the new features I shared in this post, or maybe an idea will come to you while exploring the new Operate AI responsibly with the Azure AI Studio learning path!

If you're still in the ideation phase and haven't yet typed a keystroke into your code editor, check out Pablo's lesson on the generative AI application lifecycle, part of our Generative AI for Beginners course. As you review the lesson, pinpoint areas of opportunity for integrating privacy, safety, and security. There's more than one way, and we can't wait to see what you decide!

Whichever path you pursue, just know that you are making great strides toward building trustworthy AI solutions!




