
Model governance and secure access to on-premises or custom VNET resources

by info.odysseyx@gmail.com


New enterprise security and governance capabilities in Azure AI, October 2024

At Microsoft, we are focused on helping our customers build and use trusted AI that is secure and private. This month, we’re excited to introduce new security features that make Azure AI enterprise ready, so organizations can deploy and scale GenAI solutions with confidence.

  • Improved model governance: Control which GenAI models are available for deployment in the Azure AI Model Catalog using new built-in and custom policies.
  • Secure access to hybrid resources: Securely access on-premises and custom VNET resources from your managed VNET using Application Gateway for your training, fine-tuning, and inference needs.

Below, we share detailed information about these enterprise features and guidance to help you get started.

Control which GenAI models are available for deployment in the Azure AI Model Catalog using new built-in and custom policies (public preview)

The Azure AI Model Catalog offers more than 1,700 models for developers to explore, evaluate, customize, and deploy. This breadth of choice fosters innovation and flexibility, but it can also pose significant challenges for businesses that need to ensure all deployed models meet their internal policies, security standards, and compliance requirements. Azure AI administrators can now use new Azure policies to restrict which models can be deployed from the Azure AI Model Catalog, for better control and compliance.

With this update, organizations can use pre-built policies for Model as a Service (MaaS) and Model as a Platform (MaaP) deployments, or follow detailed instructions to create custom policies for the Azure OpenAI Service and other AI services.

1) Applying built-in policies for MaaS and MaaP

Administrators can now use the built-in policy “[Preview] Azure Machine Learning deployments should only use approved registry models” in the Azure portal. This policy lets administrators specify which MaaS and MaaP models are approved for deployment. When developers access the Azure AI Model Catalog from Azure AI Studio or Azure Machine Learning, only approved models can be deployed. See the documentation here: Control AI model deployment with built-in policies – Azure AI Studio.
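When assigning the built-in policy, an administrator supplies the list of approved models as an assignment parameter. The sketch below shows the general shape of that parameter payload in Python; the parameter name (`allowedModels`) is an assumption for illustration, so check the built-in policy definition in the Azure portal for the exact parameter schema before assigning.

```python
# Sketch: parameters payload for assigning the built-in policy that
# restricts deployments to approved registry models.
# ASSUMPTION: the parameter name "allowedModels" is illustrative; the
# real name comes from the built-in policy definition's schema.

def build_assignment_parameters(approved_models):
    """Wrap a list of approved model asset IDs in the ARM
    parameter-value shape used by policy assignments."""
    return {"allowedModels": {"value": approved_models}}

approved = [
    # azureml registry URIs identify catalog models by name and version
    "azureml://registries/azureml/models/Phi-3-mini-4k-instruct/versions/1",
    "azureml://registries/azureml-meta/models/Llama-2-7b/versions/1",
]

params = build_assignment_parameters(approved)
```

The resulting dictionary can be serialized to JSON and passed as the assignment parameters when the policy is assigned at a subscription or resource group scope.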

2) Creating custom policies for Azure AI services and the Azure OpenAI Service

Administrators can now create custom policies for Azure AI services and Azure OpenAI Service models by following detailed instructions. Custom policies allow administrators to tailor deployments to their organization’s compliance requirements by controlling which services and models development teams can access. See the documentation here: Control AI model deployment with custom policies – Azure AI Studio.
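To make the idea concrete, here is a sketch of what such a custom policy definition might look like, expressed as a Python dictionary mirroring the Azure Policy JSON schema (mode, policyRule, if/then). The resource type and model-name alias follow the pattern used in the linked documentation, but verify them against the current docs before deploying; this is illustrative, not a drop-in definition.

```python
# Sketch of a custom Azure Policy definition that denies Azure OpenAI
# model deployments whose model name is not on an approved list.
# ASSUMPTION: the "field" alias for the deployed model's name should be
# verified against the documentation before use.

policy_definition = {
    "mode": "All",
    "parameters": {
        "allowedModelNames": {
            "type": "Array",
            "metadata": {"description": "Model names approved for deployment"},
        }
    },
    "policyRule": {
        "if": {
            # Match Azure OpenAI / Cognitive Services model deployments
            # whose model name is NOT in the approved list.
            "allOf": [
                {
                    "field": "type",
                    "equals": "Microsoft.CognitiveServices/accounts/deployments",
                },
                {
                    "field": "Microsoft.CognitiveServices/accounts/deployments/model.name",
                    "notIn": "[parameters('allowedModelNames')]",
                },
            ]
        },
        # Deny the deployment request when the condition matches
        "then": {"effect": "deny"},
    },
}
```

Serialized to JSON, this structure is what you would paste into the policy definition editor in the Azure portal, supplying the approved model names at assignment time.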

Together, these policies provide comprehensive coverage for defining an allowlist of approved models and enforcing it across Azure Machine Learning and Azure AI Studio.

Securely access on-premises and custom VNET resources from your managed VNET using Application Gateway (public preview)

A virtual network securely isolates a customer’s network traffic from other tenants, even when those tenants share the same physical servers. Previously, Azure AI customers could only access Azure resources from managed virtual networks (VNETs) through private endpoints (the list of supported private endpoints is here). This meant that hybrid cloud customers using a managed VNET could not access machine learning resources outside their Azure subscription, such as on-premises resources or resources in a custom Azure VNET that are not supported by private endpoints.

Azure Machine Learning and Azure AI Studio customers can now securely access on-premises or custom VNET resources for training, fine-tuning, and inference scenarios from their managed VNET using Application Gateway. Application Gateway is a load balancer that makes routing decisions based on the URL of an HTTPS request, and it supports private connections from a managed VNET to any resource that uses the HTTP or HTTPS protocol. This feature allows customers to access the machine learning resources they need outside their Azure subscription without compromising their security posture.
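The core idea behind URL-based routing can be illustrated in a few lines: path-based rules map the path portion of an incoming HTTPS request to a backend pool. The pool and path names below are hypothetical, and real routing is configured declaratively on the gateway resource rather than in application code; this sketch only shows the matching logic conceptually.

```python
# Conceptual illustration of Application Gateway's URL path-based routing:
# the request path is matched against rule prefixes to pick a backend pool.
# ASSUMPTION: all pool names and path prefixes here are hypothetical.

from urllib.parse import urlparse

PATH_RULES = {
    "/artifactory/": "jfrog-backend-pool",    # e.g. a private JFrog Artifactory
    "/snowflake/": "snowflake-backend-pool",  # e.g. a private Snowflake endpoint
}
DEFAULT_POOL = "default-backend-pool"

def route(url: str) -> str:
    """Pick the backend pool whose path prefix matches the request URL."""
    path = urlparse(url).path
    for prefix, pool in PATH_RULES.items():
        if path.startswith(prefix):
            return pool
    return DEFAULT_POOL
```

Because the gateway sits inside the private network, the managed VNET only needs a private connection to the gateway; the gateway then fans requests out to the on-premises or custom VNET backends.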

Supported scenarios for Azure AI customers using hybrid cloud

Application Gateway is currently validated to support private connections to JFrog Artifactory, Snowflake databases, and private APIs, covering critical use cases for enterprises.

  1. JFrog Artifactory: used to store custom Docker images for training and inference pipelines, to store trained models ready for deployment, and to ensure the security and compliance of ML models and dependencies used in production. JFrog Artifactory can reside in a different Azure VNET from the one used to access your ML workspace or AI Studio project, so a private connection is required to secure data transferred from your managed VNET to JFrog Artifactory.
  2. Snowflake: a cloud data platform where users store data for training and fine-tuning models on managed compute. To send and receive data securely, the connection to your Snowflake database must be fully private and not exposed to the internet.
  3. Private APIs: used with managed online endpoints, which deploy machine learning models for real-time inference. Certain private APIs may be required to deploy managed online endpoints and must be reached over a private network.

Getting started with Application Gateway

To get started with Application Gateway in Azure Machine Learning, see How to access on-premises resources – Azure Machine Learning | Microsoft Learn. To get started with Application Gateway in Azure AI Studio, see How to access on-premises resources – Azure AI Studio | Microsoft Learn.

Securely access on-premises and custom VNET resources using Application Gateway

How to analyze and optimize Azure OpenAI service costs using Microsoft Cost Management

One more thing: as organizations increasingly rely on AI for their core operations, closely tracking and managing AI spending has become essential. In this month’s blog, the Microsoft Cost Management team highlights tools that help you analyze, monitor, and optimize costs for the Azure OpenAI Service. Read it here:

Build secure, production-ready GenAI apps using Azure AI Studio

Are you ready to go deeper? Check out these top resources:

Whether you attend in person or virtually, we look forward to seeing you at Microsoft Ignite 2024! In our sessions, we’ll share the latest on Azure AI and take a closer look at its enterprise-grade security features.





Source link
