Model governance and secure access to on-premises or custom VNET resources
October 31, 2024

New enterprise security and governance capabilities in Azure AI, October 2024

At Microsoft, we are focused on helping our customers build and use trusted AI that is secure and private. This month, we're excited to introduce new security features that help organizations deploy and scale GenAI solutions with confidence:

Improved model governance: Control which GenAI models are available for deployment in the Azure AI Model Catalog using new built-in and custom policies.

Secure access to hybrid resources: Securely access on-premises and custom VNET resources from your managed VNET using Application Gateway for your training, fine-tuning, and inference needs.

Below, we share detailed information about these enterprise features and guidance to help you get started.

Control which GenAI models are available for deployment in the Azure AI Model Catalog using new built-in and custom policies (public preview)

The Azure AI Model Catalog provides more than 1,700 models for developers to explore, evaluate, customize, and deploy. This breadth of choice fosters innovation and flexibility, but it can also present significant challenges for enterprises that need to ensure all deployed models meet their internal policies, security standards, and compliance requirements. Azure AI administrators can now restrict which models are available for deployment from the Azure AI Model Catalog using new Azure policies, for better control and compliance. With this update, organizations can use built-in policies for Models as a Service (MaaS) and Models as a Platform (MaaP) deployments, or follow detailed instructions to create custom policies for the Azure OpenAI Service and other AI services.
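To make the allow-list idea concrete, here is a minimal sketch of what a custom Azure Policy rule of this kind could look like, built as plain JSON in Python. The resource type and field alias below are illustrative assumptions modeled on the usual policyRule pattern, not values taken from the linked documentation; consult the official instructions for the authoritative aliases.

```python
import json

def make_model_allowlist_rule(allowed_models):
    """Sketch of an Azure Policy rule that denies serverless (MaaS)
    deployments of any model not on an allow list.

    NOTE: the resource type and field alias here are illustrative
    assumptions; see the 'Control AI model deployment with custom
    policies' documentation for the exact aliases to use.
    """
    return {
        "if": {
            "allOf": [
                {
                    "field": "type",
                    "equals": "Microsoft.MachineLearningServices/workspaces/serverlessEndpoints",
                },
                {
                    "not": {
                        # Hypothetical alias for the deployed model's ID.
                        "field": "Microsoft.MachineLearningServices/workspaces/serverlessEndpoints/modelSettings.modelId",
                        "in": allowed_models,
                    }
                },
            ]
        },
        "then": {"effect": "deny"},
    }

# Example: only one approved model ID (placeholder value).
rule = make_model_allowlist_rule([
    "azureml://registries/azureml-meta/models/Llama-2-7b",
])
print(json.dumps(rule, indent=2))
```

The "deny" effect blocks non-approved deployments at creation time, which is the enforcement behavior the built-in and custom policies described below provide.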
1) Applying built-in policies for MaaS and MaaP

Administrators can now use the built-in policy "[Preview] Azure Machine Learning deployments should only use approved registry models" in the Azure portal. This policy lets administrators specify which MaaS and MaaP models are approved for deployment. When developers access the Azure AI Model Catalog from Azure AI Studio or Azure Machine Learning, only approved models can be deployed. See the documentation here: Control AI model deployment with built-in policies – Azure AI Studio.

2) Building custom policies for AI services and the Azure OpenAI Service

Administrators can now create custom policies for Azure AI services and models in the Azure OpenAI Service by following detailed instructions. Custom policies let administrators tailor deployments to their organization's compliance requirements by controlling which services and models development teams can access. See the documentation here: Control AI model deployment with custom policies – Azure AI Studio.

Together, these policies provide comprehensive coverage for defining a list of approved models and enforcing it across Azure Machine Learning and Azure AI Studio.

Securely access on-premises and custom VNET resources from your managed VNET using Application Gateway (public preview)

A virtual network securely isolates your network traffic from other tenants, even when other customers use the same physical servers. Previously, Azure AI customers could only reach Azure resources from a managed virtual network (VNET) through private endpoints (the list of supported private endpoints is here). This meant that hybrid cloud customers using a managed VNET could not access machine learning resources outside their Azure subscription, such as on-premises resources or resources in a custom Azure VNET that are not supported by private endpoints.
Azure Machine Learning and Azure AI Studio customers can now securely access on-premises or custom VNET resources for training, fine-tuning, and inference scenarios from their managed VNET using Application Gateway. Application Gateway is a load balancer that makes routing decisions based on the URL of an HTTPS request. It supports private connections from a managed VNET to any resource that uses the HTTP or HTTPS protocol. This feature lets customers access the machine learning resources they need outside their Azure subscription without compromising their security posture.

Supported scenarios for Azure AI customers using hybrid cloud

Application Gateway is currently validated to support private connections to JFrog Artifactory, Snowflake databases, and private APIs, covering critical use cases for enterprises:

JFrog Artifactory: Used to store custom Docker images for training and inference pipelines, to store trained models ready for deployment, and to ensure the security and compliance of ML models and dependencies used in production. JFrog Artifactory can be in a different Azure VNET, separate from the VNET used to access your ML workspace or AI Studio project, so a private connection is required to secure data transferred from your managed VNET to JFrog Artifactory.

Snowflake: A cloud data platform where users can store data for training and model fine-tuning on managed compute. To send and receive data securely, the connection to your Snowflake database must be completely private and not exposed to the internet.

Private APIs: Used for managed online endpoints, which deploy machine learning models for real-time inference. Certain private APIs may be required to deploy managed online endpoints and must be secured over a private network.
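In practice, enabling any of the scenarios above amounts to adding a managed-network outbound rule that creates a private endpoint to the Application Gateway's private frontend. The sketch below builds such a rule as a plain dictionary mirroring the workspace outbound-rule schema; the resource ID is a placeholder and the subresource target name is an assumption, so verify both against the linked how-to guides before using them.

```python
def app_gateway_outbound_rule(name, gateway_resource_id):
    """Sketch of a managed-VNET outbound rule that privately connects a
    workspace to an Application Gateway frontend.

    NOTE: the 'subresource_target' value is an assumption modeled on the
    private-endpoint rule schema; check the official docs for the exact
    value your workspace requires.
    """
    return {
        "name": name,
        "type": "private_endpoint",
        "destination": {
            # Placeholder: substitute your Application Gateway's resource ID.
            "service_resource_id": gateway_resource_id,
            "subresource_target": "appGwPrivateFrontendIpIPv4",
            "spark_enabled": False,
        },
    }

# Example rule routing traffic to a (hypothetical) Artifactory gateway.
rule = app_gateway_outbound_rule(
    "appgw-to-artifactory",
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
    "Microsoft.Network/applicationGateways/<gateway-name>",
)
```

Because the gateway terminates the connection and forwards it over HTTP or HTTPS, the same rule shape covers Artifactory, Snowflake, and private-API backends; only the gateway's backend pool configuration changes.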
Getting started with Application Gateway

To get started with Application Gateway in Azure Machine Learning, see: How to access on-premises resources – Azure Machine Learning | Microsoft Learn.

To get started with Application Gateway in Azure AI Studio, see: How to access on-premises resources – Azure AI Studio | Microsoft Learn.

One more thing: how to analyze and optimize Azure OpenAI Service costs using Microsoft Cost Management

As organizations increasingly rely on AI for their core operations, closely tracking and managing AI spending has become essential. In this month's blog, the Microsoft Cost Management team highlights tools that help you analyze, monitor, and optimize your costs for the Azure OpenAI Service. Read it here.

Build secure, production-ready GenAI apps using Azure AI Studio

Ready to go deeper? Check out these top resources. Whether you attend in person or virtually, we look forward to seeing you at Microsoft Ignite 2024, where we'll share the latest on Azure AI and take a closer look at its enterprise-grade security features.
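As a back-of-the-envelope companion to the cost-management guidance mentioned above: Azure OpenAI bills per 1,000 tokens, with separate rates for prompt (input) and completion (output) tokens. The helper below estimates spend from token counts; the per-1K prices used in the example are placeholder assumptions, so look up current rates on the Azure pricing page.

```python
def estimate_openai_cost(prompt_tokens, completion_tokens,
                         prompt_price_per_1k, completion_price_per_1k):
    """Estimate Azure OpenAI spend from token counts.

    Azure OpenAI charges per 1,000 tokens, with distinct rates for
    prompt and completion tokens. The prices passed in are supplied by
    the caller; the example values below are placeholder assumptions.
    """
    return (prompt_tokens / 1000) * prompt_price_per_1k \
         + (completion_tokens / 1000) * completion_price_per_1k

# Hypothetical month: 2M prompt tokens, 500K completion tokens,
# at illustrative (not actual) rates of $0.003 / $0.004 per 1K tokens.
monthly = estimate_openai_cost(2_000_000, 500_000,
                               prompt_price_per_1k=0.003,
                               completion_price_per_1k=0.004)
print(f"Estimated monthly cost: ${monthly:.2f}")  # $8.00 at these rates
```

Pairing a quick estimate like this with the Microsoft Cost Management tooling helps you sanity-check actual billed usage against expected token volumes.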