
Code AI apps on Azure - Python, Prompty & Visual Studio



Video transcript:

– With Azure AI Studio now generally available, you have everything you need to build your own custom AI app experiences. In fact, today I'm going to show you how you can use the Studio alongside your code as you build apps grounded on your own data. We'll also look at how to orchestrate different models together in a multi-step process, set up automated deployments, run evaluations, and continuously monitor production apps as part of GenAIOps. And the Studio is more than just a user interface.

– You can take the Azure AI resources you create in it and call them directly from your code. It's also a great way to become familiar with the components of an AI app, and you can access it at ai.azure.com. Even without signing in, you can view the model catalog to explore the latest models.

– There are more than 1,700 models from OpenAI, Microsoft, Meta, Mistral, and others. Model benchmarks then provide comparisons of model accuracy, with averages across different models, to help you choose the right model for your app experience. The same applies to coherence, which evaluates how well a model produces smooth, natural-sounding responses, and groundedness, which measures how well a model sticks to the source material it is given.

– Fluency looks at the language proficiency of answers, while relevance measures how well the model's responses meet expectations based on the prompt. Next, moving to AI Services gives you access to pre-built Azure AI services for building multimodal applications that can integrate capabilities such as speech, language and translation, vision and OCR, and Content Safety to detect harmful or inappropriate inputs and outputs. Now we'll show you how to use the Studio in Azure AI together with your coding environment to build apps grounded on your custom data.

– Sign in with your Azure account to create a project for your app. Let's do that and give it a name. A project lets developers securely connect to the Azure resources they need to use their models, configures assets for the project, and provisions the necessary resources so you can start working right away. Now that the core resources are deployed, the next step is to deploy a model.

– Let's go back to the model catalog and choose GPT-4o, a versatile, high-performance model. I'll press deploy, leave the default name and deployment details, and confirm. Deploying the model takes only seconds, and if you have multiple models deployed, you can easily switch between them. Now let's go to the playground, where the GPT-4o model is automatically selected, to test it. The simplest way to give your app a personality is to define a system message.

– Think of this as a set of instructions added ahead of the first message of every chat session. We'll keep the default system message and append the instructions: respond cheerfully and use emojis. For context, the app I'm building is meant to help people make purchasing decisions about Contoso's outdoor products. I'll prompt it to generate a response based on its open-world training and ask, "What kind of shoes do you have?" Since it has no context or knowledge of our products, it responds generically, as expected, but it's still cheerful and adds fun emojis as instructed.
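For reference, a system message is just the first entry in the list of messages sent with each chat request. Here's a minimal sketch in Python; the wording of the system message below is illustrative, not the Studio's actual default:

```python
# The system message is prepended to every chat session's messages.
# The exact wording here is illustrative.
messages = [
    {
        "role": "system",
        "content": "You are an AI assistant that helps people find information "
                   "about Contoso outdoor products. Respond cheerfully and use emojis.",
    },
    {"role": "user", "content": "What kind of shoes do you have?"},
]
```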

– To ground responses in our own reference data, the model needs a pattern known as Retrieval Augmented Generation, or RAG. Let's select the Add your data tab in the playground and add a new data source, since we're starting from scratch. You're given the option to either use an existing Azure AI Search index or automatically create a new index by uploading files from your device or pointing to an existing data source.

– Let's select the upload option and choose the files we want. I have dozens of product information files to choose from, and I'll choose a search service for my index. I'll vectorize my data and also enable keyword matching with hybrid search. If you go ahead and click Create, in a few moments a vector search index is created that can be used to retrieve data related to the user's prompt.

– Now, if vector search is new to you, think of vectors as something like GPS coordinates for your data. When a prompt is submitted, it is also converted into a vector embedding, and grounding data is retrieved by finding the closest coordinate matches in the data set. With that, let's go back to the playground. When you try the prompt again, you'll see that with the vector search index added, the model generates an answer grounded on our data. It even references our files.
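To make the coordinate analogy concrete, here is a toy sketch of the nearest-neighbor matching behind vector search. The vectors and product names are made up for illustration; real embeddings come from an embedding model and have hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two vectors: closer to 1.0 means pointing the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Made-up 3-D "coordinates"; real embeddings have hundreds of dimensions.
documents = {
    "hiking shoes": [0.9, 0.1, 0.3],
    "camping tent": [0.2, 0.8, 0.5],
}
prompt = [0.85, 0.15, 0.25]  # embedding of "What kind of shoes do you have?"

best = max(documents, key=lambda name: cosine_similarity(documents[name], prompt))
print(best)  # -> "hiking shoes", the closest coordinate match
```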

– And you can see all the details about our hiking shoes by clicking on one of the references. Now that it works in the playground, let's build this functionality into our app. First, to set up your coding environment, you need to install the OpenAI SDK and the Azure Identity library. You can then call the Azure OpenAI service from your Python code using your Azure credentials, without an API key.
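Here's a minimal sketch of that keyless setup; the endpoint and API version are placeholders for your own resource values:

```python
# pip install openai azure-identity
# Keyless authentication to Azure OpenAI using Microsoft Entra ID.
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    azure_ad_token_provider=token_provider,  # no API key needed
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="gpt-4o",  # the deployment name created earlier
    messages=[
        {"role": "system", "content": "Respond cheerfully and use emojis."},
        {"role": "user", "content": "What kind of shoes do you have?"},
    ],
)
print(response.choices[0].message.content)
```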

– Now that we're done setting up VS Code, let's go back to the playground for a moment and export the code we've built there so far. We'll copy that code to our clipboard, go back into VS Code, and paste it in. You can then open a terminal and run the code. When it runs, you'll see a response output containing the same information you saw in the playground, but in JSON format, so you can integrate it with the rest of your code. And there we are: you have now implemented retrieval augmented generation in your app.
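The exported code wires the chat completion to the search index through the service's "data sources" option. A minimal sketch of that shape, reusing the client from the previous snippet; the search endpoint, index name, and embedding deployment are placeholders:

```python
# Ground the chat completion on the Azure AI Search index ("on your data").
# `client` is the AzureOpenAI client from the previous snippet; the endpoint,
# index name, and embedding deployment below are placeholders.
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What kind of shoes do you have?"}],
    extra_body={
        "data_sources": [
            {
                "type": "azure_search",
                "parameters": {
                    "endpoint": "https://<your-search>.search.windows.net",
                    "index_name": "<your-index>",
                    "query_type": "vector_simple_hybrid",  # vector + keyword search
                    "embedding_dependency": {
                        "type": "deployment_name",
                        "deployment_name": "<your-embedding-deployment>",
                    },
                    "authentication": {"type": "system_assigned_managed_identity"},
                },
            }
        ]
    },
)
print(completion.model_dump_json(indent=2))  # JSON response, including citations
```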

– Of course, a real app can take multiple steps, using different prompts and models to accomplish a given task, with an orchestrator executing the logical steps based on the user's prompt. For example, here you see a solution that performs a coordinated sequence of tasks to create an article for Contoso Creative. One of them is the product discovery step we just built.

– There is a researcher task that gathers the right information, such as current trends, to provide writing ideas; a product task that takes the research summary and links it back to actual products in the catalog; and finally a writer task that takes the information from the previous two steps and creates the article. Each step is a discrete operation within the end-to-end orchestration and builds on the previous steps using the accumulated session history.
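In outline, the flow looks something like the sketch below. The function names are hypothetical stand-ins; in the real sample each step calls its own model with its own prompt and builds on the prior steps' output:

```python
# A minimal sketch of the multi-step orchestration described above.
def research(topic: str) -> str:
    """Researcher task: gather trends and background for writing ideas."""
    return f"Research summary about {topic}"

def find_products(research_summary: str) -> str:
    """Product task: link the research summary back to catalog products."""
    return f"Related products for: {research_summary}"

def write_article(research_summary: str, products: str) -> str:
    """Writer task: compose the article from the previous steps' output."""
    return f"Article based on {research_summary} and {products}"

def orchestrate(topic: str) -> str:
    summary = research(topic)                # step 1
    products = find_products(summary)        # step 2
    return write_article(summary, products)  # step 3

print(orchestrate("camping shelters"))
```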

– Now let's run it. Before creating the article, it takes time to parse information on the web and correlate it with our product catalog. It has noticed a trend in quinzee shelters for camping. The article is then sent to the editor task for review. The editor approves or rejects the article and provides feedback if it requires further revision. Here is an article introducing the quinzee shelter, and it even references the TrailMaster X4 Tent, a similar item in our catalog. Now let's look at how to build this.

– A new playground is now available in the Studio, along with additional tools for building systems like this. But I'll show you how to build this in code, and how to set up GenAIOps while building your app. This time, instead of creating a project in the Studio, let's get a head start using one of the new templates for VS Code. You can deploy all the required resources in one step with the azd up command, which takes a few minutes to run.

– While provisioning runs, you can browse the available templates at aka.ms/AIAppTemplates, with more to come. Then, back in VS Code, you need to run the command to configure a GitHub pipeline, which I've already done ahead of time. Next, let's go to the orchestrator Python file. Here you can see three of the four tasks, each using a different model and prompt.

– Now let's move to the researcher's Prompty file, which we can iterate on locally in VS Code if we want. A Prompty file is essentially a prompt template, and each task uses one to provide instructions to the model. Every task in this example has its own Prompty file that is run from Python code. One thing to note is that with so many interactions happening, it can be really difficult to determine where a problem is occurring and debug issues in your code, and that's where tracing comes in. You can test this by running the orchestrator file; when you run it, you'll see all of the raw prompt and completion information.
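For reference, a Prompty template can be executed directly from Python. A minimal sketch, assuming the prompty package is installed; the input name below is a hypothetical placeholder for whatever the template declares:

```python
# pip install "prompty[azure]"
# Run a Prompty template from Python. The file name matches the sample's
# researcher step; the input name below is a hypothetical placeholder.
import prompty
import prompty.azure  # registers the Azure OpenAI invoker

result = prompty.execute(
    "researcher.prompty",
    inputs={"instructions": "Find the latest trends in camping shelters"},
)
print(result)
```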

– From there you can open the traces. You'll see each step and model call taken to write the article, with details to better understand and debug the app, and you can see exactly what was sent to the model at each step of the trace. Now that you've completed manual testing of the experience, you can move on to the next step and evaluate how it performs on a larger data set.

– For this, we'll use an orchestrator file to run an evaluation with the Azure AI evaluators embedded in the Python file. Once it's up and running, it uses four built-in evaluators, for relevance, fluency, coherence, and groundedness, to score the run on each. Scores are on a scale of 1 to 5, where higher is better, along with the averages. From here, you can use the Prompty templates to refine your prompt engineering, add content filters, or update the system messages, and evaluate and iterate until you're satisfied with the results.
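Here's a minimal sketch of one of those built-in evaluators, assuming the azure-ai-evaluation package; parameter names can vary by package version, and the model config values are placeholders. FluencyEvaluator, CoherenceEvaluator, and GroundednessEvaluator follow the same pattern:

```python
# pip install azure-ai-evaluation
from azure.ai.evaluation import RelevanceEvaluator

# Placeholder config for the judge model used to score responses.
model_config = {
    "azure_endpoint": "https://<your-resource>.openai.azure.com",
    "azure_deployment": "gpt-4o",
}

relevance = RelevanceEvaluator(model_config)
score = relevance(
    query="What kind of shoes do you have?",
    response="We carry the TrailWalker hiking shoes and more.",
)
print(score)  # e.g. a relevance score on the 1-5 scale
```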

– Now let's move on to production and look at our monitoring options. If you're already using Application Insights, you can add reporting visuals to your dashboard. In effect, this is a dashboard for your deployed app containing all the metrics you've configured it to monitor, including evaluation scores across runs and token usage per model over time, even when you're testing multiple models.
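One common way to route an app's traces and metrics into Application Insights is the Azure Monitor OpenTelemetry distro. A minimal sketch, with a placeholder connection string:

```python
# pip install azure-monitor-opentelemetry
# Send OpenTelemetry traces and metrics to Application Insights so they
# appear in the monitoring dashboard. The connection string is a placeholder.
from azure.monitor.opentelemetry import configure_azure_monitor

configure_azure_monitor(
    connection_string="InstrumentationKey=<your-key>",
)
```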

– And with transaction search, you can get the same detailed trace information we saw earlier in VS Code, except here it spans all executions and traces over a defined time range. And with that, we've built and tested a GenAI app using the retrieval augmented generation pattern.

– As you've seen, you now have everything you need to build custom experiences with Azure AI right from your code. Go to ai.azure.com to get started, and check out the code samples at aka.ms/AIAppTemplates. If you haven't subscribed to Mechanics yet, please do, and thank you for watching.




