Building Your Own Chatbot using Azure OpenAI Capabilities
October 4, 2024

Hi, I'm Sarita, a Microsoft Learn Student Ambassador. This guide walks you through setting up a chatbot with OpenAI's GPT-4o model using the Azure OpenAI service. At the end, you'll find a link to our repository so you can integrate the chatbot into your own website.

What is the Azure OpenAI service?

"The Azure OpenAI service provides REST API access to OpenAI's powerful language models, including GPT-4o, GPT-4o mini, GPT-4 Turbo with Vision, GPT-4, GPT-3.5-Turbo, and the Embeddings model series. These models can be easily applied to specific tasks, including but not limited to content generation, summarization, image understanding, semantic retrieval, and natural language-to-code translation. Users can access the service through REST API, Python SDK, or a web-based interface in Azure OpenAI Studio." – Microsoft

Prerequisites

- An Azure account with an active Azure subscription.
- An account in Azure OpenAI Studio.

Understand key terms

Before you begin setup, it's important to familiarize yourself with some key terms.

- Playground: A user-friendly interface to explore, prototype, and test AI assistants without any coding. Perfect for iterating on and experimenting with new ideas.
- Azure AI services: A suite of tools and services that help you build intelligent applications using artificial intelligence.
- Resource group: A container that holds related Azure resources for management and billing.
- Keys and endpoints: The security credentials and URL that allow your application to access Azure services.
- Model deployment: The process of configuring and running a specific AI model for use.
- System message: Instructions provided to an AI model to guide its behavior and set the context for its responses.
- Prompt: Input provided to the AI model to elicit a specific response.
- Token: Tokens can be thought of as fragments of words. Before the API processes a request, the input is split into tokens. Tokens are not cut exactly where words begin or end; they can include trailing spaces and sub-words. Some rules of thumb for reasoning about token length: 1 token ≈ 4 characters of English text, 1 token ≈ 3/4 of a word, and 100 tokens ≈ 75 words. (A short token-counting sketch appears in the pricing section below.)

Choose the right model at the right price

- GPT-4o, GPT-4o mini & GPT-4 Turbo: The latest and most capable Azure OpenAI models, with multimodal versions that accept both text and images as input.
- GPT-4: A set of models that improve on GPT-3.5 and can understand and generate natural language and code.
- GPT-3.5: A set of models that improve on GPT-3 and can understand and generate natural language and code.
- Embeddings: A set of models that convert text into numerical vector form to enable text-similarity tasks.
- DALL-E: A set of models that generate original images from natural language.
- Whisper: A set of preview models that transcribe and translate speech to text.
- Text to speech (preview): A set of preview models that synthesize text into speech.

– Microsoft Learn, Azure OpenAI service documentation

The Azure OpenAI service offers two main pricing models: Standard (on-demand) and Provisioned Throughput Units (PTUs). Let's look at them one by one.

Standard (On-Demand) pricing: You pay only for what you use.
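Because on-demand billing is per token, it helps to be able to estimate how many tokens a prompt will use before you send it. The snippet below is a small sketch using the open-source `tiktoken` library; the library choice and the fallback encoding name are my assumptions and are not part of the original setup steps.

```python
import tiktoken

def count_tokens(text: str, model: str = "gpt-4o") -> int:
    """Return an approximate token count for `text` under the given model's tokenizer."""
    try:
        encoding = tiktoken.encoding_for_model(model)
    except KeyError:
        # Fall back to a general-purpose encoding if the model name is unknown.
        encoding = tiktoken.get_encoding("o200k_base")
    return len(encoding.encode(text))

if __name__ == "__main__":
    prompt = "Write a message inviting your friends to your birthday party."
    # Roughly len(prompt) / 4, matching the 1-token-per-4-characters rule of thumb above.
    print(count_tokens(prompt))
```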
Costs are calculated based on the number of tokens processed, where tokens are chunks of text (both input and output). Each 1,000 tokens has a set cost, and the more capable the model (e.g., GPT-4), the higher the per-token cost. This model is flexible and works well if you're not sure how much you'll use the service or don't need guaranteed capacity.

Provisioned Throughput Units (PTUs): This model provides consistent performance and guaranteed throughput with lower latency. You pay a fixed hourly rate per PTU, so your costs are predictable. Although there is still a cost for input and output tokens, PTUs provide faster and more reliable access, making this model ideal for large-scale usage or scenarios where latency is critical. PTUs also have minimum purchase requirements (you must buy a certain number of units to use the service at this level).

Comparison:
- Standard (On-Demand): Pay only for tokens; suitable for small or variable usage.
- PTU: Fixed hourly rate plus token cost; better suited for stable, high-volume workloads.

Example (why I use GPT-4o as a student): GPT-4o has a lower cost per token than models such as GPT-4 Turbo, which makes it cost-effective for students and small-scale users, and it doesn't require paying an hourly per-PTU price, unlike GPT-4 Turbo.

Let's get started

Step 1: Create an Azure and Azure OpenAI account

To get started, you'll need to set up an Azure account if you don't already have one. Visit the Azure and Azure OpenAI websites to sign up.

Step 2: Create a new resource group

Search for "Resource group" in the Azure search box and click "Create" to start a new resource group. Enter the required details:
- Subscription: Select your Azure subscription.
- Resource group name: Give the group a unique name.
- Region: Select the Azure region that suits you.
Click "Review + create", then "Create", and wait a few minutes for the resource group to be set up.

Step 3: Create a new Azure OpenAI resource

Search for "Azure OpenAI" in the Azure search box and click "Create" from the menu bar. Enter the following details:
- Subscription: Select a subscription.
- Resource group: Select the resource group you just created.
- Region: Specify the region.
- Name: Give the OpenAI resource a name.
- Pricing tier: Select the appropriate pricing tier.
Click "Next: Network" and choose "All networks, including the internet, can access this resource." Click "Next: Tags" to proceed, then finally click "Create" and submit. Wait a few minutes for the resource to be set up.

Step 4: Go to Azure AI Studio

From the Azure dashboard, open the project overview, then select "Chat" in the project playground and configure your assistant:
- Role: Assign a role to the bot from the drop-down menu.
- System message: Instructions on how the assistant should behave. For example, you might instruct it to adopt a Shakespearean persona, using rich metaphors and poetic language.
- Examples: Sample user inputs and expected assistant outputs, so the bot can learn to respond better.
Enter a test question for your chatbot, such as "Write a message inviting your friends to your birthday party," to help you evaluate it.
Choose "Deploy to web app" from the deployment target drop-down, select "Create a new web app", enter the required information, enable chat history in the web app, and click "Deploy".

Step 5: Congratulations! You have successfully created your assistant. You can now connect it to a frontend using Python and Flask to integrate your chatbot into your website.
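Before wiring up the full web app, it can help to see what a single call to your deployed model looks like from Python. The snippet below is a minimal sketch, not the exact code from the repository: it assumes the `openai` Python package (v1+), environment variable names AZURE_OPENAI_KEY, AZURE_OPENAI_ENDPOINT, and AZURE_OPENAI_DEPLOYMENT, and an illustrative API version string.

```python
import os
from openai import AzureOpenAI

# Credentials come from the key and endpoint of your Azure OpenAI resource.
client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_KEY"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_version="2024-06-01",  # assumed; use the version listed for your resource
)

response = client.chat.completions.create(
    model=os.environ["AZURE_OPENAI_DEPLOYMENT"],  # your deployment name, not the base model name
    messages=[
        # The system message mirrors the playground configuration from Step 4.
        {"role": "system", "content": "You are a Shakespearean assistant who speaks in rich metaphors and poetic language."},
        {"role": "user", "content": "Write a message inviting your friends to your birthday party."},
    ],
)
print(response.choices[0].message.content)
```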
Here's how to proceed:

- Find your key and endpoint: Use the key and endpoint of the Azure resource you created previously, and store them safely in a `.env` file.
- Choose a code editor: Open your favorite code editor and start coding. (A minimal Flask backend sketch is included after the resources list below.)
- See the repository: Check out my repository to see the full code.

Conclusion

With these steps, you now have a fully functional chatbot powered by Azure OpenAI capabilities. The possibilities are endless, and you can further customize and enhance the assistant to suit your needs. Whether for personal or business applications, leveraging Azure's powerful AI services opens up a world of innovation. Happy coding!

Resources

- GenAI for Beginners
- What is Azure AI Studio?
- Basics of the Azure OpenAI service
- Get started with the Azure OpenAI assistant
- Azure Samples – OpenAI
- Azure OpenAI service
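As a closing illustration of the key-and-endpoint step above, here is a minimal sketch of a Flask backend that loads credentials from a `.env` file and forwards a user message to the deployed model. It is not the repository's code: the route name, environment variable names, and API version are assumptions, and it assumes the `flask`, `python-dotenv`, and `openai` (v1+) packages are installed.

```python
import os
from dotenv import load_dotenv
from flask import Flask, jsonify, request
from openai import AzureOpenAI

load_dotenv()  # reads AZURE_OPENAI_KEY, AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_DEPLOYMENT from .env

app = Flask(__name__)
client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_KEY"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_version="2024-06-01",  # assumed; match your resource
)

@app.route("/chat", methods=["POST"])
def chat():
    # The frontend posts JSON like {"message": "..."} to this endpoint.
    user_message = request.get_json().get("message", "")
    response = client.chat.completions.create(
        model=os.environ["AZURE_OPENAI_DEPLOYMENT"],  # deployment name
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    )
    return jsonify({"reply": response.choices[0].message.content})

if __name__ == "__main__":
    app.run(debug=True)
```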