
Azure OpenAI, accelerating our path.

by info.odysseyx@gmail.com


OpenAI evolves at an impressive pace, and we have to keep up. It is common to feel the urge to try out its features, yet be unsure where to start and how to do it without compromising the confidentiality of our data.

To help with that, Azure offers accelerators for OpenAI. The azure-search-openai-demo accelerator assists us all the way from setting up the infrastructure to deploying sample applications. My impression is that it is very convenient to use thanks to the "azd" CLI; see how to install it here: Install Azure Developer CLI | Microsoft Learn
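As a quick reference, these are the install commands from the azd documentation at the time of writing (check the link above for the current ones for your platform):

```shell
# Windows (winget)
winget install microsoft.azd

# macOS (Homebrew)
brew tap azure/azd && brew install azd

# Linux (install script)
curl -fsSL https://aka.ms/install-azd.sh | bash
```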

Just run a few commands and your environment is configured in Azure. I simply created a new folder, without cloning any repository, and ran these commands:

azd init -t azure-search-openai-demo  
azd auth login  
azd up

1-) azd init -t azure-search-openai-demo

2-) azd auth login

3-) azd up

At the end of the run, which took roughly 30 minutes, I could see these resources created:
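If you prefer the terminal over the portal, a quick way to inspect what was provisioned (assuming azd and the Azure CLI are installed and logged in; the resource group name below is a placeholder for the one azd created in your environment):

```shell
# Show the environment azd deployed, including service endpoints and resources
azd show

# Or list everything inside the resource group azd provisioned
az resource list --resource-group <your-azd-resource-group> --output table
```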

[screenshot: list of created resources]

A visual map of the resources:

[screenshot: resource map]

And the sample applications deployed:

[screenshot: accelerator sample application]

Files created by running the azd init -t azure-search-openai-demo command:

[screenshot: files created by azd init]

Now just ask questions. The accelerator ships with a series of sample documents that are taken from the local "data" folder and uploaded to Azure.

[screenshot]

Now we are ready to ingest our own data. There is a guide in the accelerator's own documentation titled Document indexing for chat apps. To start, I will upload one of my blog articles, called "Escalando nodes e escalando pods no AKS | Wilson Santos | Medium": just generate a PDF and place the file inside the accelerator's "data" folder, directly on your local machine.

The ingestion is done by running the PowerShell script "prepdocs". The command to run it is the following:

.\scripts\prepdocs.ps1

We can confirm that it ran to completion successfully:
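Another sanity check (service name and key below are placeholders; the REST call itself is the documented Azure AI Search "Count Documents" operation) is to ask the index how many documents it now holds:

```shell
# Returns the number of documents in the index as a plain integer
curl "https://<your-search-service>.search.windows.net/indexes/gptkbindex/docs/\$count?api-version=2023-11-01" \
  -H "api-key: <your-search-admin-key>"
```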

[screenshot]

The accelerator app has a React front end and a Python back end, and we can evolve these sample apps to implement the features our own application needs. But if, like me, you are used to C#, nothing better than exploring the Azure.AI.OpenAI and SemanticKernel SDKs, which I present next.

Nothing like writing code to understand things better!

Now let's build a program that talks to the accelerator's infrastructure from a .NET console app. The program calls Azure OpenAI, composes the messages, and answers questions based on the indexed documents. For that, we will use the Azure.AI.OpenAI package; in this example, version 2.0.0 is used.

install-package Azure.AI.OpenAI

First step: create a client of the AzureOpenAIClient class:

AzureOpenAIClient azureClient = new(endpoint, credential);

The endpoint and the credential can be retrieved from the Azure OpenAI instance, in the "Keys and Endpoint" section:

[screenshot]

 var credential = new AzureKeyCredential("...");

Next, let's create an instance of the SearchClient class. This class handles the connection to the search service.

Here is the example:

var searchEndpoint = new Uri("...");
var searchCredential = new AzureKeyCredential("...");
var indexName = "gptkbindex";
var searchClient = new SearchClient(searchEndpoint, indexName, searchCredential);
var searchOptions = new SearchOptions
{
    Size = 5 // Number of documents to retrieve
};
var searchResults = searchClient.Search<SearchDocument>(text, searchOptions);
var retrievedDocuments = searchResults.Value.GetResults().Select(result => result.Document["content"].ToString());
var context = string.Join("\n", retrievedDocuments);

The SearchClient class needs a security key. To find it, open the search service instance and go to "Keys":

[screenshot]

The endpoint is on the Overview page:

[screenshot]

Now let's create an instance of the ChatClient class, which requests the completion from the model deployed in Azure OpenAI.

And here lies the great secret of RAG: everything is prompt. We take the search result from the SearchClient, store it in the context variable, and join it to the prompt.

var prompt = $"Contexto: {context}\nPergunta: {text}";
ChatClient chatClient = azureClient.GetChatClient("gpt-4o");
ChatCompletion completion = chatClient.CompleteChat(new List<ChatMessage>
{
new SystemChatMessage("Você é um assistente atencioso"),
new UserChatMessage(prompt),

});

To get the value of the deployment name field ("gpt-4o" in this case), open the Azure OpenAI instance, select "Model deployments", and click "Manage Deployments". This opens Azure OpenAI Studio, where you can see the deployed model:
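If you prefer the CLI to the portal, the deployments (and their names) can also be listed with the Azure CLI (resource group and account names below are placeholders):

```shell
# List the model deployments on an Azure OpenAI account;
# the Name column is the value to use as the deployment name in code
az cognitiveservices account deployment list \
  --resource-group <your-resource-group> \
  --name <your-openai-account> \
  --output table
```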

[screenshot]

The packages used are the ones in the using directives below, and the final code ended up like this:

using Azure;
using Azure.AI.OpenAI;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;
using OpenAI.Chat;


public class Program
{

    static async Task Main(string[] args)
    {
        while (true)
        {
            Console.WriteLine("Digite uma Pergunta");
            var question = Console.ReadLine();
            if (question != null)
            {
                var result = await AskingChatCompletionWithSearchsAsync(question);
                Console.WriteLine(result);
            }

        }

    }

    static async Task<string> AskingChatCompletionWithSearchsAsync(string text)
    {

        var endpoint = new Uri("https://openiapriv02.openai.azure.com/");
        var credential = new AzureKeyCredential("...");
        AzureOpenAIClient azureClient = new(endpoint, credential);


        var searchEndpoint = new Uri("https://gptkb-kv4atymcdg6pg.search.windows.net");
        var searchCredential = new AzureKeyCredential("...");
        var indexName = "gptkbindex";
        var searchClient = new SearchClient(searchEndpoint, indexName, searchCredential);
        var searchOptions = new SearchOptions
        {
            Size = 5 // Number of documents to retrieve
        };
        var searchResults = searchClient.Search<SearchDocument>(text, searchOptions);
        var retrievedDocuments = searchResults.Value.GetResults().Select(result => result.Document["content"].ToString());
        var context = string.Join("\n", retrievedDocuments);



        var prompt = $"Contexto: {context}\nPergunta: {text}";
        ChatClient chatClient = azureClient.GetChatClient("gpt-4o");
        ChatCompletion completion = await chatClient.CompleteChatAsync(new List<ChatMessage>
        {
            new SystemChatMessage("Você é um assistente atencioso"),
            new UserChatMessage(prompt),

        });


        var result = $"{completion.Role}: {completion.Content[0].Text}";
        return result;
    }
}

I asked a question based on a document that ships as a sample with the accelerator, Benefit_Options.pdf: "Is there a Northwind Health Plus program?". Note that the answer is given based on the documents indexed in the search service:

[screenshot]

The models keep evolving, with new versions controlled by OpenAI. In particular, the GPT-3.5 Turbo and GPT-4 models receive regular updates with new iterations; for example, version 0613 introduced function calling for GPT-3.5 Turbo and GPT-4. Something that happens frequently is the retirement of older model versions, so keep an eye on which versions the SDKs support: Azure OpenAI Service model versions – Azure OpenAI | Microsoft Learn

Semantic Kernel is an SDK that lets you integrate AI into your existing code. It supports several programming languages, including C#, Python, and Java. Through plugins and connectors, Semantic Kernel lets you add capabilities to your application. That flexibility means you are not tied to a specific model or vendor, and you are free to combine AI services according to the needs of the project. Learn more.

Let's do some basic implementations: basic chat, RAG, and plugins. And, importantly, remember that some of the packages used are still in alpha versions and are not recommended for production environments.

We will use the following command for the package installation:

install-package Microsoft.SemanticKernel

Now let's create a representation of what the SDK calls a Kernel. This representation already injects the Chat Completion service from our Azure OpenAI instance:

var builder = Kernel.CreateBuilder();

builder.AddAzureOpenAIChatCompletion(
    "gpt-4o",                      // Azure OpenAI Deployment Name
    "https://wsopenia.openai.azure.com/", // Azure OpenAI Endpoint
    "...");  // Azure OpenAI Key

var kernel = builder.Build();

Semantic Kernel lets us turn text into prompt templates: we can call functions and extract values, using the {{...}} braces. In the example below, variables such as history and input tell the AI which pieces of information are user input and which are just instructions. To learn more about prompts, see Prompt template language | Microsoft Learn

var prompt = @"Chat:{{$history}} User:{{$input}}";

Now we use the CreateFunctionFromPrompt method, specifying the execution settings for the prompt: MaxTokens, Temperature, etc.:

 var kf = kernel.CreateFunctionFromPrompt(prompt, executionSettings: new OpenAIPromptExecutionSettings
 {
     MaxTokens = 500,
     ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
 });

And, using the KernelArguments class, we pass in the values of the variables:

var arguments = new KernelArguments();
arguments["input"] = question;
arguments["history"] = questionHistory;

The final step is to invoke the function asynchronously, passing the "kf" instance and the arguments:

var result = await kernel.InvokeAsync(kf, arguments);

Putting it together, an example implementation:

var arguments = new KernelArguments();
var prompt = @"Chat:{{$history}} User:{{$input}}";
var kf = kernel.CreateFunctionFromPrompt(prompt, executionSettings: new OpenAIPromptExecutionSettings
{
    MaxTokens = 500,
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
});


var questionHistory = "";
arguments["history"] = questionHistory; // ensure {{$history}} resolves on the first call
while (true)
{
    Console.WriteLine("Digite uma Pergunta");

    var question = Console.ReadLine();


    // Add user input                
    arguments["input"] = question;

    var result = await kernel.InvokeAsync(kf, arguments);


    Console.WriteLine("result:" + result);
    //Console.WriteLine("history:" + questionHistory);

    questionHistory += "Chat:" + result + "User:" + question + "\n";
    arguments["history"] = questionHistory;
}

Here the answer comes from the LLM itself, without RAG, for the question: "o que é o Azure OpenAI?"

[screenshot]

Using plugins:

We use plugins so the AI can interact with our existing code, for example querying orders in a database, and we use connectors to integrate with AI services such as Azure OpenAI. Learn more.

Let's look at a simple plugin example. The Order class looks like this:

public sealed class Order
{
    [KernelFunction, Description("Show order details for number")]
    public static string GetOrderDetails([Description("The number to order details")] double number1)
    {
        return $"The detail about order {number1} is your current state is closed.";
    }
}

Notice that we describe its behavior using the KernelFunction and Description attributes.

Then I add this plugin to the builder:

var builder = Kernel.CreateBuilder();
builder.Plugins.AddFromType<Order>();

Once that is done, we can opt in to automatic invocation by configuring the OpenAIPromptExecutionSettings class with the ToolCallBehavior property set to AutoInvokeKernelFunctions:

 // Enable auto function calling  
  OpenAIPromptExecutionSettings openAIPromptExecutionSettings = new()  
  {  
      ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions,  
  };

The rest of the code is the same, but here is a complete example:

var builder = Kernel.CreateBuilder();

builder.AddAzureOpenAIChatCompletion(
  "gpt-4o",                      // Azure OpenAI Deployment Name
  "https://cog-nggxeq6fpjnxg.openai.azure.com/", // Azure OpenAI Endpoint
  "...");  // Azure OpenAI Key

builder.Plugins.AddFromType<Order>();

var kernel = builder.Build();

var arguments = new KernelArguments();
var prompt = @"Chat:{{$history}} User:{{$input}}";
var kf = kernel.CreateFunctionFromPrompt(prompt, executionSettings: new OpenAIPromptExecutionSettings
{
    MaxTokens = 500,
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
});

var questionHistory = "";
arguments["history"] = questionHistory;
while (true)
{
    Console.WriteLine("Digite uma Pergunta");
    var question = Console.ReadLine();
    // Add user input
    arguments["input"] = question;
    var result = await kernel.InvokeAsync(kf, arguments);
    Console.WriteLine("result:" + result);

    questionHistory += "Chat:" + result + "User:" + question + "\n";
    arguments["history"] = questionHistory;
}

Notice the magic of this example: with it, we can create a class that controls the lookup mechanism and, therefore, the content of the answer. The LLM decides when to call it based on the description of the class and its methods. Keep in mind that "function calling" is only available in recent model versions, such as GPT-4.

[screenshot]

Here is the response when I ask "Show order details for number 10" in the console:

Using memories with AI Search:

Here we use a connector and a plugin. The connector adds access to AI Search through this implementation:

var memoryBuilder = new MemoryBuilder();
memoryBuilder.WithMemoryStore(new AzureAISearchMemoryStore(
   "https://gptkb-nggxeq6fpjnxg.search.windows.net",
   "...")
);

Besides the Microsoft.SemanticKernel package, we will need two more packages: Microsoft.SemanticKernel.Connectors.AzureAISearch and Microsoft.SemanticKernel.Plugins.Memory.

We also need the TextMemoryPlugin to access the memories. But before accessing them, let's create them, using this implementation:

var memory = memoryBuilder.Build();  
  
const string MemoryCollectionName = "aboutMe";  
  
await memory.SaveInformationAsync(MemoryCollectionName, id: "info1", text: "My name is Andrea");  
await memory.SaveInformationAsync(MemoryCollectionName, id: "info2", text: "I currently work as a tourist operator");  
await memory.SaveInformationAsync(MemoryCollectionName, id: "info3", text: "I currently live in Seattle and have been living there since 2005");  
await memory.SaveInformationAsync(MemoryCollectionName, id: "info4", text: "I visited France and Italy five times since 2015");  
await memory.SaveInformationAsync(MemoryCollectionName, id: "info5", text: "My family is from New York");

kernel.ImportPluginFromObject(new TextMemoryPlugin(memory));

With this, an index called aboutMe is generated in AI Search:

[screenshot: index list]

[screenshot: Search explorer]

[screenshot: index fields]

This index was generated using the text-embedding-ada-002 text embedding model, which is a type of vectorization algorithm.

[screenshot: deployments in Azure OpenAI Studio]

Here we configure the embedding generation, pointing it at the Azure OpenAI instance where the text-embedding-ada-002 model is deployed:

 memoryBuilder.WithTextEmbeddingGeneration((loggerFactory, httpClient) => {
     return new AzureOpenAITextEmbeddingGenerationService(
         "text-embedding-ada-002", // Embedding generation service name
         "https://openiapriv02.openai.azure.com/",
         "...",
         httpClient: httpClient,
         loggerFactory: loggerFactory
     );
 });

Finally, we can ask the chat questions and get answers that come from this index as if they were the chat's own memories. But first we need to enrich the prompt with that information, something like this:

 const string skPrompt = @"  
 ChatBot can have a conversation with you about any topic.  
 It can give explicit instructions or say 'I don't know' if it does not have an answer.  
  
 Information about me, from previous conversations:  
 - {{$fact1}} {{recall $fact1}}  
 - {{$fact2}} {{recall $fact2}}  
 - {{$fact3}} {{recall $fact3}}  
 - {{$fact4}} {{recall $fact4}}  
 - {{$fact5}} {{recall $fact5}}  
  
 Chat:  
 {{$history}}  
 User: {{$userInput}}  
 ChatBot: ";

These values are passed in via KernelArguments:

arguments["fact1"] = "What is my name?";
arguments["fact2"] = "where do I live?";
arguments["fact3"] = "where is my family from?";
arguments["fact4"] = "where have I travelled?";
arguments["fact5"] = "what do I do for work?";

Check out the complete implementation:

var builder = Kernel.CreateBuilder();

builder.AddAzureOpenAIChatCompletion(
         "gpt-35-turbo",                      // Azure OpenAI Deployment Name
         "https://cog-nggxeq6fpjnxg.openai.azure.com/", // Azure OpenAI Endpoint
         "...");  // Azure OpenAI Key


var kernel = builder.Build();

#pragma warning disable SKEXP0001, SKEXP0010, SKEXP0050, SKEXP0020

var memoryBuilder = new MemoryBuilder();


memoryBuilder.WithTextEmbeddingGeneration((loggerFactory, httpClient) => {
    return new AzureOpenAITextEmbeddingGenerationService(
        "text-embedding-ada-002", // Embedding generation service name
        "https://openiapriv02.openai.azure.com/",
        "...",
        httpClient: httpClient,
        loggerFactory: loggerFactory
    );
});


memoryBuilder.WithMemoryStore(new AzureAISearchMemoryStore(
   "https://gptkb-nggxeq6fpjnxg.search.windows.net",
   "...")
);

var memory = memoryBuilder.Build();

const string MemoryCollectionName = "aboutMe";

await memory.SaveInformationAsync(MemoryCollectionName, id: "info1", text: "My name is Andrea");
await memory.SaveInformationAsync(MemoryCollectionName, id: "info2", text: "I currently work as a tourist operator");
await memory.SaveInformationAsync(MemoryCollectionName, id: "info3", text: "I currently live in Seattle and have been living there since 2005");
await memory.SaveInformationAsync(MemoryCollectionName, id: "info4", text: "I visited France and Italy five times since 2015");
await memory.SaveInformationAsync(MemoryCollectionName, id: "info5", text: "My family is from New York");

var questions = new[]
{
    "what is my name?",
    "where do I live?",
    "where is my family from?",
    "where have I travelled?",
    "what do I do for work?",
};


#pragma warning disable SKEXP0050

// TextMemoryPlugin provides the "recall" function
kernel.ImportPluginFromObject(new TextMemoryPlugin(memory));


const string skPrompt = @"
ChatBot can have a conversation with you about any topic.
It can give explicit instructions or say 'I don't know' if it does not have an answer.

Information about me, from previous conversations:
- {{$fact1}} {{recall $fact1}}
- {{$fact2}} {{recall $fact2}}
- {{$fact3}} {{recall $fact3}}
- {{$fact4}} {{recall $fact4}}
- {{$fact5}} {{recall $fact5}}

Chat:
{{$history}}
User: {{$userInput}}
ChatBot: ";

var chatFunction = kernel.CreateFunctionFromPrompt(skPrompt, new OpenAIPromptExecutionSettings { MaxTokens = 200, Temperature = 0.8 });

#pragma warning disable SKEXP0050

var arguments = new KernelArguments();

arguments["fact1"] = "What is my name?";
arguments["fact2"] = "where do I live?";
arguments["fact3"] = "where is my family from?";
arguments["fact4"] = "where have I travelled?";
arguments["fact5"] = "what do I do for work?";

arguments[TextMemoryPlugin.CollectionParam] = MemoryCollectionName;
arguments[TextMemoryPlugin.LimitParam] = "2";
arguments[TextMemoryPlugin.RelevanceParam] = "0.8";

var history = "";
arguments["history"] = history;
Func<string, Task> Chat = async (string input) => {
    // Save new message in the kernel arguments
    arguments["userInput"] = input;

    // Process the user message and get an answer
    var answer = await chatFunction.InvokeAsync(kernel, arguments);

    // Append the new interaction to the chat history
    var result = $"\nUser: {input}\nChatBot: {answer}\n";

    history += result;
    arguments["history"] = history;

    // Show the bot response
    Console.WriteLine(result);
};


while (true)
{
    Console.WriteLine("Digite uma Pergunta");
    var question = Console.ReadLine();
    if (question != null)
    {
        await Chat(question);
    }

}

[screenshot]

Running the console, this is the response to "What is my name?":

References:

  1. azure-search-openai-demo
  2. azure-search-openai-demo-csharp
  3. Azure/Vector Search-AI Assistant on Cognitive Search Vectors (github.com)
  4. Azure Developer CLI troubleshooting | Microsoft Learn
  5. Azure OpenAI Service documentation | Microsoft Learn
  6. Document indexing for chat apps
  7. AzureAIServicesLandingZone
  8. Creating an AI Agent Using Semantic Kernel | Microsoft Learn
  9. Prompt template language | Microsoft Learn
  10. Azure OpenAI Service model versions – Azure OpenAI | Microsoft Learn
  11. Guest Post: Getting Started with Semantic Kernel for LangChain Users | Semantic Kernel (microsoft.co…
  12. Introduction to the API Manifest Plugin for Semantic Kernel | Semantic Kernel (microsoft.com)




