
How to Use GitHub Models to Elevate Your Projects

by info.odysseyx@gmail.com


Hello, my name is Zil-e-huma, a Beta Microsoft Learn Student Ambassador, and today we are back with another interesting article for our curious tech colleagues. Have you ever wondered whether there is a way to seamlessly integrate AI models into your projects without heavy lifting? Meet GitHub Models, a groundbreaking feature that puts the power of AI at your fingertips. Whether you are an AI enthusiast, a passionate developer, or just looking to make your applications smarter, this guide will show you how to leverage the full power of GitHub Models in a few easy steps.

GitHub Model Discovery: Your Gateway to AI Magic

Imagine having a powerful collection of AI models at your disposal: models that can chat, generate code, and do much more with just a few tweaks. That’s exactly what GitHub Models gives you. To get started, head to GitHub’s Marketplace and select Models. Here you’ll find a variety of options, from Meta’s versatile Llama to OpenAI’s innovative GPT-4o. Think of it as an AI toolkit, ready for you to explore and experiment with!


Once you’ve selected a model, you’ll land on its detail page. Here’s what each part of the layout is about:

  • README: A guide that covers everything you need to know about the model.
  • Evaluation: A handy comparison tool to see how this model compares to other models.
  • Transparency: Get full details on the inner workings of your model.
  • License: See usage permissions and restrictions.

Are you ready to take your first leap? Click the Playground button and the fun begins!

First AI Adventure: Playing with GitHub Models

The Playground is where the magic happens. Here you can ask questions, change parameters, and watch the model respond in real time. This way, you can get custom responses by adjusting settings like max tokens and temperature to see how different configurations affect the output.
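Those settings map directly onto the chat-completions parameters used later in this article. As a quick offline sketch (the `build_chat_request` helper is illustrative, not part of any SDK), this is roughly the request body the Playground assembles from its sliders:

```python
# Sketch of how Playground settings map onto chat-completion request parameters.
# The parameter names match the OpenAI-style API used later in this article;
# the helper itself is illustrative, not part of any SDK.

def build_chat_request(prompt: str, *, temperature: float = 1.0,
                       max_tokens: int = 1000, top_p: float = 1.0) -> dict:
    """Assemble the JSON body a chat-completions call would send."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,  # higher = more varied output
        "max_tokens": max_tokens,    # upper bound on response length
        "top_p": top_p,              # nucleus-sampling cutoff
    }

# A low temperature favors precise, repeatable answers; a high one favors variety.
precise = build_chat_request("Summarize this README.", temperature=0.2)
creative = build_chat_request("Write a poem about Git.", temperature=1.3)
```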

Now let’s take it one step further. Click the Get Started button and a user-friendly overlay will appear. You can choose the programming language and SDK that best suits your needs. Then it’s time to generate your personal access token. Don’t worry, it’s easier than you think. Just follow these steps:

  1. Go to your GitHub settings and open Personal Access Tokens.
  2. Select the beta (fine-grained) token option.
  3. Sign in using your GitHub credentials.
  4. Set an expiration date and give the token a name.
  5. Click Generate Token and copy it.

You now have the keys to the GitHub Models kingdom! Export your token to your environment and you’re ready to start coding.
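As a quick sanity check before coding, a small helper can fail fast when the token isn’t exported (the `require_token` helper name is my own, not part of any SDK):

```python
import os

def require_token(name: str = "GITHUB_TOKEN") -> str:
    """Read a token from the environment, failing fast with a clear message."""
    token = os.environ.get(name)
    if not token:
        raise RuntimeError(f"{name} is not set; export your token before running.")
    return token
```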

Making AI a reality: Integrating GitHub models into your projects

Quick and easy integration

Want to see how easy it is to integrate a model into your project? Let’s use a simple Python example. This code will get you up and running in no time.

import os
from openai import OpenAI

token = os.environ["GITHUB_TOKEN"]
endpoint = "https://models.inference.ai.azure.com"
model_name = "gpt-4o"

client = OpenAI(
    base_url=endpoint,
    api_key=token,
)

response = client.chat.completions.create(
    messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant.",
        },
        {
            "role": "user",
            "content": "What is the capital of France?",
        }
    ],
    model=model_name,
    temperature=1.0,
    max_tokens=1000,
    top_p=1.0
)

print(response.choices[0].message.content)

Now, to run this file, you first need to let the system know about your GitHub token. One way to do this is to export it in your terminal and then run the script:

  1. Export the token: export GITHUB_TOKEN="<your-token>"
  2. Run the file: python <your-file>.py

(Note: treat your token like a password; never commit it to a repository or share it publicly.)

Advanced integration with custom tools

But don’t stop there. Imagine adding custom features to your AI. In the example provided, we are getting flight information between two cities. Here’s how you can enhance the model with custom tools:

import os
import json
from openai import OpenAI

token = os.environ["GITHUB_TOKEN"]
endpoint = "https://models.inference.ai.azure.com"
model_name = "gpt-4o"

# Define a function that returns flight information between two cities (mock implementation)
def get_flight_info(origin_city: str, destination_city: str):
    if origin_city == "Seattle" and destination_city == "Miami":
        return json.dumps({
            "airline": "Delta",
            "flight_number": "DL123",
            "flight_date": "May 7th, 2024",
            "flight_time": "10:00AM"})
    return json.dumps({"error": "No flights found between the cities"})

# Define a function tool that the model can ask to invoke in order to retrieve flight information
tool={
    "type": "function",
    "function": {
        "name": "get_flight_info",
        "description": """Returns information about the next flight between two cities.
            This includes the name of the airline, flight number and the date and time
            of the next flight""",
        "parameters": {
            "type": "object",
            "properties": {
                "origin_city": {
                    "type": "string",
                    "description": "The name of the city where the flight originates",
                },
                "destination_city": {
                    "type": "string", 
                    "description": "The flight destination city",
                },
            },
            "required": [
                "origin_city",
                "destination_city"
            ],
        },
    },
}

client = OpenAI(
    base_url=endpoint,
    api_key=token,
)

messages=[
    {"role": "system", "content": "You are an assistant that helps users find flight information."},
    {"role": "user", "content": "I'm interested in going to Miami. What is the next flight there from Seattle?"},
]

response = client.chat.completions.create(
    messages=messages,
    tools=[tool],
    model=model_name,
)

# We expect the model to ask for a tool call
if response.choices[0].finish_reason == "tool_calls":

    # Append the model response to the chat history
    messages.append(response.choices[0].message)

    # We expect a single tool call
    if response.choices[0].message.tool_calls and len(response.choices[0].message.tool_calls) == 1:

        tool_call = response.choices[0].message.tool_calls[0]

        # We expect the tool to be a function call
        if tool_call.type == "function":

            # Parse the function call arguments and call the function
            function_args = json.loads(tool_call.function.arguments)
            print(f"Calling function `{tool_call.function.name}` with arguments {function_args}")
            callable_func = locals()[tool_call.function.name]
            function_return = callable_func(**function_args)
            print(f"Function returned = {function_return}")

            # Append the function call result to the chat history
            messages.append(
                {
                    "tool_call_id": tool_call.id,
                    "role": "tool",
                    "name": tool_call.function.name,
                    "content": function_return,
                }
            )

            # Get another response from the model
            response = client.chat.completions.create(
                messages=messages,
                tools=[tool],
                model=model_name,
            )

            print(f"Model response = {response.choices[0].message.content}")

With this code, your AI model can not only answer questions, but also actively perform tasks, like finding the next flight from Seattle to Miami. The possibilities are endless!
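One design note on the snippet above: looking up the function via locals() works in a short script, but an explicit registry is safer in larger programs, because only functions you deliberately expose can ever be called by the model. A minimal sketch, reusing the mock get_flight_info from above (the registry and dispatcher names are my own):

```python
import json

def get_flight_info(origin_city: str, destination_city: str) -> str:
    # Same mock implementation as in the article's example.
    if origin_city == "Seattle" and destination_city == "Miami":
        return json.dumps({"airline": "Delta", "flight_number": "DL123",
                           "flight_date": "May 7th, 2024", "flight_time": "10:00AM"})
    return json.dumps({"error": "No flights found between the cities"})

# Only functions listed here can be invoked by a tool call.
TOOL_REGISTRY = {"get_flight_info": get_flight_info}

def dispatch_tool_call(name: str, arguments_json: str) -> str:
    """Look up a registered tool by name and call it with the model's arguments."""
    func = TOOL_REGISTRY.get(name)
    if func is None:
        raise KeyError(f"Model requested unknown tool: {name}")
    return func(**json.loads(arguments_json))
```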

Enhance your workflow with GitHub Codespaces

Want a more seamless experience? With GitHub Codespaces, you can run your models in a fully configured cloud environment. Here’s how.

  1. Go to the playground.
  2. Click Start and then select Run Codespace.
  3. A virtual environment with all dependencies pre-installed will be started, allowing you to start coding right away.

No more configuration worries, just you and your code.

Pricing and Restrictions: What You Need to Know

GitHub Models are powerful, but they come with rate limits on requests and tokens. To go beyond the free tier for production workloads, you pair them with an Azure account and an Azure AI token. Pricing details are available in the Azure AI portal, so you can choose the plan that best suits your needs.
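Because of those rate limits, it is worth wrapping calls in a simple retry with exponential backoff. The sketch below is generic: it assumes a rate-limited call raises an exception (a plain RuntimeError stands in for the SDK's rate-limit error so the snippet stays self-contained):

```python
import random
import time

def call_with_backoff(call, max_attempts: int = 5, base_delay: float = 1.0):
    """Retry `call` with exponential backoff plus jitter on rate-limit errors."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RuntimeError:  # stand-in for the SDK's rate-limit exception
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error to the caller
            # Wait 1s, 2s, 4s, ... plus a little jitter before retrying.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```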

FAQ: Answers to your questions

Question: Can GitHub Models replace Hugging Face?

A: Not yet. Most models on GitHub Models are closed source and are served through Azure AI. GitHub Models provides a convenient way to use Azure AI-hosted models, but it does not currently offer open model weights the way Hugging Face does. That said, with a GitHub personal access token you can use those Azure AI models very easily.

Are you ready to jump in?

GitHub Models is a fantastic way to easily integrate AI into your applications. From simple queries to complex integrations, the possibilities are endless. So why wait? Head over to GitHub, explore the models, and let your creativity run wild!

Happy coding! 🚀

Microsoft Learn modules for further learning

Introduction to Prompt Engineering with GitHub Copilot – Training | Microsoft Learn

Build a web app using a refreshable machine learning model – Training | Microsoft Learn

Introduction to GitHub – Training | Microsoft Learn




