Azure OpenAI model is bound at connection (limit of 1) #6726

@Vaccano

Describe the bug

When using Azure OpenAI as a provider, the configuration asks for a "Deployment Name". This allows a valid connection to an Azure OpenAI model, but only one: Azure OpenAI serves exactly one model per deployment.

So model selection, model switching, and lead/worker settings do not work. It does not matter which model you pick; Goose always uses the model behind the configured deployment (again, deployments and models are one-to-one in Azure).
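
For illustration, here is a minimal sketch of a raw Azure OpenAI chat request (resource name, deployment name, and api-version below are placeholders). The deployment name is part of the URL path, so every request is answered by whichever model that deployment hosts, regardless of any model name in the request body:

```python
import os
import requests

# Placeholders, not real values from this issue.
resource = "my-resource"          # hypothetical Azure OpenAI resource
deployment = "gpt-4o-deployment"  # the single deployment Goose is configured with
api_version = "2024-06-01"        # assumed API version

url = (
    f"https://{resource}.openai.azure.com/openai/deployments/"
    f"{deployment}/chat/completions?api-version={api_version}"
)

resp = requests.post(
    url,
    headers={"api-key": os.environ["AZURE_OPENAI_API_KEY"]},
    json={
        # The deployment in the URL path decides which model answers.
        # Whatever model name goes here cannot redirect the request to gpt-4o-mini.
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(resp.json())
```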


To Reproduce
Steps to reproduce the behavior:

  1. On portal.azure.com, create a new Azure OpenAI resource
  2. On ai.azure.com, add a model to that resource (e.g. gpt-4o)
  3. On ai.azure.com, add another model to that resource (gpt-4o-mini)
  4. Connect to the first model (gpt-4o) in Goose, using its token and deployment name (via the Azure OpenAI provider)
  5. In the model dropdown, select gpt-4o-mini (or use lead/worker with gpt-4o and gpt-4o-mini)
  6. Make a number of requests so that some tokens are used
  7. Check the request counts on ai.azure.com
  8. See that only the gpt-4o deployment received any requests (the sketch after these steps illustrates why)
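
To make step 5 concrete, here is a hedged sketch using the openai Python SDK's AzureOpenAI client (endpoint and deployment name are placeholders). Once the client is bound to a deployment, passing a different model name does not reroute the request, which matches the request counts seen on ai.azure.com:

```python
import os
from openai import AzureOpenAI

# Placeholder endpoint and deployment name.
client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",
    azure_deployment="gpt-4o-deployment",  # fixed at connection time, like Goose's config
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

# Selecting "gpt-4o-mini" here does not change the routing: the client is
# already bound to the gpt-4o deployment, so that is where the request lands.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Which deployment served this?"}],
)
print(response.model)
```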

Please provide the following information

  • OS & Arch: Windows 11
  • Interface: UI
  • Version: 1.21.1
  • Extensions enabled: none
  • Provider & Model: Azure OpenAI, gpt-4o and gpt-4o-mini

Additional context
The connection works, but there is no way to make Goose aware of the other models that are available as separate deployments on the same resource.
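
A hypothetical sketch of the direction a fix could take: map each logical model name to its own deployment on the same resource and pick the deployment per request. The mapping and every name below are made up for illustration; this is not Goose's configuration format or code.

```python
import os
from openai import AzureOpenAI

# Hypothetical mapping from model names to deployment names on one resource.
DEPLOYMENTS = {
    "gpt-4o": "gpt-4o-deployment",
    "gpt-4o-mini": "gpt-4o-mini-deployment",
}

def client_for(model: str) -> AzureOpenAI:
    """Build a client bound to the deployment that hosts the requested model."""
    return AzureOpenAI(
        azure_endpoint="https://my-resource.openai.azure.com",
        azure_deployment=DEPLOYMENTS[model],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-06-01",
    )

# Lead/worker style usage: each role can now reach its own deployment.
lead = client_for("gpt-4o")
worker = client_for("gpt-4o-mini")
```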
