Describe the bug
When using Azure OpenAI as a provider, the configuration asks for a "Deployment Name". This allows a valid connection to exactly one Azure OpenAI model, because Azure OpenAI maps a single model to each deployment.
As a result, model selection, model switching, and lead/worker settings do not work. No matter which model you pick, requests always go to the model behind the configured deployment. (Again, this is one-to-one in Azure.)
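To illustrate why the body's model setting has no effect, here is a small sketch (not goose code; resource and deployment names are made up) of how Azure OpenAI routes a chat request: the model is selected by the deployment name in the URL path, not by a `model` field in the request body.

```python
# Sketch (illustrative only): Azure OpenAI chat-completions URL layout.
# The deployment name in the path decides which model serves the request,
# so whatever model the client believes it selected is irrelevant.

def azure_chat_url(resource: str, deployment: str,
                   api_version: str = "2024-06-01") -> str:
    """Build the deployment-scoped Azure OpenAI chat-completions URL."""
    return (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )

# Every request through this URL hits the same deployment, hence the
# same model, regardless of any model name sent in the JSON body.
url = azure_chat_url("my-resource", "my-gpt4o-deployment")
print(url)
```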
To Reproduce
Steps to reproduce the behavior:
- On portal.azure.com create a new OpenAI resource
- On ai.azure.com, add a model to that resource (e.g. gpt-4o)
- On ai.azure.com add another model to that resource (gpt-4o-mini)
- Connect to the first model (gpt-4o) in goose, using its token and deployment name (via the Azure OpenAI provider)
- In the dropdown for the models select gpt-4o-mini (or using lead/worker with gpt-4o and gpt-4o-mini)
- Make a bunch of requests and use some tokens
- Check your request count on ai.azure.com.
- See that only the gpt-4o model got any requests.
Please provide the following information
- OS & Arch: Windows 11
- Interface: UI
- Version: 1.21.1
- Extensions enabled: none
- Provider & Model: Azure OpenAI, gpt-4o and gpt-4o-mini
Additional context
The connection works, but there is no way to make goose aware of the other models that are available.
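One possible direction, purely as a sketch and not goose's actual config or API: let the provider accept a model-to-deployment mapping, so each selectable model resolves to its own Azure deployment. All names below are hypothetical.

```python
# Hypothetical sketch: route each selected model to its own Azure
# OpenAI deployment. None of these names come from goose.

MODEL_TO_DEPLOYMENT = {
    "gpt-4o": "my-gpt4o-deployment",
    "gpt-4o-mini": "my-gpt4o-mini-deployment",
}

def resolve_deployment(model: str) -> str:
    """Return the Azure deployment for the requested model, or fail loudly."""
    try:
        return MODEL_TO_DEPLOYMENT[model]
    except KeyError:
        raise ValueError(f"No Azure deployment configured for model {model!r}")

# With such a mapping, switching models (or using lead/worker) would
# switch the deployment in the request URL instead of being ignored.
print(resolve_deployment("gpt-4o-mini"))
```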