feat: Basic Auth support, Azure OpenAI, SemVer, and using LabelInterface (#437)
* Adding basic authentication and updating how model_version is handled. Also fixing links to the docs. Tests to come.
* Adding Azure OpenAI, plus a response model
* Adding a few more tests and fixing typos
* Fixing response
* Merge azure into llm_interactive
* git sdk install
* Fix errors
* Updating readme
* Change versions
* Add pytest verbosity
* Change functional test CI location
---------
Co-authored-by: Mikhail Maluyk <[email protected]>
Co-authored-by: Michael Malyuk <[email protected]>
Co-authored-by: nik <[email protected]>
label_studio_ml/examples/llm_interactive/README.md (+14, -3)
@@ -1,6 +1,6 @@
 ## Interactive LLM labeling
 
-This example server connects Label Studio to OpenAI's API to interact with GPT chat models (gpt-3.5-turbo, gpt-4, etc.).
+This example server connects Label Studio to the [OpenAI](https://platform.openai.com/) or [Azure](https://azure.microsoft.com/en-us/products/ai-services/openai-service) API to interact with GPT chat models (gpt-3.5-turbo, gpt-4, etc.).
 
 The interactive flow allows you to perform the following scenarios:
 
@@ -231,7 +231,18 @@ When deploying the server, you can specify the following parameters as environment variables:
 - `PROMPT_TEMPLATE` (default: `"Source Text: {text}\n\nTask Directive: {prompt}"`): The prompt template to use. If `USE_INTERNAL_PROMPT_TEMPLATE` is set to `1`, the server will use
 the default internal prompt template. If `USE_INTERNAL_PROMPT_TEMPLATE` is set to `0`, the server will use the prompt template provided
 in the input prompt (i.e. the user input from `<TextArea name="my-prompt" ...>`). In the latter case, the user has to provide placeholders that match the input task fields. For example, to use the `input_text` and `instruction` fields from the input task `{"input_text": "user text", "instruction": "user instruction"}`, the prompt template should look like this: `"Source Text: {input_text}, Custom instruction: {instruction}"`.
 - `OPENAI_MODEL` (default: `gpt-3.5-turbo`): The OpenAI model to use.
+- `OPENAI_PROVIDER` (available options: `openai`, `azure`; default: `openai`): The OpenAI provider to use.
 - `TEMPERATURE` (default: `0.7`): The temperature to use for the model.
 - `NUM_RESPONSES` (default: `1`): The number of responses to generate in `<TextArea>` output fields. Useful if you want to generate multiple responses and let the user rank the best one.
-- `OPENAI_API_KEY`: The OpenAI API key to use. Must be set before deploying the server.
+- `OPENAI_API_KEY`: The OpenAI or Azure API key to use. Must be set before deploying the server.
+
+### Azure Configuration
+
+If you are using Azure as your OpenAI provider (`OPENAI_PROVIDER=azure`), you need to specify the following environment variables:
+
+- `AZURE_RESOURCE_ENDPOINT`: This is the endpoint for your Azure resource. It should be set to the appropriate value based on your Azure setup.
+
+- `AZURE_DEPLOYMENT_NAME`: This is the name of your Azure deployment. It should match the name you've given to your deployment in Azure.
+
+- `AZURE_API_VERSION`: This is the version of the Azure API you are using. The default value is `2023-05-15`.
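
The `PROMPT_TEMPLATE` placeholders described in the diff above are format-style fields filled from the task data, so the substitution can be pictured with plain `str.format`. This is only a minimal sketch under that assumption; the example server's own code may perform the substitution differently, and the `input_text`/`instruction` names are simply the fields from the sample task above.

```python
# Sketch of the placeholder substitution described for PROMPT_TEMPLATE.
# Assumes format-style placeholders filled from the task data; the actual
# llm_interactive implementation may differ in detail.

# Template as a user might type it into <TextArea name="my-prompt" ...>
prompt_template = "Source Text: {input_text}, Custom instruction: {instruction}"

# Task data as it arrives from Label Studio
task_data = {"input_text": "user text", "instruction": "user instruction"}

# Fill the placeholders with the matching task fields
prompt = prompt_template.format(**task_data)
print(prompt)
# Source Text: user text, Custom instruction: user instruction
```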
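For the provider switch and the `AZURE_*` variables, the sketch below shows one way they could map onto a chat-completion client. It assumes the `openai>=1.0` Python SDK (`OpenAI` / `AzureOpenAI` classes); the backend in this repository may pin a different SDK version or wire the variables differently, so treat this as an illustration of each variable's role rather than the server's actual code.

```python
# Illustrative only: how OPENAI_PROVIDER and the AZURE_* variables described
# above could configure a chat-completion client with the openai>=1.0 SDK.
import os

from openai import AzureOpenAI, OpenAI

provider = os.getenv("OPENAI_PROVIDER", "openai")

if provider == "azure":
    client = AzureOpenAI(
        api_key=os.environ["OPENAI_API_KEY"],                  # Azure key
        azure_endpoint=os.environ["AZURE_RESOURCE_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com/
        api_version=os.getenv("AZURE_API_VERSION", "2023-05-15"),
    )
    # With Azure, the "model" argument is the deployment name
    model = os.environ["AZURE_DEPLOYMENT_NAME"]
else:
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    model = os.getenv("OPENAI_MODEL", "gpt-3.5-turbo")

response = client.chat.completions.create(
    model=model,
    temperature=float(os.getenv("TEMPERATURE", "0.7")),
    n=int(os.getenv("NUM_RESPONSES", "1")),
    messages=[{"role": "user", "content": "Summarize this text ..."}],
)
print(response.choices[0].message.content)
```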