# Configuration

## Basic

These are the default settings; they can be overridden in the individual functional areas below.

### Anthropic

See the [Developer docs](https://docs.anthropic.com/en/docs/intro-to-claude).

- **Base URL** The Anthropic API URL. See the [API Reference](https://docs.anthropic.com/en/api/getting-started).
- **API Key** Your Anthropic API key.
- **Model** The chat model to use.

### OpenAI

See the [OpenAI documentation](https://platform.openai.com/docs/introduction).

- **API Type** OpenAI official servers or Microsoft Azure.
- **Base URL** The OpenAI API URL.
- **API Key** A legacy user key. It provides access to all organizations and all projects the user has been added to; see [API Keys](https://platform.openai.com/account/api-keys) to view your available keys. Access via this method is still supported, but we strongly advise transitioning to project keys for better security.
- **Model** The chat model to use. See [Model endpoint compatibility](https://platform.openai.com/docs/models/model-endpoint-compatibility).
- **Project** Provides access to a single project (the preferred option). Open [Project API Keys](https://platform.openai.com/settings/organization/general) and select the specific project you wish to generate keys for.
- **Organization** If you belong to multiple organizations, or access your projects through a legacy user API key, you can pass a header specifying which organization and project to use for an API request. Usage from these requests counts toward the specified organization and project.
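
As a sketch, a basic OpenAI setup in `settings.json` might look like the following. The `autodev.openai.*` key names are an assumption (patterned on the `autodev.openai.model` key mentioned under Legacy Config); check the extension's settings UI for the exact names:

```jsonc
{
  // Assumed setting keys -- verify against the extension's contributed settings.
  "autodev.openai.apiType": "openai",               // or "azure"
  "autodev.openai.baseURL": "https://api.openai.com/v1",
  "autodev.openai.apiKey": "sk-...",                // your API key
  "autodev.openai.model": "gpt-4",
  "autodev.openai.organization": "org-...",         // optional, for multi-org accounts
  "autodev.openai.project": "proj_..."              // optional, preferred over legacy user keys
}
```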

### Baidu Cloud WenXin

For a Baidu Cloud API Key and Secret Key, see [Create an application](https://console.bce.baidu.com/qianfan/ais/console/applicationConsole/application).

- **API Key** Your Baidu Cloud API Key.
- **Secret Key** Your Baidu Cloud Secret Key.
- **Model** The chat model to use.

### Ali Cloud TongYi

See [Activate DashScope and create an API key](https://help.aliyun.com/zh/dashscope/developer-reference/activate-dashscope-and-create-an-api-key).

- **API Key** Your Ali Cloud API Key.
- **Model** The chat model to use.
- **EnableSearch** Enables internet search. The model uses search results as reference information during text generation, but decides based on its internal logic whether to actually use them. Default: off.
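
As a sketch, enabling search for TongYi might look like this (the `autodev.tongyi.*` key names are an assumption, mirroring the `autodev.openai.*` naming; check the extension's settings UI):

```jsonc
{
  // Assumed setting keys for the TongYi provider.
  "autodev.tongyi.apiKey": "sk-...",
  "autodev.tongyi.model": "qwen-turbo",
  "autodev.tongyi.enableSearch": true
}
```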

## Chat

### Models

The models shown in the chat panel's model-selection list.

- **title** The display title shown in the selection list.
  - **type** `string`
- **provider** The LLM provider to use.
  - **type** `"anthropic" | "openai" | "qianfan" | "tongyi"`
- **baseURL** The LLM API base URL. Defaults to the provider config.
  - **type** `string | undefined`
- **apiKey** The LLM API key. Defaults to the provider config.
  - **type** `string | undefined`
- **secretKey** Baidu QianFan provider only.
  - **type** `string | undefined`
- **model** The model name.
  - **type** `string`
- **temperature** Amount of randomness injected into the response. Ranges from 0 to 1.
  - **type** `number | undefined`
- **maxTokens** The maximum number of tokens to generate before stopping.
  - **type** `number | undefined`
- **stop** A list of strings to use as stop words.
  - **type** `string[] | undefined`
- **clientOptions** Optional client parameters.
  - **type** `object | undefined`

Examples:

```jsonc
[
  {
    "title": "GPT-4",
    "provider": "openai",
    "model": "gpt-4"
  },
  {
    "title": "GPT-3.5 turbo",
    "provider": "openai",
    "model": "gpt-3.5-turbo",
    "temperature": 0.75
  },
  {
    "title": "QWen turbo",
    "provider": "tongyi",
    "model": "qwen-turbo"
  },
  {
    "title": "ERNIE-Bot turbo",
    "provider": "qianfan",
    "model": "ERNIE-Bot-turbo"
  }
]
```

If no `apiKey` is configured for a model entry, it falls back to the key in the basic configuration.
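
A model entry can also override the provider's connection settings for that entry only, for example to point at a self-hosted gateway. A sketch using the fields documented above (the URL is a placeholder):

```jsonc
[
  {
    "title": "GPT-4 (proxy)",
    "provider": "openai",
    "model": "gpt-4",
    // Per-entry overrides of the basic OpenAI config:
    "baseURL": "https://my-gateway.example.com/v1", // placeholder URL
    "apiKey": "sk-...",
    "maxTokens": 2048,
    "stop": ["\n\nHuman:"]
  }
]
```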

## Code completion

### Model

Overrides the provider's model from the base configuration. See [Autodev: OpenAI](#openai).

### Template

Customize the prompt template for the completion model.

> [!IMPORTANT]
> Variables use string replacement, so fill them in exactly as described below.

The recommended format is FIM (fill-in-the-middle), for example:

```sh
<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>

# or

<PRE>{prefix}<SUF>{suffix}<MID>
```

Available variables:

- `prefix` The code before the cursor.
- `suffix` The code after the cursor.
- `language` The language of the file being edited, for example `"javascript"`.

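As a sketch, a StarCoder-style FIM template in `settings.json` might look like this (the setting key is an assumption; check the extension's settings for the exact name):

```jsonc
{
  // Assumed setting key for the completion prompt template.
  "autodev.completion.template": "<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"
}
```

Because the variables are plain string replacements, with `prefix = "function add("` and `suffix = "}"` the rendered prompt would be `<fim_prefix>function add(<fim_suffix>}<fim_middle>`.
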
### Stops

Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
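
For example, you often don't want a completion to run past the current block into a new top-level definition. A sketch (the setting key `autodev.completion.stops` is an assumption):

```jsonc
{
  // Assumed setting key; stop at a blank line or a new definition.
  "autodev.completion.stops": ["\n\n", "\ndef ", "\nclass "]
}
```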

### Enable Legacy Mode

Use the legacy `/v1/completions` endpoint instead of `/v1/chat/completions`.
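
The two OpenAI endpoints take different request shapes, roughly:

```jsonc
// POST /v1/completions (legacy) -- plain prompt in, text out
{ "model": "gpt-3.5-turbo-instruct", "prompt": "def fib(n):", "max_tokens": 64 }

// POST /v1/chat/completions -- message list in, assistant message out
{ "model": "gpt-4", "messages": [{ "role": "user", "content": "def fib(n):" }] }
```

The legacy completions shape maps more naturally onto FIM-style prompt templates, which is why some completion backends still prefer it.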

### Request Delay

The delay before a code-completion request is sent, to avoid excessive API token consumption. `requestDelay` only takes effect when [Autodev: Enable Code Completion](#enable-code-completion) is enabled.
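
A sketch in `settings.json` (the exact setting key and its unit are assumptions; the document only names the `requestDelay` option):

```jsonc
{
  // Assumed key: wait briefly after the last keystroke before requesting a completion.
  "autodev.completion.requestDelay": 500
}
```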

### Enable Rename

Enable rename suggestions.

### Enable Code Completion

Enable code completion. When triggered in the editor (e.g. by a carriage return, a line feed, or a content change), it automatically completes your code.

## Embedding

### Models

### Builtin: Local Sentence Transformers (Transformers.js)

### OpenAI

### Ollama

## Legacy Config

## OpenAI Compatible Config
> Please use [Autodev: OpenAI](#openai) instead.

Overrides the provider's model from the base configuration. See `autodev.openai.model`.

> [!NOTE]
> Currently only supports OpenAI chat models.

Example OpenAI-compatible configuration:

1. Open `settings.json` in VS Code.
2. Add the following configuration:

```jsonc
"autodev.openaiCompatibleConfig": {
  "apiType": "openai",
  "model": "moonshot-v1-8k",
  "apiBase": "https://api.moonshot.cn/v1",
  "apiKey": "xxx"
},
"autodev.completion.model": "openai",
```