This repository was archived by the owner on Dec 2, 2025. It is now read-only.

Commit 82ec540

Merge pull request #40 from zhengxs2018/feat-legacy-and-delay

2 parents 883771d + ede0702

File tree

6 files changed: +331 −128 lines

docs/configuration.md

Lines changed: 184 additions & 0 deletions
@@ -0,0 +1,184 @@
# Configuration

## Basic

This is the default configuration; it can be overridden in the individual functional areas.
### Anthropic

See the [developer docs](https://docs.anthropic.com/en/docs/intro-to-claude).

- **Base URL** Anthropic API URL; see the [API reference](https://docs.anthropic.com/en/api/getting-started).
- **API Key** Anthropic API key.
- **Model** Chat model to use.
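A sketch of what an Anthropic setup might look like, assuming the settings follow the same `autodev.<provider>.*` pattern as the OpenAI keys used elsewhere in these docs (`autodev.openai.apiKey`) — the exact Anthropic key names here are an assumption, not confirmed by this commit:

```jsonc
{
  // assumed key names, mirroring the autodev.openai.* settings
  "autodev.anthropic.baseURL": "https://api.anthropic.com",
  "autodev.anthropic.apiKey": "sk-ant-xxxxx", // placeholder
  "autodev.anthropic.model": "claude-3-haiku-20240307"
}
```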
### OpenAI

See the [OpenAI docs](https://platform.openai.com/docs/introduction).

- **API Type** OpenAI official or Microsoft Azure servers.
- **Base URL** OpenAI API URL.
- **API Key** Legacy user keys provide access to all organizations and all projects the user has been added to; see [API Keys](https://platform.openai.com/account/api-keys) to view your available keys. We strongly advise transitioning to project keys for better security, although access via legacy keys is still supported.
- **Model** Chat model to use; see [Model endpoint compatibility](https://platform.openai.com/docs/models/model-endpoint-compatibility).
- **Project** Provides access to a single project (preferred option); open [Project API Keys](https://platform.openai.com/settings/organization/general) and select the specific project you wish to generate keys for.
- **Organization** For users who belong to multiple organizations, or who access their projects through a legacy user API key, a header can be passed to specify which organization and project an API request is for. Usage from these requests counts against the specified organization and project.
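For example, a minimal OpenAI setup in `settings.json` (key names taken from the settings used elsewhere in these docs; `sk-xxxxx` is a placeholder, and omitting `baseURL` is assumed to fall back to the official API):

```jsonc
{
  "autodev.openai.baseURL": "https://api.openai.com/v1/",
  "autodev.openai.apiKey": "sk-xxxxx",
  "autodev.openai.model": "gpt-3.5-turbo"
}
```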
### Baidu Cloud WenXin

Baidu Cloud API key and secret key; see [Create an application](https://console.bce.baidu.com/qianfan/ais/console/applicationConsole/application).

- **API Key** Baidu Cloud API key.
- **Secret Key** Baidu Cloud secret key.
- **Model** Chat model to use.
### Ali Cloud TongYi

See [Activate DashScope and create an API key](https://help.aliyun.com/zh/dashscope/developer-reference/activate-dashscope-and-create-an-api-key).

- **API Key** Ali Cloud (DashScope) API key.
- **Model** Chat model to use.
- **EnableSearch** Enables internet search; the model uses search results as reference information during text generation, but decides based on its internal logic whether to actually use them. Default: off.
## Chat

### Models

The models shown in the chat panel's model selection list.

![Sidepanel](./images/sidepanel.png)

- **title** Title text displayed in the selection list.
  - **type** `string`
- **provider** Which LLM provider to use.
  - **type** `"anthropic" | "openai" | "qianfan" | "tongyi"`
- **baseURL** LLM API base URL; defaults to the provider config.
  - **type** `string | undefined`
- **apiKey** LLM API key; defaults to the provider config.
  - **type** `string | undefined`
- **secretKey** Baidu QianFan provider only.
  - **type** `string | undefined`
- **model** Model name.
  - **type** `string`
- **temperature** Amount of randomness injected into the response. Ranges from 0 to 1.
  - **type** `number | undefined`
- **maxTokens** Maximum number of tokens to generate before stopping.
  - **type** `number | undefined`
- **stop** A list of strings to use as stop sequences.
  - **type** `string[] | undefined`
- **clientOptions** Optional client parameters.
  - **type** `object | undefined`
Examples:

```jsonc
[
  {
    "title": "GPT-4",
    "provider": "openai",
    "model": "gpt-4"
  },
  {
    "title": "GPT-3.5 turbo",
    "provider": "openai",
    "model": "gpt-3.5-turbo",
    "temperature": 0.75
  },
  {
    "title": "QWen turbo",
    "provider": "tongyi",
    "model": "qwen-turbo"
  },
  {
    "title": "ERNIE-Bot turbo",
    "provider": "qianfan",
    "model": "ERNIE-Bot-turbo"
  }
]
```
If no `apiKey` is configured for an entry, it defaults to the one in the basic configuration.
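For instance, a model entry that omits `apiKey` will use the key from the basic provider configuration (the model name below is illustrative):

```jsonc
[
  {
    "title": "GPT-4o mini",
    "provider": "openai",
    // no "apiKey" here: the key from the basic OpenAI
    // configuration (Autodev: OpenAI) is used instead
    "model": "gpt-4o-mini"
  }
]
```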
## Code completion

### Model

Model that overrides the provider in the base configuration; see [Autodev: OpenAI](#openai).
### Template

Customize the prompt template for your model.

> [!IMPORTANT]
> Variables use plain string replacement, so fill them in exactly as instructed.
The recommended format is FIM (fill in the middle), for example:

```sh
<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>

# or

<PRE>{prefix}<SUF>{suffix}<MID>
```
Available variables:

- `prefix` Code before the cursor.
- `suffix` Code after the cursor.
- `language` Language of the file currently being edited, for example `"javascript"`.
### Stops

Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
133+
134+
### Enable Legacy Mode
135+
136+
Use legacy `/v1/completion` instead of `/v1/chat/completion`
### Request Delay

Delay before a code auto-completion request is sent, to avoid excessive consumption of API tokens. `requestDelay` only takes effect if [Autodev: Enable Code Completion](#enable-code-completion) is enabled.
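A sketch of how these two settings might be combined (the setting's default value is 500; the unit is presumably milliseconds, which this commit does not state explicitly):

```jsonc
{
  "autodev.suggestion.enableCodeCompletion": true,
  // wait a little longer after typing stops before requesting a completion
  "autodev.completion.requestDelay": 1000
}
```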
141+
142+
### Enable Rename
143+
144+
Enable rename suggestion
145+
146+
### Enable Code Completion
147+
148+
Enable code-completion. Then, when the editor is triggered (e.g., a carriage return or a line feed or a content change, etc.), it automatically completes your code.
## Embedding

### Models

### Builtin: Sentence Transformerjs in Local

### OpenAI

### Ollama
## Legacy Config

## OpenAI Compatible Config

> Please use [Autodev: OpenAI](#openai) instead.

Model that overrides the provider in the base configuration; see `autodev.openai.model`.

> [!NOTE]
> Currently only supports OpenAI chat models.

Example OpenAI configuration:

1. Open `settings.json` in VS Code.
2. Add the following configuration:

```jsonc
"autodev.openaiCompatibleConfig": {
  "apiType": "openai",
  "model": "moonshot-v1-8k",
  "apiBase": "https://api.moonshot.cn/v1",
  "apiKey": "xxx"
},
"autodev.completion.model": "openai",
```

docs/features/code-completion.md

Lines changed: 58 additions & 0 deletions
@@ -0,0 +1,58 @@
---
layout: default
title: Code Completions
parent: Basic Features
nav_order: 5
permalink: /features/code-completion
---

Automatically completes your code based on the position of your cursor.
## Enable Feature

Not enabled by default; see [AutoDev: Code Completion](../configuration.md#code-completion).

```jsonc
{
  "autodev.openai.apiKey": "sk-xxxxx", // Your OpenAI API key
  "autodev.suggestion.enableCodeCompletion": true
}
```

Now let's try writing some code.
## Select Code Model

You may want to use a dedicated code model instead of the chat model:

```jsonc
{
  "autodev.completion.model": "gpt-4o" // Overrides the default chat model
}
```

We recommend a model specially trained for code, or a base model that supports FIM.
## Connect to local model

Here is an example using Ollama; see [OpenAI compatibility](https://github.com/ollama/ollama/blob/main/docs/openai.md) for details.

```jsonc
{
  "autodev.openai.baseURL": "http://127.0.0.1:11434/v1/", // Your local service URL
  "autodev.openai.apiKey": "sk-xxxxx", // Your local service API key
  "autodev.completion.model": "codeqwen:7b-code-v1.5-q5_1" // Overrides the default chat model
}
```

If your self-hosted service is deployed in a mode that does not support chat, you may need to enable [legacy mode](#enable-legacy-mode).
## Enable Legacy Mode

The default is the chat endpoint `/v1/chat/completions`; if your service only provides the traditional `/v1/completions`, you can fall back to legacy mode.

```jsonc
{
  "autodev.completion.enableLegacyMode": true
}
```

docs/quick-start.md

Lines changed: 5 additions & 117 deletions
@@ -14,110 +14,10 @@ In the vscode configuration, search for `autodev`, or click the ⚙️ button in
In the current design, due to limited bandwidth, our UI follows Continue's design, so some parts are unintuitive; we will gradually improve them in later versions. For example:

**The LLM models for the Chat UI and for code must be configured separately.**

-## Basic
+> [!IMPORTANT]
+> You must configure at least one large model for the plugin to work; see [Configuration](./configuration.md) for details.

(removed here: the former `## Basic`, `## Chat`, and `## Code completion` sections of quick-start.md, whose content is duplicated verbatim by the new `docs/configuration.md` above)

+## Usage
Config OpenAI example:

@@ -134,18 +34,6 @@ Config OpenAI example:
   "autodev.completion.model": "openai",

-### enableRename
-
-Enable rename suggestion
-
-## Embedding
-
-### Models
-
-### Builtin: Sentence Transformerjs in Local
-
-### OpenAI
-
-### Ollama
+## Next
+
+- [Enable Code-completion](./features/code-completion.md)

package.json

Lines changed: 11 additions & 0 deletions
@@ -293,6 +293,17 @@
     ],
     "description": "Stop words of code model"
   },
+  "autodev.completion.enableLegacyMode": {
+    "type": "boolean",
+    "default": false,
+    "description": "Use legacy \"/v1/completions\" instead of \"/v1/chat/completions\"",
+    "markdownDescription": "Use legacy `/v1/completions` instead of `/v1/chat/completions`"
+  },
+  "autodev.completion.requestDelay": {
+    "type": "integer",
+    "default": 500,
+    "markdownDescription": "Code auto-completion request delay. Avoid excessive consumption of API tokens. `requestDelay` only works if `#autodev.suggestion.enableCodeCompletion#` is enabled."
+  },
   "autodev.suggestion.enableRename": {
     "type": "boolean",
     "default": false,