English | 中文
Stop guessing what `{"error_code": 17}` means. Unified error diagnostics for Chinese LLM APIs: get human-readable messages and actionable fix suggestions in seconds.
One function. Six providers. Two languages. Zero confusion.
You're building with Chinese LLM APIs. Then this happens:

```json
{ "base_resp": { "status_code": 1004 } }
```

Is that auth? Rate limit? Billing? Each provider has its own error format, its own codes, its own conventions. You end up writing the same error-handling boilerplate six times.

`llm-provider-errors` fixes this. One call to `diagnose()` and you get a clear explanation plus what to do about it.
```sh
npm install llm-provider-errors
```

```ts
import { diagnose } from 'llm-provider-errors';

// Got a 429 from DeepSeek?
const result = diagnose('deepseek', 429);

console.log(result.message);  // "Too Many Requests — rate limit or quota exceeded."
console.log(result.hint);     // "Implement exponential backoff, reduce request frequency, or upgrade your plan."
console.log(result.severity); // "high"

// Got a provider-specific error in the response body?
const body = { error: { code: 'insufficient_quota', message: 'Quota exceeded' } };
const result2 = diagnose('deepseek', 402, body);

console.log(result2.providerCode); // "insufficient_quota"
console.log(result2.hint);         // "Top up your DeepSeek account at https://platform.deepseek.com/top_up."

// Need Chinese? 中文也行！
const result3 = diagnose('qwen', 429, undefined, { locale: 'zh' });

console.log(result3.message); // "请求过多 — 超出速率限制或配额。"
console.log(result3.hint);    // "请实现指数退避重试、降低请求频率，或升级套餐。"
```

| Provider | Key | API Docs |
|---|---|---|
| MiniMax | `minimax` | platform.minimaxi.com |
| Kimi / Moonshot | `moonshot` | platform.moonshot.cn |
| DeepSeek | `deepseek` | platform.deepseek.com |
| Qwen / Tongyi | `qwen` | DashScope Docs |
| GLM / Zhipu AI | `glm` | open.bigmodel.cn |
| Baidu / ERNIE | `baidu` | Qianfan Docs |
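Several of the hints above (like the 429 example) recommend exponential backoff. As a rough, library-agnostic sketch of that pattern — the `withBackoff` helper, its parameters, and its defaults are illustrative, not part of llm-provider-errors:

```ts
// Illustrative retry helper: NOT part of llm-provider-errors.
// Retries a call with exponentially growing delays while the error
// is judged retryable (e.g. a diagnosis that points at a rate limit).
async function withBackoff<T>(
  call: () => Promise<T>,
  isRetryable: (err: unknown) => boolean,
  maxAttempts = 4,
  baseDelayMs = 500,
): Promise<T> {
  let delayMs = baseDelayMs;
  for (let attempt = 1; ; attempt++) {
    try {
      return await call();
    } catch (err) {
      // Give up after the last attempt or on non-retryable errors
      if (attempt >= maxAttempts || !isRetryable(err)) throw err;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
      delayMs *= 2; // double the wait before each retry
    }
  }
}
```

The retryability predicate is where a diagnosis could plug in: for example, retry only when the HTTP status was 429.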
Diagnose an LLM API error and return structured information.

Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| `provider` | `Provider` | Yes | Provider name (`'minimax'`, `'moonshot'`, `'deepseek'`, `'qwen'`, `'glm'`, `'baidu'`) |
| `statusCode` | `number` | Yes | HTTP status code from the API response |
| `responseBody` | `unknown` | No | Response body (string or object) for provider-specific error extraction |
| `options` | `DiagnoseOptions` | No | `{ locale: 'en' \| 'zh' }` |
Returns: `DiagnosisResult`

```ts
interface DiagnosisResult {
  provider: Provider;     // Which provider
  code: number;           // HTTP status code
  message: string;        // Human-readable error message
  hint: string;           // Actionable fix suggestion
  severity: Severity;     // 'low' | 'medium' | 'high' | 'critical'
  providerCode?: string;  // Provider-specific error code (if extracted)
  detail?: string;        // Raw detail from provider response
}
```

Check if a provider is supported.
```ts
import { isSupported } from 'llm-provider-errors';

isSupported('deepseek'); // true
isSupported('openai');   // false
```

Array of all supported provider names.

```ts
import { providers } from 'llm-provider-errors';

console.log(providers); // ['minimax', 'moonshot', 'deepseek', 'qwen', 'glm', 'baidu']
```
- **Provider-specific codes first:** If you pass a response body, the library tries to extract the provider's own error code (e.g., Baidu's `error_code: 17`, MiniMax's `base_resp.status_code`) and map it to a detailed diagnosis.
- **HTTP status fallback:** If no provider-specific code is found, it falls back to standard HTTP status code mappings with provider-aware messaging.
- **Generic fallback:** For completely unknown errors, you still get a structured result with the status code and severity.
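The three tiers above can be pictured as a small lookup chain. This is a hypothetical reimplementation for illustration only — `diagnoseSketch`, the table entries, and the field names are assumptions, not the library's actual code:

```ts
// Toy three-tier diagnosis, mirroring the resolution order described above.
type Severity = 'low' | 'medium' | 'high' | 'critical';
interface Diagnosis { code: number; message: string; severity: Severity; providerCode?: string }

// Tier 1: provider-specific error codes (illustrative Baidu-style entry)
const providerCodes: Record<string, Diagnosis> = {
  '17': { code: 429, message: 'Daily request limit reached', severity: 'high', providerCode: '17' },
};

// Tier 2: generic HTTP status fallbacks
const httpCodes: Record<number, Omit<Diagnosis, 'code'>> = {
  401: { message: 'Unauthorized: check your API key', severity: 'critical' },
  429: { message: 'Too many requests: rate limit or quota exceeded', severity: 'high' },
};

function diagnoseSketch(status: number, body?: unknown): Diagnosis {
  // 1. Try the provider's own error code first
  const raw = (body as any)?.error_code;
  if (raw !== undefined && String(raw) in providerCodes) return providerCodes[String(raw)];
  // 2. Fall back to the HTTP status mapping
  const http = httpCodes[status];
  if (http) return { code: status, ...http };
  // 3. Still return something structured for unknown errors
  return { code: status, message: `HTTP ${status} error`, severity: 'medium' };
}
```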
```ts
import { diagnose } from 'llm-provider-errors';

try {
  const response = await fetch('https://api.deepseek.com/chat/completions', { ... });
  if (!response.ok) {
    const body = await response.json().catch(() => undefined);
    const diagnosis = diagnose('deepseek', response.status, body);

    if (diagnosis.severity === 'critical') {
      // Alert the team, this needs attention
      alertOps(diagnosis);
    }

    throw new Error(`${diagnosis.message}\n💡 ${diagnosis.hint}`);
  }
} catch (err) {
  // ...
}
```

```ts
import axios from 'axios';
import { diagnose } from 'llm-provider-errors';

const client = axios.create({ baseURL: 'https://open.bigmodel.cn/api/paas/v4' });

client.interceptors.response.use(undefined, (error) => {
  if (error.response) {
    const diagnosis = diagnose('glm', error.response.status, error.response.data, { locale: 'zh' });
    error.diagnosis = diagnosis;
    console.error(`[GLM Error] ${diagnosis.message}\n💡 ${diagnosis.hint}`);
  }
  return Promise.reject(error);
});
```

Contributions are welcome! Here's how:
- Fork the repo
- Create a feature branch: `git checkout -b feat/add-provider`
- Commit your changes: `git commit -m 'feat: add new provider'`
- Push to your branch: `git push origin feat/add-provider`
- Open a Pull Request
- Create `src/providers/your-provider.ts` implementing the `ProviderHandler` interface
- Add the provider name to the `Provider` type in `src/types.ts`
- Register it in `src/index.ts`
- Add tests in `test/index.test.ts`
- Update the README tables
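For orientation, a handler might look roughly like the sketch below. The actual `ProviderHandler` interface lives in `src/types.ts` and may differ; every name and field here is an assumption, not the real contract:

```ts
// Hypothetical shape of a provider handler: extract the provider's own
// error code from a raw body, then map it to a diagnosis entry.
interface ErrorEntry {
  message: string;
  hint: string;
  severity: 'low' | 'medium' | 'high' | 'critical';
}

interface ProviderHandlerSketch {
  extractCode(body: unknown): string | undefined;
  lookup(code: string): ErrorEntry | undefined;
}

// Example handler for a provider that nests codes under `err.code`.
const exampleHandler: ProviderHandlerSketch = {
  extractCode(body) {
    const code = (body as any)?.err?.code;
    return code === undefined ? undefined : String(code);
  },
  lookup(code) {
    const table: Record<string, ErrorEntry> = {
      '1004': { message: 'Authentication failed', hint: 'Check your API key.', severity: 'critical' },
    };
    return table[code];
  },
};
```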
Found an error code we missed? PRs that add real-world error codes with accurate messages and hints are especially valued. Please reference the provider's official documentation.
MIT © Shuyu Zhang
Built with ❤️ for everyone fighting LLM API errors at 3 AM