TypeScript library for building agents based on Schema-Guided Reasoning (SGR).
This library provides base classes and tools for creating intelligent agents that can reason about tasks and perform actions using tools. The library is a TypeScript port of the original Python project sgr-agent-core.
- Base classes for creating agents (`BaseAgent`)
- Tool system (`BaseTool`)
- Ready-to-use agent example (`SGRAgent`)
- Example tools (`ReasoningTool`, `FinalAnswerTool`)
- Simple configuration through constructors
- Full TypeScript typing
```bash
npm install sgr-agent-core
```

After installation, the library is ready to use. All necessary TypeScript types are included.
If the package is not yet published to npm, use local installation:
```bash
# First, build the project
cd /path/to/sgr-agent-core.js
npm install
npm run build

# Then in your project
npm install /path/to/sgr-agent-core.js
```

To build from source:

```bash
git clone <repository-url>
cd sgr-agent-core.js
npm install
npm run build
```

After installing the package (`npm install sgr-agent-core`), you can use the library in your project.
1. Install dependencies (if not already installed):

   ```bash
   npm install openai zod
   ```

2. Set up environment variables:

   ```bash
   export OPENAI_API_KEY=your-api-key
   ```

3. Import the necessary classes and functions from the package:

   ```typescript
   import { SGRAgent, createOpenAIClient, AgentConfig } from "sgr-agent-core";
   import { ReasoningTool, FinalAnswerTool } from "sgr-agent-core";
   ```

4. Create and run the agent (see the examples below).
A basic example:

```typescript
import { SGRAgent, createOpenAIClient, ReasoningTool, FinalAnswerTool } from "sgr-agent-core";

// 1. Create OpenAI client
const client = createOpenAIClient({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4o-mini",
});

// 2. Create agent
const agent = new SGRAgent(
  [{ role: "user", content: "What is 2+2?" }],
  client,
  { llm: { apiKey: process.env.OPENAI_API_KEY!, model: "gpt-4o-mini" } },
  [new ReasoningTool(), new FinalAnswerTool()]
);

// 3. Execute agent
const result = await agent.execute();
console.log(result);
```

A full example with explicit configuration:

```typescript
import {
  SGRAgent,
  createOpenAIClient,
  AgentConfig,
  ReasoningTool,
  FinalAnswerTool
} from "sgr-agent-core";

async function main() {
  // 1. Set up OpenAI client
  const openaiClient = createOpenAIClient({
    apiKey: process.env.OPENAI_API_KEY!,
    model: "gpt-4o-mini",
    // Optional: proxy URL for outgoing requests
    // proxy: "http://127.0.0.1:8080",
  });

  // 2. Configure agent
  const agentConfig: AgentConfig = {
    llm: {
      apiKey: process.env.OPENAI_API_KEY!,
      model: "gpt-4o-mini",
      temperature: 0.4,
      maxTokens: 8000,
    },
    execution: {
      maxIterations: 10,
      maxClarifications: 3,
    },
    prompts: {
      systemPrompt: `You are a helpful AI assistant.

Available tools:
{available_tools}

Use the reasoning tool first to analyze the task.`,
      initialUserRequest: `Current date: {current_date}`,
    },
  };

  // 3. Create toolkit
  const toolkit = [
    new ReasoningTool(),
    new FinalAnswerTool(),
  ];

  // 4. Create agent
  const agent = new SGRAgent(
    [{ role: "user", content: "What is 2+2? Provide a detailed explanation." }],
    openaiClient,
    agentConfig,
    toolkit
  );

  // 5. Execute agent
  const result = await agent.execute();
  console.log(result);
}

main().catch(console.error);
```

An example with the full toolkit, including web search:

```typescript
import {
  SGRAgent,
  createOpenAIClient,
  AgentConfig,
  ReasoningTool,
  FinalAnswerTool,
  WebSearchTool,
  ExtractPageContentTool,
  GeneratePlanTool,
  AdaptPlanTool,
  ClarificationTool,
  CreateReportTool
} from "sgr-agent-core";

async function main() {
  // Set up client
  const openaiClient = createOpenAIClient({
    apiKey: process.env.OPENAI_API_KEY!,
    model: "gpt-4o-mini",
  });

  // Configuration with web search
  const agentConfig: AgentConfig = {
    llm: {
      apiKey: process.env.OPENAI_API_KEY!,
      model: "gpt-4o-mini",
      temperature: 0.4,
      maxTokens: 8000,
    },
    execution: {
      maxIterations: 10,
      maxClarifications: 3,
    },
    search: {
      tavilyApiKey: process.env.TAVILY_API_KEY, // Optional: for web search
      maxResults: 10,
    },
    prompts: {
      systemPrompt: `You are a helpful AI assistant that can search the web and provide answers.

Available tools:
{available_tools}

Use the reasoning tool first to analyze the task.`,
      initialUserRequest: `Current date: {current_date}`,
    },
  };

  // Full toolkit
  const toolkit = [
    new ReasoningTool(),
    new GeneratePlanTool(),
    new AdaptPlanTool(),
    new WebSearchTool(),
    new ExtractPageContentTool(),
    new ClarificationTool(),
    new CreateReportTool(),
    new FinalAnswerTool(),
  ];

  // Create and execute agent
  const agent = new SGRAgent(
    [{ role: "user", content: "What is the latest news about AI?" }],
    openaiClient,
    agentConfig,
    toolkit
  );

  const result = await agent.execute();
  console.log(result);
}

main().catch(console.error);
```

An example with streaming output:

```typescript
import {
  SGRAgent,
  createOpenAIClient,
  AgentConfig,
  StreamingCallback,
  ReasoningTool,
  FinalAnswerTool
} from "sgr-agent-core";

async function main() {
  const openaiClient = createOpenAIClient({
    apiKey: process.env.OPENAI_API_KEY!,
    model: "gpt-4o-mini",
  });

  // Set up streaming callback
  const streamingCallback: StreamingCallback = {
    onChunk: (chunk: string) => {
      process.stdout.write(chunk); // Output chunks in real time
    },
    onToolCall: (toolCallId: string, toolName: string, toolArguments: string) => {
      console.log(`\n[Tool Call] ${toolName} (${toolCallId})`);
    },
    onFinish: (finalContent: string) => {
      console.log("\n[Stream finished]");
    },
  };

  const agentConfig: AgentConfig = {
    llm: {
      apiKey: process.env.OPENAI_API_KEY!,
      model: "gpt-4o-mini",
      temperature: 0.4,
      maxTokens: 8000,
    },
    execution: {
      maxIterations: 10,
      enableStreaming: true, // Enable streaming
    },
    prompts: {
      systemPrompt: `You are a helpful AI assistant.

Available tools:
{available_tools}`,
      initialUserRequest: `Current date: {current_date}`,
    },
  };

  const toolkit = [
    new ReasoningTool(),
    new FinalAnswerTool(),
  ];

  // Pass streamingCallback to the constructor
  const agent = new SGRAgent(
    [{ role: "user", content: "Explain quantum computing in simple terms." }],
    openaiClient,
    agentConfig,
    toolkit,
    undefined, // name
    undefined, // logger
    streamingCallback // streaming callback
  );

  const result = await agent.execute();
  console.log("\n=== Final Result ===");
  console.log(result);
}

main().catch(console.error);
```

Create a .env file or set environment variables:

```bash
export OPENAI_API_KEY=your-openai-api-key
export OPENAI_MODEL=gpt-4o-mini                   # optional
export OPENAI_BASE_URL=https://api.openai.com/v1  # optional
export OPENAI_PROXY=http://127.0.0.1:8080         # optional
export TAVILY_API_KEY=your-tavily-api-key         # optional, for web search
export ENABLE_STREAMING=true                      # optional
```

To run the bundled example:

```bash
export OPENAI_API_KEY=your-api-key
npm run example
```

`BaseAgent` is the base abstract class for all agents. It provides:
- Execution context management (`AgentContext`)
- Step logging
- User clarification handling
- Execution loop with phases: reasoning → select action → action

Key methods:

- `reasoningPhase()` - reasoning phase (abstract)
- `selectActionPhase(reasoning)` - tool selection (abstract)
- `actionPhase(tool)` - tool execution (abstract)
- `execute()` - main execution loop
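The execution loop these methods form can be sketched roughly as follows. This is a simplified model for illustration only, not the library's actual `execute()` implementation; the stop condition on a tool named `"final_answer"` and the exact phase signatures are assumptions:

```typescript
// Simplified model of the reasoning → select action → action loop.
type Tool = { toolName: string; execute: (args: unknown) => Promise<string> };

async function executeLoop(
  reasoningPhase: () => Promise<unknown>,
  selectActionPhase: (reasoning: unknown) => Promise<Tool>,
  actionPhase: (tool: Tool) => Promise<string>,
  maxIterations: number
): Promise<string> {
  let lastResult = "";
  for (let i = 0; i < maxIterations; i++) {
    const reasoning = await reasoningPhase();        // 1. reasoning phase
    const tool = await selectActionPhase(reasoning); // 2. tool selection
    lastResult = await actionPhase(tool);            // 3. tool execution
    if (tool.toolName === "final_answer") break;     // stop on final answer (assumed name)
  }
  return lastResult;
}
```

The real `BaseAgent` additionally handles clarifications and state transitions, but the overall shape of the loop is the same.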
`BaseTool` is the interface for tools that agents can use. Each tool must implement:

- `toolName`: unique tool name
- `description`: tool description for the LLM
- `execute(context, config, data)`: execution method
Supporting types:

- `AgentContext`: agent execution context (state, iterations, results)
- `AgentStatesEnum`: agent states (INITED, RESEARCHING, COMPLETED, etc.)
- `SearchResult`, `SourceData`: for search functionality (optional)
To create a custom agent, extend `BaseAgent` and implement the three phases:

```typescript
import { BaseAgent, BaseTool, AgentConfig } from "sgr-agent-core";
import OpenAI from "openai";

class MyAgent extends BaseAgent {
  name = "my_agent";

  protected async reasoningPhase() {
    // 1. Prepare context and tools
    const messages = await this.prepareContext();
    const tools = await this.prepareTools();

    // 2. Call LLM for reasoning
    const response = await this.openaiClient.chat.completions.create({
      model: this.config.llm.model,
      messages,
      tools,
      tool_choice: { type: "function", function: { name: "reasoning" } },
    });

    // 3. Extract reasoning result from the forced tool call
    const reasoning = JSON.parse(
      response.choices[0].message.tool_calls![0].function.arguments
    );
    this.logReasoning(reasoning);
    return reasoning;
  }

  protected async selectActionPhase(reasoning: any): Promise<BaseTool> {
    // Select tool based on reasoning
    // (the field name depends on your reasoning schema; `next_tool` is illustrative)
    const toolName = reasoning.next_tool;
    return this.toolkit.find(t => t.toolName === toolName)!;
  }

  protected async actionPhase(tool: BaseTool): Promise<string> {
    // Execute tool
    const messages = await this.prepareContext();
    const response = await this.openaiClient.chat.completions.create({
      model: this.config.llm.model,
      messages,
      tools: await this.prepareTools(),
      tool_choice: { type: "function", function: { name: tool.toolName } },
    });
    const args = JSON.parse(response.choices[0].message.tool_calls![0].function.arguments);
    const result = await tool.execute(this.context, this.config, args);
    this.conversation.push({
      role: "tool",
      content: result,
      tool_call_id: response.choices[0].message.tool_calls![0].id,
    });
    this.logToolExecution(tool, result);
    return result;
  }
}
```

To create a custom tool, implement the `BaseTool` interface:

```typescript
import { BaseTool, AgentConfig, AgentContext } from "sgr-agent-core";

class MyTool implements BaseTool {
  toolName = "my_tool";
  description = "Description of my tool for the LLM";

  async execute(
    context: AgentContext,
    config: AgentConfig,
    data: any
  ): Promise<string> {
    // Tool execution logic:
    // - use context to save state
    // - use config to access settings
    const result = "Execution result";
    return result;
  }
}
```

The library supports streaming responses from the LLM for real-time output:
```typescript
import { SGRAgent, createOpenAIClient, StreamingCallback } from "sgr-agent-core";

const streamingCallback: StreamingCallback = {
  onChunk: (chunk: string) => {
    process.stdout.write(chunk); // Print chunks as they arrive
  },
  onToolCall: (toolCallId: string, toolName: string, toolArguments: string) => {
    console.log(`\n[Tool Call] ${toolName}`);
  },
  onFinish: (finalContent: string) => {
    console.log("\n[Stream finished]");
  },
};

const agentConfig: AgentConfig = {
  llm: { /* ... */ },
  execution: {
    maxIterations: 10,
    enableStreaming: true, // Enable streaming
  },
};

const agent = new SGRAgent(
  taskMessages,
  openaiClient,
  agentConfig,
  toolkit,
  undefined, // name
  undefined, // logger
  streamingCallback // streaming callback
);
```

When `enableStreaming` is true, the agent uses streaming API calls, and the callbacks are invoked as chunks arrive.
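A callback does not have to print directly; it can also buffer chunks for later use. A minimal sketch, assuming the `StreamingCallback` shape shown above (the type is redefined locally here so the snippet is self-contained):

```typescript
// Locally redefined to keep the sketch self-contained; in a real project,
// import StreamingCallback from "sgr-agent-core" instead.
type StreamingCallback = {
  onChunk: (chunk: string) => void;
  onToolCall: (toolCallId: string, toolName: string, toolArguments: string) => void;
  onFinish: (finalContent: string) => void;
};

// Collect streamed chunks into a buffer and track completion.
function makeBufferingCallback() {
  const chunks: string[] = [];
  let finished = false;
  const callback: StreamingCallback = {
    onChunk: (chunk) => { chunks.push(chunk); },
    onToolCall: (_id, name) => { chunks.push(`\n[tool: ${name}]\n`); },
    onFinish: () => { finished = true; },
  };
  return {
    callback,
    text: () => chunks.join(""),
    isFinished: () => finished,
  };
}
```

Pass `makeBufferingCallback().callback` to the `SGRAgent` constructor in place of a printing callback when you want to post-process the streamed text.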
All settings are passed through constructors:
```typescript
const agentConfig: AgentConfig = {
  llm: {
    apiKey: "your-api-key",
    model: "gpt-4o-mini",
    baseURL: "https://api.openai.com/v1", // optional
    temperature: 0.4,
    maxTokens: 8000,
    proxy: "http://127.0.0.1:8080", // optional: proxy URL
  },
  execution: {
    maxIterations: 10,      // maximum number of iterations
    maxClarifications: 3,   // maximum number of clarifications
    enableStreaming: false, // enable streaming responses (optional)
  },
  prompts: {
    systemPrompt: "...",          // system prompt with {available_tools}
    initialUserRequest: "...",    // initial request with {current_date}
    clarificationResponse: "...", // clarification response
  },
};
```

The library supports proxy configuration for OpenAI API requests. To use a proxy:
1. Install the required proxy agent packages (optional dependencies):

   ```bash
   npm install https-proxy-agent http-proxy-agent socks-proxy-agent
   ```

2. Configure the proxy in the LLM config:

   ```typescript
   const agentConfig: AgentConfig = {
     llm: {
       apiKey: "your-api-key",
       model: "gpt-4o-mini",
       proxy: "http://127.0.0.1:8080", // HTTP proxy
       // or proxy: "socks5://127.0.0.1:1080", // SOCKS5 proxy
       // or proxy: "https://proxy.example.com:8080", // HTTPS proxy
     },
   };
   ```

3. Use the `createOpenAIClient` helper function:

   ```typescript
   import { createOpenAIClient } from "sgr-agent-core";

   const openaiClient = createOpenAIClient(agentConfig.llm);
   ```

Supported proxy formats:

- `http://host:port` - HTTP proxy
- `https://host:port` - HTTPS proxy
- `socks5://host:port` - SOCKS5 proxy
- `socks4://host:port` - SOCKS4 proxy
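Choosing a proxy agent amounts to dispatching on the URL scheme. The following is a hypothetical sketch (not sgr-agent-core's actual code) of how a proxy URL maps to the optional package that handles it:

```typescript
// Hypothetical helper: pick the proxy agent package for a given proxy URL.
// Illustrates how the supported schemes map to the optional dependencies above.
function proxyAgentFor(proxyUrl: string): string {
  const scheme = new URL(proxyUrl).protocol.replace(":", "");
  switch (scheme) {
    case "http":
      return "http-proxy-agent";
    case "https":
      return "https-proxy-agent";
    case "socks4":
    case "socks5":
      return "socks-proxy-agent";
    default:
      throw new Error(`Unsupported proxy scheme: ${scheme}`);
  }
}

console.log(proxyAgentFor("socks5://127.0.0.1:1080")); // prints "socks-proxy-agent"
```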
The library exports the following main classes and types:

Agents:

- `SGRAgent` - ready-to-use SGR agent
- `BaseAgent` - base class for creating custom agents

Tools:

- `ReasoningTool` - reasoning tool
- `FinalAnswerTool` - final answer tool
- `WebSearchTool` - web search tool (requires Tavily API)
- `ExtractPageContentTool` - page content extraction tool
- `GeneratePlanTool` - plan generation tool
- `AdaptPlanTool` - plan adaptation tool
- `ClarificationTool` - clarification request tool
- `CreateReportTool` - report creation tool
- `BaseTool` - base interface for creating custom tools

Utilities:

- `createOpenAIClient(config)` - create OpenAI client with proxy support
- `StreamingCallback` - streaming interface
- `StreamingHandler` - streaming handler

Types:

- `AgentConfig` - agent configuration
- `LLMConfig` - LLM configuration
- `ExecutionConfig` - execution configuration
- `AgentContext` - agent execution context
- `AgentStatesEnum` - agent states
```typescript
// Import main classes
import { SGRAgent, createOpenAIClient, AgentConfig } from "sgr-agent-core";

// Import tools
import {
  ReasoningTool,
  FinalAnswerTool,
  WebSearchTool
} from "sgr-agent-core";

// Import types
import type { StreamingCallback, AgentContext } from "sgr-agent-core";
```

See the examples/ folder for more detailed usage examples:

- `simple-agent.ts` - simple example of using SGRAgent (for development)
- `usage-example.ts` - example showing how to use the library after npm install
Differences from the original Python implementation:

- No centralized configs (YAML files) - everything through constructors
- No API server - library only for use in applications
- Simplified NextStepTools implementation (without dynamic type generation)
- All settings are explicitly passed through constructors
MIT