Build and run AI-native agent teams in Go.
agent-team-go is a Go-first platform skeleton for teams of agents that can coordinate work, install the skills they need, and connect to real delivery channels like Feishu and Telegram.
Chinese docs · Contributing · Security
Most agent frameworks stop at orchestration demos. Production teams need more:
- Structured delegation instead of prompt-only handoffs
- Custom skills with auto-install from local, registry, or git sources
- Channel adapters for Feishu, Telegram, and CLI-first workflows
- Replayable runs, artifacts, and event logs
- A clean Go codebase that is simple to deploy and extend
This repository is the first public release of that direction.
- **Custom Skills**: define your own skill packages and keep them versioned
- **Auto Skill Install**: missing skills are resolved and installed before a run
- **Feishu / Telegram Gateway**: channel adapters are first-class, not an afterthought
- **Structured Delegation**: captain, planner, researcher, coder, reviewer all work through typed work items
- **Replay Logs**: every run emits events and artifacts that can be replayed later
- **Checkpoints + Approvals**: runs persist checkpoints and approval events for safer execution
- **Team Memory**: compact history from earlier runs feeds into future planning and synthesis
- **Revision Loop**: operators can request changes, resume the run, and re-review the revised draft
- **Model Bindings**: each agent can declare its own model while providers are configured once at the team level
- **Retry-Aware Execution**: work items can retry and surface blocked dependencies instead of failing silently
- **Real Delivery**: enabled channels can send real Telegram and Feishu messages, not only previews
- **Incoming Gateway**: Telegram and Feishu webhook events can trigger auto-generated teams directly
- **Pause / Resume**: manual approval mode can pause a run, persist state, and resume after a human decision
```shell
git clone git@github.com:daewoochen/agent-team-go.git
cd agent-team-go

go run ./cmd/agentteam run \
  --team ./examples/software-team/team.yaml \
  --task "Launch the public MVP and de-risk the first release"
```

```shell
go run ./cmd/agentteam auto \
  --task "Compare the top Go agent runtimes and propose our launch angle"

go run ./cmd/agentteam serve --listen :8080 --deliver

go run ./cmd/agentteam channels validate --team ./examples/software-team/team.yaml
```

```shell
go run ./cmd/agentteam skills install \
  --name github \
  --source local \
  --path ./skills/github

go run ./cmd/agentteam skills scaffold \
  --name launch-writer \
  --dir ./skills/launch-writer \
  --description "Draft release-ready launch notes"

go run ./cmd/agentteam skills search --query messenger
go run ./cmd/agentteam skills list --workdir .
```

```shell
go run ./cmd/agentteam init --name my-team --dir ./demo

go run ./cmd/agentteam models explain --team ./examples/software-team/team.yaml

go run ./cmd/agentteam inspect team --team ./examples/software-team/team.yaml
go run ./cmd/agentteam inspect team --team ./examples/software-team/team.yaml --format mermaid

go run ./cmd/agentteam replay show --run ./.agentteam/runs/<run-id>.json
go run ./cmd/agentteam memory show --team ./examples/software-team/team.yaml
```

```shell
go run ./cmd/agentteam run \
  --team ./examples/manual-approval-team/team.yaml \
  --task "Prepare the launch response and guarded rollout plan"

go run ./cmd/agentteam approvals show --checkpoint ./.agentteam/checkpoints/<run-id>.json
go run ./cmd/agentteam approvals approve --checkpoint ./.agentteam/checkpoints/<run-id>.json --all
go run ./cmd/agentteam resume --team ./examples/manual-approval-team/team.yaml --checkpoint ./.agentteam/checkpoints/<run-id>.json
```

If the operator wants the team to revise the draft instead of approving immediately:

```shell
go run ./cmd/agentteam approvals request-changes \
  --checkpoint ./.agentteam/checkpoints/<run-id>.json \
  --id approval-outbound-message \
  --note "Add rollback guidance and make the customer message more conservative"

go run ./cmd/agentteam resume --team ./examples/manual-approval-team/team.yaml --checkpoint ./.agentteam/checkpoints/<run-id>.json
```

If the operator wants to stop the run instead of continuing:

```shell
go run ./cmd/agentteam approvals reject \
  --checkpoint ./.agentteam/checkpoints/<run-id>.json \
  --id approval-outbound-message \
  --note "Need a safer rollout and external review first"
```

What a run does:

- Parses a declarative `team.yaml`
- Validates channel configuration
- Validates model provider configuration and API key env bindings
- Ensures required skills are installed before a run
- Runs a hierarchical team loop with structured delegations, retries, and dependency-aware scheduling
- Produces work items, approvals, artifacts, checkpoints, replay logs, compact team memory, real or preview deliveries, and resumable paused runs
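Structured delegation means agents exchange typed work items rather than free-form prompt handoffs. A minimal sketch of what such a unit might look like follows; the struct, field, and function names here are illustrative assumptions, not agent-team-go's actual types:

```go
package main

import "fmt"

// Status values a work item can move through; names are illustrative.
type Status string

const (
	Pending Status = "pending"
	Blocked Status = "blocked"
	Done    Status = "done"
)

// WorkItem is a hypothetical typed delegation unit: an explicit contract
// between agents instead of a free-form prompt handoff.
type WorkItem struct {
	ID        string
	Assignee  string   // agent role, e.g. "coder"
	Task      string
	DependsOn []string // IDs of work items that must be Done first
	Retries   int
	Status    Status
}

// ready reports whether every dependency of w has completed, which is the
// check a dependency-aware scheduler makes before dispatching w.
func ready(w WorkItem, done map[string]bool) bool {
	for _, dep := range w.DependsOn {
		if !done[dep] {
			return false
		}
	}
	return true
}

func main() {
	done := map[string]bool{"plan-1": true}
	w := WorkItem{ID: "code-1", Assignee: "coder", Task: "implement MVP", DependsOn: []string{"plan-1"}, Status: Pending}
	fmt.Println(ready(w, done)) // true
}
```

Items whose dependencies are unmet would surface as blocked-dependency events rather than failing silently.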
Use environment bindings in `team.yaml`:

```yaml
channels:
  - kind: telegram
    enabled: true
    token: env:TELEGRAM_BOT_TOKEN
    allow_from: [env:TELEGRAM_CHAT_ID]
  - kind: feishu
    enabled: true
    app_id: env:FEISHU_APP_ID
    app_secret: env:FEISHU_APP_SECRET
    allow_from: [env:FEISHU_CHAT_ID]
```

Then send the prepared deliveries:

```shell
go run ./cmd/agentteam run --team ./examples/assistant-team/team.yaml --task "Draft the launch update" --deliver
go run ./cmd/agentteam channels deliver --team ./examples/assistant-team/team.yaml --run ./.agentteam/runs/<run-id>.json
```

`token`, `app_id`, `app_secret`, and `allow_from` entries all support `env:VAR_NAME` so you can keep secrets out of committed YAML.
If you want a dumb-simple bot backend, run:
```shell
go run ./cmd/agentteam serve --listen :8080 --deliver
```

Endpoints:

- `POST /webhooks/telegram`
- `POST /webhooks/feishu`
- `GET /healthz`
What happens on each incoming message:
- The gateway normalizes the message into a task.
- `agent-team-go` auto-selects a team profile such as research, incident, or software.
- The team runs with memory enabled.
- The final summary is sent back to the source Telegram chat or Feishu chat when `--deliver` is enabled.
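The normalization step can be sketched in a few lines of Go. The struct and function names below are illustrative assumptions based on the Telegram payload in the curl example, not agent-team-go's actual types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// telegramUpdate models only the fields of a Telegram webhook payload that a
// gateway needs; real Telegram Bot API updates carry many more fields.
type telegramUpdate struct {
	Message struct {
		Text string `json:"text"`
		Chat struct {
			ID int64 `json:"id"`
		} `json:"chat"`
	} `json:"message"`
}

// Task is a hypothetical normalized unit of work derived from a message.
type Task struct {
	Text    string // what the team should do
	ReplyTo int64  // chat to deliver the final summary back to
}

// normalizeTelegram turns a raw webhook body into a Task.
func normalizeTelegram(body []byte) (Task, error) {
	var u telegramUpdate
	if err := json.Unmarshal(body, &u); err != nil {
		return Task{}, err
	}
	if u.Message.Text == "" {
		return Task{}, fmt.Errorf("update has no message text")
	}
	return Task{Text: u.Message.Text, ReplyTo: u.Message.Chat.ID}, nil
}

func main() {
	body := []byte(`{"message":{"text":"Prepare an incident response brief","chat":{"id":12345},"from":{"id":7}}}`)
	t, err := normalizeTelegram(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(t.Text, t.ReplyTo) // Prepare an incident response brief 12345
}
```

Keeping the chat ID on the task is what makes the `--deliver` reply possible later.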
Telegram example:
```shell
curl -X POST http://127.0.0.1:8080/webhooks/telegram \
  -H 'Content-Type: application/json' \
  -d '{"message":{"text":"Prepare an incident response brief","chat":{"id":12345},"from":{"id":7}}}'
```

Feishu example:

```shell
curl -X POST http://127.0.0.1:8080/webhooks/feishu \
  -H 'Content-Type: application/json' \
  -d '{"header":{"event_type":"im.message.receive_v1"},"event":{"sender":{"sender_id":{"open_id":"ou_x"}},"message":{"chat_id":"oc_123","message_type":"text","content":"{\"text\":\"Draft the launch update\"}"}}}'
```

Agent teams should not lose every lesson after a run finishes.
Enable file-backed memory in `team.yaml`:

```yaml
memory:
  backend: file
  path: .agentteam/memory/release-history.json
  max_entries: 8
```

Then inspect it:

```shell
go run ./cmd/agentteam memory show --team ./examples/release-memory-team/team.yaml
```

This is useful for recurring cases such as release management, incident follow-up, customer support triage, and weekly research programs.
Model providers live under `models.providers` in `team.yaml`. The recommended pattern is:

- Put the real secret in an environment variable
- Reference that variable with `api_key_env`
- Point each agent at a model like `openai/gpt-4.1-mini`
Example:
```yaml
models:
  default_model: openai/gpt-4.1-mini
  providers:
    openai:
      kind: openai-compatible
      base_url: https://api.openai.com/v1
      api_key_env: OPENAI_API_KEY
agents:
  - name: captain
    role: captain
    model: openai/gpt-4.1
```

Then export the key before you run the team:

```shell
export OPENAI_API_KEY=your_api_key
go run ./cmd/agentteam models validate --team ./team.yaml
```

agentteam will also auto-load a `.env` file from the current working directory and the team spec directory when present. The repo ships an `.env.example` file with common variable names.
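A `provider/model` reference like `openai/gpt-4.1-mini` splits cleanly on the first slash. This is a minimal sketch of that convention; `splitModelRef` is an illustrative name, not agent-team-go's internal API:

```go
package main

import (
	"fmt"
	"strings"
)

// splitModelRef splits a "provider/model" reference into the provider key
// (looked up under models.providers) and the model name passed to it.
func splitModelRef(ref string) (provider, model string, err error) {
	provider, model, ok := strings.Cut(ref, "/")
	if !ok || provider == "" || model == "" {
		return "", "", fmt.Errorf("model ref %q must look like provider/model", ref)
	}
	return provider, model, nil
}

func main() {
	p, m, err := splitModelRef("openai/gpt-4.1-mini")
	if err != nil {
		panic(err)
	}
	fmt.Println(p, m) // openai gpt-4.1-mini
}
```

Rejecting malformed references up front is the kind of check `models validate` can run before any provider is contacted.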
If you want the easiest path, use the auto mode.
```shell
go run ./cmd/agentteam auto --task "Prepare an incident response brief for the sev1 outage"
```

The CLI will:

- Classify the task into a built-in team profile such as software, research, incident, content, or assistant.
- Build a ready-to-run team with captain/planner/specialists.
- Enable persistent memory by default.
- Automatically use OpenAI if `OPENAI_API_KEY` is present, otherwise fall back to deterministic mock providers.
- Auto-enable Telegram or Feishu delivery when the relevant env vars are present.
```mermaid
flowchart TD
    User["User / Trigger"] --> CLI["agentteam CLI"]
    CLI --> Loader["TeamSpec Loader"]
    Loader --> Skills["Skill Resolver + Installer"]
    Loader --> Policy["Policy Gate"]
    Skills --> Runtime["Hierarchical Runtime"]
    Runtime --> Captain["Captain"]
    Captain --> Planner["Planner"]
    Captain --> Researcher["Researcher"]
    Captain --> Coder["Coder"]
    Captain --> Reviewer["Reviewer"]
    Runtime --> Channels["CLI / Telegram / Feishu"]
    Runtime --> Replay["Replay Log + Artifacts"]
```
- **Software Team**: Captain coordinates Planner, Researcher, Coder, and Reviewer to ship a feature or release.
- **Assistant Team**: Coordinator receives incoming requests, routes them to specialists, and reports progress back to Feishu or Telegram.
- **Ops Team**: A captain agent validates channel access, installs missing skills, and assembles a safe execution plan.
- **Manual Approval Team**: A run pauses for human approval before protected actions, then resumes from checkpoint.
- **Deep Research Team**: Researcher and Reviewer build a fact package while the captain prepares a final synthesis.
- **Incident Response Team**: Captain coordinates evidence gathering and approval-aware stakeholder updates.
- **Content Studio Team**: A small team plans, drafts, and reviews launch assets using reusable skills.
- **Release Memory Team**: A recurring release team remembers prior risks, decisions, and follow-up tasks across runs.
- **Auto Team**: Give the CLI a task and let it choose the agent mix for you.
More example specs live in `examples/README.md`. If you want a real provider example, start from `examples/openai-launch-team/team.yaml`.
- Single binary distribution
- Strong typing for specs, work items, and delegation contracts
- Great fit for concurrent run orchestration
- Friendly to platform teams that want predictable operations
This repo is intentionally opinionated:
- It starts from team execution, not just model orchestration
- It treats skills and channels as platform primitives
- It keeps the code small enough to learn, fork, and ship
- **v0.1**: CLI, TeamSpec, skill resolver, local runtime, CLI channel
- **v0.2**: richer Telegram and Feishu adapters, stronger policy hooks
- **v0.3**: MCP bridge, better artifact handling, richer replay visualization
- **v0.4**: A2A bridge, sandbox execution, web console
```
cmd/agentteam   # CLI entrypoint
pkg/spec        # TeamSpec, AgentSpec, SkillManifest, channel config
pkg/runtime     # Run loop, delegation events, replay model
pkg/skills      # Skill resolver, installer, registry placeholder
pkg/channels    # CLI / Telegram / Feishu adapters
pkg/agents      # Role helpers
pkg/policy      # Download / install policy hooks
pkg/observe     # Replay log writer
examples/       # Runnable team templates
skills/         # Bundled skills
docs/           # Extra documentation
```
- Team-level model provider config with per-agent model selection
- Real `openai-compatible` provider support alongside deterministic mock providers
- `agentteam models explain` and `agentteam models validate`
- `agentteam skills scaffold`, `skills search`, and `skills list`
- `agentteam inspect team --format text|mermaid`
- Retry-aware work items with blocked-dependency events
- Prepared channel delivery previews in run output and replay logs
- Manual approval mode with checkpoint-backed `approvals show/approve` and `resume`
- Approval rejection and operator notes that flow back into resumed runs
- Request-changes approval loops that revise the draft and reopen approvals for re-review
- Replay inspection via `agentteam replay show`
- Persistent team memory with `agentteam memory show`
- Auto-generated teams via `agentteam auto`
- Real Telegram and Feishu delivery via `--deliver` and `channels deliver`
- Incoming webhook gateway via `agentteam serve`
- Checkpoint persistence under `.agentteam/checkpoints/`
- Richer example cases for research, incident response, and content teams
This is a polished MVP skeleton. It is meant to be runnable, readable, and easy to extend. The next step after the initial launch is to replace placeholder integrations with full production adapters while keeping the public interfaces stable.
Issues and pull requests are welcome. Good first contributions:
- richer skill manifests
- more realistic delegation strategies
- deeper Telegram / Feishu validation
- replay visualizers
- MCP and sandbox integrations
If this direction resonates with you, give the repo a star and share it with one builder who is tired of fragile agent demos.