Fast, lean AI agents. 5 lines to production.
Coding agents: See SKILLS.md for the complete API guide — tools, agents, multi-agent patterns, streaming, memory, and all imports in one file.
pop is a lightweight Python framework for building AI agents. It supports multiple LLM providers, has 5 core concepts, and gets you from install to a working agent in under 2 minutes.
- 5 lines to a working agent -- define a tool, create an agent, call `run`.
- 8 LLM providers built-in -- OpenAI, Anthropic, Gemini, DeepSeek, Grok, Kimi, MiniMax, GLM. Switch by changing one string.
- ~2,500 lines of code -- read the entire framework in an afternoon.
- 2 runtime dependencies -- `httpx` and `pydantic`. Import time under 1ms (lazy imports).
- Zero commercial dependencies -- no forced telemetry, no vendor lock-in.
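Provider switching works because the model string carries the provider as a prefix. As a rough illustration of how such a `provider:model` spec could be split (a hypothetical sketch, not pop's internal code):

```python
# Illustration only: how a "provider:model" string might be parsed.
# parse_model is a hypothetical helper, not part of pop's API.

def parse_model(spec: str) -> tuple[str, str]:
    """Split a spec like 'openai:gpt-4o' into (provider, model)."""
    provider, _, model = spec.partition(":")
    if not model:
        raise ValueError(f"expected 'provider:model', got {spec!r}")
    return provider, model

print(parse_model("openai:gpt-4o"))  # ('openai', 'gpt-4o')
```

With this shape, changing `"openai:gpt-4o"` to any other supported prefix is the whole migration.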
Reproduce:

```shell
python benchmarks/bench_startup.py && python benchmarks/bench_dx.py && python benchmarks/generate_charts.py
```

Details: docs/benchmarks.md
```shell
uv add pop-framework
# or
pip install pop-framework
```

All 8 providers (OpenAI, Anthropic, Gemini, DeepSeek, Grok, Kimi, MiniMax, GLM) are included — no extras needed.
```python
from pop import Agent, tool

@tool
def search(query: str) -> str:
    """Search the web for current information."""
    return web_search(query)  # your implementation

agent = Agent(model="openai:gpt-4o", tools=[search])
result = agent.run("What happened in AI today?")
print(result.output)
```

That's it. No StateGraph, no RunnableSequence, no ChannelWrite.
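A decorator like `@tool` typically captures a function's signature and docstring so the agent can describe the tool to the model. A minimal sketch of that mechanism (my illustration; pop's actual implementation may differ):

```python
# Illustration only: one way a @tool decorator can derive a tool spec
# from type hints and the docstring. Hypothetical, not pop's internals.
import inspect
from typing import get_type_hints

def tool(fn):
    hints = get_type_hints(fn)
    params = {
        name: hints.get(name, str).__name__
        for name in inspect.signature(fn).parameters
    }
    # Attach metadata an agent loop could send to the LLM as a tool spec.
    fn.tool_spec = {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": params,
    }
    return fn

@tool
def search(query: str) -> str:
    """Search the web for current information."""
    return f"results for {query}"

print(search.tool_spec["name"])        # search
print(search.tool_spec["parameters"])  # {'query': 'str'}
```

The decorated function stays directly callable; the agent only needs the attached spec to advertise it to the model.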
| Guide | What it covers |
|---|---|
| Skills | Complete API guide for building agents |
| Providers | Switching LLMs, failover, model adapters |
| Streaming | Real-time events, pattern matching |
| Workflows | Chain, route, parallel, agent, orchestration |
| Multi-Agent | Handoff, pipeline, debate, fan_out |
| Memory | In-memory and markdown-based persistence |
| Benchmarks | Performance numbers, framework comparison |
MIT