Home
A developer-friendly Go package for AI models with multi-provider support.
- Quick Start
- Configuration
- Models Reference
- Multi-Provider Support
- Builder API
- Prompts
- Conversations
- Streaming
- Comparing Models
- Tool/Function Calling
- Builtin Tools
- Structured Output
- Structured Output (Parse Retries)
- Response Validation & Guardrails
- Smart Retry (Exponential Backoff)
- Agents
- Embeddings
- Audio
- Vision / Image Input
- Batch
- Cost Tracking
- Hooks & Observability
- Advanced Features
The ai package provides a fluent, chainable API for interacting with AI models. It supports multiple providers (OpenRouter, OpenAI, Anthropic, Ollama, Azure) and is designed for rapid experimentation and testing of prompts.
```go
import ai "gopkg.in/dragon-born/go-llm.v1"

// Quick one-liner (uses OpenRouter by default)
ai.Ask("Hello!")

// Fluent API
ai.GPT5().
	System("You are helpful").
	User("Explain AI").
	Send()

// Direct provider access
ai.Anthropic().Claude().Ask("Hello")      // Direct to Anthropic
ai.OpenAI().GPT4o().Ask("Hello")          // Direct to OpenAI
ai.Ollama().Use("llama3:8b").Ask("Hello") // Local Ollama

// With all the bells and whistles
ai.Claude().
	SystemFile("prompts/analyst.md").
	With(ai.Vars{"domain": "crypto"}).
	Context("data.json").
	Retry(3).
	Fallback(ai.ModelGemini3Flash).
	Ask("Analyze this")
```
Package layout:

```
ai/
├── ai.go                   # Shortcuts, settings, model helpers
├── builder.go              # Fluent Builder API
├── client.go               # Multi-provider client
├── provider.go             # Provider interface & types
├── provider_openrouter.go  # OpenRouter provider
├── provider_openai.go      # OpenAI provider
├── provider_anthropic.go   # Anthropic provider
├── provider_ollama.go      # Ollama provider (local)
├── providers.go            # Provider shortcuts (Anthropic(), etc.)
├── types.go                # Shared types
├── chat.go                 # Conversation helper
├── compare.go              # Multi-model comparison
├── models.go               # Model constants
├── prompts.go              # Prompt file loading
├── cache.go                # Response caching
├── stats.go                # Token/cost tracking
├── stream.go               # Streaming support
├── tools.go                # Function/tool calling
├── schema.go               # Structured output
├── vision.go               # Image support
└── pretty.go               # Colored output
```
Every request starts with a Builder that you chain methods on:
```go
ai.GPT5().          // returns *Builder
	System("...").  // returns *Builder
	User("...").    // returns *Builder
	Send()          // executes, returns (string, error)
```

Convenient functions that return a Builder for each model:
```go
ai.GPT5()        // OpenAI GPT-5.2
ai.Claude()      // Anthropic Claude Opus 4.5
ai.Gemini()      // Google Gemini 3 Flash
ai.Grok()        // xAI Grok 3
ai.Use("any/id") // Any OpenRouter model
```

Global settings:

```go
ai.Debug = true  // Print requests/responses
ai.Pretty = true // Colored output (default)
ai.Cache = true  // Cache identical requests
```
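As a hedged end-to-end sketch, the program below combines the global switches with one of the model helpers. It only uses names listed on this page (`ai.Debug`, `ai.Cache`, `ai.Gemini()`, `System`, `User`, `Send`); the prompt text, and the assumption that an OpenRouter API key is already configured, are illustrative rather than part of the package.

```go
package main

import (
	"fmt"
	"log"

	ai "gopkg.in/dragon-born/go-llm.v1"
)

func main() {
	// Package-level switches apply to every subsequent request.
	ai.Debug = true // print requests/responses while experimenting
	ai.Cache = true // reuse cached responses for identical requests

	// Every model helper returns a *Builder, so the same chain works
	// no matter which provider ends up serving the request.
	reply, err := ai.Gemini().
		System("You are concise.").
		User("Summarize what a fluent builder API is in one sentence.").
		Send()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(reply)
}
```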