AutoMock is a cloud-native CLI tool that generates and deploys production-ready mock API servers from natural language descriptions, API collections, or an interactive builder. Spin up fully managed mock servers on AWS with auto-scaling, monitoring, and built-in load testing support.
Cloud provider support is currently AWS-only. GCP and Azure are planned but not yet available.
- **AI-Generated Mocks**: Describe your API in natural language, get complete MockServer configurations
- **Cloud-Native Deployment**: One command deploys ECS Fargate + ALB + auto-scaling
- **Multi-Format Import**: Postman, Bruno, and Insomnia collections → MockServer expectations
- **Interactive Builder**: 7-step guided builder for precise control
- **Auto-Scaling**: CPU/memory/request-based (configurable; defaults: 10–200 tasks)
- **Cloud Storage**: S3-backed, versioned, team-accessible expectations
- **Advanced Features**: Progressive delays, GraphQL, response templates, response limits
- **Load Testing**: Locust bundle generation and managed AWS deployment
- **Production-Ready**: ALB health checks, CloudWatch monitoring, IAM best practices
```sh
brew tap hemantobora/tap
brew install automock
automock --version
```

Or directly: `brew install hemantobora/tap/automock`
```sh
scoop bucket add hemantobora https://github.com/hemantobora/scoop-bucket
scoop install automock
automock --version
```

- Go to the Releases page
- Download the archive for your OS/arch (e.g., `automock_darwin_arm64.tar.gz`)
- Extract and place the binary on your PATH:

```sh
tar -xzf automock_*.tar.gz
sudo mv automock /usr/local/bin/
automock --version
```

Requires Go 1.22+:
```sh
git clone https://github.com/hemantobora/auto-mock.git
cd auto-mock
go build -o automock ./cmd/auto-mock
```

- Homebrew: `brew upgrade automock`
- Scoop: `scoop update automock`
- Binary: download the newer version and replace the existing binary
```sh
# Set your AI provider key (choose one)
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."

# Generate mock expectations
automock init --project user-api --provider anthropic

# Deploy to AWS
automock deploy --project user-api
```

Your mock API is now live at the ALB endpoint shown after deploy.
| Document | Description |
|---|---|
| GETTING_STARTED.md | Complete setup guide, tutorials, examples |
| `automock help` | CLI reference |
Generate complete MockServer configurations from natural language:
```sh
automock init --project my-api --provider anthropic
```

Prompt: "User management API with registration, login, profile CRUD, password reset, and admin functions"
AI generates:
- All CRUD endpoints (`GET /users`, `POST /users`, `PUT /users/{id}`, etc.)
- Authentication flows (login, logout, token refresh)
- Admin-only endpoints with authorization
- Error responses (400, 401, 403, 404, 500)
- Realistic test data with proper types
- Request validation rules
- Multiple scenarios per endpoint
Supported AI Providers:
- Anthropic (Claude Sonnet 4.5)
- OpenAI (GPT-4)
- Template (no AI, fallback mode)
Import existing API definitions from popular tools:
```sh
automock init \
  --project api-mock \
  --collection-file api.postman_collection.json \
  --collection-type postman
```

Supported Formats:
- Postman Collection v2.1 (.json)
- Bruno Collection (.json)
- Insomnia Workspace (.json)
Features:
- Sequential API execution with variable resolution
- Interactive matching configuration (guided; no automatic scenario inference)
- Auto-incremented priorities to avoid collisions
- Pre/post-script processing (Postman-like JS via embedded engine)
- Auth mapping to headers when provided in the collection
Example: Multi-Scenario Configuration
```
GET /api/users/123:
  Priority 100: Anonymous     → 401 Unauthorized
  Priority 200: Authenticated → 200 OK (user data)
  Priority 300: Admin         → 200 OK (admin view)
  Priority 400: Rate limited  → 429 Too Many Requests
```
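The fallthrough above can be sketched as priority-ordered matching: evaluate candidates from highest to lowest priority and serve the first expectation whose matcher fits. This is an illustration only (the request shape, field names, and `respond` helper are hypothetical, not AutoMock's code):

```python
# Illustrative sketch of priority-ordered expectation matching.
# Assumption: when several expectations match, the highest priority wins.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Expectation:
    priority: int
    matches: Callable[[dict], bool]
    response: tuple  # (status, label) — hypothetical shape for this sketch

expectations = [
    Expectation(400, lambda r: bool(r.get("rate_limited")), (429, "Too Many Requests")),
    Expectation(300, lambda r: r.get("role") == "admin", (200, "admin view")),
    Expectation(200, lambda r: "auth" in r, (200, "user data")),
    Expectation(100, lambda r: True, (401, "Unauthorized")),
]

def respond(request: dict) -> tuple:
    # Evaluate candidates from highest to lowest priority; first match wins.
    for exp in sorted(expectations, key=lambda e: e.priority, reverse=True):
        if exp.matches(request):
            return exp.response
    return (404, "no match")

print(respond({}))                               # anonymous → 401
print(respond({"auth": "t", "role": "admin"}))   # admin → 200 (admin view)
```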
Precision-controlled, step-by-step expectation creation:
```sh
automock init --project my-api
# → Select: interactive
```

7-Step Process:
1. **Basic Info**: description, priority, tags
2. **Request Matching**: method, path, query params, headers
3. **Response Configuration**: status code, headers, body templates
4. **Advanced Features**: delays, caching, compression
5. **Connection Options**: socket config, keep-alive
6. **Response Limits**: serve unlimited times or expire after N requests
7. **Review & Confirm**: validate before saving
Advanced Request Matching:
- Path parameters: `/users/{id}/orders/{orderId}`
- Regex paths: `/api/.*/status`
- Query string matching: `?status=active&limit=10`
- Header validation: `Authorization: Bearer *`
- Body matching: exact, partial, regex, JSONPath
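Path-parameter templates like `/users/{id}/orders/{orderId}` can be understood as regexes with named capture groups. A minimal sketch of that translation (illustrative only, not AutoMock's implementation):

```python
# Compile a path template such as /users/{id}/orders/{orderId} into a
# regex that extracts the named parameters. (Sketch, not AutoMock's code.)
import re

def compile_template(template: str) -> re.Pattern:
    # Replace each {name} placeholder with a named capture group
    # matching a single path segment.
    pattern = re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", template)
    return re.compile(f"^{pattern}$")

matcher = compile_template("/users/{id}/orders/{orderId}")
m = matcher.match("/users/42/orders/a1b2")
print(m.groupdict())  # {'id': '42', 'orderId': 'a1b2'}
```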
Response Features:
- Template variables: `$!uuid`, `$!now_epoch`, `$!request.headers['X-Request-ID'][0]`
- Progressive delays: 100ms → 150ms → 200ms…
- Multiple response bodies per expectation
Deploy production-ready infrastructure with one command:
```sh
automock deploy --project my-api
```

What Gets Deployed:
```
┌───────────────────────────────────────────┐
│  Application Load Balancer (Public)       │
│  https://automock-{project}-{id}.elb…     │
└──────────────┬────────────────────────────┘
               │
       ┌───────┴─────────┐
       │  Target Groups  │
       │  • API (/)      │
       │  • Dashboard    │
       └───────┬─────────┘
               │
┌──────────────┴────────────────────────────┐
│  ECS Fargate Cluster                      │
│  • MockServer (port 1080)                 │
│  • Config Loader (sidecar)                │
│  • Auto-scaling: configurable             │
│    (defaults: 10–200 tasks)               │
└──────────────┬────────────────────────────┘
               │
       ┌───────┴─────────────┐
       │  S3 Bucket          │
       │  expectations.json  │
       │  (versioned)        │
       └─────────────────────┘
```
Infrastructure Features:
- **Auto-Scaling**: CPU/memory/request-based (10–200 tasks by default)
- **Monitoring**: CloudWatch metrics, logs, alarms
- **Health Checks**: ALB target health, `/mockserver/status`
- **Security**: IAM roles, security groups, private subnets
- **Custom Domain**: ACM certificate + Route53 (optional, prompted at deploy time)
- **Private ALB**: optional internal ALB for VPC-internal clients (e.g., Locust)
- **BYO Networking**: use existing VPC, subnets, IGW, NAT, IAM roles, and security groups
Accessing Your Mock:
```sh
# API endpoint
curl https://automock-my-api-123.us-east-1.elb.amazonaws.com/api/users

# MockServer Dashboard
open https://automock-my-api-123.us-east-1.elb.amazonaws.com/mockserver/dashboard
```

Manage expectations throughout their lifecycle:

```sh
automock init --project my-api
# → Select the desired action from the menu
```

| Action | Description |
|---|---|
| `view` | List all expectations |
| `add` | Add new expectations (any generation mode) |
| `edit` | Edit specific expectations |
| `remove` | Remove selected expectations |
| `replace` | Replace all expectations |
| `download` | Save to `{project}-expectations.json` |
| `delete` | Tear down project and all infrastructure |
Generate Locust load testing bundles from a collection:
```sh
automock load \
  --collection-file api.json \
  --collection-type postman \
  --dir ./load-tests \
  --distributed

cd load-tests
./run_locust_ui.sh
# Browser opens at http://localhost:8089
```

Generated Files:

- `locustfile.py`: Test scenarios
- `requirements.txt`: Python dependencies
- `run_locust_ui.sh` / `.ps1`: Start with web UI
- `run_locust_headless.sh` / `.ps1`: Run without UI
- `run_locust_master.sh` / `.ps1`: Distributed master
- `run_locust_worker.sh` / `.ps1`: Distributed worker
Variable substitution:
- `${env.VAR}`: Expanded at load time across the spec
- `${data.<field>}` and `${user.id|index}`: Expanded at runtime in path, headers, params, and body
- In `auth.mode: shared`, only `${env.*}` expands; in `auth.mode: per_user`, `${data.*}` and `${user.*}` also expand in the login path/headers/body
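Load-time `${env.VAR}` expansion amounts to substituting placeholders from the process environment. A minimal sketch (illustrative only; `expand_env` is a hypothetical helper, not AutoMock's code):

```python
# Expand ${env.NAME} placeholders from the environment at load time,
# leaving unrecognized placeholders untouched. (Sketch, not AutoMock's code.)
import os
import re

def expand_env(text: str) -> str:
    def repl(m: re.Match) -> str:
        # Fall back to the original token when the variable is unset.
        return os.environ.get(m.group(1), m.group(0))
    return re.sub(r"\$\{env\.(\w+)\}", repl, text)

os.environ["API_HOST"] = "mock.example.com"
print(expand_env("https://${env.API_HOST}/login"))  # https://mock.example.com/login
```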
Deploy a Locust cluster to AWS via the same deploy command. The stack provisions an ECS Fargate cluster (master + configurable workers), an ALB for the Locust UI, and Cloud Map service discovery between master and workers.
Deploy:
```sh
# Upload your load test bundle first
automock load --project my-api --upload --dir ./load-tests

# Deploy infrastructure (prompts for sizing and optional BYO networking)
automock deploy --project my-api
```

Scale workers:

```sh
automock deploy --project my-api
# → When already deployed, prompts for new worker count
```

Destroy:

```sh
automock destroy --project my-api
# → Select: mocks, loadtest, or both
```

What you get:
- Public ALB with HTTP and HTTPS access to the Locust master UI (self-signed cert)
- ECS task definitions for master and workers
- Cloud Map private namespace for master–worker discovery
- CloudWatch log groups with configurable retention
- Security groups with least-privilege rules
Custom Locust image (optional):
By default the module uses public images (`locustio/locust:2.31.2` plus a `python:3.11-slim` init sidecar). If you prefer faster cold starts without runtime installs, build a derived image with Locust and boto3 pre-installed and set the `locust_container_image` Terraform variable before deploying.
Troubleshooting β locustfile.py not found:
If Locust logs show `Could not find '/workspace/locustfile.py'`, check:

- No bundle has been uploaded yet: run `automock load --project <name> --upload --dir ./load-tests`
- The pointer file `current.json` is missing: re-uploading a bundle recreates it
- IAM permissions missing: the task role needs `s3:GetObject` and `s3:ListBucket` on the project bucket
Mock backend APIs before they exist:
```sh
automock init --project frontend-mock --provider anthropic
# Describe: "REST API for blog app: posts, comments, users"
automock deploy --project frontend-mock
```

Consistent, controlled test environments in CI/CD:
```sh
automock deploy --project test-api --skip-confirmation
npm run test:integration -- --api-url https://automock-test-api-123.elb.amazonaws.com
automock destroy --project test-api --force
```

Test against external APIs without rate limits or costs:
```sh
automock init \
  --project stripe-mock \
  --collection-file stripe-api.postman_collection.json \
  --collection-type postman
```

Generate and deploy load tests against your mock:
```sh
automock load \
  --collection-file prod-api.json \
  --collection-type postman \
  --dir ./load-tests

automock load --project prod-api --upload --dir ./load-tests
automock deploy --project prod-api
```

`min_tasks` and `max_tasks` are both configurable at deploy time. The table below reflects the defaults. Using BYO networking (existing VPC, subnets, and NAT) eliminates the NAT Gateway cost.
| Component | Monthly Cost |
|---|---|
| ECS Fargate (0.25 vCPU, 0.5 GB) | ~$35 |
| Application Load Balancer | ~$16 |
| NAT Gateway | ~$32 |
| Data Transfer | ~$9 |
| CloudWatch Logs | ~$0.50 |
| S3 Storage | ~$0.30 |
| Total | ~$93 |
Rough estimates; varies by region, traffic, and log volume. Validate with the AWS Pricing Calculator.
Hourly rate (10 tasks): ~$0.13/hour
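The hourly figure follows directly from the table, using the common 730-hours-per-month convention:

```python
# Arithmetic behind the estimates above (defaults; the component figures
# are the table's approximations, not AWS quotes).
monthly_total = 35 + 16 + 32 + 9 + 0.50 + 0.30    # USD, from the table
hours_per_month = 730                             # common AWS convention

print(round(monthly_total, 1))                    # 92.8 (≈ $93/month)
print(round(monthly_total / hours_per_month, 2))  # 0.13 ($/hour at 10 tasks)
```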
| Provider | Cost per Generation |
|---|---|
| Claude Sonnet 4.5 | $0.05–$0.20 |
| GPT-4 | $0.10–$0.30 |
Cost tips: destroy when not in use (`automock destroy --project <name>`), reduce task count for smaller APIs, or use BYO networking to share existing NAT Gateways.
Scale Up (Aggressive):
- CPU 70–80% → +50% tasks
- CPU 80–90% → +100% tasks
- CPU 90%+ → +200% tasks
- Memory thresholds follow the same pattern
- Requests/min: 500–1000 → +50%, 1000+ → +100%
Scale Down (Conservative):
- CPU < 40% for 5 minutes → −25% tasks
- Cooldown: 5 minutes between scale events
Limits: minimum 10 tasks, maximum 200 tasks (both configurable)
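The rules above can be summarized as a small piece of arithmetic. This is only a sketch of the policy logic (the real policies live in CloudWatch/ECS auto-scaling, not application code, and `target_tasks` is a hypothetical helper):

```python
# Sketch of the CPU-based scale rules above, clamped to the task limits.
def target_tasks(current: int, cpu: float,
                 min_tasks: int = 10, max_tasks: int = 200) -> int:
    if cpu >= 90:
        desired = current * 3          # +200% tasks
    elif cpu >= 80:
        desired = current * 2          # +100% tasks
    elif cpu >= 70:
        desired = int(current * 1.5)   # +50% tasks
    elif cpu < 40:
        desired = int(current * 0.75)  # -25% (after sustained low CPU)
    else:
        desired = current              # no change
    return max(min_tasks, min(max_tasks, desired))

print(target_tasks(10, 85))   # 20 (doubled)
print(target_tasks(100, 95))  # 200 (capped at max_tasks)
print(target_tasks(10, 30))   # 10 (floored at min_tasks)
```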
CloudWatch Metrics: ECS CPU/memory/task count, ALB request count/response time/4xx/5xx errors
Alarms: unhealthy host count > 0, 5XX errors > 10/min, CPU > 70% for 10 min, Memory > 80% for 10 min
- **IAM**: Least-privilege, separate task execution and task roles, no hardcoded credentials; optional permissions boundary and custom role path
- **Networking**: ECS tasks in private subnets, NAT for outbound only, security groups restrict traffic to ALB
- **Data**: S3 server-side encryption (AES-256), versioning enabled, CloudWatch Logs retention of 30 days
Simulate degrading performance:
```json
{
  "progressive": {
    "base": 100,
    "step": 50,
    "cap": 500
  }
}
```

Request 1: 100ms → Request 2: 150ms → … → capped at 500ms
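The schedule implied by `base`/`step`/`cap` is `min(base + step*(n-1), cap)` for the n-th request. A one-line sketch of the arithmetic (not AutoMock's implementation):

```python
# Delay for the n-th request under a progressive-delay config.
def progressive_delay(n: int, base: int = 100, step: int = 50, cap: int = 500) -> int:
    # Grow linearly from base, then saturate at cap.
    return min(base + step * (n - 1), cap)

print([progressive_delay(n) for n in (1, 2, 3, 10)])  # [100, 150, 200, 500]
```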
Dynamic values in responses:
```json
{
  "id": "$!uuid",
  "timestamp": "$!now_epoch",
  "requestId": "$!request.headers['x-request-id'][0]",
  "userId": "$!request.pathParameters['userId'][0]",
  "randomScore": "$!rand_int_100"
}
```

Available variables: `$!uuid`, `$!now_epoch`, `$!rand_int_100`, `$!rand_bytes_64`, `$!request.path`, `$!request.method`, `$!request.headers['X-Header'][0]`, `$!request.pathParameters['param'][0]`, `$!request.queryStringParameters['query'][0]`
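Conceptually, the engine replaces each token with a freshly generated value at response time. A rough illustration covering just two tokens (the actual template engine is richer and request-scoped; `render` is a hypothetical helper, not AutoMock's code):

```python
# Rough illustration of template-token replacement at response time.
# (Sketch only; request-scoped variables like $!request.path are omitted.)
import time
import uuid

def render(template: str) -> str:
    generators = {
        "$!uuid": lambda: str(uuid.uuid4()),
        "$!now_epoch": lambda: str(int(time.time())),
    }
    for token, gen in generators.items():
        template = template.replace(token, gen())
    return template

print(render('{"id": "$!uuid", "timestamp": "$!now_epoch"}'))
```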
Basic GraphQL request matching (no schema validation):
```json
{
  "httpRequest": {
    "method": "POST",
    "path": "/graphql",
    "body": {
      "query": {"contains": "query GetUser"},
      "variables": {"userId": "123"}
    }
  },
  "httpResponse": {
    "body": {"data": {"user": {"id": "123", "name": "John"}}}
  }
}
```

Supported: query string contains, operation name matching, optional variables matching (exact).
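The "contains plus exact variables" semantics can be sketched in a few lines (illustrative only; `matches` and the rule shape are hypothetical, not AutoMock's matcher):

```python
# Sketch of GraphQL body matching: substring check on the query text,
# exact match on variables when the rule specifies them.
def matches(rule: dict, request_body: dict) -> bool:
    query_rule = rule.get("query", {})
    if "contains" in query_rule and query_rule["contains"] not in request_body.get("query", ""):
        return False
    expected_vars = rule.get("variables")
    if expected_vars is not None and request_body.get("variables") != expected_vars:
        return False
    return True

rule = {"query": {"contains": "query GetUser"}, "variables": {"userId": "123"}}
req = {"query": "query GetUser($userId: ID!) { user(id: $userId) { id name } }",
       "variables": {"userId": "123"}}
print(matches(rule, req))                                        # True
print(matches(rule, {"query": "mutation X", "variables": {}}))   # False
```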
```
auto-mock/
├── cmd/auto-mock/        # CLI entrypoint
├── internal/
│   ├── cloud/            # Cloud provider abstraction
│   │   ├── aws/          # AWS implementation (S3, ECS, IAM)
│   │   ├── factory.go    # Provider detection & initialization
│   │   └── manager.go    # Orchestration & workflows
│   ├── mcp/              # AI provider integration (Anthropic, OpenAI)
│   ├── builders/         # Interactive expectation builders
│   ├── collections/      # Collection parsers (Postman, Bruno, Insomnia)
│   ├── expectations/     # Expectation CRUD operations
│   ├── repl/             # Interactive CLI flows
│   ├── terraform/        # Embedded infrastructure modules
│   └── models/           # Data structures
├── go.mod
├── build.sh
├── README.md
└── LICENSE
```
- AWS support (S3, ECS, ALB)
- AI-powered mock generation (Claude, GPT-4)
- Collection import (Postman, Bruno, Insomnia)
- Interactive builder
- Auto-scaling infrastructure
- CloudWatch monitoring
- Locust load testing (local + managed AWS)
- Custom domain support (ACM + Route53)
- Private ALB for VPC-internal clients
- BYO networking (VPC, subnets, IAM, security groups)
- Azure provider support
- GCP provider support
- Swagger/OpenAPI import
- Bruno .bru file format
- Web UI for expectation management
- Prometheus metrics export
- Multiple region deployment
- Docker Compose local deployment
```sh
git clone https://github.com/hemantobora/auto-mock.git
cd auto-mock
go mod download
go build -o automock ./cmd/auto-mock

# Run tests
go test ./...
```

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
Areas we'd love help with: Azure/GCP provider support, Swagger/OpenAPI import, Bruno .bru format, Web UI for expectation management, Prometheus metrics export.
This project is licensed under the MIT License β see the LICENSE file for details.
- MockServer: Powerful HTTP mocking server
- Anthropic: Claude AI for intelligent mock generation
- OpenAI: GPT-4 for intelligent mock generation
- AWS: Cloud infrastructure platform
- Terraform: Infrastructure as Code
- Documentation: GETTING_STARTED.md, `automock help`
- GitHub Issues: Create an issue
- Email: hemantobora@gmail.com