A minimal, working example of Claude Code-based AI-assisted development infrastructure for Python projects.
This simplified, ready-to-use demonstration shows how AI agents, skills, and hooks work together to enhance Python development. You can actually run it and experiment with it.
```
.claude/
├── skills/
│   └── python-dev-guidelines/      # Python best practices skill
│       ├── SKILL.md                # Main skill file
│       └── resources/              # Deep-dive topics
│           ├── api-patterns.md
│           ├── testing.md
│           └── models.md
├── agents/
│   ├── code-reviewer.md            # Reviews code quality
│   ├── test-generator.md           # Generates pytest tests
│   ├── error-debugger.md           # Debugs errors
│   └── documentation-writer.md     # Creates documentation
├── hooks/
│   ├── skill-activation.sh         # Auto-activates skills
│   └── file-tracker.sh             # Tracks file changes
├── settings.json                   # Hook configuration
└── skills/skill-rules.json         # Skill activation rules
```
A simple FastAPI user management API demonstrating best practices:
```
src/
├── api.py                  # FastAPI routes with proper error handling
├── user_service.py         # Service layer with custom exceptions
└── models.py               # Pydantic models with validation
tests/
└── test_user_service.py    # Comprehensive pytest tests
```
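For orientation, here is a hedged sketch of how those layers fit together. The field names, constraints, and exception below are illustrative assumptions, not the shipped source:

```python
# Sketch only: names and constraints are assumptions, not the shipped code.
from pydantic import BaseModel, Field


class UserCreate(BaseModel):
    """Input model; Pydantic enforces the constraints declared here."""
    name: str = Field(min_length=1, max_length=100)
    email: str


class UserResponse(BaseModel):
    """Output model the API returns to clients."""
    id: int
    name: str
    email: str


class UserAlreadyExistsError(Exception):
    """Example of a service-layer custom exception the routes can translate to HTTP errors."""
```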
```bash
cd simple
pip install fastapi pydantic pytest
```

**Option A: Edit a Python file**
- Open `src/user_service.py` in your editor
- Ask Claude: "Review this code"
- The `python-dev-guidelines` skill should auto-activate!
**Option B: Run tests**

```bash
pytest tests/test_user_service.py -v
```

**Option C: Start the API**

```bash
uvicorn src.api:app --reload
# Visit http://localhost:8000/docs for API documentation
```

When you work with `.py` files, the skill automatically activates:
```
You open: src/api.py
    ↓
Hook detects: .py file
    ↓
Skill activates: python-dev-guidelines
    ↓
Claude suggests: "The python-dev-guidelines skill may be helpful"
    ↓
You get: Automatic best practices guidance
```
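The shipped hook is `skill-activation.sh`; as a rough sketch of the same idea in Python (the stdin field name is an assumption about the hook event, not a documented contract):

```python
#!/usr/bin/env python3
"""Sketch of skill-activation logic; the real hook is a shell script."""
import json
import sys

# Claude Code passes a JSON event on stdin; "prompt" is an assumed field name.
event = json.load(sys.stdin)
prompt = event.get("prompt", "").lower()

# Keywords mirroring the triggers in skill-rules.json
KEYWORDS = {"python", "fastapi", "pytest", "api", "pydantic"}

if any(word in prompt for word in KEYWORDS):
    # Text printed to stdout can surface back into the session as a suggestion.
    print("📚 Skill Suggestion: The python-dev-guidelines skill may be helpful")
```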
Ask Claude to use specific agents for complex tasks:
```
# Code review
"Use the code-reviewer agent to review my API routes"

# Generate tests
"Use the test-generator agent to create tests for user_service.py"

# Debug errors
"Use the error-debugger agent to fix this ValueError"

# Create documentation
"Use the documentation-writer agent to document the API"
```

Every time you edit a file, the file-tracker hook logs it:
```
[2025-01-13 10:15:23] Modified: src/api.py
[2025-01-13 10:16:45] Modified: src/user_service.py
[2025-01-13 10:18:12] Modified: tests/test_user_service.py
```
This helps Claude understand your active context.
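The equivalent logic, sketched in Python (the event fields and log path are assumptions for illustration; the shipped `file-tracker.sh` is the reference):

```python
#!/usr/bin/env python3
"""Sketch of file-tracker logic; the real hook is a shell script."""
import json
import sys
from datetime import datetime

event = json.load(sys.stdin)
# Assumed field: the path the Edit/Write tool touched.
path = event.get("tool_input", {}).get("file_path", "unknown")

# Assumed log location.
with open(".claude/file-changes.log", "a") as log:
    log.write(f"[{datetime.now():%Y-%m-%d %H:%M:%S}] Modified: {path}\n")
```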
You: "Review the user service code for best practices"
Claude:
1. Loads python-dev-guidelines skill
2. Reviews src/user_service.py
3. Checks for:
- Type hints ✅
- Error handling ✅
- Docstrings ✅
- Custom exceptions ✅
4. Provides feedback
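Concretely, those checks map onto code shapes like the following (a hedged example of code that passes review, not the shipped `user_service.py`):

```python
_db: dict[int, dict] = {}  # hypothetical in-memory store for illustration


class UserNotFoundError(Exception):
    """Custom exception: callers can catch this case precisely."""


def get_user(user_id: int) -> dict:  # type hints on parameters and return
    """Fetch a user by ID.

    Raises:
        UserNotFoundError: if no user has this ID.
    """
    user = _db.get(user_id)
    if user is None:  # explicit error handling instead of silently returning None
        raise UserNotFoundError(f"No user with id {user_id}")
    return user
```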
You: "Create comprehensive tests for the API routes"
Claude:
1. Uses test-generator agent
2. Analyzes src/api.py
3. Generates tests for:
- Happy paths
- Error cases
- Edge cases
4. Creates complete test file
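The output aims at something like this sketch (route paths and status codes are assumptions about the demo API, not its verified contract):

```python
# Illustrative shape of generated API tests; endpoints are assumed.
from fastapi.testclient import TestClient

from src.api import app

client = TestClient(app)


def test_create_user_happy_path():
    resp = client.post("/users", json={"name": "Ada", "email": "ada@example.com"})
    assert resp.status_code == 201


def test_create_user_invalid_payload_is_rejected():
    resp = client.post("/users", json={"name": ""})  # missing/invalid fields
    assert resp.status_code == 422


def test_get_missing_user_returns_404():
    resp = client.get("/users/999999")
    assert resp.status_code == 404
```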
You: "I'm getting a ValueError when creating users"
Claude:
1. Uses error-debugger agent
2. Reads error traceback
3. Analyzes root cause
4. Proposes fix with explanation
5. Implements the fix
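As a hedged before/after illustrating the kind of fix the agent proposes (the bug itself is invented for illustration):

```python
# Before: int() raises a bare, uninformative ValueError on non-numeric input.
def parse_user_id(raw: str) -> int:
    return int(raw)


# After: validate first and fail with an actionable message.
def parse_user_id_fixed(raw: str) -> int:
    if not raw.strip().isdigit():
        raise ValueError(f"user id must be a positive integer, got {raw!r}")
    return int(raw.strip())
```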
The python-dev-guidelines skill activates when:
File patterns:
- Working with `.py` files
- Editing files in `src/` or `tests/`

Keywords in your prompts:
- "python", "fastapi", "flask"
- "pytest", "test", "testing"
- "api", "endpoint", "route"
- "model", "validation", "pydantic"

Code patterns:
- Files containing `from fastapi import`
- Files containing `def test_`
- Files containing Pydantic models
```bash
# Copy from the main showcase
cp -r ../claude-code-infrastructure-showcase/.claude/skills/your-skill \
  .claude/skills/

# Update skill-rules.json to add triggers
```

Edit `.claude/skills/skill-rules.json`:
```json
{
  "skills": {
    "python-dev-guidelines": {
      "promptTriggers": {
        "keywords": ["your", "custom", "keywords"]
      },
      "fileTriggers": {
        "pathPatterns": ["your/custom/path/**/*.py"]
      }
    }
  }
}
```

Copy an existing agent and modify:
```bash
cp .claude/agents/code-reviewer.md .claude/agents/my-custom-agent.md
# Edit my-custom-agent.md with your specific instructions
```

You: "Add type hints to this function"
[editing src/api.py]
Claude: "📚 Skill Suggestion: The python-dev-guidelines skill may be helpful"
Claude: Based on Python best practices, here's the properly typed function:
def create_user(user_data: UserCreate) -> UserResponse:
"""Create a new user"""
# implementation...
You: "How should I structure these tests?"
[editing tests/test_user_service.py]
Claude: "📚 Skill Suggestion: The python-dev-guidelines skill may be helpful"
Claude: Following pytest best practices from the python-dev-guidelines:
1. Use fixtures for setup
2. Follow Arrange-Act-Assert pattern
3. Test happy path, edge cases, and errors
4. Use descriptive test names
Here's the recommended structure...
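A hedged sketch of that structure (`UserService` and its method signature are assumptions, not the shipped API):

```python
import pytest

from src.user_service import UserService  # assumed class name


@pytest.fixture
def service() -> UserService:
    """Arrange: a fresh service per test keeps tests independent."""
    return UserService()


def test_create_user_returns_user_with_expected_name(service):
    # Act
    user = service.create_user(name="Ada", email="ada@example.com")
    # Assert
    assert user.name == "Ada"
```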
**Skills**
- What: Modular knowledge bases with best practices
- When: Auto-activate based on context
- How: Hooks detect patterns and suggest skills
- Example: Python guidelines when editing .py files

**Agents**
- What: Autonomous AI instances for complex tasks
- When: Manually invoked for specific jobs
- How: Claude launches them with specific instructions
- Example: Code reviewer agent for quality analysis

**Hooks**
- What: Shell scripts that run at workflow events
- When: On prompt submit, after file edits, etc.
- How: Monitor activity and augment AI behavior; wired up in settings.json (see the sketch below)
- Example: Skill activation based on file patterns
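For orientation, one plausible wiring in `.claude/settings.json` (the event names follow Claude Code's hook settings schema, but treat the exact shape as version-dependent and defer to the shipped file):

```json
{
  "hooks": {
    "UserPromptSubmit": [
      { "hooks": [{ "type": "command", "command": ".claude/hooks/skill-activation.sh" }] }
    ],
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [{ "type": "command", "command": ".claude/hooks/file-tracker.sh" }]
      }
    ]
  }
}
```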
Check:
- Is the hook executable? `ls -la .claude/hooks/*.sh`
- Is settings.json configured? `cat .claude/settings.json`
- Do file patterns match? `cat .claude/skills/skill-rules.json`
Check:

```bash
ls -la .claude/agents/
```

All agents should be `.md` files.
Make them executable:

```bash
chmod +x .claude/hooks/*.sh
```

- Experiment: Edit Python files and watch skills activate
- Try Agents: Ask Claude to use different agents
- Customize: Modify skill triggers for your workflow
- Expand: Copy more skills/agents from main showcase
| File | Purpose | Try This |
|---|---|---|
| `src/api.py` | FastAPI routes | Ask for code review |
| `src/user_service.py` | Service layer | Ask to generate tests |
| `tests/test_user_service.py` | Test suite | Ask to run tests |
| `.claude/skills/python-dev-guidelines/SKILL.md` | Main skill | Read best practices |
| `.claude/agents/code-reviewer.md` | Reviewer agent | Ask to review code |
```
fastapi>=0.104.0
pydantic>=2.0.0
pytest>=7.0.0
```
MIT - Use freely in your projects
This is a simplified version of the full infrastructure showcase. See the main README for:
- More skills (backend, frontend, testing)
- More agents (10 total)
- Advanced hooks
- Complete documentation