Transform learning challenges into focused, hands-on micro-projects with AI
🎯 Features • 🚀 Quick Start • 📖 Documentation • 🐳 Docker • 🛠️ Development
The AI Micro-Project Generator is an intelligent educational tool that transforms error descriptions, learning challenges, or coding issues into structured, bite-sized learning projects. Perfect for educators, students, and developers who want to learn from their mistakes through practical, hands-on experience.
VideoDemo.mp4
- 🎓 Smart Project Generation: Converts issue descriptions into structured learning tasks
- 🔍 AI-Powered Analysis: Uses advanced LLMs to understand and categorize problems
- ⚡ Safe Code Execution: Sandboxed Python environment with preinstalled libraries
- 📝 Detailed Feedback: Get personalized feedback on your solutions
- 🎨 Beautiful Web Interface: Modern React-based frontend with Tailwind CSS
- 🔧 Flexible Configuration: Extensive customization through YAML configs
- 🐳 Production Ready: Complete Docker setup for easy deployment
Each generated micro-project includes:

- 📋 Task Description: Clear, focused learning objectives
- ✅ Success Criteria: Measurable outcomes for completion
- 👨‍💻 Expert Solution: Reference implementation and guidance
- 🔄 Interactive Feedback: AI-powered code review and suggestions
Before you begin, ensure you have:
- Python 3.12+ installed
- uv package manager
- Docker (for sandbox execution and deployment)
- Node.js 18+ (for frontend development)
- Clone the repository

```bash
git clone https://github.com/AaLexUser/AI-micro-project-generator.git
cd AI-micro-project-generator
```

- Install dependencies

```bash
uv sync
```

- Set up configuration

```bash
cp .env_example .env
```

- Configure your environment

```bash
# Edit .env with your API keys
vim .env
```

Generate a micro-project from an issue description:
```bash
# Simple usage
uv run aipg "I keep mixing up Python list comprehensions with map/filter"

# With custom configuration
uv run aipg --config-path custom.yaml "My function returns None instead of expected value"

# Override specific config values
uv run aipg -o llm.model_name="gpt-4" "Database connection fails with timeout"
```
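A config file passed via `--config-path` overrides values from `aipg/configs/default.yaml`. An illustrative sketch (only the `llm.model_name` key is confirmed by the override example above; see `default.yaml` for the real schema):

```yaml
# custom.yaml - illustrative sketch, not the full schema
llm:
  model_name: "gpt-4"
```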
Start the API server:

```bash
uv run python -m aipg.api
# Server runs at http://localhost:8000
```

Launch the frontend:

```bash
cd frontend
npm install
npm run dev
# Frontend runs at http://localhost:5173
```

The FastAPI server exposes the following endpoints:
| Method | Endpoint | Description |
|---|---|---|
| POST | `/projects` | Generate micro-projects from issue descriptions |
| POST | `/feedback` | Get AI feedback on user solutions |
| GET | `/health` | Health check endpoint |
Generate Projects:

```bash
curl -X POST "http://localhost:8000/projects" \
  -H "Content-Type: application/json" \
  -d '{
    "comments": [
      "I struggle with async/await in Python",
      "My recursive function causes stack overflow"
    ]
  }'
```

Get Feedback:

```bash
curl -X POST "http://localhost:8000/feedback" \
  -H "Content-Type: application/json" \
  -d '{
    "project": {...},
    "user_solution": "def fibonacci(n): return n if n <= 1 else fibonacci(n-1) + fibonacci(n-2)"
  }'
```
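The same endpoints can be called programmatically. A minimal Python sketch using `requests` (the response shapes here are assumptions for illustration; inspect the actual JSON returned by your server):

```python
import requests

BASE_URL = "http://localhost:8000"

# Generate micro-projects from issue descriptions
projects_resp = requests.post(
    f"{BASE_URL}/projects",
    json={"comments": ["I struggle with async/await in Python"]},
    timeout=120,  # generation involves LLM calls and can take a while
)
projects_resp.raise_for_status()
projects = projects_resp.json()

# Request AI feedback on a solution
# (assumes the /projects response body is a list of project objects)
feedback_resp = requests.post(
    f"{BASE_URL}/feedback",
    json={
        "project": projects[0],
        "user_solution": "def fibonacci(n): return n if n <= 1 else fibonacci(n-1) + fibonacci(n-2)",
    },
    timeout=120,
)
feedback_resp.raise_for_status()
print(feedback_resp.json())
```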
| Variable | Description | Default |
|---|---|---|
| `AIPG_LLM_MODEL` | LLM model name | `openai/gpt-4o` |
| `AIPG_LLM_API_KEY` | API key for LLM | - |
| `AIPG_SANDBOX_DOCKER_IMAGE` | Sandbox Docker image | `aipg-sandbox:latest` |
| `LANGFUSE_PUBLIC_KEY` | Langfuse public key | - |
| `LANGFUSE_SECRET_KEY` | Langfuse secret key | - |
| `LANGFUSE_HOST` | Langfuse host URL | `https://cloud.langfuse.com` |
| `ENVIRONMENT` | Runtime environment | `production` |
| `DEBUG` | Debug mode flag | `false` |
| `LOG_LEVEL` | Logging level | `INFO` |
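Putting the table together, a `.env` for local use might look like this (values below are placeholders, not working keys):

```bash
# .env - example values only; substitute your own credentials
AIPG_LLM_MODEL=openai/gpt-4o
AIPG_LLM_API_KEY=your-api-key-here
AIPG_SANDBOX_DOCKER_IMAGE=aipg-sandbox:latest
LANGFUSE_HOST=https://cloud.langfuse.com
ENVIRONMENT=production
DEBUG=false
LOG_LEVEL=INFO
```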
Docker Environment Setup:

```bash
# Create .env file for Docker deployment
cp .env_example .env

# Edit with your API keys
vim .env
```

Production deployment:

```bash
# Start all services with health checks and volumes
docker compose up -d

# Check service health
docker compose ps
```

Development with hot reload:

```bash
# Start with development overrides
docker compose -f docker-compose.yml -f docker-compose.dev.yml up

# API available at: http://localhost:8000
# Frontend available at: http://localhost:5173 (dev) or http://localhost:80
```

Production with optimizations:

```bash
# Start with production optimizations and scaling
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```

The deployment includes three interconnected services:
- 🔧 API Service (port 8000, internal) - FastAPI backend with LLM integrations
  - Health checks and dependency management
  - Persistent volumes for data and cache
  - Non-root user security
- 🎨 Frontend Service (port 80) - React app with Nginx
  - SPA routing and API proxy
  - Gzip compression and security headers
  - Production-optimized build
- 🛡️ Sandbox Service (internal) - Secure Python execution environment (see the illustrative Compose fragment after this list)
  - Read-only filesystem and dropped capabilities
  - Network isolation and resource limits
  - Preinstalled ML libraries (pandas, numpy, torch, etc.)
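The sandbox hardening listed above maps onto standard Compose options. An illustrative fragment only, not the project's actual configuration; check `docker-compose.yml` for the real settings:

```yaml
# Illustrative only - the real configuration lives in docker-compose.yml
services:
  sandbox:
    image: aipg-sandbox:latest
    read_only: true        # read-only filesystem
    network_mode: "none"   # network isolation
    cap_drop: [ALL]        # dropped capabilities
    pids_limit: 128        # process limit
    mem_limit: 1g          # memory limit
```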
Volumes are automatically created for:

- `api_data` - Application data and configurations
- `cache_data` - ChromaDB vector database and LLM caches

```bash
# View volumes
docker volume ls | grep ai-micro-project-generator

# Backup data
docker run --rm -v ai-micro-project-generator_api_data:/data \
  -v $(pwd):/backup alpine tar czf /backup/api_backup.tar.gz -C /data .
```

Key Docker files:

- `docker-compose.yml` - Main configuration with health checks and security
- `docker-compose.dev.yml` - Development overrides with hot reload
- `docker-compose.prod.yml` - Production optimizations and scaling
- `DOCKER.md` - Comprehensive deployment guide
Building images:

```bash
# Build all images
make docker-build

# Build specific services
docker compose build api
docker compose build frontend
docker compose build sandbox

# Force rebuild without cache
docker compose build --no-cache
```

Monitoring and debugging:

```bash
# View service status and health
docker compose ps

# Stream logs
docker compose logs -f api
docker compose logs -f --tail=100

# Access running containers
docker compose exec api bash
docker compose exec sandbox python

# Restart services
docker compose restart api
```

Services won't start:
```bash
# Check logs for errors
docker compose logs api
docker compose logs frontend

# Verify environment file
cat .env

# Check port conflicts
sudo lsof -i :80 -i :8000
```

API health check failing:

```bash
# Test API directly
curl http://localhost:8000/health

# Check API logs
docker compose logs -f api

# Restart API service
docker compose restart api
```

Frontend not loading:

```bash
# Check nginx configuration
docker compose exec frontend cat /etc/nginx/conf.d/default.conf

# Test frontend container
docker compose exec frontend wget -qO- http://localhost/
```

Sandbox execution issues:

```bash
# Test sandbox directly
docker compose exec sandbox python -c "import pandas; print('OK')"

# Check sandbox security settings
docker compose exec --user root sandbox ls -la /home/sandbox
```

Volume permission issues:

```bash
# Fix API data permissions
docker compose exec --user root api chown -R app:app /app/data

# Reset volumes (⚠️ data loss)
docker compose down -v
docker compose up -d
```

📘 Need more help? Check the comprehensive DOCKER.md guide for detailed troubleshooting and configuration options.
Set up the development environment:

```bash
# Install with development dependencies
uv sync --group dev

# Install pre-commit hooks
make pre-commit-install

# Run all quality checks
make quality
```

Run quality checks and tests:

```bash
# Run linting
make lint

# Auto-fix linting issues
make lint-fix

# Format code
make format

# Run tests
uv run pytest

# Run tests with coverage
uv run pytest --cov=aipg

# Run specific test categories
uv run pytest -m unit
uv run pytest -m integration
```

Project structure:

```
aipg/
├── 🎯 assistant.py           # Main AI assistant orchestration
├── 🔌 api.py                 # FastAPI web server
├── 🧠 llm.py                 # LLM client abstractions
├── 📊 domain.py              # Core data models
├── ⚙️ configs/               # Configuration management
│   ├── app_config.py         # Application config schema
│   ├── loader.py             # Config loading logic
│   └── default.yaml          # Default configuration
├── 🎨 prompting/             # AI prompt templates
│   ├── project_generator.md  # Project generation prompts
│   ├── feedback.md           # Feedback generation prompts
│   └── prompt_generator.py   # Prompt building utilities
├── 🔍 rag/                   # Retrieval-Augmented Generation
│   ├── service.py            # RAG orchestration
│   ├── adapters.py           # Vector database adapters
│   └── ports.py              # RAG interfaces
├── 🔒 sandbox/               # Safe code execution
│   ├── service.py            # Sandbox orchestration
│   ├── adapters.py           # Docker integration
│   └── domain.py             # Execution result models
└── 🤖 task_inference/        # AI task processing pipeline
    └── task_inference.py     # Main inference logic
```
The system consists of two separate AI assistants that handle different phases of the workflow:
```mermaid
graph TD
%% Input Layer
A[User Comments/Issues] --> B[ProjectAssistant]
%% ProjectAssistant Pipeline
B --> E[DefineTopicsInference]
%% Parallel Processing Box - For Each Topic
E --> ParallelBox
subgraph ParallelBox [" 🔄 Parallel Execution (For Each Topic) "]
H[RAGServiceInference]
H --> I{Candidates Found?}
I -->|Yes| J[LLMRankerInference]
I -->|No| K[ProjectGenerationInference]
J --> L{Best Project Selected?}
L -->|No| K
L -->|Yes| M[Project Found]
%% Project Generation & Validation Branch
K --> N[ProjectValidatorInference]
N --> O{Valid Project?}
O -->|No| P[ProjectCorrectorInference]
P --> Q{Correction Successful?}
Q -->|Yes| N
Q -->|No| R[Use Previous Version]
O -->|Yes| S[CheckAutotestSandboxInference]
%% Bug Fixing Loop
S --> T{Bugs Detected?}
T -->|Yes| U[BugFixerInference]
U --> S
T -->|No| V[Save to RAG]
R --> V
M --> V
end
%% Final Output
ParallelBox --> W[Projects Generated]
W --> X[🏁 ProjectAssistant Ends]
%% Styling
classDef assistantClass fill:#e1f5fe,stroke:#01579b,stroke-width:2px
classDef inferenceClass fill:#f3e5f5,stroke:#4a148c,stroke-width:2px
classDef decisionClass fill:#fff3e0,stroke:#e65100,stroke-width:2px
classDef outputClass fill:#e8f5e8,stroke:#2e7d32,stroke-width:2px
classDef sandboxClass fill:#ffebee,stroke:#c62828,stroke-width:2px
classDef endClass fill:#ffcdd2,stroke:#d32f2f,stroke-width:3px
class B assistantClass
class E,H,J,K,N,P,U inferenceClass
class I,L,O,Q,T decisionClass
class W outputClass
class S sandboxClass
class X endClass
%% Parallel Box Styling
style ParallelBox fill:#f0f8ff,stroke:#4169e1,stroke-width:3px,stroke-dasharray: 5 5
```
The FeedbackAssistant pipeline picks up where the ProjectAssistant ends:

```mermaid
graph TD
%% Input Layer - New Agent Starts
A[🚀 FeedbackAssistant Starts] --> B[User Solution Input]
C[Generated Projects] --> D[Project Context Available]
%% FeedbackAssistant Pipeline
B --> E[CheckUserSolutionSandboxInference]
D --> E
E --> F[Execute AutoTests]
F --> G[FeedbackInference]
G --> H[AI-Generated Feedback]
%% Styling
classDef assistantClass fill:#e1f5fe,stroke:#01579b,stroke-width:2px
classDef inferenceClass fill:#f3e5f5,stroke:#4a148c,stroke-width:2px
classDef outputClass fill:#e8f5e8,stroke:#2e7d32,stroke-width:2px
classDef sandboxClass fill:#ffebee,stroke:#c62828,stroke-width:2px
classDef startClass fill:#c8e6c9,stroke:#388e3c,stroke-width:3px
classDef contextClass fill:#fff9c4,stroke:#f57f17,stroke-width:2px
class A startClass
class E,G inferenceClass
class H outputClass
class F sandboxClass
class B,C,D contextClass
```
Phase 1 - ProjectAssistant Pipeline:
- DefineTopicsInference - Extracts learning topics from user comments
- RAGServiceInference - Searches existing project database for similar topics
- LLMRankerInference - Ranks and selects best matching projects
- ProjectGenerationInference - Generates new projects when no matches found
- ProjectValidatorInference - Validates project structure and content
- ProjectCorrectorInference - Fixes validation issues (up to 3 attempts)
- CheckAutotestSandboxInference - Tests project autotests in sandbox
- BugFixerInference - Fixes bugs found during testing
- 🏁 Pipeline Ends - ProjectAssistant completes with generated projects
Phase 2 - FeedbackAssistant Pipeline:
- 🚀 New Agent Starts - FeedbackAssistant initializes with project context
- CheckUserSolutionSandboxInference - Executes user code safely in sandbox
- FeedbackInference - Generates personalized feedback based on execution results
- Two-Phase Architecture: Separate specialized agents for project generation and feedback
- Clear Separation: ProjectAssistant ends after generating projects, FeedbackAssistant starts fresh
- Parallel Processing: Topics are processed concurrently for better performance (see the sketch after this list)
- Validation Loop: Projects undergo multiple validation and correction cycles
- Bug Detection: Automated testing and fixing of generated project code
- Safe Execution: All code runs in isolated Docker containers
- RAG Integration: Leverages existing project database to avoid duplication
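The per-topic fan-out shown in the ProjectAssistant diagram follows a standard concurrency pattern. A self-contained sketch of the idea in generic Python, not the actual aipg implementation; `process_topic` stands in for the whole RAG/generation/validation pipeline:

```python
import asyncio

async def process_topic(topic: str) -> str:
    """Stand-in for the per-topic pipeline (RAG lookup, generation, validation)."""
    await asyncio.sleep(0.1)  # placeholder for real LLM and sandbox calls
    return f"project for: {topic}"

async def generate_projects(topics: list[str]) -> list[str]:
    # Fan out: run the per-topic pipeline concurrently for every extracted topic
    return await asyncio.gather(*(process_topic(t) for t in topics))

if __name__ == "__main__":
    print(asyncio.run(generate_projects(["async/await", "recursion limits"])))
```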
The sandbox provides secure Python code execution with preinstalled libraries:
📦 Preinstalled Libraries:
- `pandas` - Data manipulation and analysis
- `numpy` - Numerical computing
- `torch` - Machine learning framework
- `scikit-learn` - Machine learning library
- `matplotlib` - Plotting and visualization
- `requests` - HTTP client library
- `beautifulsoup4` - HTML/XML parsing
- `lxml` - XML processing

🛡️ Security Features:

- Network isolation (`--network none`)
- Read-only filesystem
- Memory and CPU limits
- Process limits
- Non-root user execution
📝 Usage Example:

```python
from aipg.sandbox.builder import build_sandbox_service
from aipg.configs.app_config import AppConfig

# Initialize sandbox
config = AppConfig()
service = build_sandbox_service(config)

# Execute code safely
result = service.run_code("""
import pandas as pd
import numpy as np

df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})
print(f'DataFrame shape: {df.shape}')

arr = np.array([1, 2, 3, 4, 5])
print(f'Array sum: {arr.sum()}')
""")

print(result.stdout)  # Output: DataFrame shape: (3, 2)\nArray sum: 15
```

The frontend is built with modern React and includes:
🛠️ Tech Stack:
- React 18 with TypeScript
- Tailwind CSS for styling
- Radix UI for accessible components
- React Router for navigation
- React Hook Form for form handling
- Vite for fast development
🚀 Development Commands:

```bash
cd frontend

# Start development server
npm run dev

# Build for production
npm run build

# Preview production build
npm run preview

# Point to different API
VITE_API_BASE=http://localhost:8000 npm run dev
```

The project `Makefile` provides the following commands:

| Command | Description |
|---|---|
| `make help` | Show available commands |
| `make quality` | Run all quality checks |
| `make lint` | Run linting with ruff |
| `make lint-fix` | Auto-fix linting issues |
| `make format` | Format code and organize imports |
| `make docker-build` | Build all Docker images |
| `make pre-commit` | Install and run pre-commit hooks |
We welcome contributions! Please follow these steps:
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Install dependencies: `uv sync --group dev`
- Make your changes and add tests
- Run quality checks: `make quality`
- Commit your changes: `git commit -m 'Add amazing feature'`
- Push to your fork: `git push origin feature/amazing-feature`
- Create a Pull Request
- Follow the existing code style (enforced by `ruff`)
- Add tests for new functionality
- Update documentation as needed
- Ensure all CI checks pass
This project was created as part of the AI Product Hack track Yandex#6, demonstrating practical application of AI in educational technology for creating personalized learning experiences.
- 🎯 Intelligent Learning: Transforms errors into learning opportunities
- 🔒 Safe Execution: Secure sandbox for code testing
- 🎨 Modern UI/UX: Beautiful, responsive interface
- 🚀 Production Ready: Complete deployment solution
- 📈 Scalable Architecture: Modular, extensible design
This project is licensed under the MIT License - see the LICENSE file for details.
Made with ❤️ by the AIPG Team