An AI-powered agentic coding assistant that:
- Turns your natural language requirements into a working Python module
- Designs the module, implements the code, writes simple tests, and builds a demo UI
- Packages the output and launches the generated Gradio app with a public URL
Backed by a multi-agent CrewAI pipeline, the app coordinates “engineering lead”, “backend”, “frontend”, “QA”, and “runner” agents to deliver end-to-end results.
- Visit the hosted Space: https://projects.kaushikpaul.co.in/ai-agentic-coder
- Idea → Running App in Minutes
- One-click pipeline: design the module → implement code → generate tests → scaffold a Gradio demo → auto-package into a zip → upload to Google Cloud Storage → launch the app → return live/public URLs.
- Multi‑Agent Orchestration (CrewAI)
- Specialized agents for engineering lead, backend, frontend, QA, and runtime. Tasks are declared in YAML and executed sequentially for predictable outcomes.
- Production‑Friendly Reliability
- Built‑in retry limits and execution timeouts for coding/testing agents, plus automatic cleanup of previous app processes to avoid port conflicts.
- Model‑Flexible by Design
- Swap LLMs per-agent via `config/agents.yaml` using OpenRouter model IDs. Compatible with OpenAI-style endpoints and easy to tune per role.
- Modern Developer UX
- Polished Gradio UI with non‑blocking background execution, streaming progress, one‑click example loader, and strict URL extraction/validation on completion.
- Secure Artifact Delivery
- Packages all generated code and dependencies into a zip file, uploads to Google Cloud Storage, and returns a time-limited, signed download URL. The app is automatically launched in a background process, and its public URL is captured and returned—no manual builds or deployments needed.
- Extensible & Maintainable
- Add agents, tasks, or custom tools (e.g., `python_code_run_tool.py`) without touching the core pipeline. Everything is declarative and composable.
- Runs Local or in the Cloud
- Works out of the box on your machine and is ready for Hugging Face Spaces deployment with the same entry point.
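The retry limits and execution timeouts mentioned under “Production-Friendly Reliability” follow a common wrap-and-retry pattern. The sketch below is a generic illustration of that pattern, not the project's actual code; in CrewAI, comparable behavior is configured through agent settings rather than a hand-rolled wrapper.

```python
import time


def run_with_retries(fn, max_retries: int = 2, timeout: float = 30.0):
    """Retry fn up to max_retries times, treating slow runs as failures."""
    last_error = None
    for attempt in range(max_retries + 1):
        start = time.monotonic()
        try:
            result = fn()
            if time.monotonic() - start > timeout:
                raise TimeoutError(f"attempt {attempt} exceeded {timeout}s")
            return result
        except Exception as exc:  # includes the TimeoutError raised above
            last_error = exc
    raise RuntimeError(f"all {max_retries + 1} attempts failed") from last_error
```

The key design point is that both slow runs and outright failures are treated uniformly, so a hung coding agent cannot stall the whole pipeline.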
- Crew & Agents: `src/ai_agentic_coder/crew.py`
  - Agents configured in `src/ai_agentic_coder/config/agents.yaml`
  - Tasks configured in `src/ai_agentic_coder/config/tasks.yaml`
- Tools:
  - Python code runner and GCS uploader: `src/ai_agentic_coder/tools/python_code_run_tool.py`
- UI: `src/ai_agentic_coder/gradio_ui.py`
- Entry point: `src/ai_agentic_coder/main.py`
- Outputs: `src/ai_agentic_coder/output/` — design doc, backend module, test module, demo app, and utility artifacts
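The packaging step performed by `python_code_run_tool.py` amounts to bundling the output directory into a zip before upload. A minimal stdlib sketch of that idea (not the tool's actual code):

```python
import zipfile
from pathlib import Path


def zip_output_dir(output_dir: Path, zip_path: Path) -> Path:
    """Bundle every file under output_dir into a single zip archive."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for file in sorted(output_dir.rglob("*")):
            if file.is_file():
                # Store paths relative to the output dir for a clean archive
                zf.write(file, file.relative_to(output_dir))
    return zip_path
```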
- `design_task` → writes `src/ai_agentic_coder/output/{module_name}_design.md`
- `code_task` → writes `src/ai_agentic_coder/output/{module_name}` (e.g., `accounts.py`)
- `frontend_task` → writes `src/ai_agentic_coder/output/app.py` (Gradio demo with `share=True`)
- `test_task` → writes `src/ai_agentic_coder/output/test_{module_name}`
- `python_code_run_task` → uploads zip to GCS, runs the app, and returns two URLs; also writes `src/ai_agentic_coder/output/gradio_public_url.txt`
- Python 3.10–3.12 (project targets >=3.10 per `pyproject.toml`)
- A modern browser (Chrome, Edge, Safari, Firefox)
- API keys and credentials (see Configuration)
- Docker Engine running (local only) — required for sandboxed "safe" execution of generated code. If you opt out and force `unsafe` mode locally (see Configuration → Execution Mode), Docker is not required.
- Clone the repository

  ```bash
  git clone https://github.com/Kaushik-Paul/AI-Agentic-Coder.git
  cd AI-Agentic-Coder
  ```

- Create and activate a virtual environment

  ```bash
  python3 -m venv .venv
  source .venv/bin/activate  # Windows: .venv\Scripts\activate
  ```

- Install uv
  - Linux/macOS:

    ```bash
    curl -LsSf https://astral.sh/uv/install.sh | sh
    # ensure ~/.local/bin is on your PATH
    export PATH="$HOME/.local/bin:$PATH"
    ```

  - Windows (PowerShell):

    ```powershell
    powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
    ```

- Sync dependencies

  ```bash
  uv sync
  ```

  Or install with pip instead:

  ```bash
  pip install -r requirements.txt
  ```

Create a `.env` file in the project root with the following variables (adjust as needed):
```env
# ——— LLM provider (using OpenRouter-hosted models in agents.yaml) ———
OPENROUTER_API_KEY=your_openrouter_key
# Optional but commonly needed when using OpenRouter via OpenAI-compatible clients:
# OPENAI_API_BASE=https://openrouter.ai/api/v1
# OPENAI_API_KEY=${OPENROUTER_API_KEY}

# ——— Google Cloud Storage (used by PythonCodeRunTool) ———
GCP_PROJECT_ID=your_gcp_project_id
GCP_BUCKET_NAME=your_public_or_private_bucket_name
# Base64-encoded service account JSON. Example to generate:
# cat service_account.json | base64 -w 0
GCP_SERVICE_KEY=base64_encoded_service_account_json
```

Using uv:

```bash
uv run python -m src.ai_agentic_coder.main
```

Using python directly:

```bash
python -m src.ai_agentic_coder.main
```

Gradio will print a local URL (e.g., http://127.0.0.1:7860). Open it in your browser.
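If you prefer not to use the `base64` CLI, the `GCP_SERVICE_KEY` value can be produced and sanity-checked with a few lines of Python. The helper names here are illustrative, not part of the project:

```python
import base64
import json


def encode_service_account(path: str) -> str:
    """Base64-encode a service-account JSON file for GCP_SERVICE_KEY."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")


def decode_service_account(encoded: str) -> dict:
    """Decode GCP_SERVICE_KEY back into a credentials dict (sanity check)."""
    info = json.loads(base64.b64decode(encoded))
    # A valid service-account key always carries these fields
    assert {"type", "project_id", "private_key"} <= info.keys()
    return info
```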
This project leverages CrewAI, a powerful framework for orchestrating role-playing, autonomous AI agents. CrewAI enables the creation of sophisticated AI workflows where different agents can work together to accomplish complex tasks.
Key features used in this project:
- Agents: Specialized AI agents for design, backend, frontend, testing, and running
- Tasks: Well-defined tasks that agents perform sequentially
- Tools: Custom Python tool to package, upload, and run generated code
- Delegation: Sequential, YAML-driven orchestration of the pipeline
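Conceptually, the sequential orchestration behaves like a fold over ordered tasks, each consuming the previous task's context. The sketch below is a simplified pure-Python illustration of that flow, not the CrewAI API:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Task:
    name: str
    run: Callable[[str], str]  # takes prior context, returns new context


def run_pipeline(tasks: list[Task], requirements: str) -> str:
    """Execute tasks in declaration order, threading context through."""
    context = requirements
    for task in tasks:
        context = task.run(context)
    return context


# Illustrative stages mirroring design → code → tests
stages = [
    Task("design_task", lambda ctx: ctx + " | design"),
    Task("code_task", lambda ctx: ctx + " | code"),
    Task("test_task", lambda ctx: ctx + " | tests"),
]
```

Because execution is strictly sequential, each agent sees the full output of its predecessors, which is what makes the results predictable.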
- File: `src/ai_agentic_coder/config/agents.yaml`
- Default agents use OpenRouter-hosted models (e.g., `openrouter/meta-llama/llama-3.1-405b-instruct:free`, `openrouter/moonshotai/kimi-k2:free`).
- To change models or providers, update the `llm` field for each agent. Ensure appropriate API keys and base configuration are set for your provider.
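A per-agent `llm` override might look like the fragment below. The fields other than `llm` are illustrative; check the actual `agents.yaml` for the exact schema used by this project.

```yaml
backend_engineer:
  role: Backend engineer
  goal: Implement the designed module in Python
  llm: openrouter/meta-llama/llama-3.1-405b-instruct:free
```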
- File: `src/ai_agentic_coder/config/tasks.yaml`
- Pipeline and outputs are described in the Architecture section.
- File: `src/ai_agentic_coder/gradio_ui.py`
- You provide: `Requirements`, `Module Name` (without .py), `Class Name`.
- The app displays a progress bar during execution and, on success, two URLs: a signed download URL and a live app URL.
- Files: `src/ai_agentic_coder/crew.py`
- Local runs use CrewAI `code_execution_mode="safe"`, executing generated code inside Docker for isolation.
- On Hugging Face Spaces, the project automatically switches to `"unsafe"` mode (no Docker) for compatibility:
  - See: `crew.py` — `is_running_in_hf_space()` and `run_in_docker = "unsafe" if is_running_in_hf_space() else "safe"`.
  - Agents with code execution enabled inherit this setting: `backend_engineer`, `test_engineer` (see their `code_execution_mode=run_in_docker`).
- Requirement: Ensure Docker is running when executing locally (see Quick Start step 0).
- Opting out locally: If you don’t need Docker isolation on your machine, you can force non-Docker execution by setting the variable unconditionally in `crew.py`:

  ```python
  # src/ai_agentic_coder/crew.py
  # Force no Docker even on local runs
  run_in_docker = "unsafe"
  ```

  This removes the need to have Docker running locally.
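The actual detection logic lives in `crew.py`; a minimal stand-in, assuming Hugging Face's convention of setting the `SPACE_ID` environment variable inside Spaces, could look like this:

```python
import os


def is_running_in_hf_space() -> bool:
    """Heuristic: Hugging Face Spaces set SPACE_ID in the environment."""
    return bool(os.environ.get("SPACE_ID"))


# Choose the CrewAI execution mode accordingly
run_in_docker = "unsafe" if is_running_in_hf_space() else "safe"
```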
- Open the app in your browser.
- Paste or write your requirements (what you want to build).
- Enter a module name (e.g., `accounts`) and class name (e.g., `Account`).
- Click “Run AI Coder”.
- Wait a few minutes while the pipeline runs. When done, you’ll see:
- A signed Google Cloud Storage URL to download the generated artifacts as a zip
- A public/live URL of the generated Gradio demo app
Generated files are saved under `src/ai_agentic_coder/output/`:

- `{module_name}_design.md` — Detailed design produced by the engineering lead agent
- `{module_name}.py` — The generated backend module
- `app.py` — A minimal Gradio UI demonstrating the backend (launched with `share=True`)
- `test_{module_name}` — Unit test module for the backend
- `gradio_public_url.txt` — A convenience file containing the live URL output
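For a given module name, the expected artifact paths can be derived mechanically. The helper below is purely illustrative and not part of the project:

```python
from pathlib import Path

OUTPUT_DIR = Path("src/ai_agentic_coder/output")


def expected_artifacts(module_name: str) -> dict[str, Path]:
    """Map pipeline stages to the files they write for one module."""
    return {
        "design": OUTPUT_DIR / f"{module_name}_design.md",
        "backend": OUTPUT_DIR / f"{module_name}.py",
        "demo": OUTPUT_DIR / "app.py",
        # The README lists test_{module_name} without an extension; kept verbatim
        "tests": OUTPUT_DIR / f"test_{module_name}",
        "live_url": OUTPUT_DIR / "gradio_public_url.txt",
    }
```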
- The project is already hosted on Hugging Face Spaces: https://projects.kaushikpaul.co.in/ai-agentic-coder
- To deploy your own Space:
  - Set Space SDK to “Gradio” and point to `src/ai_agentic_coder/main.py` as the entry file.
  - Add required secrets in the Space settings:
    - `OPENROUTER_API_KEY`
    - `GCP_PROJECT_ID`, `GCP_BUCKET_NAME`, `GCP_SERVICE_KEY` (base64-encoded service account JSON)
  - Ensure the Python version matches (3.10–3.12) and install via `requirements.txt` or `pyproject.toml`.
- Missing or invalid API keys/credentials
  - Verify `.env` values. Ensure OpenRouter key and GCP service key are valid; confirm bucket exists and is accessible.
- GCS upload errors
  - Confirm `GCP_SERVICE_KEY` contains a valid base64-encoded service account JSON with `storage.objects.create` permission.
- Live URL not detected
  - The runner waits up to 60 seconds to capture a public URL. If the generated UI didn’t enable `share=True` or the network blocks tunnels, you may see: “Gradio started but no accessible URL was detected within 60 seconds.” Re-run or check network settings.
- Virtualenv issues on Windows
  - Use `.venv\Scripts\activate` and ensure `python` points to the venv interpreter.
- Python: 3.10–3.12
- Frameworks/Libraries: CrewAI, Gradio 5, google-cloud-storage, python-dotenv, requests, httpx
- Orchestration: YAML-configured agents and tasks via CrewAI
- UI: Gradio Blocks with live progress and URL surfacing
- Do not commit `.env` files or secrets.
- Use least-privilege GCP service accounts. Prefer short-lived signed URLs for distribution (already used here).
This project is licensed under the MIT License — see the LICENSE file for details.