diff --git a/README.md b/README.md
index a631f4fe..5e1f7201 100644
--- a/README.md
+++ b/README.md
@@ -1,8 +1,8 @@
# Claudable
-
+
Powered by OPACTOR
+
- **Powerful Agent Performance**: Leverage the full power of Claude Code and Cursor CLI Agent capabilities with native MCP support
- **Natural Language to Code**: Simply describe what you want to build, and Claudable generates production-ready Next.js code
@@ -33,23 +39,81 @@ How to start? Simply login to Claude Code (or Cursor CLI), start Claudable, and
- **Supabase Database**: Connect production PostgreSQL with authentication ready to use
- **Automated Error Detection**: Detect errors in your app and fix them automatically
-## Technology Stack
-**AI Cooding Agent:**
-- **[Claude Code](https://docs.anthropic.com/en/docs/claude-code/setup)**: Advanced AI coding agent. We strongly recommend you to use Claude Code for the best experience.
+## Demo Examples
+
+### Codex CLI Example
+
+
+### Qwen Code Example
+
+
+## Supported AI Coding Agents
+
+Claudable supports multiple AI coding agents, giving you the flexibility to choose the best tool for your needs:
+
+- **Claude Code** - Anthropic's advanced AI coding agent
+- **Codex CLI** - OpenAI's lightweight coding agent
+- **Cursor CLI** - Powerful multi-model AI agent
+- **Gemini CLI** - Google's open-source AI agent
+- **Qwen Code** - Alibaba's open-source coding CLI
+
+### Claude Code (Recommended)
+**[Claude Code](https://docs.anthropic.com/en/docs/claude-code/setup)** - Anthropic's advanced AI coding agent with Claude Opus 4.1
+- **Features**: Deep codebase awareness, MCP support, Unix philosophy, direct terminal integration
+- **Context**: 200K tokens
+- **Pricing**: Included with Claude Pro/Max plans, or pay-as-you-go via the Anthropic API
+- **Installation**:
```bash
- # Install
npm install -g @anthropic-ai/claude-code
- # Login
claude # then > /login
```
-- **[Cursor CLI](https://docs.cursor.com/en/cli/overview)**: Intelligent coding agent for complex coding tasks. It's little bit slower than Claude Code, but it's more powerful.
+
+### Codex CLI
+**[Codex CLI](https://github.com/openai/codex)** - OpenAI's lightweight coding agent with GPT-5 support
+- **Features**: High reasoning capabilities, local execution, multiple operating modes (interactive, auto-edit, full-auto)
+- **Context**: Varies by model
+- **Pricing**: Included with ChatGPT Plus/Pro/Business/Edu/Enterprise plans
+- **Installation**:
+ ```bash
+ npm install -g @openai/codex
+ codex # login with ChatGPT account
+ ```
+
+### Cursor CLI
+**[Cursor CLI](https://cursor.com/en/cli)** - Powerful AI agent with access to cutting-edge models
+- **Features**: Multi-model support (Anthropic, OpenAI, Gemini), MCP integration, AGENTS.md support
+- **Context**: Model dependent
+- **Pricing**: Free tier available, Pro plans for advanced features
+- **Installation**:
```bash
- # Install
curl https://cursor.com/install -fsS | bash
- # Login
cursor-agent login
```
+### Gemini CLI
+**[Gemini CLI](https://developers.google.com/gemini-code-assist/docs/gemini-cli)** - Google's open-source AI agent with Gemini 2.5 Pro
+- **Features**: 1M token context window, Google Search grounding, MCP support, extensible architecture
+- **Context**: 1M tokens (with free tier: 60 req/min, 1000 req/day)
+- **Pricing**: Free with Google account, paid tiers for higher limits
+- **Installation**:
+ ```bash
+ npm install -g @google/gemini-cli
+ gemini # follow authentication flow
+ ```
+
+### Qwen Code
+**[Qwen Code](https://github.com/QwenLM/qwen-code)** - Alibaba's open-source CLI for Qwen3-Coder models
+- **Features**: 256K-1M token context, multiple model sizes (0.5B to 480B), Apache 2.0 license
+- **Context**: 256K native, 1M with extrapolation
+- **Pricing**: Completely free and open-source
+- **Installation**:
+ ```bash
+ npm install -g @qwen-code/qwen-code@latest
+ qwen --version
+ ```
+
+## Technology Stack
+
**Database & Deployment:**
- **[Supabase](https://supabase.com/)**: Connect production-ready PostgreSQL database directly to your project.
- **[Vercel](https://vercel.com/)**: Publish your work immediately with one-click deployment
@@ -208,20 +272,22 @@ If you encounter the error: `Error output dangerously skip permissions cannot be
- Anon Key: Public key for client-side
- Service Role Key: Secret key for server-side
-## Design Comparison
-*Same prompt, different results*
-
-### Claudable
-
+## License
-[View Claudable Live Demo β](https://claudable-preview.vercel.app/)
+MIT License.
-### Lovable
-
+## Upcoming Features
+These features are in development and will be released soon.
+- **New CLI Agents** - Trust us, you're going to LOVE this!
+- **Checkpoints for Chat** - Save and restore conversation/codebase states
+- **Advanced MCP Integration** - Native integration with MCP
+- **Enhanced Agent System** - Subagents, AGENTS.md integration
+- **Website Cloning** - Start a project from a reference URL
+- Various bug fixes and community PR merges
-[View Lovable Live Demo β](https://preview--goal-track-studio.lovable.app/)
+We're working hard to deliver the features you've been asking for. Stay tuned!
-## License
+## Star History
-MIT License.
\ No newline at end of file
+[Star History](https://www.star-history.com/#opactorai/Claudable&Date)
diff --git a/apps/api/app/api/assets.py b/apps/api/app/api/assets.py
index ebf14305..a4c07005 100644
--- a/apps/api/app/api/assets.py
+++ b/apps/api/app/api/assets.py
@@ -28,6 +28,27 @@ async def upload_logo(project_id: str, body: LogoRequest, db: Session = Depends(
return {"path": f"assets/logo.png"}
+@router.get("/{project_id}/{filename}")
+async def get_image(project_id: str, filename: str, db: Session = Depends(get_db)):
+ """Get an image file from project assets directory"""
+ from fastapi.responses import FileResponse
+
+ # Verify project exists
+ row = db.get(ProjectModel, project_id)
+ if not row:
+ raise HTTPException(status_code=404, detail="Project not found")
+
+    # Build file path, rejecting names that escape the assets directory
+    if filename != os.path.basename(filename) or filename in ("", ".", ".."):
+        raise HTTPException(status_code=400, detail="Invalid filename")
+    file_path = os.path.join(settings.projects_root, project_id, "assets", filename)
+
+    # Check if file exists
+    if not os.path.exists(file_path):
+        raise HTTPException(status_code=404, detail="Image not found")
+
+ # Return the image file
+ return FileResponse(file_path)
+
+
@router.post("/{project_id}/upload")
async def upload_image(project_id: str, file: UploadFile = File(...), db: Session = Depends(get_db)):
"""Upload an image file to project assets directory"""
diff --git a/apps/api/app/api/chat/act.py b/apps/api/app/api/chat/act.py
index 7ea61cb9..8242698c 100644
--- a/apps/api/app/api/chat/act.py
+++ b/apps/api/app/api/chat/act.py
@@ -10,16 +10,43 @@
from sqlalchemy.orm import Session
from pydantic import BaseModel
+import os
+
from app.api.deps import get_db
+from app.core.config import settings
from app.models.projects import Project
from app.models.messages import Message
from app.models.sessions import Session as ChatSession
from app.models.commits import Commit
from app.models.user_requests import UserRequest
-from app.services.cli.unified_manager import UnifiedCLIManager, CLIType
+from app.services.cli.unified_manager import UnifiedCLIManager
+from app.services.cli.base import CLIType
from app.services.git_ops import commit_all
from app.core.websocket.manager import manager
from app.core.terminal_ui import ui
+def build_project_info(project: Project, db: Session) -> dict:
+ """Ensure project has a usable repo path and collect runtime info."""
+ repo_path = project.repo_path
+
+ if not repo_path or not os.path.exists(repo_path):
+ inferred_path = os.path.join(settings.projects_root, project.id, "repo")
+ if os.path.exists(inferred_path):
+ project.repo_path = inferred_path
+ db.commit()
+ repo_path = inferred_path
+ else:
+ raise HTTPException(
+ status_code=409,
+ detail="Project repository is not initialized yet. Please wait for project setup to complete."
+ )
+
+ return {
+ 'id': project.id,
+ 'repo_path': repo_path,
+ 'preferred_cli': project.preferred_cli or "claude",
+ 'fallback_enabled': project.fallback_enabled if project.fallback_enabled is not None else True,
+ 'selected_model': project.selected_model
+ }
router = APIRouter()
@@ -27,7 +54,9 @@
class ImageAttachment(BaseModel):
name: str
- base64_data: str
+ # Either base64_data or path must be provided
+ base64_data: Optional[str] = None
+ path: Optional[str] = None # Absolute path to image file
mime_type: str = "image/jpeg"
@@ -47,6 +76,47 @@ class ActResponse(BaseModel):
message: str
+def build_conversation_context(
+ project_id: str,
+ conversation_id: str | None,
+ db: Session,
+ *,
+ exclude_message_id: str | None = None,
+ limit: int = 20
+) -> str:
+ """Return a formatted snippet of recent chat history for context transfer."""
+
+ query = db.query(Message).filter(Message.project_id == project_id)
+ if conversation_id:
+ query = query.filter(Message.conversation_id == conversation_id)
+
+ history: list[Message] = []
+ for msg in query.order_by(Message.created_at.desc()):
+ if exclude_message_id and msg.id == exclude_message_id:
+ continue
+ if msg.metadata_json and msg.metadata_json.get("hidden_from_ui"):
+ continue
+ if msg.role not in ("user", "assistant"):
+ continue
+ history.append(msg)
+ if len(history) >= limit:
+ break
+
+ if not history:
+ return ""
+
+ history.reverse()
+ lines = []
+ for msg in history:
+ role = "User" if msg.role == "user" else "Assistant"
+ content = (msg.content or "").strip()
+ if not content:
+ continue
+ lines.append(f"{role}:\n{content}")
+
+ return "\n".join(lines)
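The transcript format emitted by `build_conversation_context` can be illustrated without the ORM. Here `Msg` is a minimal stand-in for the `Message` model (an assumption for the sketch); the formatting loop mirrors the diff:

```python
from dataclasses import dataclass

# Minimal stand-in for the ORM Message model, used only to show the
# "Role:\ncontent" transcript format the context builder produces.
@dataclass
class Msg:
    role: str
    content: str

def format_history(history: list[Msg]) -> str:
    lines = []
    for msg in history:
        role = "User" if msg.role == "user" else "Assistant"
        content = (msg.content or "").strip()
        if not content:
            continue
        lines.append(f"{role}:\n{content}")
    return "\n".join(lines)

history = [
    Msg("user", "Add a dark mode toggle"),
    Msg("assistant", "Added ThemeToggle to the navbar."),
]
print(format_history(history))
```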
+
+
async def execute_act_instruction(
project_id: str,
instruction: str,
@@ -80,13 +150,7 @@ async def execute_act_instruction(
db.commit()
# Extract project info to avoid DetachedInstanceError in background task
- project_info = {
- 'id': project.id,
- 'repo_path': project.repo_path,
- 'preferred_cli': project.preferred_cli or "claude",
- 'fallback_enabled': project.fallback_enabled if project.fallback_enabled is not None else True,
- 'selected_model': project.selected_model
- }
+ project_info = build_project_info(project, db)
# Execute the task
return await execute_act_task(
@@ -113,7 +177,9 @@ async def execute_chat_task(
db: Session,
cli_preference: CLIType = None,
fallback_enabled: bool = True,
- is_initial_prompt: bool = False
+ is_initial_prompt: bool = False,
+ _request_id: str | None = None,
+ user_message_id: str | None = None
):
"""Background task for executing Chat instructions"""
try:
@@ -156,11 +222,32 @@ async def execute_chat_task(
db=db
)
+ # Qwen Coder does not support images yet; drop them to prevent errors
+ safe_images = [] if cli_preference == CLIType.QWEN else images
+
+ instruction_payload = instruction
+ if not is_initial_prompt:
+ context_block = build_conversation_context(
+ project_id,
+ conversation_id,
+ db,
+ exclude_message_id=user_message_id
+ )
+ if context_block:
+ instruction_payload = (
+ "You are continuing an ongoing coding session. Reference the recent conversation history below before acting.\n"
+                        f"\n<conversation_history>\n{context_block}\n</conversation_history>\n\n"
+                        f"Current request:\n{instruction}"
+                    )
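The context-transfer prompt built in `execute_chat_task` can be sketched as a pure function. The wrapper tags and function name here are assumptions for illustration; the shape (prior history first, then the current request) follows the diff:

```python
def compose_payload(instruction: str, context_block: str) -> str:
    # Prepend prior conversation history to the new instruction so the
    # CLI agent can continue an ongoing session; pass the instruction
    # through unchanged when there is no history.
    if not context_block:
        return instruction
    return (
        "You are continuing an ongoing coding session. "
        "Reference the recent conversation history below before acting.\n\n"
        f"<conversation_history>\n{context_block}\n</conversation_history>\n\n"
        f"Current request:\n{instruction}"
    )

print(compose_payload("Fix the login bug", "User:\nAdd OAuth"))
```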