This repository provides a working example of using Casual MCP with the Blender MCP Server.
It demonstrates how to:
- Configure OpenAI models (or any compatible API)
- Use a tuned system prompt for Blender tool calling
- Launch the `casual-mcp` API server via UVX
- Send requests to the `/generate` endpoint to manipulate Blender scenes
Make sure you have `uv` installed.
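If you don't have it yet, one common way to install uv is Astral's standalone installer (pip works too):

```bash
# Install uv via Astral's standalone installer
curl -LsSf https://astral.sh/uv/install.sh | sh

# Or install it with pip
pip install uv
```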
The supplied `casual_mcp_config.json` provides access to four OpenAI models:
- gpt-4o-mini
- gpt-4.1-nano
- gpt-4.1-mini
- gpt-4.1
You can add others, including models running on OpenAI-compatible APIs; see the Configuration section below.
Copy the example environment file:

```bash
cp .env-example .env
```

If using OpenAI models, add your API key to `.env`:

```
OPEN_AI_API_KEY=your-openai-key
TOOL_RESULT_FORMAT=result
```

From the root of the project:

```bash
uvx casual-mcp serve
```

This will start the API server at http://localhost:8000 and automatically register the blender-mcp tool server.
Use the `/generate` endpoint to send LLM prompts:

```
POST /generate
Content-Type: application/json

{
  "session_id": "my-session",
  "model": "gpt-4.1-nano",
  "user_prompt": "Create a large plane, place an area light above it"
}
```

The assistant will invoke Blender tool functions as needed.
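For example, with the server running on the default port, the same request can be sent with curl:

```bash
curl -X POST http://localhost:8000/generate \
  -H "Content-Type: application/json" \
  -d '{
    "session_id": "my-session",
    "model": "gpt-4.1-nano",
    "user_prompt": "Create a large plane, place an area light above it"
  }'
```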
For more advanced usage, including the chat endpoint, see the Casual MCP docs.
The `session_id` is optional; when supplied, Casual MCP passes the session's previous messages to the LLM as context, which is useful here when building a scene in steps. Change the value to start a fresh session.
To retrieve all messages from a session (useful for seeing which tools the LLM called and their results), send a GET request to `/generate/session/{session_id}`.
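For example, to fetch the session used above:

```bash
curl http://localhost:8000/generate/session/my-session
```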
Sessions are stored in memory; restarting the server clears them.
This project uses a `casual_mcp_config.json` file to declare models and servers.
To add your own OpenAI-compatible models:
"my-local-model": {
"provider": "openai",
"endpoint": "http://localhost:1234/v1",
"model": "my-llm-name",
"template": "blender"
}You can read more about the config file structure here
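Once added, the new model can be targeted by name in requests. A sketch, assuming the entry above has been added and a local OpenAI-compatible server is serving `my-llm-name` at http://localhost:1234/v1 (the prompt here is just an illustration):

```bash
curl -X POST http://localhost:8000/generate \
  -H "Content-Type: application/json" \
  -d '{"model": "my-local-model", "user_prompt": "Add a red cube at the origin"}'
```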
Make sure the `template` value matches a file in `prompt-templates/` (e.g., `blender.j2`) so the model uses the supplied system prompt template.
The `prompt-templates/blender.j2` file is a Jinja2 template tailored for tool-calling with the Blender MCP toolset. It's rendered using the tool schema and sent as the system prompt.
This helps guide the LLM to use the correct tools with the right parameters.
Feel free to adjust it to see how it affects results. There is also an alternative version, `blender-v2.j2`, that may give improved results (I'm still testing it); to try it, point a model's `template` field at it, as shown below.
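Assuming the `template` value is the filename without the `.j2` extension (consistent with `"template": "blender"` selecting `blender.j2`), a model entry using the alternate template would look like:

```json
"my-local-model": {
  "provider": "openai",
  "endpoint": "http://localhost:1234/v1",
  "model": "my-llm-name",
  "template": "blender-v2"
}
```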
- Models must support function/tool calling to work properly (OpenAI GPT-4.1+ or compatible)
- The server uses Model Context Protocol (MCP) to define and invoke tool calls
- All sessions are stored in memory
MIT