🧪 Casual MCP + Blender MCP Example

This repository provides a working example of using Casual MCP with the Blender MCP Server.

It demonstrates how to:

  • Configure OpenAI models (or any compatible API)
  • Use a tuned system prompt for Blender tool calling
  • Launch the casual-mcp API server via uvx
  • Send requests to the /generate endpoint to manipulate Blender scenes

🚀 How to Run

1. Install the dependencies

Make sure you have uv installed.
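If uv isn't installed yet, either of the following should work (the first is Astral's standalone installer):

curl -LsSf https://astral.sh/uv/install.sh | sh

or

pip install uv

uvx ships with uv, so nothing else is needed to launch the server in step 4.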

2. Adjust casual_mcp_config.json as required

The supplied casual_mcp_config.json provides access to four OpenAI models:

  • gpt-4o-mini
  • gpt-4.1-nano
  • gpt-4.1-mini
  • gpt-4.1

You can add others, including models running on OpenAI-compatible APIs; see the Configuration section below.

3. Copy and Configure Environment Variables

cp .env-example .env

If using OpenAI models, add your API key to .env:

OPEN_AI_API_KEY=your-openai-key
TOOL_RESULT_FORMAT=result

4. Run the API

From the root of the project:

uvx casual-mcp serve

This will start the API server at http://localhost:8000 and automatically register the blender-mcp tool server.

🧠 Making a Tool-Calling Request

Use the /generate endpoint to send LLM prompts:

POST /generate
Content-Type: application/json

{
  "session_id": "my-session",
  "model": "gpt-4.1-nano",
  "user_prompt": "Create a large plane, place an area light above it"
}

The assistant will invoke Blender tool functions as needed.
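For example, the same request sent with curl against the server started above:

curl -X POST http://localhost:8000/generate \
  -H "Content-Type: application/json" \
  -d '{
    "session_id": "my-session",
    "model": "gpt-4.1-nano",
    "user_prompt": "Create a large plane, place an area light above it"
  }'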

For more advanced usage with the chat endpoint, see the Casual MCP docs.

Sessions

The session_id is optional, but when it is supplied, Casual MCP passes the session's previous messages to the LLM as context. Change the value to start a fresh session. This is especially useful here when building a scene in steps, since the LLM sees what has already been asked and built.

To retrieve all messages from a session (useful for inspecting the tools the LLM called and their results), send a GET request to /generate/session/{session_id}.

Sessions are stored in memory; a server restart will clear them.
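For example, to fetch the transcript of the session used above:

curl http://localhost:8000/generate/session/my-session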

⚙️ Configuration

This project uses a casual_mcp_config.json file to declare models and servers.

➕ Add Your Own Models

To add your own OpenAI-compatible models:

"my-local-model": {
  "provider": "openai",
  "endpoint": "http://localhost:1234/v1",
  "model": "my-llm-name",
  "template": "blender"
}

You can read more about the config file structure in the Casual MCP docs.

Make sure the template value matches a file in prompt-templates/ (e.g., blender.j2) so that the model uses the supplied system template.
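Once added, select the model by name in a request, for example (the prompt here is just illustrative):

curl -X POST http://localhost:8000/generate \
  -H "Content-Type: application/json" \
  -d '{"model": "my-local-model", "user_prompt": "Add a red cube at the origin"}'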

🧩 Prompt Template

The prompt-templates/blender.j2 file is a Jinja2 template tailored for tool-calling with the Blender MCP toolset. It's rendered using the tool schema and sent as the system prompt.

This helps guide the LLM to use the correct tools with the right parameters.

Feel free to adjust it and see how results change. An alternative version, blender-v2.j2, is also supplied and may produce better results (I'm still testing it).

🛠 Additional Notes

  • Models must support function/tool calling to work properly (OpenAI GPT-4.1+ or compatible)
  • The server uses Model Context Protocol (MCP) to define and invoke tool calls
  • All sessions are stored in memory

📄 License

MIT
