AI assistance for Embarcadero RAD Studio 10.x and 11.x, directly inside the IDE.
FusionAI is the current evolution of the original ChatGPTWizard project.
FusionAI is a Delphi plug-in for RAD Studio that brings AI-assisted workflows into the IDE:
- Ask free-form questions in a chat window
- Run inline questions directly from the editor
- Use predefined right-click actions such as explain, optimize, add comments, add tests, and find bugs
- Work with class/type-based prompts from a dedicated Class View
- Compare answers across multiple AI providers
- Keep a searchable local history database
- Use local file logging for diagnostics when needed
The project is intended as the primary AI path for RAD Studio versions that lack the newer built-in AI workflow.
This repository currently targets:
- RAD Studio 10.x
  - 10.1 Berlin
  - 10.2 Tokyo
  - 10.3 Rio
  - 10.4 Sydney
- RAD Studio 11.x
  - 11 Alexandria
This repository does not primarily target RAD Studio 12.2+.
For older IDEs, use the separate branch/repository.
FusionAI currently supports:
- OpenAI ChatGPT
- Google Gemini
- Anthropic Claude
- Ollama
- ChatGPT, Gemini, and Claude require valid API credentials.
- Ollama can be used locally without a cloud API key.
- You can choose a default AI service in the settings.
- Available models can be refreshed and cached per provider.
- Unified FusionAI chat window
- Dockable assistant UI when the IDE version supports it
- Inline editor prompts with the `cpt: ... :cpt` format
- Right-click editor actions for selected code
- Class View with prompt actions and code conversion helpers
- Provider-aware answer tabs
- Provider-aware SQLite history
- History filtering by provider and model
- Proxy configuration
- File-based diagnostic logging
- Configurable timeouts and provider-specific parameters
Install through Delphinus, or build and install manually:
- Open FusionAI.dproj in RAD Studio.
- Build the package.
- Install the generated package from the IDE.
- Open FusionAI Settings.
- Go to AI Services.
- Enable at least one provider.
- Fill in the provider settings:
- Base URL
- Access key if required
- Default model
- Optional provider-specific parameters
- Save and reopen the assistant if needed.
Open FusionAI from the IDE menu and ask a free-form question.
Use this for:
- code explanation
- refactoring ideas
- architecture questions
- debugging help
- ad hoc snippets
There are two main inline workflows:
- Use direct inline prompt markers, e.g. `cpt: Explain what this method does. :cpt`, then run the Ask action from the editor popup menu or use its shortcut (a sketch follows after this list).
- Select code and use a predefined action from the popup menu:
- Ask
- Add Test
- Find Bugs
- Optimize
- Add Comments
- Complete Code
- Explain Code
- Refactor Code
- Convert to Assembly
The response is inserted back into the editor as a multiline comment after the selected code.
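As an illustration (the type and method are hypothetical), an inline prompt placed after the code in question might look like this before running Ask; the text between `cpt:` and `:cpt` is sent as the question:

```delphi
procedure TInvoice.RecalcTotals;
begin
  // ... sums line items into a running total ...
end;

cpt: Explain what this method does. :cpt
```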
For selected text or a code block, FusionAI can insert the result after the selection as a multiline Delphi comment block.
If you select code and trigger the Ask action without the `cpt: ... :cpt` markers, FusionAI opens the chat window and prepares a draft question for you.
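For example (the function and the comment format are illustrative, not FusionAI's exact output), running Explain Code on a selected function could leave the unit looking like this:

```delphi
function Clamp(Value, MinVal, MaxVal: Integer): Integer;
begin
  if Value < MinVal then
    Result := MinVal
  else if Value > MaxVal then
    Result := MaxVal
  else
    Result := Value;
end;

(*
  Clamp returns Value constrained to the inclusive range [MinVal, MaxVal]:
  values below MinVal are raised to MinVal, values above MaxVal are
  lowered to MaxVal.
*)
```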
The Class View tab lets you work with types parsed from the current Delphi unit.
Typical uses:
- explain a type
- optimize a type
- add tests for a type
- run custom prompts against the selected type
- convert a type to another language
The parser has been improved to better tolerate newer Delphi syntax, but Class View is still best treated as a practical helper rather than a full compiler-grade parser.
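For instance, given a unit containing the (illustrative) declaration below, Class View lists TCustomer among the parsed types, and prompt actions such as explain, optimize, or add tests run against the whole declaration:

```delphi
type
  TCustomer = class
  private
    FName: string;
    FBalance: Currency;
  public
    constructor Create(const AName: string);
    procedure Deposit(const Amount: Currency);
    property Name: string read FName;
    property Balance: Currency read FBalance;
  end;
```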
Each provider has its own configuration tab under AI Services.
Depending on the provider, you can configure:
- enable/disable state
- base URL
- access key
- model
- timeout
- max tokens
- temperature
- top-p
- top-k
- API version fields when required by the provider
- ChatGPT: uses the OpenAI Chat Completions API and works with current GPT-4 and GPT-5 style models (see the request sketch below).
- Gemini: uses the Google Generative Language API.
- Claude: uses the Anthropic API.
- Ollama: uses a local or remote Ollama endpoint; suitable for offline or private workflows.
Provider settings are managed from the AI Services page.
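How these parameters reach the wire depends on the provider. As a rough sketch only — this is a standard OpenAI Chat Completions body built with the RTL's System.JSON, not FusionAI's actual code — the model, temperature, top-p, and max-token settings map to request fields like this:

```delphi
uses
  System.JSON;

function BuildChatBody(const Model, Prompt: string): string;
var
  Root, Msg: TJSONObject;
  Msgs: TJSONArray;
begin
  Root := TJSONObject.Create;
  try
    Root.AddPair('model', Model);                         // e.g. a GPT-4 style model name
    Msgs := TJSONArray.Create;
    Msg := TJSONObject.Create;
    Msg.AddPair('role', 'user');
    Msg.AddPair('content', Prompt);
    Msgs.AddElement(Msg);
    Root.AddPair('messages', Msgs);
    Root.AddPair('temperature', TJSONNumber.Create(0.2)); // lower = more deterministic
    Root.AddPair('top_p', TJSONNumber.Create(1.0));       // nucleus sampling cutoff
    Root.AddPair('max_tokens', TJSONNumber.Create(1024)); // reply-size budget
    Result := Root.ToJSON;
  finally
    Root.Free;                                            // Root owns and frees all children
  end;
end;
```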

- Install Ollama from ollama.com.
- Make sure the server is running.
- Pull at least one model, for example: `ollama run llama3.2`
- In FusionAI settings, enable Ollama.
- Set the base URL, usually `http://localhost:11434`.
- Choose or enter the model name.
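If the connection is in doubt, the endpoint can be sanity-checked outside the IDE. A minimal sketch using the RTL HTTP client; `/api/tags` is Ollama's standard model-listing route:

```delphi
uses
  System.SysUtils, System.Net.HttpClient;

procedure CheckOllamaEndpoint;
var
  Client: THTTPClient;
  Response: IHTTPResponse;
begin
  Client := THTTPClient.Create;
  try
    // HTTP 200 with a JSON model list means the server is reachable
    Response := Client.Get('http://localhost:11434/api/tags');
    Writeln(Response.StatusCode);
    Writeln(Response.ContentAsString);
  finally
    Client.Free;
  end;
end;
```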
FusionAI can store requests and responses in a local SQLite database.
History includes provider-aware metadata such as:
- provider
- model
- status
- timestamps
- duration
You can filter the history by provider and model, and use text or fuzzy search (with extra search options) to find older conversations.
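Because the history lives in a plain SQLite file, it can also be inspected directly. A sketch using FireDAC — the database file name and the table/column names here are hypothetical, chosen only to mirror the metadata listed above:

```delphi
uses
  System.SysUtils,
  FireDAC.Comp.Client, FireDAC.Stan.Def, FireDAC.Phys.SQLite, FireDAC.DApt;

procedure ListProviderHistory;
var
  Conn: TFDConnection;
  Query: TFDQuery;
begin
  Conn := TFDConnection.Create(nil);
  Query := TFDQuery.Create(nil);
  try
    Conn.DriverName := 'SQLite';
    Conn.Params.Database := 'FusionAI_History.db';  // hypothetical file name
    Query.Connection := Conn;
    // Hypothetical schema reflecting the provider-aware metadata
    Query.SQL.Text :=
      'SELECT provider, model, status, created_at, duration_ms ' +
      'FROM history WHERE provider = :prov ORDER BY created_at DESC';
    Query.ParamByName('prov').AsString := 'Ollama';
    Query.Open;
    while not Query.Eof do
    begin
      Writeln(Query.FieldByName('model').AsString, '  ',
              Query.FieldByName('created_at').AsString);
      Query.Next;
    end;
  finally
    Query.Free;
    Conn.Free;
  end;
end;
```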
FusionAI supports optional file-based logging for troubleshooting.
When enabled, logs can include:
- request URL
- request JSON
- response JSON
- provider status transitions
- timeout and inline-flow diagnostics
API keys are not written to the log file.
- Some providers are paid services or have usage limits.
- Generated content is sent directly to the configured AI provider.
- You are responsible for reviewing generated code and text before using it.
- Class View parsing is best-effort and may not perfectly represent every source shape.
- Very large prompts or very large type bodies may still hit provider-side token limits.
If HTTPS requests fail inside the IDE, make sure the required SSL libraries are available in the environment used by bds.exe.
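One quick check, assuming the HTTP stack relies on OpenSSL as Indy-based clients typically do (if FusionAI uses the platform's native TLS instead, this does not apply), is whether the DLLs resolve at all:

```delphi
uses
  Winapi.Windows, System.SysUtils;

procedure CheckOpenSslDlls;
const
  // Classic OpenSSL 1.0.x names used by Indy; 1.1/3.x builds use different names
  Libs: array[0..1] of string = ('libeay32.dll', 'ssleay32.dll');
var
  Lib: string;
  Handle: HMODULE;
begin
  for Lib in Libs do
  begin
    Handle := LoadLibrary(PChar(Lib));
    if Handle = 0 then
      Writeln(Lib, ': not found on the DLL search path')
    else
    begin
      Writeln(Lib, ': loaded OK');
      FreeLibrary(Handle);
    end;
  end;
end;
```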
Check:
- provider is enabled
- base URL is correct
- access key is valid
- model is available for that provider
- timeout is high enough for the selected model
If Class View looks stale after switching units or reopening the assistant:
- switch away from Class View and back again
- reopen the assistant window
- verify the current unit is the one you expect
This repository still uses the historical GitHub repository name ChatGPTWizard, but the current plug-in and package name is FusionAI.
Issues, pull requests, and discussions are welcome.
Please include:
- RAD Studio version
- provider name
- active model
- exact steps to reproduce
- log output if file logging is enabled
MIT. See LICENSE.
If you find the project useful, starring the repository helps a lot.