
llama.qtcreator

Local LLM-assisted text completion for Qt Creator.

[Screenshot: Qt Creator llama.cpp Text]

[Screenshot: Qt Creator llama.cpp Qt Widgets]

Features

  • Auto-suggest on cursor movement
  • Toggle the suggestion manually by pressing Ctrl+G
  • Accept a suggestion with Tab
  • Accept the first line of a suggestion with Shift+Tab
  • Control max text generation time
  • Configure scope of context around the cursor
  • Ring context with chunks from open and edited files and yanked text
  • Supports very large contexts even on low-end hardware via smart context reuse
  • Speculative FIM support
  • Speculative Decoding support
  • Display performance stats

llama.cpp setup

The plugin requires a llama.cpp server instance to be running at the endpoint configured in the plugin settings:

[Screenshot: Qt Creator llama.cpp Settings]

macOS

brew install llama.cpp

Windows

winget install llama.cpp

Any other OS

Either build from source or use the latest binaries: https://github.com/ggml-org/llama.cpp/releases
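If you build from source, the standard CMake flow from the llama.cpp repository is a minimal starting point. This is a sketch: the backend option shown (`-DGGML_CUDA=ON`) is one example; pick the flag that matches your hardware.

```shell
# Clone and build llama.cpp (CPU-only by default; add a GPU backend
# flag such as -DGGML_CUDA=ON when configuring, if your hardware supports it)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
# The server binary is produced at build/bin/llama-server
```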

llama.cpp settings

Here are recommended settings, depending on the amount of VRAM that you have:

  • More than 16GB VRAM:

    llama-server --fim-qwen-7b-default
  • Less than 16GB VRAM:

    llama-server --fim-qwen-3b-default
  • Less than 8GB VRAM:

    llama-server --fim-qwen-1.5b-default

Use llama-server --help for more details.
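If you prefer explicit flags over the presets, a preset can be approximated manually. The invocation below is illustrative, not authoritative: the model repository, port, and flag values are assumptions; confirm them with `llama-server --help` for your build.

```shell
# A rough manual equivalent of the 3B preset (values are illustrative):
#   -hf             pull a FIM-capable model from Hugging Face
#   --port 8012     the endpoint the plugin connects to
#   -ngl 99         offload all layers to the GPU
#   --ctx-size 0    use the model's full training context
#   --cache-reuse   enable KV-cache reuse for context shifting
llama-server -hf ggml-org/Qwen2.5-Coder-3B-Q8_0-GGUF \
    --port 8012 -ngl 99 -ub 1024 -b 1024 \
    --ctx-size 0 --cache-reuse 256
```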

Recommended LLMs

The plugin requires FIM-compatible models: HF collection

Examples

A Qt Quick example on a MacBook Pro M3 with Qwen2.5-Coder 3B Q8_0:

[Screenshot: Qt Creator llama.cpp Qt Quick]

Implementation details

The plugin aims to be simple and lightweight while still providing high-quality, performant local FIM completions, even on consumer-grade hardware.
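The completions come from the llama.cpp server's `/infill` endpoint, which fills in text between a prefix and a suffix. Assuming the default port used by the `--fim-qwen-*` presets (8012, an assumption here), you can sanity-check a running server from the command line:

```shell
# Send a FIM request: the server completes the gap between
# input_prefix and input_suffix (port 8012 assumed from the presets)
curl -s http://127.0.0.1:8012/infill -d '{
  "input_prefix": "int add(int a, int b) {\n    return ",
  "input_suffix": ";\n}",
  "prompt": "",
  "n_predict": 16
}'
```

A JSON response with a `content` field indicates the server is serving FIM requests correctly.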

Other IDEs
