
prompt generator 0.1.0

@teshaTe teshaTe released this 21 Aug 12:27
· 4 commits to main since this release
061e49a

Updates:
vLLM backend:

  • added support for running inference on multiple GPUs;
  • added extra parameters for finer control over the generation process;
  • added support for speculative decoding (https://docs.vllm.ai/en/latest/models/spec_decode.html, an experimental vLLM feature);
  • added several optimizations;
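As a rough illustration of the first two items, a multi-GPU vLLM setup with speculative decoding might look like the sketch below. The model names, GPU count, and speculative-decoding parameters are placeholders, not values from this release, and the exact argument names (e.g. `speculative_model`, `num_speculative_tokens`) vary across vLLM versions; check the linked docs for the version you have installed.

```python
from vllm import LLM, SamplingParams

# Placeholder models and settings for illustration only.
llm = LLM(
    model="meta-llama/Llama-2-13b-hf",       # target model
    tensor_parallel_size=2,                  # shard across 2 GPUs
    speculative_model="meta-llama/Llama-2-7b-hf",  # smaller draft model
    num_speculative_tokens=5,                # draft tokens per step
)

# Extra generation-control parameters are passed via SamplingParams.
params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=256)

outputs = llm.generate(["Describe the scene in one sentence."], params)
print(outputs[0].outputs[0].text)
```

Speculative decoding lets the small draft model propose several tokens that the large target model verifies in one pass, which can cut latency without changing output quality.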

Other:

  • improved the instruction prompt;
  • adjusted the object categories from which objects are sampled;
  • bug fixes.