I'm using the dual boost feature with the following setup:
main model (reasoning): deepseek-r1 (kluster)
first_provider (coder): claude 3.5
second_provider (coder): qwen-coder-32b (groq)
Having a reasoning model evaluate the coder results has given me great results (though with high response times), but I think this approach could be better if, instead of `prompt -> providers -> main model -> response`, the flow were something like `prompt -> main model -> providers -> main model -> response`.
I have manually run the second flow in chat apps and I think it gives better results, especially when you haven't written a detailed prompt and just asked in chat for something to be changed: the main model reasons about the question and the codebase context to improve the prompt, then calls the provider AIs with that improved version of the user prompt.
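To make the proposed flow concrete, here is a minimal sketch of the two-pass pipeline. All function and provider names are hypothetical stand-ins, not the plugin's actual API; the model calls are stubbed with plain strings.

```python
# Hypothetical sketch of the proposed dual_boost flow:
# prompt -> main model -> providers -> main model -> response
# None of these names come from the plugin; they only illustrate the idea.

def refine_prompt(user_prompt: str, context: str) -> str:
    # Pass 1 (main/reasoning model): rewrite a terse chat prompt into a
    # richer one using codebase context. Stubbed for illustration.
    return f"[refined with {context}] {user_prompt}"

def call_provider(name: str, prompt: str) -> str:
    # Each coder provider answers the refined prompt. Stubbed.
    return f"{name} answer to: {prompt}"

def evaluate(results: list[str]) -> str:
    # Pass 2 (main model): evaluate/merge the provider results.
    # Stubbed: just pick the first result.
    return results[0]

def dual_boost(user_prompt: str, context: str, providers: list[str]) -> str:
    refined = refine_prompt(user_prompt, context)
    results = [call_provider(p, refined) for p in providers]
    return evaluate(results)
```

For example, `dual_boost("rename this function", "codebase ctx", ["claude-3.5", "qwen-coder-32b"])` would send both coder providers the refined prompt rather than the raw chat message, and only then have the reasoning model judge their outputs.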
I can code this change, but since it would be a "refactor" of how the dual_boost feature works, I'd rather ask before writing lines and lines of code that may or may not be included. Let me know your thoughts on this process and whether you think it's worth exploring for the plugin. @yetone
Motivation
I'm trying to get agent-like behavior when using dual_boost, and I think this approach could produce even better results (I tested it in a very manual way using chats).
Other
No response