Extended llm support (e.g. Llama 3, M8x22b) and synthetic test generation #936
Enhancements for new open-source models, with a focus on Llama 3 70B Instruct
This is a draft PR. It introduces several improvements to RAGAs to enhance its compatibility with Llama 3 70B Instruct and to improve the quality of generated outputs, especially for synthetic data generation.
Prompt Adjustments
Modified `prompt.py` and `prompts.py` to increase the quality of generated outputs, which is particularly important for synthetic data generation.

Support for Non-Typical LangChain LLM Configurations
Introduced an `LLMConfig` class to encapsulate the custom configuration options, and a `together_prompt_callback` function to handle dynamic prompt generation for the Together.ai Llama 3 70b instruct model.

Example Configuration
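A minimal sketch of what such a configuration could look like. The field names on `LLMConfig` and the exact callback signature are illustrative assumptions, not the precise API added in this PR; the chat-template tokens are the ones Llama 3 Instruct expects.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class LLMConfig:
    """Hypothetical sketch of the config class described above.

    Field names are assumptions for illustration only.
    """
    model: str
    stop_sequences: List[str] = field(default_factory=list)
    prompt_callback: Optional[Callable[[str], str]] = None


def together_prompt_callback(prompt: str) -> str:
    """Wrap a plain prompt in the Llama 3 Instruct chat template so a
    completion-style endpoint sees the format the model was trained on."""
    return (
        "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{prompt}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )


# Example wiring for the Together.ai-hosted Llama 3 70B instruct model
# (model id shown is an assumption; check the provider's model list).
config = LLMConfig(
    model="meta-llama/Llama-3-70b-chat-hf",
    stop_sequences=["<|eot_id|>"],
    prompt_callback=together_prompt_callback,
)
```

The callback keeps prompt formatting out of the core RAGAs prompt logic: the same `LLMConfig` can target a different model by swapping in another callback, with no changes elsewhere.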