Welcome to the "Improving Accuracy of LLM Applications" course! It provides a systematic approach to enhancing the accuracy and reliability of your LLM applications.
Many developers struggle with inconsistent results from LLM applications. This course addresses those challenges by offering hands-on experience in improving accuracy through evaluation, prompt engineering, self-reflection, and fine-tuning techniques.
What You'll Do:
- SQL Agent Development: Build a text-to-SQL agent and simulate situations where it hallucinates to begin the evaluation process.
- Evaluation Framework: Create a robust framework to systematically measure performance, including criteria for good evaluations, best practices, and developing an evaluation score.
- Instruction Fine-tuning: Learn how instruction fine-tuning helps LLMs follow instructions more accurately and how memory fine-tuning embeds facts to reduce hallucinations.
- Parameter-Efficient Fine-tuning (PEFT): Discover advanced techniques like Low-Rank Adaptation (LoRA) and Mixture of Memory Experts (MoME) to reduce training time while improving model performance.
- Iterative Fine-tuning: Go through an iterative process of generating training data, fine-tuning, and applying practical tips to increase model accuracy.
- Systematic Improvement: Learn the development steps, from evaluation and prompting to self-reflection and fine-tuning, that improve your model's reliability and accuracy.
- Memory Tuning: Enhance your model's performance by embedding facts to reduce hallucinations.
- Llama Models: Use the Llama 3 8B model to build an LLM application that converts text to SQL with a custom schema.
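To make the evaluation idea above concrete, here is a minimal sketch of a text-to-SQL scoring harness: execute the model's generated SQL and a reference SQL against the same database and count matching result sets. The schema, queries, and scoring rule are invented for illustration and are not the course's or Lamini's actual framework.

```python
import sqlite3

def run_query(conn, sql):
    """Execute SQL and return a sorted result set, or None if the SQL is invalid."""
    try:
        return sorted(conn.execute(sql).fetchall())
    except sqlite3.Error:
        return None  # hallucinated tables/columns count as a miss

def eval_score(conn, pairs):
    """pairs: list of (generated_sql, reference_sql). Returns fraction correct."""
    hits = 0
    for generated, reference in pairs:
        got, want = run_query(conn, generated), run_query(conn, reference)
        if got is not None and got == want:
            hits += 1
    return hits / len(pairs)

# Toy database (hypothetical schema, for the example only)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE players (name TEXT, team TEXT, points INTEGER)")
conn.executemany("INSERT INTO players VALUES (?, ?, ?)",
                 [("Ann", "A", 30), ("Bo", "B", 25), ("Cy", "A", 20)])

pairs = [
    ("SELECT name FROM players WHERE team = 'A'",   # matches the reference
     "SELECT name FROM players WHERE team = 'A'"),
    ("SELECT nme FROM players",                     # hallucinated column name
     "SELECT name FROM players"),
]
print(eval_score(conn, pairs))  # 0.5
```

Comparing executed result sets rather than raw SQL strings is deliberate: two different queries can be equally correct, so string equality would under-count.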
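As a back-of-the-envelope illustration of why LoRA reduces training cost, the sketch below counts trainable parameters when a full weight matrix update is replaced by two low-rank factors. The dimensions and rank are assumptions chosen for the example, not figures from the course.

```python
# LoRA replaces the update to a full d_in x d_out weight matrix with two
# low-rank factors A (d_in x r) and B (r x d_out), where r << d_in, d_out.
def full_params(d_in, d_out):
    return d_in * d_out

def lora_params(d_in, d_out, r):
    return d_in * r + r * d_out

d_in = d_out = 4096   # hidden size in the ballpark of a Llama 3 8B layer
r = 8                 # a commonly used small LoRA rank

full = full_params(d_in, d_out)     # 16,777,216 weights to train
lora = lora_params(d_in, d_out, r)  # 65,536 weights to train
print(f"trainable fraction: {lora / full:.4%}")
```

With these numbers, the low-rank factors hold well under 1% of the full matrix's parameters, which is where LoRA's training-time and memory savings come from.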
Course Instructors:
- Sharon Zhou: Co-Founder and CEO of Lamini, Sharon brings her expertise in LLM development and fine-tuning.
- Amit Sangani: Senior Director of Partner Engineering at Meta, Amit shares valuable insights on engineering reliable LLM applications.
To enroll in the course or for further information, visit deeplearning.ai.
