The course equips developers with techniques to enhance the reliability of LLMs, focusing on evaluation, prompt engineering, and fine-tuning. Learn to systematically improve model accuracy through hands-on projects, including building a text-to-SQL agent and applying advanced fine-tuning methods.


Welcome to the "Improving Accuracy of LLM Applications" course! 🚀 This course provides a systematic approach to improving the accuracy and reliability of your LLM applications.

📘 Course Summary

Many developers struggle with inconsistent results in LLM applications. 😓 This course is designed to address these challenges by offering hands-on experience in improving accuracy through evaluation, prompt engineering, self-reflection, and fine-tuning techniques.

What You'll Do:

  1. 🧠 SQL Agent Development: Build a text-to-SQL agent and deliberately trigger situations where it hallucinates, to kick off the evaluation process.
  2. 📊 Evaluation Framework: Create a robust framework to systematically measure performance, covering criteria for good evaluations, best practices, and the development of an evaluation score.
  3. 🎯 Instruction Fine-tuning: Learn how instruction fine-tuning helps LLMs follow instructions more accurately, and how memory fine-tuning embeds facts to reduce hallucinations.
  4. 🚀 Parameter-Efficient Fine-tuning (PEFT): Discover advanced techniques such as Low-Rank Adaptation (LoRA) and Mixture of Memory Experts (MoME) that reduce training time while improving model performance.
  5. 🔄 Iterative Fine-tuning: Work through an iterative process of generating training data, fine-tuning, and applying practical tips to increase model accuracy.
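The evaluation step above can be sketched with an execution-based score: run the generated query and a reference query against the same database and count a match only when the result sets agree. This is a minimal illustration, assuming a toy SQLite schema and hypothetical queries, not the course's actual evaluation harness:

```python
# Minimal sketch of an execution-based evaluation score for a text-to-SQL
# agent. The `players` table and the sample queries are hypothetical.
import sqlite3


def setup_db() -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE players (name TEXT, team TEXT, points INTEGER)")
    conn.executemany(
        "INSERT INTO players VALUES (?, ?, ?)",
        [("Ada", "East", 30), ("Bo", "West", 25), ("Cy", "East", 20)],
    )
    return conn


def execution_match(conn, generated_sql: str, reference_sql: str) -> bool:
    """Compare the generated query's result set against the reference's."""
    try:
        got = conn.execute(generated_sql).fetchall()
    except sqlite3.Error:
        # Invalid SQL (e.g. a hallucinated column) counts as a failure.
        return False
    want = conn.execute(reference_sql).fetchall()
    return sorted(got) == sorted(want)


def eval_score(conn, pairs) -> float:
    """Fraction of (generated, reference) pairs whose results match."""
    return sum(execution_match(conn, g, r) for g, r in pairs) / len(pairs)


conn = setup_db()
pairs = [
    ("SELECT name FROM players WHERE team = 'East'",
     "SELECT name FROM players WHERE team = 'East'"),
    ("SELECT nickname FROM players",  # hallucinated column -> failure
     "SELECT name FROM players"),
]
print(eval_score(conn, pairs))  # 0.5
```

Comparing executed results rather than raw SQL strings is forgiving of harmless differences (aliasing, column order in WHERE clauses) while still catching hallucinated tables and columns.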

🔑 Key Points

  • πŸ› οΈ Systematic Improvement: Learn development steps, from evaluation, prompting, self-reflection, and fine-tuning, to improve your model’s reliability and accuracy.
  • 🧠 Memory Tuning: Enhance your model's performance by embedding facts to reduce hallucinations.
  • πŸ‘ Llama Models: Use the Llama 3-8b model to build an LLM application that converts text to SQL with a custom schema.

πŸ‘©β€πŸ« About the Instructors

  • πŸ‘©β€πŸ’Ό Sharon Zhou: Co-Founder and CEO of Lamini, Sharon brings her expertise in LLM development and fine-tuning.
  • πŸ‘¨β€πŸ’Ό Amit Sangani: Senior Director of Partner Engineering at Meta, Amit shares valuable insights on engineering reliable LLM applications.

🔗 To enroll in the course or for further information, visit 📚 deeplearning.ai.
