Workshop: Local LLMs with Ollama and Python

License: CC BY-SA 4.0

UC Davis DataLab
Spring 2024

Instructor: Dr. Carl Stahmer <[email protected]>

Maintainer: Dr. Carl Stahmer <[email protected]>

This workshop covers the basics of working with large language models (LLMs) on your local computer with Ollama as part of a research pipeline, including understanding and interrogating the models themselves as well as interacting with their generative capabilities. Specific topics include setting up your own open-source Small Language Model (SLM), fine-tuning LLMs, and the basics of prompt engineering.

By the end of this series, you will be able to:

  • Clean and structure textual data for analysis
  • Recognize and explain how these cleaning processes impact research findings
  • Explain key concepts and terminology in the application of Large Language Models (LLMs), including tokenization, word embedding, transformers, fine-tuning, and temperature
  • Successfully install Ollama locally and interact with a model via its API
  • Classify texts on the basis of their features as represented in the Ollama model
  • Produce models of word meanings from a corpus
  • Create a simple AI chatbot using Ollama and Python

Visit the Workshop Reader for an HTML-rendered version of all content presented in this workshop.
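As a taste of what the workshop covers, the objectives above can be sketched with a minimal Python client for a locally running Ollama server. This is a hedged sketch, not workshop material: it assumes Ollama's default local REST endpoint (`http://localhost:11434/api/chat`) and a pulled model named `llama3`; adjust both for your setup.

```python
import json
import urllib.request

# Default endpoint for a local Ollama server started with `ollama serve`.
OLLAMA_URL = "http://localhost:11434/api/chat"


def build_chat_request(model: str, user_message: str) -> dict:
    """Build the JSON payload for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,  # ask for one complete response rather than a stream
    }


def chat(model: str, user_message: str) -> str:
    """Send a single chat turn to a locally running Ollama server."""
    payload = json.dumps(build_chat_request(model, user_message)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]


# Requires a running server and a pulled model (e.g. `ollama pull llama3`):
# print(chat("llama3", "In one sentence, what is tokenization?"))
```

The workshop builds from this kind of single-turn request up to a simple chatbot that accumulates the `messages` list across turns.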
