dahimbis/Deep-Learning-Project-Finetuning-with-LoRA
Project – Text Classification Using LoRA

This project fine-tunes a transformer model on the AG News topic-classification dataset using Low-Rank Adaptation (LoRA). It demonstrates how large language models can be adapted efficiently by training only a small fraction of the model's parameters.
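The parameter savings come from LoRA's core trick: instead of updating a full weight matrix, it learns a low-rank product of two small matrices. A minimal sketch of the arithmetic (the 768×768 shape and rank 8 are illustrative assumptions, not the project's actual configuration):

```python
# LoRA replaces a full weight update dW (d_out x d_in) with a low-rank
# product B @ A, where B is (d_out x r), A is (r x d_in), and r << d.

def lora_trainable_params(d_out: int, d_in: int, r: int) -> int:
    """Trainable parameters for one LoRA-adapted weight matrix."""
    return d_out * r + r * d_in

# Example: a 768x768 projection (BERT-base sized) adapted at rank 8.
full = 768 * 768                            # 589,824 params in full fine-tuning
lora = lora_trainable_params(768, 768, 8)   # 12,288 trainable LoRA params
print(f"LoRA trains {lora / full:.2%} of the matrix's parameters")
```

At rank 8 this works out to roughly 2% of the original matrix, which is why LoRA fits on much smaller hardware than full fine-tuning.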

Key Features

  • Applied LoRA for parameter-efficient fine-tuning
  • Used Hugging Face Transformers and PEFT
  • Classified AG News into 4 categories with high accuracy
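With the PEFT library, applying LoRA amounts to wrapping a base model with a `LoraConfig`. A hedged sketch follows; the base checkpoint (`bert-base-uncased`), rank, and target module names are assumptions, since the README does not state the project's exact settings:

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Assumed base model; AG News has 4 target classes.
base_model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=4
)

# Illustrative LoRA settings: rank 8 adapters on the attention
# query/value projections, with scaling factor alpha=16.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # reports the small trainable fraction
```

The wrapped `model` can then be passed to a standard Hugging Face `Trainer`; only the adapter weights (and the classification head) receive gradient updates.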

Dataset

  • AG News (via Hugging Face Datasets)
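AG News is a 4-way topic-classification dataset. In the Hugging Face `datasets` version, integer labels map to category names in the following order, which a pure-Python helper can make explicit:

```python
# Label ids as defined by the Hugging Face "ag_news" dataset features.
AG_NEWS_LABELS = {0: "World", 1: "Sports", 2: "Business", 3: "Sci/Tech"}

def id_to_label(label_id: int) -> str:
    """Convert an AG News integer label to its category name."""
    return AG_NEWS_LABELS[label_id]

print(id_to_label(2))  # → Business
```

Keeping this mapping handy is useful when inspecting model predictions, since the model itself only outputs integer class ids.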

Requirements

  • Python
  • transformers
  • datasets
  • peft
  • accelerate
  • jupyter (to run the notebook)

How to Run

pip install transformers datasets peft accelerate
jupyter notebook LoRA_Text_Classification.ipynb
