This project fine-tunes a transformer model on the AG News dataset using Low-Rank Adaptation (LoRA). It demonstrates how to adapt large language models efficiently by training only a small fraction of the model's parameters.
- Applied LoRA for parameter-efficient fine-tuning (see the sketch after this list)
- Used Hugging Face Transformers and PEFT
- Classified AG News into its 4 categories (World, Sports, Business, Sci/Tech) with high accuracy
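A minimal sketch of the LoRA setup with PEFT, assuming a DistilBERT base model; the rank, scaling, and target modules below are illustrative defaults, not necessarily the notebook's exact settings:

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Assumption: DistilBERT as the base model; the notebook may use a different checkpoint.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=4
)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,         # also marks the classification head as trainable
    r=8,                                # rank of the low-rank update matrices (illustrative)
    lora_alpha=16,                      # scaling factor for the LoRA updates (illustrative)
    lora_dropout=0.1,
    target_modules=["q_lin", "v_lin"],  # DistilBERT's query/value projection layers
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()      # shows how small the trainable fraction is
```

With this configuration, only the low-rank adapter matrices and the classification head are updated during training; the base model weights stay frozen.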
- Dataset: AG News (via Hugging Face Datasets), with 120,000 training and 7,600 test examples
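Loading and tokenizing the data might look like the following sketch, assuming the same DistilBERT tokenizer as above; the 128-token cap is an illustrative choice:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("ag_news")       # splits: "train", "test"; fields: "text", "label"
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # Truncate long articles; padding is handled dynamically at batch time.
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True)
```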
Requirements:
- Python
- transformers
- datasets
- peft
- accelerate
Install the dependencies:

```bash
pip install transformers datasets peft accelerate
```

Then launch the notebook:

```bash
jupyter notebook LoRA_Text_Classification.ipynb
```
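Inside the notebook, the core training step looks roughly like the sketch below, reusing the `model` and `tokenized` objects from the earlier snippets; the hyperparameters are illustrative, not the notebook's exact values:

```python
from transformers import Trainer, TrainingArguments, DataCollatorWithPadding

# Illustrative hyperparameters; the notebook's actual settings may differ.
args = TrainingArguments(
    output_dir="lora-ag-news",
    per_device_train_batch_size=32,
    learning_rate=2e-4,                 # LoRA typically tolerates higher LRs than full fine-tuning
    num_train_epochs=1,
)

trainer = Trainer(
    model=model,                        # the PEFT-wrapped model from the LoRA sketch
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()
print(trainer.evaluate())               # reports eval loss; pass compute_metrics for accuracy
```

Because only the adapters and the classification head are trainable, `model.save_pretrained(...)` writes a checkpoint containing just the small adapter weights rather than the full base model.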