Comparing Text Compression Capabilities of LLMs with Traditional Compression Algorithms.

Note

llm_encode.py and finetune.py are adapted from FineZip

This project compresses text data using both traditional compression approaches and LLMs, recording compression time and resource usage for each.

Models

The traditional methods used are the standard Python libraries:

  • zlib
  • bz2
  • lzma

as well as the pyppmd Python library and the Tawa toolkit (implemented in C) [1].
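As a rough illustration (not code from this repo), the three standard-library codecs can be compared on the same input by measuring the ratio of compressed to original size:

```python
import bz2
import lzma
import zlib

def compression_ratio(data: bytes, compress) -> float:
    """Ratio of compressed size to original size (smaller is better)."""
    return len(compress(data)) / len(data)

# Highly repetitive sample text; real benchmarks would use the datasets/ files.
text = ("The quick brown fox jumps over the lazy dog. " * 200).encode("utf-8")

ratios = {
    name: compression_ratio(text, fn)
    for name, fn in [("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)]
}
for name, r in ratios.items():
    print(f"{name}: {r:.4f}")
```

On repetitive input like this all three ratios are well below 1; the relative ordering varies with the data, which is exactly what the benchmark measures.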

The LLM used in our experiments is LLaMA 3.2-1B. Any LLM from HuggingFace can be utilised by editing the model name in compress.py.
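For intuition, here is a toy sketch of the rank-based idea behind FineZip-style LLM compression: each true token is replaced by its rank in the model's prediction, and because a good model ranks the true token near 0, the resulting rank stream is highly skewed and compresses well with a standard entropy coder. The unigram "model" below is a hypothetical stand-in for a real LLM; none of these names come from the repo.

```python
from collections import Counter

def rank_encode(tokens, predict_ranking):
    """Replace each token by its rank in the model's predicted ordering."""
    return [predict_ranking(tokens[:i]).index(tok) for i, tok in enumerate(tokens)]

def rank_decode(ranks, predict_ranking):
    """Invert rank_encode by replaying the same predictions."""
    tokens = []
    for r in ranks:
        tokens.append(predict_ranking(tokens)[r])
    return tokens

# Hypothetical stand-in for an LLM: rank candidates by corpus frequency.
# A real LLM would condition the ranking on the context argument.
VOCAB = ["the", "cat", "sat", "on", "mat"]
FREQ = Counter({"the": 5, "cat": 2, "sat": 1, "on": 1, "mat": 1})

def predict_ranking(context):
    return sorted(VOCAB, key=lambda t: -FREQ[t])

tokens = ["the", "cat", "sat", "on", "the", "mat"]
ranks = rank_encode(tokens, predict_ranking)
print(ranks)  # mostly small ranks → compresses well
```

Decoding replays the identical predictions, so compression is lossless as long as encoder and decoder run the same model deterministically.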

Metrics

The following metrics are logged and stored in CSV files:

  • Compression ratio $= \frac{\text{compressed size}}{\text{original size}}$
  • Compression duration
  • Peak and average GPU usage
  • Peak GPU memory usage
  • Peak and average CPU usage
  • Peak memory usage
  • Percentage of true tokens ranked 0 (i.e., the LLM's top prediction)
  • Percentage of true tokens ranked between 0 and 15 among the LLM's predictions
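The two ranking metrics can be computed directly from the list of per-token ranks; `rank_metrics` below is a hypothetical helper for illustration, not the repo's actual code:

```python
def rank_metrics(ranks):
    """Return (% of true tokens ranked 0, % ranked between 0 and 15)."""
    n = len(ranks)
    pct_rank0 = 100 * sum(r == 0 for r in ranks) / n
    pct_rank0_15 = 100 * sum(0 <= r <= 15 for r in ranks) / n
    return pct_rank0, pct_rank0_15

# Example: 3 of 7 tokens were the model's top prediction, 6 of 7 in the top 16.
ranks = [0, 0, 3, 15, 0, 42, 1]
pct_rank0, pct_rank0_15 = rank_metrics(ranks)
print(f"rank 0: {pct_rank0:.2f}%  rank 0-15: {pct_rank0_15:.2f}%")
```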

Note

GPU and GPU memory usage are not recorded for Tawa executions.

Ranking metrics are recorded only for the LLM.

How to Use

Important

The code has been tested on Python 3.11.2.

To use, first clone this repository:

gh repo clone https://github.com/mehranhaddadi13/llm_compress

Then, change the current directory to the directory of the repo:

cd llm_compress

Next, make a directory named datasets and move your datasets to it:

mkdir datasets

Then, install the requirements:

python -m pip install -r requirements.txt

Finally, edit compress.py to add your HuggingFace login token, then run it:

python compress.py

Warning

Each fine-tuning and compression run via an LLM takes hours to days to finish, depending on the available resources and the size of the LLM and dataset.

[1] William J. Teahan. 2018. A Compression-Based Toolkit for Modelling and Processing Natural Language Text.
