2 changes: 1 addition & 1 deletion QEfficient/finetune/configs/training.py
```diff
@@ -28,7 +28,7 @@ class train_config:
     use_fp16: bool = True
     use_autocast: bool = True
     val_batch_size: int = 1
-    dataset = "samsum_dataset"
+    dataset = "alpaca_dataset"
```

Contributor:
Good that you have added this change in this gerrit.

```diff
     task_type = "generation" # "generation" / "seq_classification"
     peft_method: str = "lora"
     use_peft: bool = True # use parameter efficient fine tuning
```
4 changes: 2 additions & 2 deletions QEfficient/finetune/dataset/custom_dataset.py
```diff
@@ -23,7 +23,7 @@ def load_module_from_py_file(py_file: str) -> object:
     return module


-def get_custom_dataset(dataset_config, tokenizer, split: str):
+def get_custom_dataset(dataset_config, tokenizer, split: str, context_length=None):
     if ":" in dataset_config.file:
         module_path, func_name = dataset_config.file.split(":")
     else:
@@ -38,7 +38,7 @@ def get_custom_dataset(dataset_config, tokenizer, split: str):

     module = load_module_from_py_file(module_path.as_posix())
     try:
-        return getattr(module, func_name)(dataset_config, tokenizer, split)
+        return getattr(module, func_name)(dataset_config, tokenizer, split, context_length)
     except AttributeError as e:
         print(
             f"It seems like the given method name ({func_name}) is not present in the dataset .py file ({module_path.as_posix()})."
```
38 changes: 37 additions & 1 deletion docs/source/finetune.md
@@ -63,4 +63,40 @@ to visualise the data,

```python
tensorboard --logdir runs/<file> --bind_all
```

## Fine-Tuning on custom dataset

Contributor:
You should include details on how we use gradient accumulation, how the dataset is shuffled, and how activation checkpointing is enabled, in separate sections.
In the custom dataset section, add a point that if any user wants to use these, they should refer to the xyz section.

Contributor Author:
Added the section explaining how to use gradient accumulation and gradient checkpointing.
For a single SOC, we do not shuffle the data; by default, shuffling is False.
For DDP with sorting, shuffling is set to False.
For DDP without sorting, shuffling was set to True. It has been changed to False to keep it in sync with the single-SOC run and to support the 'resume finetuning from between' feature. This was not caught earlier because, by default, we run DDP with sorting only.
Hence, I didn't add any information about shuffling in the doc.


To run fine-tuning on any user-specific dataset, prepare the dataset using the following steps:

1) Create a directory named 'dataset' inside efficient-transformers, at the root of the repo.

Contributor:
Add the location "at root of the repo."

Contributor Author:
Done.

2) Inside this directory, create a file named 'custom_dataset.py'. This is different from the custom_dataset.py present at efficient-transformers/QEfficient/finetune/dataset.
3) Inside the newly created efficient-transformers/dataset/custom_dataset.py, define a function named 'get_custom_dataset'.
4) get_custom_dataset() should have the following 4 parameters: dataset_config, tokenizer, split, context_length. This function gets called twice through QEfficient/cloud/finetune.py with the name get_preprocessed_dataset.

Contributor:
QEfficient not Qefficient

Contributor Author:
Done.

5) Inside get_custom_dataset(), the dataset needs to be prepared for fine-tuning, so the user needs to apply the prompt and tokenize the dataset accordingly. Please refer to the template below on how to define get_custom_dataset().
6) For examples, please refer to the Python files present in efficient-transformers/QEfficient/finetune/dataset. In the case of the Samsum dataset, get_preprocessed_samsum() from efficient-transformers/QEfficient/finetune/dataset/samsum_dataset.py is called.

Contributor:
since default dataset is changed, we should mention alpaca here.

Contributor Author:
The steps I have mentioned match the format of samsum_dataset.py, not alpaca_dataset.py. Hence, I didn't change it.

Contributor:
Too verbose. Make these simple, pointed steps.

Contributor Author:
Done. Added the detailed points in confluence and made them crisp in the PR.

7) In efficient-transformers/QEfficient/finetune/configs/dataset_config.py, for the custom_dataset class, pass the appropriate values for train_split and test_split according to the dataset keys corresponding to the train and test data points. Alternatively, these values can be passed as command line arguments with the finetune command, for example "--train_split train" (see the illustrative sketch after these steps).

Contributor:
Add hyperlinks to the relative paths annotated in the steps below

Contributor Author:
Done.

8) While running fine-tuning, pass the argument "--dataset custom_dataset" to fine-tune on the custom dataset.
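
As a reference for steps 7 and 8, below is a minimal sketch of what the custom_dataset entry in dataset_config.py could look like. It is illustrative only; the exact field names and defaults in the repository may differ, and the file path shown follows the location described in steps 1 and 2.

```python
# Illustrative sketch only; the actual fields and defaults in
# QEfficient/finetune/configs/dataset_config.py may differ.
from dataclasses import dataclass


@dataclass
class custom_dataset:
    dataset: str = "custom_dataset"
    # Path to the user's dataset file; a "path.py:function_name" value selects a specific function.
    file: str = "dataset/custom_dataset.py"
    train_split: str = "train"
    test_split: str = "test"
```

The same values can also be overridden on the command line, e.g. by adding "--dataset custom_dataset --train_split train --test_split test" to the fine-tuning command.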

The template for get_custom_dataset(), to be defined inside efficient-transformers/dataset/custom_dataset.py, is as follows:

```python
def get_custom_dataset(dataset_config, tokenizer, split, context_length=None):

    # load the dataset
    # based on split, retrieve only the specific portion of the dataset (train or eval), either here or at the end

```

Contributor:
Add one more comment as "Define a prompt template"

Contributor Author (@quic-swatia, May 14, 2025):
It's already there. ( # define prompt )

```python

    def apply_prompt_template():
        # transform the passed datapoint by applying the prompt on it

```

Contributor:
Add some comment as "Convert the raw input into format as per the template defined earlier."

Contributor Author:
Added.

```python

    def tokenize():
        # tokenize the passed datapoint

```

Contributor:
Add some comment as "Implement tokenization and prepare inputs for the training."

Contributor Author:
Added.

```python

    # define the prompt
    # call apply_prompt_template() for each data point:
    # dataset = dataset.map(apply_prompt_template, <other args>)
    # call tokenize() for each data point:
    # dataset = dataset.map(tokenize, <other args>)

    return dataset
```
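
To make the template concrete, below is a minimal illustrative sketch of a get_custom_dataset() for an instruction-style dataset. It is not the repository's reference implementation: the dataset name, prompt template, and column names ("instruction", "output") are assumptions made for illustration, and it assumes the Hugging Face datasets library and a tokenizer with an eos_token.

```python
# Illustrative sketch only. The dataset name, prompt template, and column
# names ("instruction", "output") are placeholders, not repository defaults.
import datasets


def get_custom_dataset(dataset_config, tokenizer, split, context_length=None):
    # load the dataset and retrieve only the requested portion (train or eval)
    dataset = datasets.load_dataset("tatsu-lab/alpaca", split=split)

    # define the prompt
    prompt = "### Instruction:\n{instruction}\n\n### Response:\n"

    def apply_prompt_template(sample):
        # convert the raw fields into the prompt/response format defined above
        return {
            "prompt": prompt.format(instruction=sample["instruction"]),
            "response": sample["output"],
        }

    def tokenize(sample):
        # tokenize the datapoint and prepare inputs and labels for training
        prompt_ids = tokenizer.encode(sample["prompt"], add_special_tokens=False)
        response_ids = tokenizer.encode(
            sample["response"] + tokenizer.eos_token, add_special_tokens=False
        )
        input_ids = prompt_ids + response_ids
        # mask out the prompt tokens so the loss is computed on the response only
        labels = [-100] * len(prompt_ids) + response_ids
        if context_length is not None:
            input_ids = input_ids[:context_length]
            labels = labels[:context_length]
        return {
            "input_ids": input_ids,
            "attention_mask": [1] * len(input_ids),
            "labels": labels,
        }

    # call apply_prompt_template() for each data point
    dataset = dataset.map(apply_prompt_template, remove_columns=list(dataset.features))
    # call tokenize() for each data point
    dataset = dataset.map(tokenize, remove_columns=["prompt", "response"])
    return dataset
```

Note that this returns per-sample token lists; padding and batching are normally handled later by the training script's dataloader or collator, so check how the fine-tuning command constructs its batches.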