# x1ew/UD-LinguisticStudy

This repository investigates the effect of partially freezing layers while fine-tuning a distilled multilingual model (DistilBERT) on the Universal Dependencies dataset. The study focuses on Part-of-Speech (PoS) tagging and examines how layer freezing affects both model performance and training efficiency in this linguistic analysis setting.
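The core idea, freezing the lower layers of the encoder so only the upper layers and the tagging head receive gradient updates, can be sketched as follows. This is a minimal illustration using a toy PyTorch module as a stand-in for DistilBERT's embeddings and six transformer layers; the model class and the `freeze_bottom_layers` helper are hypothetical names, not code from this repository.

```python
import torch
import torch.nn as nn


class ToyEncoder(nn.Module):
    """Toy stand-in for DistilBERT: embeddings, 6 layers, and a PoS tagging head."""

    def __init__(self, hidden=32, n_layers=6, n_tags=17):  # 17 = UD universal PoS tags
        super().__init__()
        self.embeddings = nn.Embedding(100, hidden)
        self.layers = nn.ModuleList(nn.Linear(hidden, hidden) for _ in range(n_layers))
        self.classifier = nn.Linear(hidden, n_tags)  # predicts one tag per token

    def forward(self, ids):
        h = self.embeddings(ids)
        for layer in self.layers:
            h = torch.relu(layer(h))
        return self.classifier(h)


def freeze_bottom_layers(model, n_frozen):
    """Freeze the embeddings and the first n_frozen layers; the rest stay trainable."""
    for p in model.embeddings.parameters():
        p.requires_grad = False
    for layer in model.layers[:n_frozen]:
        for p in layer.parameters():
            p.requires_grad = False


model = ToyEncoder()
freeze_bottom_layers(model, n_frozen=4)
# Only layers 4-5 and the classifier now contribute to gradient updates,
# which reduces the number of trained parameters and speeds up fine-tuning.
trainable = [name for name, p in model.named_parameters() if p.requires_grad]
```

With a Hugging Face `DistilBertForTokenClassification` model, the same `requires_grad = False` pattern applies to the corresponding submodules; only the layer attribute names differ.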