TypeError: Accelerator.__init__() got an unexpected keyword argument 'dispatch_batches' #34714
same here, trying to figure out what happened
same... I can confirm that downgrading accelerate to 0.28.0 still works
cc @muellerzr, seems high priority given the number of users impacted
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
gentle ping @muellerzr @SunMarc
Do you still have the issue with the latest transformers, @andresmijares @PrettyBoyHelios @SiyuWu528? A minimal reproducer would be very helpful.
Yes, as of this morning the issue still persists. With accelerate 1.2.1 (PyPI): TypeError: Accelerator.__init__() got an unexpected keyword argument 'dispatch_batches'
Could you share a minimal reproducer, @rsanchezpizani?
Downgrading accelerate to 0.34.2 worked.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
@LysandreJik @muellerzr we're also getting this in production. Downgrading accelerate works around it.
Please share a traceback or a minimal reproducer, @j-adamczyk. We can't do much without enough information.
Relevant stacktrace part:
Relevant training code:
This is part of an automated pipeline. It last worked on 05.01.2025 and stopped working with the error above on 26.01.2025. Right now I'm downgrading accelerate version by version to pinpoint this. Linux OS; relevant software versions:
The accelerate version was chosen by Poetry from:
Nice, I will try to find the issue!
You need to update your version of transformers to the latest one! When we switched to accelerate v1, we removed a few deprecated arguments from the Accelerator class. If you don't want to upgrade transformers, you can also downgrade accelerate.
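For context, here is a minimal sketch of the API change (hedged; it assumes accelerate >= 0.28 for the new-style call, and the argument values are illustrative):

# Old style, removed in accelerate v1.0 (what older transformers releases,
# such as the 4.37.2 in this issue, still call internally):
#     accelerator = Accelerator(dispatch_batches=None, split_batches=False)
# New style, which recent transformers versions use instead:
from accelerate import Accelerator
from accelerate.utils import DataLoaderConfiguration

dataloader_config = DataLoaderConfiguration(dispatch_batches=None, split_batches=False)
accelerator = Accelerator(dataloader_config=dataloader_config)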
Why isn't this covered by package requirements then? I mean, if a breaking change is introduced, you can limit the upper allowed version of accelerate in transformers' requirements.
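For illustration, a hedged sketch of what such a cap could look like in a setup.py-style dependency list (the exact bounds are assumptions, not transformers' actual pin policy):

# Illustrative only: an upper bound that would have prevented this breakage
install_requires = [
    "accelerate>=0.21.0,<1.0",  # cap below the accelerate release that removed dispatch_batches
]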
accelerate doesn't depend on transformers
I found the solution:
blame the developers who don't specify the correct dependencies
Thank you, everyone. Please take a look at my repo..
if is_accelerate_available():
    from accelerate import Accelerator, skip_first_batches
    from accelerate import __version__ as accelerate_version
    from accelerate.utils import DistributedDataParallelKwargs, GradientAccumulationPlugin

    if version.parse(accelerate_version) > version.parse("0.20.3"):
        from accelerate.utils import (
            load_fsdp_model,
            load_fsdp_optimizer,
            save_fsdp_model,
            save_fsdp_optimizer,
        )

But transformers depends on accelerate (at least to some extent). Covering this in the package requirements may be a better choice.
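Since transformers only version-gates some imports rather than the Accelerator call itself, a pipeline could fail fast at startup with a probe like this (a hedged sketch; the check and the message are mine, not part of either library):

import inspect

from accelerate import Accelerator

# accelerate v1 removed dispatch_batches from Accelerator.__init__;
# older transformers Trainers still pass it and crash at train() time.
if "dispatch_batches" not in inspect.signature(Accelerator.__init__).parameters:
    print(
        "Installed accelerate no longer accepts dispatch_batches: "
        "upgrade transformers, or pin accelerate < 1.0 if you must stay on an old transformers."
    )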
System Info
transformers==4.37.2, Python 3
Who can help?
No response
Information
Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
Reproduction
# Hugging Face Trainer used to train the model
# (model, model_ckpt, device, train_dataset, test_dataset and compute_metrics
# are defined earlier in the script)
from transformers import Trainer, TrainingArguments

model.to(device)
model_name = f"{model_ckpt}-finetuned"
batch_size = 2
training_args = TrainingArguments(
    output_dir=model_name,
    save_safetensors=False,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    evaluation_strategy='epoch',
    logging_strategy='epoch',
    learning_rate=1e-5,
    num_train_epochs=10,
    weight_decay=0.01,
    gradient_accumulation_steps=2,
    max_grad_norm=1.0,
    optim='adamw_torch',
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    compute_metrics=compute_metrics,
)
trainer.train()
Expected behavior
It ran perfectly fine before Nov 12th midnight, and stopped working on Nov 13th.