Fine-tuning Qwen2.5-VL on a multimodal dataset fails with KeyError: 'sequence_parallel_attention'. The error occurs at model-loading time: the Qwen2.5-VL model in the Transformers library does not support the sequence_parallel_attention attention implementation #61

@hybdxxw

Description

[rank3]: Traceback (most recent call last):
[rank3]: File "/root/autodl-tmp/360-LLaMA-Factory/src/llamafactory/launcher.py", line 23, in <module>
[rank3]: launch()
[rank3]: File "/root/autodl-tmp/360-LLaMA-Factory/src/llamafactory/launcher.py", line 19, in launch
[rank3]: run_exp()
[rank3]: File "/root/autodl-tmp/360-LLaMA-Factory/src/llamafactory/train/tuner.py", line 50, in run_exp
[rank3]: run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
[rank3]: File "/root/autodl-tmp/360-LLaMA-Factory/src/llamafactory/train/sft/workflow.py", line 48, in run_sft
[rank3]: model = load_model(tokenizer, model_args, finetuning_args, training_args.do_train, full_determinism=training_args.full_determinism)
[rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank3]: File "/root/autodl-tmp/360-LLaMA-Factory/src/llamafactory/model/loader.py", line 174, in load_model
[rank3]: model = load_class.from_pretrained(**init_kwargs)
[rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank3]: File "/root/miniconda3/lib/python3.12/site-packages/transformers/models/auto/auto_factory.py", line 571, in from_pretrained
[rank3]: return model_class.from_pretrained(
[rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank3]: File "/root/miniconda3/lib/python3.12/site-packages/transformers/modeling_utils.py", line 309, in _wrapper
[rank3]: return func(*args, **kwargs)
[rank3]: ^^^^^^^^^^^^^^^^^^^^^
[rank3]: File "/root/miniconda3/lib/python3.12/site-packages/transformers/modeling_utils.py", line 4508, in from_pretrained
[rank3]: model = cls(config, *model_args, **model_kwargs)
[rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank3]: File "/root/miniconda3/lib/python3.12/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 529, in wrapper
[rank3]: f(module, *args, **kwargs)
[rank3]: File "/root/miniconda3/lib/python3.12/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1799, in __init__
[rank3]: self.model = Qwen2_5_VLModel(config)
[rank3]: ^^^^^^^^^^^^^^^^^^^^^^^
[rank3]: File "/root/miniconda3/lib/python3.12/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 529, in wrapper
[rank3]: f(module, *args, **kwargs)
[rank3]: File "/root/miniconda3/lib/python3.12/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1392, in __init__
[rank3]: self.visual = Qwen2_5_VisionTransformerPretrainedModel._from_config(config.vision_config)
[rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank3]: File "/root/miniconda3/lib/python3.12/site-packages/transformers/modeling_utils.py", line 309, in _wrapper
[rank3]: return func(*args, **kwargs)
[rank3]: ^^^^^^^^^^^^^^^^^^^^^
[rank3]: File "/root/miniconda3/lib/python3.12/site-packages/transformers/modeling_utils.py", line 2077, in _from_config
[rank3]: model = cls(config, **kwargs)
[rank3]: ^^^^^^^^^^^^^^^^^^^^^
[rank3]: File "/root/miniconda3/lib/python3.12/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 529, in wrapper
[rank3]: f(module, *args, **kwargs)
[rank3]: File "/root/miniconda3/lib/python3.12/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 400, in __init__
[rank3]: [Qwen2_5_VLVisionBlock(config, config._attn_implementation) for _ in range(config.depth)]
[rank3]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank3]: File "/root/miniconda3/lib/python3.12/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 529, in wrapper
[rank3]: f(module, *args, **kwargs)
[rank3]: File "/root/miniconda3/lib/python3.12/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 329, in __init__
[rank3]: self.attn = QWEN2_5_VL_VISION_ATTENTION_CLASSES[attn_implementation](
[rank3]: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^
[rank3]: KeyError: 'sequence_parallel_attention'
[rank2]: (traceback identical to rank3 above, ending in the same KeyError: 'sequence_parallel_attention')
The above is the error. I'd like to ask: is this failing because the model itself does not support this attention implementation, or is it caused by some other mistake on my side?
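
The traceback itself points at the mechanism: the vision tower builds each block by indexing a plain dict, QWEN2_5_VL_VISION_ATTENTION_CLASSES, with config._attn_implementation (modeling_qwen2_5_vl.py line 329 above), and that dict only knows the stock Transformers implementations. 360-LLaMA-Factory registers its sequence-parallel attention under a key that dict has never heard of. A minimal sketch of the lookup that fails, inferred from the traceback (the stand-in strings below take the place of the real vision attention classes, whose exact names are an assumption):

```python
# Minimal sketch of the failure mechanism, inferred from the traceback;
# strings stand in for the real vision attention classes.
QWEN2_5_VL_VISION_ATTENTION_CLASSES = {
    "eager": "Qwen2_5_VLVisionAttention",                     # assumed class name
    "sdpa": "Qwen2_5_VLVisionSdpaAttention",                  # assumed class name
    "flash_attention_2": "Qwen2_5_VLVisionFlashAttention2",   # assumed class name
}

# 360-LLaMA-Factory sets this key so the *text* layers pick up its
# sequence-parallel attention; the vision tower reuses the same value.
attn_implementation = "sequence_parallel_attention"

try:
    attn_cls = QWEN2_5_VL_VISION_ATTENTION_CLASSES[attn_implementation]
except KeyError as err:
    print(f"KeyError: {err}")  # -> KeyError: 'sequence_parallel_attention'
```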

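So the cause is the one the title names: the Transformers Qwen2.5-VL vision tower simply has no entry for that key, independent of the dataset. One hedged workaround sketch (an assumption, not a verified fix): before the model is constructed, alias the unknown key onto an entry the dict does know, so the vision tower falls back to SDPA while the text layers keep the sequence-parallel patch.

```python
# Hedged workaround sketch: alias the missing key onto the existing "sdpa"
# entry before load_model()/from_pretrained() runs. Assumes the module-level
# dict seen in the traceback is still exposed under this name in your
# transformers version.
from transformers.models.qwen2_5_vl import modeling_qwen2_5_vl

classes = modeling_qwen2_5_vl.QWEN2_5_VL_VISION_ATTENTION_CLASSES
classes["sequence_parallel_attention"] = classes["sdpa"]
```

This only unblocks model construction; the vision attention would then run ordinary SDPA, which may be acceptable if 360-LLaMA-Factory's sequence-parallel patch is only meant for the text layers (an assumption worth verifying against the repo's patching code).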