Hi.
Is it possible to use Scalene with DeepSpeed-optimized Python scripts? If so, could you let me know the procedure to follow? I tried wrapping my DeepSpeed command with Scalene, but it did not work. The error screenshot is attached.
Command used:

```shell
scalene --gpu deepspeed --num_nodes 1 --num_gpus 1 main.py \
  --data_path databricks/databricks-dolly-15k \
  --data_split 2,4,4 \
  --model_name_or_path meta-llama/Llama-2-7b-chat-hf \
  --per_device_train_batch_size 32 \
  --per_device_eval_batch_size 32 \
  --max_seq_len 512 \
  --learning_rate 9.65e-6 \
  --weight_decay 0. \
  --num_train_epochs 2 \
  --gradient_accumulation_steps 1 \
  --lr_scheduler_type cosine \
  --num_warmup_steps 0 \
  --seed 1234 \
  --gradient_checkpointing \
  --zero_stage 2 \
  --deepspeed \
  --offload \
  --lora_dim 128 \
  --lora_module_name "layers." \
  --output_dir ./output_LLaMa2_scalene
```
Output: (see the attached error screenshot)