Training failed. The lip shape of a character cannot change according to changes in speech #145
Comments
Are Percep, Fake, and Real always 0.0?
The situation you mentioned did not occur for me: Step 191706 | L1: 0.04096 | Vgg: 0.1543 | SW: 0.03 | Sync: 3.293 | DW: 0.025 | Percep: 1.905 | Fake: 0.188, Real: 0.2206 | Load: 0.0115, Train: 1.85. The generated lip shape currently matches the lip shape in the reference frame, but differs from the actual lip shape. @see2run
Did you train SyncNet and Wav2Lip with the unmodified scripts, or did you make any changes?
I trained using train_syncnet_sam.py and hq_wav2lip_sam_train.py without making any changes to the code.
@Liming-belief While training SyncNet, did you face an issue where the loss gets stuck? I am stuck there and need help.
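For context on what a "stuck" SyncNet loss usually means: the Wav2Lip-family sync objective is a binary cross-entropy over the cosine similarity between the audio and video embeddings. A minimal sketch of that loss (the function name, shapes, and the clamp are my own illustration, not code from this repo's scripts):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

bce = nn.BCELoss()

def sync_loss(audio_emb: torch.Tensor, video_emb: torch.Tensor,
              labels: torch.Tensor) -> torch.Tensor:
    """BCE over cosine similarity: labels are 1.0 for in-sync
    audio/video pairs and 0.0 for off-sync pairs."""
    # cosine_similarity L2-normalises both embeddings internally
    d = F.cosine_similarity(audio_emb, video_emb)
    # clamp to [0, 1] so BCELoss can treat the similarity as a probability
    return bce(d.clamp(0.0, 1.0).unsqueeze(1), labels)
```

If this loss sits near ln 2 ≈ 0.693 for a long time, the model is outputting ~0.5 similarity for every pair, i.e. it has learned nothing to separate in-sync from off-sync samples; that usually points at the data (audio/video offset, wrong face crop) or the learning rate rather than the loss code.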
I have a question: at the beginning, were the values for Sync, DW, Percep, Fake, and Real all 0.0, as in Step 683 | L1: 0.09317 | Vgg: 0.3026 | SW: 0.03 | Sync: 0.0 | DW: 0.0 | Percep: 0.0 | Fake: 0.0, Real: 0.0 | Load: 0.008834, Train: 1.229, or did they have nonzero values from the start?
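One plausible reason for zeros early in a log like that: Wav2Lip-style trainers often gate the sync and perceptual/GAN terms behind a warm-up or a weight flag, so those terms contribute (and print) 0.0 until they are switched on. A hedged sketch of that gating; the function, weights, and warm-up threshold are illustrative assumptions, not this repo's actual configuration:

```python
def combined_loss(l1, vgg, sync, percep, step,
                  sync_wt=0.03, percep_wt=0.07, warmup_steps=20000):
    """Total generator loss with the sync and perceptual terms
    disabled until warm-up ends (hypothetical schedule)."""
    if step < warmup_steps:
        # early in training, only the reconstruction terms are active,
        # so a log line can legitimately show Sync/Percep/Fake/Real = 0.0
        sync_wt, percep_wt = 0.0, 0.0
    rec = (1.0 - sync_wt - percep_wt) * (l1 + vgg)
    return rec + sync_wt * sync + percep_wt * percep
```

If the zeros persist long after the warm-up step, that instead suggests the corresponding sub-network was never enabled or its checkpoint was not loaded.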
Hello, I trained SyncNet and Wav2Lip until the loss dropped to between 0.25 and 0.3, but at actual inference I found that the character's lips do not move. What could be the reason for this?