Conversation
Is that up to date? What's keeping us from merging?

Alright it's finally ready to review. Try running it before approving, just to make sure nothing is broken...
omnigan/trainer.py (Outdated)

```python
if "m2" in self.opts.tasks:
    prediction = self.G.decoders[update_task](
        torch.cat(
            (self.z, self.label_1[0, :, :, :].unsqueeze(0)),
```
omnigan/trainer.py (Outdated)

```python
if update_task == "m2":
    prediction = self.G.decoders["m"](
        torch.cat(
            (self.z, self.label_2[0, :, :, :].unsqueeze(0)),
```
self.label_0[0, :, :, :].unsqueeze(0) should be the same as self.label_0[:1, ...]
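A quick illustration of that point, using numpy in place of torch (indexing semantics are the same; the array here is made up): indexing the batch dimension with `0` drops it, so it has to be unsqueezed back, while slicing with `:1` keeps it in one step.

```python
import numpy as np

x = np.arange(2 * 3 * 4 * 4).reshape(2, 3, 4, 4)

# Index with 0: the batch dim is dropped and must be re-added
# (np.newaxis plays the role of torch's .unsqueeze(0)).
a = x[0, :, :, :][np.newaxis]

# Slice with :1: the batch dim is kept directly.
b = x[:1, ...]

assert a.shape == b.shape == (1, 3, 4, 4)
assert np.array_equal(a, b)
```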
omnigan/trainer.py (Outdated)

```python
task_saves.append(x * (1.0 - target.repeat(1, 3, 1, 1)))

elif update_task == "d":
if update_task == "d":

step_loss += update_loss

elif update_task == "m2":
```
If I read this right, the only things that change between the if and elif are self.label_1[:, 0, 0, 0].squeeze() vs self.label_2[:, 0, 0, 0].squeeze() and self.logger.losses.generator.task_loss. I bet you can refactor this whole block in a much shorter way by putting those in variables and running common code that depends on them. Cleaner, shorter, less error-prone (you don't need to change 2 pieces of code if you change the logic), and more versatile (what about more flood levels?).
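A minimal sketch of that refactor (the function and dict names are assumptions, not the PR's actual code, and numpy stands in for torch): map each mask task to its label tensor once, then run the shared branch body on the lookup result. Adding a flood level becomes one dict entry instead of a new elif branch.

```python
import numpy as np

def task_label(update_task, labels):
    """Pick the label tensor for a mask task and reduce it to per-sample scalars.

    labels: dict mapping task name -> (B, C, H, W) array, e.g.
    {"m": self.label_1, "m2": self.label_2} in the trainer. The shared
    loss/logging code then runs once on the returned labels.
    """
    return labels[update_task][:, 0, 0, 0].squeeze()

# Dummy stand-ins for self.label_1 / self.label_2:
labels = {"m": np.zeros((2, 3, 4, 4)), "m2": np.ones((2, 3, 4, 4))}
per_sample = task_label("m2", labels)  # per-sample labels for the "m2" task
```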
omnigan/trainer.py (Outdated)

```python
    self.D["m"]["Advent"],
)
if "m2" in self.opts.tasks:
    # --------ADVENT LOSS---------------
```
Add CoBlock to your vscode extension and use cmd+shift+k to make nice comment blocks instead of those atrocious imbalanced hybrid headers
How about this PR, is it ready @51N84D?
Current approach just adds bit-conditioning to the latent vector, and determines which (simulated) ground truth mask to compute losses with.
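For context, a minimal sketch of what that bit-conditioning could look like (names and shapes are assumptions, and numpy stands in for torch): a constant 0/1 plane encoding the flood level is concatenated to the latent before decoding, and the same bit selects which simulated ground-truth mask the losses are computed against.

```python
import numpy as np

def condition_latent(z, bit):
    """Append a constant bit-plane to a latent z of shape (B, C, H, W).

    bit: 0 or 1, identifying the flood level; the decoder sees it as an
    extra input channel, and the same bit picks the matching GT mask.
    """
    plane = np.full((z.shape[0], 1, z.shape[2], z.shape[3]), float(bit))
    return np.concatenate([z, plane], axis=1)

z = np.random.rand(1, 8, 16, 16)   # dummy latent
z1 = condition_latent(z, 1)        # condition on flood level 1
assert z1.shape == (1, 9, 16, 16)  # one extra channel carrying the bit
```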