Feature/mask NaNs in training loss function #56
base: develop
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@           Coverage Diff            @@
##           develop      #56   +/-   ##
========================================
  Coverage    99.85%   99.85%
========================================
  Files           23       23
  Lines         1350     1374      +24
========================================
+ Hits          1348     1372      +24
  Misses           2        2

☔ View full report in Codecov by Sentry.
This functionality seems to be related to ecmwf/anemoi-training#79
I see some similarities between the output masking and the post-processors, but one part doesn't fit: the post-processors are applied only at the end of the rollout, whereas the masking is called both at the end and between all the rollout steps (to roll out the boundary forcing), as in the sketch below. So I'm not sure whether it's better to implement it as a special post-processor or to leave it in anemoi-training. I would say we can do the loss masking here, similar to the imputer, but I think the output masking should remain in anemoi-training.
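A minimal sketch of that distinction, assuming a PyTorch-style autoregressive model; the names `rollout` and `model` and the use of `torch.where` are illustrative assumptions, not the PR's actual code:

```python
import torch

def rollout(model, x, mask, n_steps):
    """Autoregressive rollout with per-step output masking.

    Unlike a post-processor, which would run once on the final output,
    the mask is re-applied after *every* step so the boundary forcing
    is carried through the whole rollout.
    """
    outputs = []
    for _ in range(n_steps):
        y = model(x)
        # Overwrite masked (boundary) points with the forcing values
        # before feeding the state back into the model.
        y = torch.where(mask, x, y)
        outputs.append(y)
        x = y
    return outputs
```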
Variables with missing values that are imputed by the imputer should not be considered in the loss.
The NaN masks are prepared in the imputer, and the remapper contains a new function to remap them.
This goes together with PR #72 from anemoi-training.
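A minimal sketch of the loss masking described above, assuming a plain MSE loss; the helper name `masked_mse` and the normalisation choice are assumptions for illustration, not the PR's API:

```python
import torch

def masked_mse(pred: torch.Tensor, target: torch.Tensor,
               nan_mask: torch.Tensor) -> torch.Tensor:
    """MSE computed only over points that were not imputed.

    `nan_mask` is True where the original data were NaN and were
    filled in by the imputer; those points carry no information,
    so they are excluded from the loss.
    """
    valid = ~nan_mask
    sq_err = (pred - target) ** 2 * valid
    # Normalise by the number of valid points so samples with many
    # NaNs do not receive a systematically smaller loss.
    return sq_err.sum() / valid.sum().clamp(min=1)
```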