Version 0.11.0
We are happy to announce the new skorch 0.11 release:
Two basic but very useful features have been added to our collection of callbacks. First, by setting load_best=True on the Checkpoint callback, the snapshot of the network with the best score will be loaded automatically when training ends. Second, we added an InputShapeSetter callback that automatically adjusts your input layer to match the size of your input data (useful, e.g., when that size is not known beforehand).
When it comes to integrations, the new MlflowLogger callback lets you automatically log results to MLflow. Thanks to a contributor, some regressions in net.history have been fixed, and it even runs faster now.
On top of that, skorch now offers a new module, skorch.probabilistic. It contains new classes for working with Gaussian Processes using the familiar skorch API, made possible by the fantastic GPyTorch library. If you want to get started with Gaussian Processes in skorch, check out the documentation and this notebook. Since we're still learning, it's possible that we will change the API in the future, so please be aware of that.
Moreover, we introduced some changes to make skorch more customizable. First of all, we changed the signature of some methods so that they no longer assume the dataset to always return exactly 2 values. This way, it's easier to work with custom datasets that return, e.g., 3 values. Normal users should not notice any difference, but if you often create custom nets, take a look at the migration guide.
And finally, we made a change to how custom modules, criteria, and optimizers are handled. They are now "first class citizens" in skorch land, which means: if you add a second module to your custom net, it is treated exactly the same as the normal module. E.g., skorch takes care of moving it to CUDA if needed and of switching it between train and eval mode. This way, customizing your network's architecture with skorch is easier than ever. Check the docs for more details.
Since these are some big changes, it's possible that you encounter issues. If that's the case, please check our issue page or create a new one.
As always, this release was made possible by outside contributors. Many thanks to:
- Autumnii
- Cebtenzzre
- Charles Cabergs
- Immanuel Bayer
- Jake Gardner
- Matthias Pfenninger
- Prabhat Kumar Sahu
Find below the list of all changes:
Added
- Added load_best attribute to Checkpoint callback to automatically load the state of the best result at the end of training
- Added a get_all_learnable_params method to retrieve the named parameters of all PyTorch modules defined on the net, including of criteria if applicable
- Added MlflowLogger callback for logging to Mlflow (#769)
- Added InputShapeSetter callback for automatically setting the input dimension of the PyTorch module
- Added a new module to support Gaussian Processes through GPyTorch. To learn more about it, read the GP documentation or take a look at the GP notebook. This feature is experimental, i.e. the API could be changed in the future in a backwards incompatible way (#782)
Changed
- Changed the signature of validation_step, train_step_single, train_step, evaluation_step, on_batch_begin, and on_batch_end such that instead of receiving X and y, they receive the whole batch; this makes it easier to deal with datasets that don't strictly return an (X, y) tuple, which is true for quite a few PyTorch datasets; please refer to the migration guide if you encounter problems (#699)
- Checking of arguments to NeuralNet now happens during .initialize(), not during __init__, to avoid raising false positives for yet unknown module or optimizer attributes
- Modules, criteria, and optimizers that are added to a net by the user are now first class: skorch takes care of setting train/eval mode, moving them to the indicated device, and updating all learnable parameters during training (check the docs for more details, #751)
- CVSplit is renamed to ValidSplit to avoid confusion (#752)