Releases · mathpluscode/ImgX-DiffSeg
Release v0.3.2
Added
- Added dropout in the U-Net; this may increase memory consumption.
- Added support for anisotropic volumes in data augmentation.
- Added data augmentations, including random gamma adjustment, random flipping, and random shearing (see the sketch after this list).
- Added registration-related metrics and losses.
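A minimal sketch of one of the new augmentations, a random left-right flip applied consistently to an image/label pair; the shapes and helper name are assumptions for illustration, not the repository's implementation:

```python
import jax
import jax.numpy as jnp


def random_flip(key: jax.Array, image: jnp.ndarray, label: jnp.ndarray, axis: int = 0):
    """Flip image and label along `axis` with probability 0.5 (illustrative helper)."""
    do_flip = jax.random.bernoulli(key, p=0.5)
    flip = lambda x: jnp.flip(x, axis=axis)
    # Apply the same flip decision to image and label so they stay aligned.
    image = jnp.where(do_flip, flip(image), image)
    label = jnp.where(do_flip, flip(label), label)
    return image, label


key = jax.random.PRNGKey(0)
image = jnp.zeros((64, 64, 32))  # anisotropic volume: coarser resolution along the last axis
label = jnp.zeros((64, 64, 32), dtype=jnp.int32)
image, label = random_flip(key, image, label)
```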
Changed
- ⚠️ Moved the package `imgx_datasets` into `imgx/datasets` as a submodule.
- Moved the dataset iterator out of `Experiment` to facilitate using non-TFDS datasets.
- Aligned the Transformer with the Haiku implementation.
- Used `jax.random.fold_in` for random key splitting to avoid passing keys between functions (see the sketch after this list).
- Used `optax.softmax_cross_entropy` to replace the custom implementation.
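A minimal sketch of the last two changes, deriving per-step keys with `jax.random.fold_in` instead of threading split keys through functions, and computing cross-entropy with `optax.softmax_cross_entropy`; the shapes and function names are illustrative assumptions, not the repository's code:

```python
import jax
import jax.numpy as jnp
import optax


def loss_fn(logits: jnp.ndarray, labels: jnp.ndarray) -> jnp.ndarray:
    """Mean cross-entropy over one-hot labels; logits have shape (..., num_classes)."""
    return optax.softmax_cross_entropy(logits=logits, labels=labels).mean()


def train_step(base_key: jax.Array, step: int) -> jnp.ndarray:
    # Derive a step-specific key deterministically from a fixed base key,
    # so callers never need to split keys and pass them around.
    step_key = jax.random.fold_in(base_key, step)
    # Hypothetical shapes: batch of 2, two classes.
    logits = jax.random.normal(step_key, (2, 2))
    labels = jax.nn.one_hot(jnp.array([0, 1]), num_classes=2)
    return loss_fn(logits, labels)


print(train_step(jax.random.PRNGKey(0), step=3))
```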
Release v0.3.1
Added
- Added example notebooks for inference on a single image without TFDS.
- Added integration tests for training, validation, and testing.
Changed
- ⚠️ Upgraded JAX to 0.4.20.
- ⚠️ Removed the Haiku-specific modification to convolutional layers. This may impact model performance.
- Refactored the config (see the sketch after this list):
  - Added `patch_size` and `scale_factor` to the data config.
  - Moved the loss config from the main config to the task config.
- Refactored code, including defining the `imgx/task` submodule.
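As a rough illustration of the reorganised config, with `patch_size` and `scale_factor` under the data config and the loss settings under the task config; all field names other than `patch_size` and `scale_factor` are assumptions, not the repository's actual schema:

```python
# Hypothetical config layout (illustrative only).
config = {
    "data": {
        "patch_size": (128, 128, 64),     # patch sampled from each volume
        "scale_factor": (1.0, 1.0, 2.0),  # per-axis resampling factor
    },
    "task": {
        "name": "segmentation",
        "loss": {  # loss config now lives under the task config
            "cross_entropy": 1.0,
            "dice": 1.0,
        },
    },
}
```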
A Recycling Training Strategy for Medical Image Segmentation with Diffusion Denoising Models
This tag corresponds to the code used for the paper A Recycling Training Strategy for Medical Image Segmentation with Diffusion Denoising Models (August 2023).
Deep Generative Models workshop @ MICCAI 2023
This tag corresponds to the code used for the paper Importance of Aligning Training Strategy with Evaluation for Diffusion Models in 3D Multiclass Segmentation accepted at the Deep Generative Models workshop @ MICCAI 2023.