COMPUTING - Connecting Membrane Pores and Production Parameters via Machine Learning

This repository contains the code for extracting and analyzing membrane pores using deep learning. The pipeline includes image preprocessing, pore segmentation with nnU-Net, and quantitative property extraction. It enables linking membrane morphology to production parameters for reproducible and automated analysis.

1. Installation

# Download Repository
git clone https://github.com/MIC-DKFZ/computing_membrane_property_extraction.git
cd computing_membrane_property_extraction
# (Optional) - Create a new conda environment
conda create --name computing python=3.10
conda activate computing
# Install all required packages
pip install -r requirements.txt

Update on Installation

When using the ensemble, you have to manually add a custom nnU-Net trainer. Complete the installation above, then install nnU-Net into the repository folder as follows.

# Download and install nnUNet manually
git clone https://github.com/MIC-DKFZ/nnUNet.git
cd nnUNet
pip install -e .
# Copy the custom trainer file into nnUNet's data-augmentation variants folder
cp ../utils/nnUNetTrainerDA5BN.py nnunetv2/training/nnUNetTrainer/variants/data_augmentation/nnUNetTrainerDA5BN.py

Model Weights

Model weights can be found here: https://zenodo.org/records/17541943

  • (Optional) Download and unzip them into the nnUNetv2_trained_models folder.
  • Otherwise, they will be downloaded automatically the first time predict.py is called.

2. Membrane Property Extraction

How to run

Follow steps 2.1 - 2.3 to run the complete pipeline step by step. Alternatively, adapt the paths in the runner.sh script and execute it (Ubuntu only); the scripts will then run automatically. On Windows, run the steps individually as shown below.

# For Ubuntu (the script does not run on Windows)
bash runner.sh 
# For Windows do it step by step
python preprocessing.py -i="C:\Users\l727r\Desktop\Computing\PS-P4VP" -o="C:\Users\l727r\Desktop\Computing\PS-P4VP_props"
python predict.py -i="C:\Users\l727r\Desktop\Computing\PS-P4VP_props"
python propertie_extraction.py -i="C:\Users\l727r\Desktop\Computing\PS-P4VP_props"
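
On Windows, the three steps can also be chained from a small Python script instead of runner.sh. A minimal sketch (the paths and the build_pipeline helper are illustrative, not part of the repository):

```python
import subprocess
import sys

def build_pipeline(raw_dir: str, props_dir: str) -> list[list[str]]:
    """Build the three pipeline commands: preprocess, predict, extract."""
    return [
        [sys.executable, "preprocessing.py", f"-i={raw_dir}", f"-o={props_dir}"],
        [sys.executable, "predict.py", f"-i={props_dir}"],
        [sys.executable, "propertie_extraction.py", f"-i={props_dir}"],
    ]

# Example usage (adapt the paths to your data):
#   for cmd in build_pipeline(r"C:\data\PS-P4VP", r"C:\data\PS-P4VP_props"):
#       subprocess.run(cmd, check=True)  # stop if any step fails
```

Running the commands with check=True aborts the pipeline as soon as one stage fails, mirroring what runner.sh does on Ubuntu.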

2.1 Preprocessing

The preprocessing can be executed as shown below. During preprocessing, the black bars at the bottom of the images are cut off, and the files are renamed and converted to .png images.

  • Why Renaming? Many of the files contain special characters (e.g. "µ", "^", "+", "%"), which can cause problems when handling file paths, so these characters are removed. Additionally, a '_0000' postfix is added, which is required by nnU-Net. The mapping from the original to the new file names is saved in 'name_mapping.csv'.
  • Why Conversion from .tif to .png? nnU-Net was trained on .png images and therefore requires the data to be in .png format.
python preprocessing.py -i=root_raw -o=root_props
  • root_raw: Path to the folder which contains the images from which the parameters should be extracted. All .tif files in the directory + all subdirectories will be used (files named "!black.tif" will be excluded).

  • root_props: Path to the folder in which all the processed data and later all the extracted membrane properties will be saved in.

  • Note: set --cutoff=0 if your images do not have a black bar.
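
The renaming step can be sketched as follows; sanitize_name is a hypothetical helper for illustration only and may differ from the repository's actual implementation:

```python
import re
from pathlib import Path

def sanitize_name(tif_name: str) -> str:
    """Strip special characters and add the '_0000' postfix nnU-Net expects."""
    stem = Path(tif_name).stem
    clean = re.sub(r"[^A-Za-z0-9_-]", "", stem)  # drop chars like µ ^ + % and spaces
    return f"{clean}_0000.png"
```

The mapping from original to sanitized names is what 'name_mapping.csv' records, so results can be traced back to the raw files.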

python preprocessing_magnification.py -i=root_raw -o=root_props -m=magnification
  • root_raw: Path to the folder which contains the images from which the parameters should be extracted. All .tif files in the directory + all subdirectories will be used (files named "!black.tif" will be excluded).
  • root_props: Path to the folder in which all the processed data and later all the extracted membrane properties will be saved in.
  • magnification: Path to an .xlsx file which contains the following two columns: "Image Name (file name on fileserver)" and "Magnification as number".
  • This script should be used when your images have a magnification other than 50.00KX (image pixel size = 2.233 nm). In this case, the images are resized to a common image pixel size, which results in images of different sizes in the dataset.
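
The rescaling in preprocessing_magnification.py boils down to a simple ratio, assuming the pixel size scales inversely with magnification; rescaled_size below is an illustrative sketch, not the repository's code:

```python
def rescaled_size(width: int, height: int, magnification_kx: float,
                  ref_mag_kx: float = 50.0) -> tuple[int, int]:
    """New image size so the pixel size matches the 2.233 nm reference
    (50.00KX). Higher magnification -> smaller pixel size -> downsample."""
    scale = ref_mag_kx / magnification_kx
    return round(width * scale), round(height * scale)
```

For example, an image taken at 100.00KX covers half the physical field of view per pixel, so it is downsampled by a factor of 2 to match the reference pixel size.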

2.2 Predicting

How to run the prediction script is shown below. During prediction, a segmentation mask is created for each image. If a GPU is available, the prediction is performed on the GPU; otherwise the CPU is used, which takes considerably longer. When the script is executed for the first time, the nnU-Net model weights (~3 GB) are downloaded and saved into the 'nnUNetv2_trained_models' folder. This has to be done only once.

python predict.py -i=root_props
  • root_props: Path to the folder which contains the processed data from the previous stage. The predictions will be saved into this directory.
  • For this command, a single nnU-Net configuration is used (5 models in total) for predicting the images.
python predict_ensmble.py -i=root_props
  • root_props: Path to the folder which contains the processed data from the previous stage. The predictions will be saved into this directory.
  • For this command, an ensemble of 4 nnU-Net configurations is used (20 models in total) for predicting the images.
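
The one-time weight download mentioned above reduces to a simple existence check; a minimal sketch of the idea (weights_ready is illustrative, the real logic lives in predict.py):

```python
from pathlib import Path

def weights_ready(model_dir: str = "nnUNetv2_trained_models") -> bool:
    """True if the trained-model folder exists and is non-empty,
    i.e. the ~3 GB download can be skipped."""
    p = Path(model_dir)
    return p.is_dir() and any(p.iterdir())
```

This is why manually unzipping the Zenodo weights into nnUNetv2_trained_models also works: the script finds the folder populated and skips the download.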

2.3 Property Extraction

How to run the script that extracts the membrane properties is shown below. A 'membrane_properties.csv' will be created which contains all properties for each image. Additionally, folders with visualizations are created for Classes, Mesh, Mesh_Regularity, Pore_Size, Pore_Distance, and Pore_Circularity.

python propertie_extraction.py -i=root_props
  • root_props: Path to the folder which contains the processed data from the previous stages. The membrane properties and all visualizations will be saved here.
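
As an example of one extracted property: pore circularity is commonly defined as 4πA/P², which is 1.0 for a perfect circle. The sketch below assumes that standard definition, which may differ from the exact formula used in propertie_extraction.py:

```python
import math

def circularity(area: float, perimeter: float) -> float:
    """4*pi*A / P^2: 1.0 for a perfect circle, smaller for irregular pores."""
    return 4.0 * math.pi * area / perimeter ** 2
```

A square, for instance, has circularity π/4 ≈ 0.785, so values well below 1.0 in 'membrane_properties.csv' would indicate irregular pore shapes under this definition.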

Acknowledgements

    

This repository is developed and maintained by the Applied Computer Vision Lab (ACVL) of Helmholtz Imaging.
