This repository was archived by the owner on Apr 25, 2023. It is now read-only.

how to use the pipeline #1

@bryantChhun

Description

@miaecle Please see the recent changes to the README.

Our current use mode is to call specific wrapper functions (run_preproc, run_segmentation, etc.) and pass file paths to data locations. There are a few problems with how we've implemented this that I'd like to document here and address soon:

  1. The wrapper functions have hardcoded paths to data.
  2. The wrapper functions have multiple operations that are never run simultaneously and that are selected by commenting/uncommenting code. (example: run_patch.py Worker can both "extract_patches" and "build_trajectories")
  3. Some critical pieces of code are standalone scripts, or they reference files that are generated outside of this codebase (example: NNSegmentation.run.py references Annotations_8Sites.pkl, which is not written anywhere within the codebase)

Potential solutions:

  1. Wrapper functions could have a main entry point with CLI argument parsing.
  2. Wrapper functions need to be rewritten to accept kwargs selecting the specific type of run. This pairs well with CLI parsing.
  3. The two files that do this are both for NN training -- the UNet and the VQ-VAE.
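Combining points 1 and 2 above might look like the sketch below: the worker's two operations become explicit, selectable methods instead of commented-out blocks, and a `main` parses paths and the method from the CLI. This is a hypothetical sketch, not the repo's actual code -- the flag names (`--raw`, `--supp`, `--method`) and the `run_patch` signature are assumptions for illustration.

```python
import argparse

def run_patch(raw_dir, supp_dir, method):
    """Hypothetical wrapper: dispatch on `method` instead of
    commenting/uncommenting code inside the Worker."""
    if method == "extract_patches":
        return f"extracting patches from {raw_dir}"
    elif method == "build_trajectories":
        return f"building trajectories into {supp_dir}"
    raise ValueError(f"unknown method: {method}")

def main(argv=None):
    # CLI parsing replaces the hardcoded paths inside the wrapper.
    parser = argparse.ArgumentParser(description="run_patch pipeline step")
    parser.add_argument("--raw", required=True, help="path to raw data")
    parser.add_argument("--supp", required=True, help="path to supplementary outputs")
    parser.add_argument("--method", required=True,
                        choices=["extract_patches", "build_trajectories"],
                        help="which of the Worker's operations to run")
    args = parser.parse_args(argv)
    return run_patch(args.raw, args.supp, args.method)

if __name__ == "__main__":
    main()
```

With this shape, `python run_patch.py --raw /data/raw --supp /data/supp --method extract_patches` selects the operation without touching the source.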

Additionally, I think it would be generally useful for every function that writes a critical file to also return that data. This would make a full scripted "pipeline" feasible.
