Update for the daisybio profile #737

Merged (14 commits) on Aug 21, 2024
conf/daisybio.config (51 changes: 43 additions, 8 deletions)

@@ -1,30 +1,65 @@
 params {
     config_profile_description = 'DaiSyBio cluster profile provided by nf-core/configs.'
-    config_profile_contact = 'Johannes Kersting (Johannes Kersting)'
-    config_profile_url = 'https://biomedical-big-data.de/'
+    config_profile_contact = 'Johannes Kersting (@JohannesKersting)'
+    config_profile_url = 'https://www.mls.ls.tum.de/daisybio/startseite/'
     max_memory = 1.TB
     max_cpus = 120
     max_time = 96.h
     igenomes_base = '/nfs/data/references/igenomes'
 }

+// define workDir in /nfs/scratch/nf-core_work/ named after the launch dir
+def work_dir = "/nfs/scratch/nf-core_work/"
+if (new File(work_dir).exists() && System.getenv("PWD")) {
+    workDir = work_dir + System.getenv("PWD").tokenize('/').join('.')
+}
+
 process {
     executor = 'slurm'
     queue = 'shared-cpu'
     maxRetries = 2
 }

 executor {
-    queueSize = 30
+    queueSize = 50
     submitRateLimit = '10 sec'
 }

-singularity {
-    cacheDir = '/nfs/scratch/singularity_cache'
-}
+cleanup = true
+
+profiles {
+    // profile to keep the work directory
+    keep_work {
+        cleanup = false
+    }
+
+    // profile for singularity
+    singularity {
+        singularity.enabled = true
+        singularity.autoMounts = true
+        conda.enabled = false
+        docker.enabled = false
+        podman.enabled = false
+        shifter.enabled = false
+        charliecloud.enabled = false
+        apptainer.enabled = false
+        process.beforeScript = 'module load singularity'
+        singularity.cacheDir = '/nfs/scratch/singularity_cache'
+    }
+
+    // profile for apptainer
+    apptainer {
+        apptainer.enabled = true
+        apptainer.autoMounts = true
+        conda.enabled = false
+        docker.enabled = false
+        singularity.enabled = false
+        podman.enabled = false
+        shifter.enabled = false
+        charliecloud.enabled = false
+        process.beforeScript = 'module load apptainer'
+        apptainer.cacheDir = '/nfs/scratch/apptainer_cache'
+    }
+}

-apptainer {
-    cacheDir = '/nfs/scratch/apptainer_cache'
-}


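The added workDir logic names each work directory after the full path of the launch directory, with `/` replaced by `.`. As a minimal sketch (not part of the config), the Groovy `tokenize('/')`/`join('.')` step behaves like this Python function, whose name is hypothetical:

```python
def workdir_for(pwd: str, base: str = "/nfs/scratch/nf-core_work/") -> str:
    # Groovy's tokenize('/') drops empty tokens; join('.') rejoins with dots,
    # so an absolute launch path becomes a single dot-separated directory name.
    return base + ".".join(part for part in pwd.split("/") if part)

print(workdir_for("/nfs/home/alice/rnaseq-run"))
# → /nfs/scratch/nf-core_work/nfs.home.alice.rnaseq-run
```

Because the name encodes the launch directory, re-launching from the same directory reuses the same work directory, which is what makes `-resume` work with the `keep_work` profile.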
docs/daisybio.md (4 changes: 2 additions, 2 deletions)
@@ -5,6 +5,6 @@

To use the DaiSyBio profile, run an nf-core pipeline with `-profile daisybio,<singularity/apptainer>`.
This will automatically download and apply ['daisybio.config'](../conf/daisybio.config) as a nextflow config file.

The config file will set slurm as a scheduler for the compute cluster, define max resources, and specify cache locations for singularity, apptainer, and iGenomes.
-Pipeline specific parameters still need to be configured manually.
+Pipeline-specific parameters still need to be configured manually.

Singularity and Apptainer are currently only installed on the exbio nodes. To use them, either install singularity/apptainer in a conda environment and run Nextflow inside that environment, or limit the queue to the exbio nodes with `-process.queue exbio-cpu`.
Work directories will be kept at `/nfs/scratch/nf-core_work/` in a directory named after the full path of the launch directory ("." separated). They are automatically removed after a successful pipeline run. To keep the intermediate files, e.g. for using the `-resume` function, add `keep_work` as a profile: `-profile daisybio,<singularity/apptainer>,keep_work`.
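For example, a hypothetical launch combining these profiles might look as follows (the pipeline name, revision, and output directory are placeholders, not part of this config):

```shell
# keep the work directory for later -resume
nextflow run nf-core/rnaseq -r 3.14.0 -profile daisybio,apptainer,keep_work --outdir results

# re-run from the same launch directory, reusing the kept work directory
nextflow run nf-core/rnaseq -r 3.14.0 -profile daisybio,apptainer,keep_work --outdir results -resume

# restrict jobs to the exbio nodes if apptainer is not available via conda
nextflow run nf-core/rnaseq -r 3.14.0 -profile daisybio,apptainer -process.queue exbio-cpu --outdir results
```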