Fix spelling #6336


Open · wants to merge 3 commits into `master`
6 changes: 3 additions & 3 deletions docs/aws.md
@@ -39,7 +39,7 @@ SSO credentials and instance profile credentials are the most recommended becaus

## AWS IAM policies

[IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) are the mechanism used by AWS to defines permissions for IAM identities. In order to access certain AWS services, the proper policies must be attached to the identity associated to the AWS credentials.
[IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) are the mechanism used by AWS to define permissions for IAM identities. In order to access certain AWS services, the proper policies must be attached to the identity associated to the AWS credentials.

Minimal permissions policies to be attached to the AWS account used by Nextflow are:

@@ -366,7 +366,7 @@ sudo service docker start
sudo usermod -a -G docker ec2-user
```

You must logging out and logging back in again to use the new `ec2-user` permissions.
You must log out and log back in again to use the new `ec2-user` permissions.

These steps must be done *before* creating the AMI from the current EC2 instance.

@@ -386,7 +386,7 @@ sudo systemctl enable --now ecs
To test the installation:

```bash
curl -s http://localhost:51678/v1/metadata | python -mjson.tool (test)
curl -s http://localhost:51678/v1/metadata | python -mjson.tool
```

:::{note}
10 changes: 5 additions & 5 deletions docs/azure.md
@@ -29,7 +29,7 @@ To run pipelines with Azure Batch:

- Set `process.executor` to `azurebatch` to make Nextflow submit tasks to Azure Batch.

- Set `workDir` to a working directory on Azure Blob Storage. For example, `az://<BLOB_STORAGE>/work`, where `BLOB_CONTAINER` is a blob container in your storage account.
- Set `workDir` to a working directory on Azure Blob Storage. For example, `az://<BLOB_CONTAINER>/work`, where `BLOB_CONTAINER` is a blob container in your storage account.

5. Launch your pipeline with the above configuration:

@@ -152,7 +152,7 @@ azure {

Replace the following:

- `USER_ASSIGNED_MANAGED_IDENTITY_CLIENT_ID`: your user assigned managed identity object ID
- `USER_ASSIGNED_MANAGED_IDENTITY_CLIENT_ID`: your user assigned managed identity client ID
- `STORAGE_ACCOUNT_NAME`: your Azure Storage account name
- `BATCH_ACCOUNT_NAME`: your Azure Batch account name
- `BATCH_ACCOUNT_LOCATION`: your Azure Batch account location
@@ -289,7 +289,7 @@ This section describes how to configure and use Azure Batch with Nextflow for ef

Nextflow integrates with Azure Batch by mapping its execution model to Azure Batch's structure. A Nextflow process corresponds to an Azure Batch job, and every execution of that process (a Nextflow task) becomes an Azure Batch task. These Azure Batch tasks are executed on compute nodes within an Azure Batch pool, which is a collection of virtual machines that can scale up or down based on an autoscale formula.

Nextflow manages these pools dynamically. You can assign processes to specific, pre-existing pools using the process `queue` directive. Nextflow will create if it doesn't exist and `azure.batch.allowPoolCreation` is set to `true`. Alternatively, `autoPoolMode` enables Nextflow to automatically create multiple pools based on the CPU and memory requirements defined in your processes.
Nextflow manages these pools dynamically. You can assign processes to specific, pre-existing pools using the process `queue` directive. Nextflow will create it if it doesn't exist and `azure.batch.allowPoolCreation` is set to `true`. Alternatively, `autoPoolMode` enables Nextflow to automatically create multiple pools based on the CPU and memory requirements defined in your processes.

An Azure Batch task is created for each Nextflow task. This task first downloads the necessary input files from Azure Blob Storage to its assigned compute node. It then runs the process script. Finally, it uploads any output files back to Azure Blob Storage.

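The pool behavior described above can be sketched as configuration (an illustrative fragment; the pool and process names are hypothetical):

```nextflow
// Illustrative sketch: route matching processes to a named pool and
// allow Nextflow to create that pool if it does not already exist.
process {
    withName: 'align.*' {
        queue = 'my-pool'          // hypothetical Azure Batch pool name
    }
}

azure {
    batch {
        allowPoolCreation = true   // create 'my-pool' on demand
        // autoPoolMode = true     // alternative: pools sized from process resources
    }
}
```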
@@ -492,7 +492,7 @@ The `azure.batch.pools.<POOL_NAME>.scaleFormula` setting can be used to specify

### Task authentication

By default, Nextflow creates SAS tokens for specific containers and passes them to tasks to enable file operations with Azure Storage. SAS tokens expire after a set period of time. The expiration time is 48 hours by default and cat be configured using `azure.storage.tokenDuration` in your configuration.
By default, Nextflow creates SAS tokens for specific containers and passes them to tasks to enable file operations with Azure Storage. SAS tokens expire after a set period of time. The expiration time is 48 hours by default and can be configured using `azure.storage.tokenDuration` in your configuration.

:::{versionadded} 25.05.0-edge
:::
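The default 48-hour lifetime mentioned above can be changed with the `azure.storage.tokenDuration` setting, roughly as follows (illustrative value):

```nextflow
azure {
    storage {
        tokenDuration = '24h'   // shorten the SAS token lifetime from the 48-hour default
    }
}
```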
@@ -525,7 +525,7 @@ For example, consider a *Standard_D4d_v5* machine with 4 vCPUs, 16 GB of memory,

- If a process requests `cpus 4`, `memory 16.GB`, or `disk 150.GB`, four task slots are allocated (100% of resources), allowing one task to run on the node.

Resource overprovisioning can occur if tasks consume more than their allocated share of resources. For instance, the node described above my become overloaded and fail if a task with `cpus 2` uses more than 8 GB of memory or 75 GB of disk space. Make sure to accurately specify resource requirements to ensure optimal performance and prevent task failures.
Resource overprovisioning can occur if tasks consume more than their allocated share of resources. For instance, the node described above may become overloaded and fail if a task with `cpus 2` uses more than 8 GB of memory or 75 GB of disk space. Make sure to accurately specify resource requirements to ensure optimal performance and prevent task failures.

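For the *Standard_D4d_v5* node above, a process that should occupy half the machine might declare its requirements like this (a sketch; the process name and script are hypothetical):

```nextflow
process alignReads {
    cpus 2        // half of the node's 4 vCPUs, i.e. two of the four task slots
    memory 8.GB   // half of 16 GB
    disk 75.GB    // half of the 150 GB local disk

    script:
    """
    echo 'two tasks of this shape can run concurrently on one node'
    """
}
```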
:::{warning}
Azure virtual machines come with fixed storage disks that are not expandable. Tasks will fail if the tasks running concurrently on a node use more storage than the machine has available.
2 changes: 1 addition & 1 deletion docs/cache-and-resume.md
@@ -32,7 +32,7 @@ The task hash is computed from the following metadata:
- Whether the task is a {ref}`stub run <process-stub>`

:::{note}
Nextflow also includes an incrementing component in the hash generation process, which allows it to iterate through multiple hash values until it finds one that does not match an existing execution directory. This mechanism typically usually aligns with task retries (i.e., task attempts), however this is not guaranteed.
Nextflow also includes an incrementing component in the hash generation process, which allows it to iterate through multiple hash values until it finds one that does not match an existing execution directory. This mechanism typically aligns with task retries (i.e., task attempts), however this is not guaranteed.
:::

:::{versionchanged} 23.09.2-edge
3 changes: 1 addition & 2 deletions docs/channel.md
@@ -68,8 +68,7 @@ Commonly used operators include:

- {ref}`operator-filter`: select the values in a channel that satisfy a condition

- {ref}`operator-flatMap`: transform each value from a channel into a list and emit each list
element separately
- {ref}`operator-flatMap`: transform each value from a channel into a list and emit each list element separately

- {ref}`operator-grouptuple`: group the values from a channel based on a grouping key

8 changes: 4 additions & 4 deletions docs/cli.md
@@ -2,7 +2,7 @@

# Command line

Nextflow provides a robust command line interface (CLI) for the management and execution pipelines.
Nextflow provides a robust command line interface (CLI) for the management and execution of pipelines.

Simply run `nextflow` with no options or `nextflow -h` to see the list of available top-level options and commands. See {ref}`cli-reference` for the full list of subcommands with examples.

@@ -36,7 +36,7 @@ Set JVM properties.
$ nextflow -Dkey=value COMMAND [arg...]
```

This options allows the definition of custom Java system properties that can be used to properly configure or fine tuning the JVM instance used by the Nextflow runtime.
This option allows the definition of custom Java system properties that can be used to properly configure or fine-tune the JVM instance used by the Nextflow runtime.

For specifying other JVM level options, please refer to the {ref}`config-env-vars` section.

@@ -96,7 +96,7 @@ Sets the path of the nextflow log file.
$ nextflow -log custom.log COMMAND [arg...]
```

The `-log` option takes a path of the new log file which to be used instead of the default `.nextflow.log` or to save logs files to another directory.
The `-log` option takes a path of the new log file which will be used instead of the default `.nextflow.log` or to save log files to another directory.

- Save all execution logs to the custom `/var/log/nextflow.log` file:

@@ -144,7 +144,7 @@ Print the Nextflow version information.
$ nextflow -v
```

The `-v` option prints out information about Nextflow, such as the version and build. The `-version` option in addition prints out the citation reference and official website.
The `-v` option prints out information about Nextflow, such as the version and build. The `-version` option, in addition, prints out the citation reference and official website.

- The short version:

6 changes: 3 additions & 3 deletions docs/conda.md
@@ -43,7 +43,7 @@ Alternatively, it can be specified by setting the variable `NXF_CONDA_ENABLED=tr

### Use Conda package names

Conda package names can specified using the `conda` directive. Multiple package names can be specified by separating them with a blank space. For example:
Conda package names can be specified using the `conda` directive. Multiple package names can be specified by separating them with a blank space. For example:

```nextflow
process hello {
@@ -144,9 +144,9 @@ If you're using Mamba or Micromamba, use this command instead:
micromamba env export --explicit > spec-file.txt
```

You can also download Conda lock files from [Wave](https://seqera.io/wave/) build pages.
You can also download Conda lock files from [Wave](https://seqera.io/wave/) container build pages.

These files list every package and its dependencies, so Conda doesn't need to resolve the environment. This makes environment setup faster and more reproducible.
These files list every package and its dependencies, so Conda doesn't need to perform dependency resolution. This makes environment setup faster and more reproducible.

Each file includes package URLs and, optionally, an MD5 hash for verifying file integrity:

4 changes: 2 additions & 2 deletions docs/config.md
@@ -221,7 +221,7 @@ process {
}
```

The above configuration snippet sets 2 cpus for every process labeled as `hello` and 4 cpus to every process *not* label as `hello`. It also specifies the `long` queue for every process whose name does *not* start with `align`.
The above configuration snippet sets 2 cpus for every process labeled as `hello` and 4 cpus to every process *not* labeled as `hello`. It also specifies the `long` queue for every process whose name does *not* start with `align`.

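The selectors described above might be written out as follows (an illustrative sketch consistent with the truncated snippet):

```nextflow
process {
    withLabel: 'hello' { cpus = 2 }           // every process labeled 'hello'
    withLabel: '!hello' { cpus = 4 }          // every process *not* labeled 'hello'
    withName: '!align.*' { queue = 'long' }   // names that do *not* start with 'align'
}
```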
(config-selector-priority)=

@@ -339,7 +339,7 @@ workflow.onComplete = {
}

workflow.onError = {
println "Error: something when wrong"
println "Error: something went wrong"
}
```

12 changes: 6 additions & 6 deletions docs/container.md
@@ -23,7 +23,7 @@ You will need Apptainer installed on your execution environment e.g. your comput

### Images

Apptainer makes use of a container image file, which physically contains the container. Refer to the [Apptainer documentation](https://apptainer.org/docs) to learn how create Apptainer images.
Apptainer makes use of a container image file, which physically contains the container. Refer to the [Apptainer documentation](https://apptainer.org/docs) to learn how to create Apptainer images.

Apptainer allows paths that do not currently exist within the container to be created and mounted dynamically by specifying them on the command line. However this feature is only supported on hosts that support the [Overlay file system](https://en.wikipedia.org/wiki/OverlayFS) and is not enabled by default.

@@ -41,10 +41,10 @@ The integration for Apptainer follows the same execution model implemented for D
nextflow run <your script> -with-apptainer [apptainer image file]
```

Every time your script launches a process execution, Nextflow will run it into a Apptainer container created by using the specified image. In practice Nextflow will automatically wrap your processes and launch them by running the `apptainer exec` command with the image you have provided.
Every time your script launches a process execution, Nextflow will run it into an Apptainer container created by using the specified image. In practice Nextflow will automatically wrap your processes and launch them by running the `apptainer exec` command with the image you have provided.

:::{note}
A Apptainer image can contain any tool or piece of software you may need to carry out a process execution. Moreover, the container is run in such a way that the process result files are created in the host file system, thus it behaves in a completely transparent manner without requiring extra steps or affecting the flow in your pipeline.
An Apptainer image can contain any tool or piece of software you may need to carry out a process execution. Moreover, the container is run in such a way that the process result files are created in the host file system, thus it behaves in a completely transparent manner without requiring extra steps or affecting the flow in your pipeline.
:::

If you want to avoid entering the Apptainer image as a command line parameter, you can define it in the Nextflow configuration file. For example you can add the following lines in the configuration file:
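A configuration of the kind referred to above might look like this (a sketch; the image path is hypothetical):

```nextflow
process.container = '/path/to/apptainer.sif'   // hypothetical image path
apptainer.enabled = true
```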
@@ -124,7 +124,7 @@ Nextflow caches Apptainer images in the `apptainer` directory, in the pipeline w

Nextflow uses the library directory to determine the location of Apptainer containers. The library directory can be defined using the `apptainer.libraryDir` configuration setting or the `NXF_APPTAINER_LIBRARYDIR` environment variable. The configuration file option overrides the environment variable if both are set.

Nextflow first checks the library directory when searching for the image. If the image is not found it then checks the cache directory. The main difference between the library directory and the cache directory is that the first is assumed to be a read-only container repository, while the latter is expected to be writable path where container images can added for caching purposes.
Nextflow first checks the library directory when searching for the image. If the image is not found it then checks the cache directory. The main difference between the library directory and the cache directory is that the first is assumed to be a read-only container repository, while the latter is expected to be a writable path where container images can be added for caching purposes.

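Both directories discussed here can be set in configuration, for example (the paths are hypothetical):

```nextflow
apptainer {
    enabled    = true
    libraryDir = '/shared/containers'   // read-only image repository
    cacheDir   = '/scratch/apptainer'   // writable cache for downloaded images
}
```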
:::{warning}
When using a compute cluster, the Apptainer cache directory must reside in a shared filesystem accessible to all compute nodes.
@@ -573,7 +573,7 @@ In the above example replace `/path/to/singularity.img` with any Singularity ima
Read the {ref}`config-page` page to learn more about the configuration file and how to use it to configure your pipeline execution.

:::{note}
Unlike Docker, Nextflow does not automatically mount host paths in the container when using Singularity. It expects that the paths are configure and mounted system wide by the Singularity runtime. If your Singularity installation allows user defined bind points, read the {ref}`Singularity configuration <config-singularity>` section to learn how to enable Nextflow auto mounts.
Unlike Docker, Nextflow does not automatically mount host paths in the container when using Singularity. It expects that the paths are configured and mounted system wide by the Singularity runtime. If your Singularity installation allows user defined bind points, read the {ref}`Singularity configuration <config-singularity>` section to learn how to enable Nextflow auto mounts.
:::
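Where the Singularity installation permits user-defined bind points, the auto-mount behavior is enabled with a setting along these lines:

```nextflow
singularity {
    enabled    = true
    autoMounts = true   // let Nextflow mount host paths into the container
}
```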

:::{warning}
@@ -657,7 +657,7 @@ Nextflow caches Singularity images in the `singularity` directory, in the pipeli

Nextflow uses the library directory to determine the location of Singularity images. The library directory can be defined using the `singularity.libraryDir` configuration setting or the `NXF_SINGULARITY_LIBRARYDIR` environment variable. The configuration file option overrides the environment variable if both are set.

Nextflow first checks the library directory when searching for the image. If the image is not found it then checks the cache directory. The main difference between the library directory and the cache directory is that the first is assumed to be a read-only container repository, while the latter is expected to be writable path where container images can added for caching purposes.
Nextflow first checks the library directory when searching for the image. If the image is not found it then checks the cache directory. The main difference between the library directory and the cache directory is that the first is assumed to be a read-only container repository, while the latter is expected to be a writable path where container images can be added for caching purposes.

:::{warning}
When using a compute cluster, the Singularity cache directory must reside in a shared filesystem accessible to all compute nodes.
6 changes: 3 additions & 3 deletions docs/developer-env.md
@@ -9,7 +9,7 @@ Setting up a Nextflow development environment is a prerequisite for creating, te
- {ref}`devenv-vscode`: A versatile code editor that enhances your Nextflow development with features like syntax highlighting and debugging.
- {ref}`devenv-extensions`: The VS Code marketplace offers a variety of extensions to enhance development. The {ref}`Nextflow extension <devenv-nextflow>` is specifically designed to enhance Nextflow development with diagnostics, hover hints, code navigation, code completion, and more.
- {ref}`devenv-docker`: A containerization platform that ensures your Nextflow workflows run consistently across different environments by packaging dependencies into isolated containers.
- {ref}`devenv-git`: A version control system that helps manage and track changes in your Nextflow projects, making collaboration, and code management more efficient.
- {ref}`devenv-git`: A version control system that helps manage and track changes in your Nextflow projects, making collaboration and code management more efficient.

The sections below outline the steps for setting up these tools.

@@ -37,7 +37,7 @@ To install VS Code on Windows:

1. Visit the [VS Code](https://code.visualstudio.com/download) website.
1. Download VS Code for Windows.
1. Double-click the installer executable (`.exe`) file and follow the set up steps.
1. Double-click the installer executable (`.exe`) file and follow the setup steps.

```

@@ -243,7 +243,7 @@ Nextflow supports multiple container technologies (e.g., Singularity and Podman)

Git provides powerful version control that helps track code changes. Git operates locally, meaning you don't need an internet connection to track changes, but it can also be used with remote platforms like GitHub, GitLab, or Bitbucket for collaborative development.

Nextflow seamlessly integrates with Git for source code management providers for managing pipelines as version-controlled Git repositories.
Nextflow seamlessly integrates with Git for source code management providers to manage pipelines as version-controlled Git repositories.

````{tabs}
