From a4e26396203c7978a811478875f6bb84acf9ee93 Mon Sep 17 00:00:00 2001
From: Christopher Hakkaart
Date: Tue, 12 Aug 2025 12:31:42 +1200
Subject: [PATCH 1/4] Fix mistakes

Signed-off-by: Christopher Hakkaart
---
 docs/aws.md              |  6 +++---
 docs/azure.md            | 10 +++++-----
 docs/cache-and-resume.md |  2 +-
 docs/channel.md          |  3 +--
 docs/cli.md              |  8 ++++----
 docs/conda.md            |  6 +++---
 docs/install.md          |  2 +-
 7 files changed, 18 insertions(+), 19 deletions(-)

diff --git a/docs/aws.md b/docs/aws.md
index 598ff3a30b..cb1c2e4888 100644
--- a/docs/aws.md
+++ b/docs/aws.md
@@ -39,7 +39,7 @@ SSO credentials and instance profile credentials are the most recommended becaus

## AWS IAM policies

-[IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) are the mechanism used by AWS to defines permissions for IAM identities. In order to access certain AWS services, the proper policies must be attached to the identity associated to the AWS credentials.
+[IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) are the mechanism used by AWS to define permissions for IAM identities. In order to access certain AWS services, the proper policies must be attached to the identity associated with the AWS credentials.

Minimal permissions policies to be attached to the AWS account used by Nextflow are:

@@ -366,7 +366,7 @@ sudo service docker start
sudo usermod -a -G docker ec2-user
```

-You must logging out and logging back in again to use the new `ec2-user` permissions.
+You must log out and log back in again to use the new `ec2-user` permissions.

These steps must be done *before* creating the AMI from the current EC2 instance.

@@ -386,7 +386,7 @@ sudo systemctl enable --now ecs
To test the installation:

```bash
-curl -s http://localhost:51678/v1/metadata | python -mjson.tool (test)
+curl -s http://localhost:51678/v1/metadata | python -mjson.tool
```

:::{note}
diff --git a/docs/azure.md b/docs/azure.md
index 9342f07631..28e0ba1244 100644
--- a/docs/azure.md
+++ b/docs/azure.md
@@ -29,7 +29,7 @@ To run pipelines with Azure Batch:

- Set `process.executor` to `azurebatch` to make Nextflow submit tasks to Azure Batch.

-  - Set `workDir` to a working directory on Azure Blob Storage. For example, `az:///work`, where `BLOB_CONTAINER` is a blob container in your storage account.
+  - Set `workDir` to a working directory on Azure Blob Storage. For example, `az:///work`, where `BLOB_CONTAINER` is a blob container in your storage account.

5. Launch your pipeline with the above configuration:

@@ -152,7 +152,7 @@ azure {

Replace the following:

-- `USER_ASSIGNED_MANAGED_IDENTITY_CLIENT_ID`: your user assigned managed identity object ID
+- `USER_ASSIGNED_MANAGED_IDENTITY_CLIENT_ID`: your user assigned managed identity client ID
- `STORAGE_ACCOUNT_NAME`: your Azure Storage account name
- `BATCH_ACCOUNT_NAME`: your Azure Batch account name
- `BATCH_ACCOUNT_LOCATION`: your Azure Batch account location
@@ -289,7 +289,7 @@ This section describes how to configure and use Azure Batch with Nextflow for ef

Nextflow integrates with Azure Batch by mapping its execution model to Azure Batch's structure. A Nextflow process corresponds to an Azure Batch job, and every execution of that process (a Nextflow task) becomes an Azure Batch task. These Azure Batch tasks are executed on compute nodes within an Azure Batch pool, which is a collection of virtual machines that can scale up or down based on an autoscale formula.

-Nextflow manages these pools dynamically. You can assign processes to specific, pre-existing pools using the process `queue` directive. Nextflow will create if it doesn't exist and `azure.batch.allowPoolCreation` is set to `true`. Alternatively, `autoPoolMode` enables Nextflow to automatically create multiple pools based on the CPU and memory requirements defined in your processes.
+Nextflow manages these pools dynamically. You can assign processes to specific, pre-existing pools using the process `queue` directive. Nextflow will create it if it doesn't exist and `azure.batch.allowPoolCreation` is set to `true`. Alternatively, `autoPoolMode` enables Nextflow to automatically create multiple pools based on the CPU and memory requirements defined in your processes.

An Azure Batch task is created for each Nextflow task. This task first downloads the necessary input files from Azure Blob Storage to its assigned compute node. It then runs the process script. Finally, it uploads any output files back to Azure Blob Storage.

@@ -492,7 +492,7 @@ The `azure.batch.pools.<name>.scaleFormula` setting can be used to specify

### Task authentication

-By default, Nextflow creates SAS tokens for specific containers and passes them to tasks to enable file operations with Azure Storage. SAS tokens expire after a set period of time. The expiration time is 48 hours by default and cat be configured using `azure.storage.tokenDuration` in your configuration.
+By default, Nextflow creates SAS tokens for specific containers and passes them to tasks to enable file operations with Azure Storage. SAS tokens expire after a set period of time. The expiration time is 48 hours by default and can be configured using `azure.storage.tokenDuration` in your configuration.

:::{versionadded} 25.05.0-edge
:::
@@ -525,7 +525,7 @@ For example, consider a *Standard_D4d_v5* machine with 4 vCPUs, 16 GB of memory,

- If a process requests `cpus 4`, `memory 16.GB`, or `disk 150.GB`, four task slots are allocated (100% of resources), allowing one task to run on the node.

-Resource overprovisioning can occur if tasks consume more than their allocated share of resources. For instance, the node described above my become overloaded and fail if a task with `cpus 2` uses more than 8 GB of memory or 75 GB of disk space. Make sure to accurately specify resource requirements to ensure optimal performance and prevent task failures.
+Resource overprovisioning can occur if tasks consume more than their allocated share of resources. For instance, the node described above may become overloaded and fail if a task with `cpus 2` uses more than 8 GB of memory or 75 GB of disk space. Make sure to accurately specify resource requirements to ensure optimal performance and prevent task failures.

:::{warning}
Azure virtual machines come with fixed storage disks that are not expandable. Tasks will fail if the tasks running concurrently on a node use more storage than the machine has available.
diff --git a/docs/cache-and-resume.md b/docs/cache-and-resume.md
index 680a5fef76..2128663900 100644
--- a/docs/cache-and-resume.md
+++ b/docs/cache-and-resume.md
@@ -32,7 +32,7 @@ The task hash is computed from the following metadata:

- Whether the task is a {ref}`stub run <process-stub>`

:::{note}
-Nextflow also includes an incrementing component in the hash generation process, which allows it to iterate through multiple hash values until it finds one that does not match an existing execution directory. This mechanism typically usually aligns with task retries (i.e., task attempts), however this is not guaranteed.
+Nextflow also includes an incrementing component in the hash generation process, which allows it to iterate through multiple hash values until it finds one that does not match an existing execution directory. This mechanism typically aligns with task retries (i.e., task attempts), however this is not guaranteed.
:::

:::{versionchanged} 23.09.2-edge
diff --git a/docs/channel.md b/docs/channel.md
index 2b39b032d2..c88b767c79 100644
--- a/docs/channel.md
+++ b/docs/channel.md
@@ -68,8 +68,7 @@ Commonly used operators include:

- {ref}`operator-filter`: select the values in a channel that satisfy a condition

-- {ref}`operator-flatMap`: transform each value from a channel into a list and emit each list
-element separately
+- {ref}`operator-flatMap`: transform each value from a channel into a list and emit each list element separately

- {ref}`operator-grouptuple`: group the values from a channel based on a grouping key

diff --git a/docs/cli.md b/docs/cli.md
index 979a1b1507..8175ed5ade 100644
--- a/docs/cli.md
+++ b/docs/cli.md
@@ -2,7 +2,7 @@

# Command line

-Nextflow provides a robust command line interface (CLI) for the management and execution pipelines.
+Nextflow provides a robust command line interface (CLI) for the management and execution of pipelines.

Simply run `nextflow` with no options or `nextflow -h` to see the list of available top-level options and commands. See {ref}`cli-reference` for the full list of subcommands with examples.

@@ -36,7 +36,7 @@ Set JVM properties.

```bash
$ nextflow -Dkey=value COMMAND [arg...]
```

-This options allows the definition of custom Java system properties that can be used to properly configure or fine tuning the JVM instance used by the Nextflow runtime.
+This option allows the definition of custom Java system properties that can be used to properly configure or fine-tune the JVM instance used by the Nextflow runtime.

For specifying other JVM level options, please refer to the {ref}`config-env-vars` section.

@@ -96,7 +96,7 @@ Sets the path of the nextflow log file.

```bash
$ nextflow -log custom.log COMMAND [arg...]
```

-The `-log` option takes a path of the new log file which to be used instead of the default `.nextflow.log` or to save logs files to another directory.
+The `-log` option takes the path of a new log file which will be used instead of the default `.nextflow.log` or to save log files to another directory.

- Save all execution logs to the custom `/var/log/nextflow.log` file:

@@ -144,7 +144,7 @@ Print the Nextflow version information.

```bash
$ nextflow -v
```

-The `-v` option prints out information about Nextflow, such as the version and build. The `-version` option in addition prints out the citation reference and official website.
+The `-v` option prints out information about Nextflow, such as the version and build. The `-version` option, in addition, prints out the citation reference and official website.

diff --git a/docs/conda.md b/docs/conda.md
index 8b9c7a1faa..144ad8d864 100644
--- a/docs/conda.md
+++ b/docs/conda.md
@@ -43,7 +43,7 @@ Alternatively, it can be specified by setting the variable `NXF_CONDA_ENABLED=tr

### Use Conda package names

-Conda package names can specified using the `conda` directive. Multiple package names can be specified by separating them with a blank space. For example:
+Conda package names can be specified using the `conda` directive. Multiple package names can be specified by separating them with a blank space. For example:
```nextflow
process hello {
@@ -144,9 +144,9 @@ If you're using Mamba or Micromamba, use this command instead:
micromamba env export --explicit > spec-file.txt
```

-You can also download Conda lock files from [Wave](https://seqera.io/wave/) build pages.
+You can also download Conda lock files from [Wave](https://seqera.io/wave/) container build pages.

-These files list every package and its dependencies, so Conda doesn't need to resolve the environment.
+These files list every package and its dependencies, so Conda doesn't need to perform dependency resolution. This makes environment setup faster and more reproducible.

Each file includes package URLs and, optionally, an MD5 hash for verifying file integrity:

diff --git a/docs/install.md b/docs/install.md
index fa503cf8af..ac1790e5e2 100644
--- a/docs/install.md
+++ b/docs/install.md
@@ -118,7 +118,7 @@ To install Nextflow with Conda:

```{code-block} bash
:class: copyable
-   source activate nf_env
+   source activate nf-env
```

3. Confirm Nextflow is installed correctly:

From 3ff331839c5a86c381752216c5321872cd5d53c7 Mon Sep 17 00:00:00 2001
From: Christopher Hakkaart
Date: Tue, 12 Aug 2025 12:45:36 +1200
Subject: [PATCH 2/4] Fix spelling mistakes

Signed-off-by: Christopher Hakkaart
---
 docs/config.md        |  4 ++--
 docs/container.md     | 12 ++++++------
 docs/developer-env.md |  6 +++---
 docs/executor.md      |  8 ++++----
 docs/git.md           |  8 ++++----
 5 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/docs/config.md b/docs/config.md
index abfbfe8811..9b48161b11 100644
--- a/docs/config.md
+++ b/docs/config.md
@@ -221,7 +221,7 @@ process {
}
```

-The above configuration snippet sets 2 cpus for every process labeled as `hello` and 4 cpus to every process *not* label as `hello`. It also specifies the `long` queue for every process whose name does *not* start with `align`.
+The above configuration snippet sets 2 cpus for every process labeled as `hello` and 4 cpus for every process *not* labeled as `hello`. It also specifies the `long` queue for every process whose name does *not* start with `align`.

(config-selector-priority)=

@@ -339,7 +339,7 @@ workflow.onComplete = {
}

workflow.onError = {
-    println "Error: something when wrong"
+    println "Error: something went wrong"
}

diff --git a/docs/container.md b/docs/container.md
index 5c4257fef3..58973e6a23 100644
--- a/docs/container.md
+++ b/docs/container.md
@@ -23,7 +23,7 @@ You will need Apptainer installed on your execution environment e.g. your comput

### Images

-Apptainer makes use of a container image file, which physically contains the container. Refer to the [Apptainer documentation](https://apptainer.org/docs) to learn how create Apptainer images.
+Apptainer makes use of a container image file, which physically contains the container. Refer to the [Apptainer documentation](https://apptainer.org/docs) to learn how to create Apptainer images.

Apptainer allows paths that do not currently exist within the container to be created and mounted dynamically by specifying them on the command line. However this feature is only supported on hosts that support the [Overlay file system](https://en.wikipedia.org/wiki/OverlayFS) and is not enabled by default.
@@ -41,10 +41,10 @@ The integration for Apptainer follows the same execution model implemented for D
nextflow run -with-apptainer [apptainer image file]
```

-Every time your script launches a process execution, Nextflow will run it into a Apptainer container created by using the specified image. In practice Nextflow will automatically wrap your processes and launch them by running the `apptainer exec` command with the image you have provided.
+Every time your script launches a process execution, Nextflow will run it in an Apptainer container created by using the specified image. In practice Nextflow will automatically wrap your processes and launch them by running the `apptainer exec` command with the image you have provided.

:::{note}
-A Apptainer image can contain any tool or piece of software you may need to carry out a process execution. Moreover, the container is run in such a way that the process result files are created in the host file system, thus it behaves in a completely transparent manner without requiring extra steps or affecting the flow in your pipeline.
+An Apptainer image can contain any tool or piece of software you may need to carry out a process execution. Moreover, the container is run in such a way that the process result files are created in the host file system, thus it behaves in a completely transparent manner without requiring extra steps or affecting the flow in your pipeline.
:::

If you want to avoid entering the Apptainer image as a command line parameter, you can define it in the Nextflow configuration file. For example you can add the following lines in the configuration file:
@@ -124,7 +124,7 @@ Nextflow caches Apptainer images in the `apptainer` directory, in the pipeline w

Nextflow uses the library directory to determine the location of Apptainer containers. The library directory can be defined using the `apptainer.libraryDir` configuration setting or the `NXF_APPTAINER_LIBRARYDIR` environment variable. The configuration file option overrides the environment variable if both are set.

-Nextflow first checks the library directory when searching for the image. If the image is not found it then checks the cache directory. The main difference between the library directory and the cache directory is that the first is assumed to be a read-only container repository, while the latter is expected to be writable path where container images can added for caching purposes.
+Nextflow first checks the library directory when searching for the image. If the image is not found, it then checks the cache directory. The main difference between the library directory and the cache directory is that the former is assumed to be a read-only container repository, while the latter is expected to be a writable path where container images can be added for caching purposes.

:::{warning}
When using a compute cluster, the Apptainer cache directory must reside in a shared filesystem accessible to all compute nodes.
@@ -573,7 +573,7 @@ In the above example replace `/path/to/singularity.img` with any Singularity ima

Read the {ref}`config-page` page to learn more about the configuration file and how to use it to configure your pipeline execution.

:::{note}
-Unlike Docker, Nextflow does not automatically mount host paths in the container when using Singularity. It expects that the paths are configure and mounted system wide by the Singularity runtime. If your Singularity installation allows user defined bind points, read the {ref}`Singularity configuration ` section to learn how to enable Nextflow auto mounts.
+Unlike Docker, Nextflow does not automatically mount host paths in the container when using Singularity. It expects that the paths are configured and mounted system wide by the Singularity runtime. If your Singularity installation allows user defined bind points, read the {ref}`Singularity configuration ` section to learn how to enable Nextflow auto mounts.
:::

:::{warning}
@@ -657,7 +657,7 @@ Nextflow caches Singularity images in the `singularity` directory, in the pipeli

Nextflow uses the library directory to determine the location of Singularity images. The library directory can be defined using the `singularity.libraryDir` configuration setting or the `NXF_SINGULARITY_LIBRARYDIR` environment variable. The configuration file option overrides the environment variable if both are set.

-Nextflow first checks the library directory when searching for the image. If the image is not found it then checks the cache directory. The main difference between the library directory and the cache directory is that the first is assumed to be a read-only container repository, while the latter is expected to be writable path where container images can added for caching purposes.
+Nextflow first checks the library directory when searching for the image. If the image is not found, it then checks the cache directory. The main difference between the library directory and the cache directory is that the former is assumed to be a read-only container repository, while the latter is expected to be a writable path where container images can be added for caching purposes.

:::{warning}
When using a compute cluster, the Singularity cache directory must reside in a shared filesystem accessible to all compute nodes.
diff --git a/docs/developer-env.md b/docs/developer-env.md
index 4d1804a364..839297d839 100644
--- a/docs/developer-env.md
+++ b/docs/developer-env.md
@@ -9,7 +9,7 @@ Setting up a Nextflow development environment is a prerequisite for creating, te

- {ref}`devenv-vscode`: A versatile code editor that enhances your Nextflow development with features like syntax highlighting and debugging.
- {ref}`devenv-extensions`: The VS Code marketplace offers a variety of extensions to enhance development. The {ref}`Nextflow extension ` is specifically designed to enhance Nextflow development with diagnostics, hover hints, code navigation, code completion, and more.
- {ref}`devenv-docker`: A containerization platform that ensures your Nextflow workflows run consistently across different environments by packaging dependencies into isolated containers.
-- {ref}`devenv-git`: A version control system that helps manage and track changes in your Nextflow projects, making collaboration, and code management more efficient.
+- {ref}`devenv-git`: A version control system that helps manage and track changes in your Nextflow projects, making collaboration and code management more efficient.

The sections below outline the steps for setting up these tools.

@@ -37,7 +37,7 @@ To install VS Code on Windows:

1. Visit the [VS Code](https://code.visualstudio.com/download) website.
1. Download VS Code for Windows.
-1. Double-click the installer executable (`.exe`) file and follow the set up steps.
+1. Double-click the installer executable (`.exe`) file and follow the setup steps.
```
@@ -243,7 +243,7 @@ Nextflow supports multiple container technologies (e.g., Singularity and Podman)

Git provides powerful version control that helps track code changes. Git operates locally, meaning you don't need an internet connection to track changes, but it can also be used with remote platforms like GitHub, GitLab, or Bitbucket for collaborative development.

-Nextflow seamlessly integrates with Git for source code management providers for managing pipelines as version-controlled Git repositories.
+Nextflow seamlessly integrates with Git source code management providers to manage pipelines as version-controlled Git repositories.

````{tabs}
diff --git a/docs/executor.md b/docs/executor.md
index 959aa220f7..88b91e9fe2 100644
--- a/docs/executor.md
+++ b/docs/executor.md
@@ -18,7 +18,7 @@ The pipeline processes must specify the Docker image to use by defining the `con

To enable this executor, set `process.executor = 'awsbatch'` in the `nextflow.config` file.

-The pipeline can be launched either in a local computer, or an EC2 instance. EC2 is suggested for heavy or long-running workloads. Additionally, an S3 bucket must be used as the pipeline work directory.
+The pipeline can be launched either on a local computer or an EC2 instance. EC2 is suggested for heavy or long-running workloads. Additionally, an S3 bucket must be used as the pipeline work directory.

Resource requests and other job characteristics can be controlled via the following process directives:

@@ -45,7 +45,7 @@ The pipeline processes must specify the Docker image to use by defining the `con

To enable this executor, set `process.executor = 'azurebatch'` in the `nextflow.config` file.

-The pipeline can be launched either in a local computer, or a cloud virtual machine. The cloud VM is suggested for heavy or long-running workloads. Additionally, an Azure Blob storage container must be used as the pipeline work directory.
+The pipeline can be launched either on a local computer or a cloud virtual machine. The cloud VM is suggested for heavy or long-running workloads. Additionally, an Azure Blob storage container must be used as the pipeline work directory.

Resource requests and other job characteristics can be controlled via the following process directives:

@@ -121,7 +121,7 @@ By default, Flux will send all output to the `.command.log` file. To send this o

[Google Cloud Batch](https://cloud.google.com/batch) is a managed computing service that allows the execution of containerized workloads in the Google Cloud Platform infrastructure.

-Nextflow provides built-in support for the Cloud Batch API, which allows the seamless deployment of a Nextflow pipeline in the cloud, offloading the process executions as pipelines.
+Nextflow provides built-in support for the Cloud Batch API, which allows the seamless deployment of Nextflow pipelines in the cloud, offloading the pipeline process executions.

The pipeline processes must specify the Docker image to use by defining the `container` directive, either in the pipeline script or the `nextflow.config` file. Additionally, the pipeline work directory must be located in a Google Storage bucket.

@@ -299,7 +299,7 @@ Resource requests and other job characteristics can be controlled via the follow

## NQSII

-The `nsqii` executor allows you to run your pipeline script using the [NQSII](https://www.rz.uni-kiel.de/en/our-portfolio/hiperf/nec-linux-cluster) resource manager.
+The `nqsii` executor allows you to run your pipeline script using the [NQSII](https://www.rz.uni-kiel.de/en/our-portfolio/hiperf/nec-linux-cluster) resource manager.

Nextflow manages each process as a separate job that is submitted to the cluster using the `qsub` command provided by the scheduler.
diff --git a/docs/git.md b/docs/git.md
index cddfcae4c7..c93ceac330 100644
--- a/docs/git.md
+++ b/docs/git.md
@@ -4,7 +4,7 @@

## Git configuration

-The file `$HOME/.nextflow/scm` allows you to centralise the security credentials required to access private project repositories on Bitbucket, GitHub and GitLab source code management (SCM) platforms or to manage the configuration properties of private server installations (of the same platforms).
+The file `$HOME/.nextflow/scm` allows you to centralize the security credentials required to access private project repositories on Bitbucket, GitHub and GitLab source code management (SCM) platforms or to manage the configuration properties of private server installations (of the same platforms).

The configuration properties for each Git provider are defined inside the `providers` section. Properties for the same provider are grouped with a common name and delimited with curly brackets. For example:

@@ -74,7 +74,7 @@ App passwords are substitute passwords for a user account which you can use for

BitBucket Server uses a different API from the [BitBucket](https://bitbucket.org/) Cloud service. Make sure to use the right configuration whether you are using the cloud service or a self-hosted installation.
:::

-To access your local BitBucket Server create an entry in the [SCM configuration file](#git-configuration) specifying as shown below:
+To access your local BitBucket Server create an entry in the [SCM configuration file](#git-configuration) as shown below:

```groovy
providers {
@@ -146,7 +146,7 @@ See [Gitea documentation](https://docs.gitea.io/en-us/api-usage/) about how to e

### Azure Repos

-Nextflow has builtin support for [Azure Repos](https://azure.microsoft.com/en-us/services/devops/repos/), a Git source code management service hosted in the Azure cloud. To access your Azure Repos with Nextflow provide the repository credentials using the configuration snippet shown below:
+Nextflow has built-in support for [Azure Repos](https://azure.microsoft.com/en-us/services/devops/repos/), a Git source code management service hosted in the Azure cloud. To access your Azure Repos with Nextflow provide the repository credentials using the configuration snippet shown below:

```groovy
providers {
@@ -194,7 +194,7 @@ Then the pipeline can be accessed with Nextflow as shown below:

nextflow run https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/my-repo
```

-In the above example replace `my-repo` with your own repository. Note also that AWS CodeCommit has different URLs depending the region in which you are working.
+In the above example replace `my-repo` with your own repository. Note also that AWS CodeCommit has different URLs depending on the region in which you are working.

:::{note}
The support for protocols other than HTTPS is not available at this time.
From 46ae432bc216e3427d839bac37076274140d9742 Mon Sep 17 00:00:00 2001
From: Christopher Hakkaart
Date: Tue, 12 Aug 2025 13:36:07 +1200
Subject: [PATCH 3/4] Fix spelling and grammar

Signed-off-by: Christopher Hakkaart
---
 docs/git.md           | 12 ++++++------
 docs/google.md        |  6 +++---
 docs/notifications.md | 22 +++++++++++-----------
 3 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/docs/git.md b/docs/git.md
index c93ceac330..196738745d 100644
--- a/docs/git.md
+++ b/docs/git.md
@@ -49,7 +49,7 @@ The following configuration properties are supported for each provider configura

## Git providers

-### BitBucket
+### Bitbucket

Create a `bitbucket` entry in the [SCM configuration file](#git-configuration) specifying your user name and app password, as shown below:

@@ -66,15 +66,15 @@ providers {
App passwords are substitute passwords for a user account which you can use for scripts and integrating tools in order to avoid putting your real password into configuration files. Learn more at [this link](https://support.atlassian.com/bitbucket-cloud/docs/app-passwords/).
:::

-### BitBucket Server
+### Bitbucket Server

-[BitBucket Server](https://confluence.atlassian.com/bitbucketserver) is a self-hosted Git repository and management platform.
+[Bitbucket Server](https://confluence.atlassian.com/bitbucketserver) is a self-hosted Git repository and management platform.

:::{note}
-BitBucket Server uses a different API from the [BitBucket](https://bitbucket.org/) Cloud service. Make sure to use the right configuration whether you are using the cloud service or a self-hosted installation.
+Bitbucket Server uses a different API from the [Bitbucket](https://bitbucket.org/) Cloud service. Make sure to use the right configuration whether you are using the cloud service or a self-hosted installation.
:::

-To access your local BitBucket Server create an entry in the [SCM configuration file](#git-configuration) as shown below:
+To access your local Bitbucket Server create an entry in the [SCM configuration file](#git-configuration) as shown below:

```groovy
providers {
@@ -202,7 +202,7 @@

## Private server configuration

-Nextflow is able to access repositories hosted on private BitBucket, GitHub, GitLab and Gitea server installations.
+Nextflow is able to access repositories hosted on private Bitbucket, GitHub, GitLab and Gitea server installations.

In order to use a private SCM installation you will need to set the server name and access credentials in your [SCM configuration file](#git-configuration) .
diff --git a/docs/google.md b/docs/google.md
index 082af214ed..d2188a4e8f 100644
--- a/docs/google.md
+++ b/docs/google.md
@@ -191,7 +191,7 @@ disk 375.GB, type: 'local-ssd'

### Pipeline execution

-The pipeline can be launched either in a local computer or a cloud instance. Pipeline input data can be stored either locally or in a Google Storage bucket.
+The pipeline can be launched either on a local computer or a cloud instance. Pipeline input data can be stored either locally or in a Google Storage bucket.

The pipeline execution must specify a Google Storage bucket where the workflow's intermediate results are stored using the `-work-dir` command line options. For example:

@@ -204,7 +204,7 @@ Any input data **not** stored in a Google Storage bucket will automatically be t
:::

:::{warning}
-The Google Storage path needs to contain at least sub-directory. Don't use only the bucket name e.g. `gs://my-bucket`.
+The Google Storage path needs to contain at least one sub-directory. Do not use only the bucket name e.g. `gs://my-bucket`.
:::

### Spot Instances

@@ -279,7 +279,7 @@ nextflow run