BinAC/Software/Nextflow
Description
Nextflow is a scientific workflow system predominantly used for bioinformatics data analysis. This documentation also covers nf-core, a community-driven initiative to curate a collection of analysis pipelines built using Nextflow.
The documentation in the bwHPC Wiki serves as a 'getting started' guide for installing and using Nextflow with nf-core on BinAC. The nf-core documentation provides detailed information for each pipeline.
This documentation does not cover how to write your own pipelines. This information is available in the Nextflow documentation.
Installation
We recommend installing Nextflow via Conda.
Install and Update Nextflow
The following commands create a new Conda environment and install Nextflow in it.
conda create --name nextflow nextflow
conda activate nextflow
If you want to update an already installed Nextflow:
conda activate nextflow
conda update nextflow
Specific Nextflow version
You may want to install a specific Nextflow version, for example if your pipeline was written some time ago against an older Nextflow release. In this example we will install Nextflow version 20.07:
conda create --name nextflow_20.07 nextflow=20.07
conda activate nextflow_20.07
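You can verify that the environment indeed provides the requested version:

# Check the Nextflow version in the newly created environment
nextflow -version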
nf-core
If you're using nf-core pipelines, we recommend installing the nf-core tools together with Nextflow:
conda create --name nf-core python=3.12 nf-core nextflow
conda activate nf-core
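A quick check that the nf-core tools were installed alongside Nextflow:

# Show the installed nf-core tools version
nf-core --version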
Configuration
There are some BinAC-specific configuration options that you may want to use.
BinAC Nextflow profile
Nextflow configuration files can contain the definition of one or more profiles. These profiles tell Nextflow how to execute a pipeline's processes on specific systems such as HPC clusters. The nf-core project maintains a collection of such profiles. Among many others, there is a BinAC profile that will run your pipeline through BinAC's batch system.
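If you are curious what the BinAC profile actually configures (scheduler, queues, resource limits), you can inspect it in the nf-core/configs repository. The path below is an assumption based on the repository's usual layout; check https://github.com/nf-core/configs if it has moved:

# Display the BinAC profile from the nf-core/configs repository (path assumed)
curl -s https://raw.githubusercontent.com/nf-core/configs/master/conf/binac.config | less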
nf-core pipelines
If you are using an nf-core pipeline you can specify the profile via:
nextflow run <pipeline> -profile binac,<other profiles>, [...]
Other Nextflow pipelines
If you are writing your own pipeline or using a pipeline that is not based on nf-core, you will have to include the nf-core profiles by hand in the pipeline configuration:
1. Tell Nextflow where to find the nf-core profiles
Add this to your nextflow.config:
params {
    [...]
    custom_config_version = 'master'
    custom_config_base    = "https://raw.githubusercontent.com/nf-core/configs/${params.custom_config_version}"
}

[...]

// Load nf-core custom profiles from different Institutions
includeConfig !System.getenv('NXF_OFFLINE') && params.custom_config_base ? "${params.custom_config_base}/nfcore_custom.config" : "/dev/null"

// Load nf-core/demo custom profiles from different institutions.
// nf-core: Optionally, you can add a pipeline-specific nf-core config at https://github.com/nf-core/configs
includeConfig !System.getenv('NXF_OFFLINE') && params.custom_config_base ? "${params.custom_config_base}/pipeline/demo.config" : "/dev/null"
Now your pipeline should find the binac profile and you can run:
nextflow run <pipeline> -profile binac,<other profiles>, [...]
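Before launching anything, you can check that the binac profile is actually picked up by printing the resolved configuration; nextflow config only parses the configuration and does not run the pipeline:

# Print the configuration Nextflow resolves when the binac profile is selected
nextflow config <pipeline> -profile binac | less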
Singularity/Apptainer
If you run your pipeline with Singularity/Apptainer containers, you can tell Nextflow where to store the container images. We provide a dedicated location in the work file system for that:
echo "export NXF_SINGULARITY_CACHEDIR=/beegfs/work/container/apptainer_cache/$USER" >> ~/.bashrc echo "export SINGULARITY_CACHEDIR=/beegfs/work/container/apptainer_cache/$USER" >> ~/.bashrc source ~/.bashrc
Usage
Install an nf-core pipeline
You can start and run pipelines right away and Nextflow will pull all containers automatically. However, we have encountered issues when a pipeline starts more than one job that pulls the same image simultaneously. Therefore, we recommend downloading the pipeline and its containers first using the nf-core tools.
In this guide, we will use the rnaseq pipeline in revision 3.14.0. To make the code examples more readable and broadly applicable, we will first specify some environment variables. If you use another pipeline and/or another revision, simply change the pipeline and revision environment variables. The current working directory should be one of your workspaces under /beegfs/work.
cd /beegfs/work/<path to your workspace>
export pipeline=rnaseq
export revision=3.14.0
export pipeline_dir=${PWD}/nf-core-${pipeline}/$(echo $revision | tr . _)
export nxf_work_dir=${PWD}/work
export nxf_output_dir=${PWD}/output
echo "Pipeline will be downloaded to: ${pipeline_dir}"
The following command will download the pipeline into your current working directory and also pull any Singularity containers that aren't yet in the cache. This can take some time, so grab a coffee.
nf-core download -o ${pipeline_dir} -x none -d -u amend --container-system singularity -r ${revision} ${pipeline}
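As a quick sanity check, you can list the download directory afterwards; the exact layout depends on the nf-core tools version, but it should contain the pipeline code and, with -d, the nf-core configs:

# Inspect what nf-core download created
ls ${pipeline_dir}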
If there are errors during this step, contact BinAC support and provide the commands you used along with the error message.
Test nf-core pipeline
The first thing you should do after downloading the pipeline is to perform a test run. nf-core pipelines come with a test profile that should work right out of the box. Additionally, there is a BinAC profile for nf-core, which includes settings for BinAC's job scheduler and queue configurations.
Nextflow pipelines do not run in the background by default, so it is best to use a terminal multiplexer (like screen or tmux) when running a long pipeline. Terminal multiplexers allow you to have multiple windows within a single terminal. The advantage of using them for running Nextflow pipelines is that you can detach from the terminal and reattach later (even through a new SSH connection) to check on the pipeline's progress. This ensures that the pipeline continues to run even if you disconnect from the cluster.
Start a screen session:
screen
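Optionally, you can give the session a descriptive name so it is easier to identify later; the name used here is just an example:

# Start a named screen session instead (example name)
screen -S nextflow_rnaseq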
Since this is a new terminal, you will need to load the Conda environment again.
Note that environment variables like pipeline are already set because we defined them using the export keyword, which makes them available to child processes.
conda activate nf-core
Now you can run the pipeline test.
You should always specify two directories when running the pipeline to ensure you know exactly where the results are stored.
One directory is work-dir, where Nextflow stores intermediate results. The other directory is outdir, where Nextflow stores the final pipeline results.
nextflow run ${pipeline_dir} \
    -profile binac,test \
    -work-dir ${nxf_work_dir} \
    --outdir ${nxf_output_dir}
As mentioned, the pipeline runs in a screen session. You can detach from the screen session and the pipeline will continue to run. The keyboard shortcut for detaching is CTRL+a followed by d: press the CTRL and a keys at the same time, release them, and then press d. You should now be detached from the screen session and back in your login terminal.
While in your login terminal (or another window in your screen session), you can observe that Nextflow has submitted a job to the cluster for each pipeline process execution.
Your output will differ, but it should show some pipeline jobs whose job names begin with nf-NFCORE.
(base) [tu_iioba01@login03 ~]$ qstat -u $USER

mgmt02:
                                                                            Req'd      Req'd       Elap
Job ID                  Username    Queue    Jobname          SessID  NDS   TSK  Memory     Time    S   Time
----------------------- ----------- -------- ---------------- ------ ----- ------ --------- --------- - ---------
11626226                tu_iioba01  short    nf-NFCORE_RNASE   19779    --     --       6gb  04:00:00 C        --
11626227                tu_iioba01  short    nf-NFCORE_RNASE   19788     1      2       6gb  06:00:00 C        --
11626228                tu_iioba01  short    nf-NFCORE_RNASE   19805     1      2       6gb  06:00:00 C        --
11626229                tu_iioba01  short    nf-NFCORE_RNASE   19819    --     --       6gb  04:00:00 C        --
11626230                tu_iioba01  short    nf-NFCORE_RNASE   19839    --     --       6gb  04:00:00 C        --
Now we return to the Nextflow process in the screen session where the pipeline is running. You can list your screen sessions and their IDs with screen -ls:
(nf-core) [tu_iioba01@login03 nextflow_tests]$ screen -ls
There is a screen on:
        <screen session ID>.pts-2.login03   (Detached)
1 Socket in /var/run/screen/S-tu_iioba01.
If there is only one screen session, you can reattach with:
screen -r
Otherwise, you will need to specify the screen session number:
screen -r <screen session ID>
You can observe the pipeline's execution progress. In the end, it should look like this:
-[nf-core/rnaseq] Pipeline completed successfully-
Completed at: 13-Aug-2024 16:35:24
Duration    : 17m 37s
CPU hours   : 0.6
Succeeded   : 194
The test run was successful. Now you can run the pipeline with your own data.
Run pipeline with your own data
Usually, you specify your input files for nf-core pipelines in a samplesheet.
A typical samplesheet for a pipeline is located at assets/samplesheet.csv in the pipeline directory.
You can use this as a template for specifying your own datasets.
$ cat ${pipeline_dir}/assets/samplesheet.csv
sample,fastq_1,fastq_2,strandedness
control_REP1,/path/to/fastq/files/AEG588A1_S1_L002_R1_001.fastq.gz,/path/to/fastq/files/AEG588A1_S1_L002_R2_001.fastq.gz,forward
control_REP2,/path/to/fastq/files/AEG588A2_S2_L002_R1_001.fastq.gz,/path/to/fastq/files/AEG588A2_S2_L002_R2_001.fastq.gz,forward
control_REP3,/path/to/fastq/files/AEG588A3_S3_L002_R1_001.fastq.gz,/path/to/fastq/files/AEG588A3_S3_L002_R2_001.fastq.gz,forward
treatment_REP1,/path/to/fastq/files/AEG588A4_S4_L003_R1_001.fastq.gz,,forward
treatment_REP2,/path/to/fastq/files/AEG588A5_S5_L003_R1_001.fastq.gz,,forward
treatment_REP3,/path/to/fastq/files/AEG588A6_S6_L003_R1_001.fastq.gz,,forward
treatment_REP3,/path/to/fastq/files/AEG588A6_S6_L004_R1_001.fastq.gz,,forward
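A simple way to start is to copy the template next to your data and edit the paths; the target filename is just an example:

# Copy the template samplesheet and adapt it to your own FASTQ files (example filename)
cp ${pipeline_dir}/assets/samplesheet.csv my_samplesheet.csv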
You can run the pipeline with your own samplesheet using:
nextflow run ${pipeline_dir} \
    -profile binac \
    -work-dir ${nxf_work_dir} \
    --input <path to your samplesheet.csv> \
    --outdir ${nxf_output_dir}
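If a run fails partway or is interrupted, Nextflow can usually continue from cached intermediate results as long as you keep the same work directory; just add the -resume flag to the same command:

# Re-run the pipeline and reuse cached results from the same work directory
nextflow run ${pipeline_dir} \
    -profile binac \
    -work-dir ${nxf_work_dir} \
    --input <path to your samplesheet.csv> \
    --outdir ${nxf_output_dir} \
    -resume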
Please note that we cannot cover every possible parameter for each nf-core pipeline here. For detailed information, check the pipeline documentation before using a pipeline productively for the first time.
As usual, you can contact BinAC support if you have any problems or questions.