BinAC/Software/Nextflow
Description

Nextflow is a scientific workflow system predominantly used for bioinformatics data analysis. This documentation also covers nf-core, a community-driven initiative to curate a collection of analysis pipelines built using Nextflow.

The documentation in the bwHPC Wiki serves as a 'getting started' guide for installing and using Nextflow with nf-core on BinAC. The nf-core documentation provides detailed information for each pipeline.

This documentation does not cover how to write your own pipelines. This information is available in the Nextflow documentation.

Installation

We recommend installing Nextflow via Miniconda. Since Nextflow is often used with nf-core pipelines, we also recommend installing the nf-core tools.

The following commands create a new Conda environment that provides Nextflow and the nf-core tools. They also set a shared Singularity cache directory in your ~/.bashrc, where all Singularity containers will be stored.

conda create --name nf-core python=3.12 nf-core nextflow

echo "export NXF_SINGULARITY_CACHEDIR=/beegfs/work/container/apptainer_cache/$USER" >> ~/.bashrc
echo "export SINGULARITY_CACHEDIR=/beegfs/work/container/apptainer_cache/$USER" >> ~/.bashrc
source ~/.bashrc

conda activate nf-core
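
After activating the environment, you can quickly check that both tools are available (the reported version numbers will of course vary):

# verify that Nextflow and the nf-core tools are installed
nextflow -version
nf-core --version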

Usage

Install an nf-core pipeline

You could start and run pipelines right away and Nextflow would pull all containers automatically. However, we have encountered issues when a pipeline starts more than one job that pulls the same image simultaneously. Therefore, we recommend downloading the pipeline and its containers first using the nf-core tools.

In this guide, we will use the rnaseq pipeline in revision 3.14.0. To make the code examples more readable and broadly applicable, we will first specify some environment variables. If you use another pipeline and/or another revision, simply change the pipeline and revision environment variables. The current working directory should be one of your workspaces under /beegfs/work.

cd /beegfs/work/<path to your workspace>

export pipeline=rnaseq
export revision=3.14.0

export pipeline_dir=${PWD}/nf-core-${pipeline}/$(echo $revision | tr . _)
export nxf_work_dir=${PWD}/work
export nxf_output_dir=${PWD}/output

echo "Pipeline will be downloaded to: ${pipeline_dir}"

The following command downloads the pipeline into the directory specified above and pulls any Singularity containers that are not yet in the cache. Pulling the images can take some time, so grab a coffee.

nf-core download -o ${pipeline_dir} -x none -d -u amend --container-system singularity -r ${revision} ${pipeline}
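
Once the download has finished, you can check that the pipeline files and the cached Singularity images are in place (the file names will differ between pipelines and revisions):

# list the downloaded pipeline files
ls ${pipeline_dir}

# list the images in the shared Singularity cache
ls ${NXF_SINGULARITY_CACHEDIR}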

If there are errors during this step, contact BinAC support and provide the commands you used along with the error message.

Test nf-core pipeline

The first thing you should do after downloading the pipeline is to perform a test run. nf-core pipelines come with a test profile that should work right out of the box. Additionally, there is a BinAC profile for nf-core, which includes settings for BinAC's job scheduler and queue configurations.

Nextflow pipelines do not run in the background by default, so it is best to use a terminal multiplexer (like screen or tmux) when running a long pipeline. Terminal multiplexers allow you to have multiple windows within a single terminal. The advantage for Nextflow pipelines is that you can detach from a session and reattach to it later (even after logging in again via SSH) to check on the pipeline's progress. This way the pipeline keeps running even if you disconnect from the cluster.

Start a screen session:

screen

Since this is a new terminal, you will need to activate the Conda environment again. Note that environment variables like pipeline are already set because we defined them using the export keyword, which makes them available to child processes such as the screen session.

conda activate nf-core

Now you can run the pipeline test. You should always specify two directories when running the pipeline so that you know exactly where the results are stored. One directory is work-dir, where Nextflow stores intermediate results. The other directory is outdir, where Nextflow stores the final pipeline results. Note that options with a single dash (such as -profile and -work-dir) are core Nextflow options, while options with a double dash (such as --outdir) are parameters of the pipeline itself.

nextflow run ${pipeline_dir} \
  -profile binac,test \
  -work-dir ${nxf_work_dir} \
  --outdir ${nxf_output_dir}

As mentioned, the pipeline runs in a screen session. You can detach from the screen session and the pipeline will continue to run. The keyboard shortcut for detaching is CTRL+a followed by d. That means you press the CTRL and a keys at the same time, release them, and then press d. You should now be detached from the screen session and back in your login terminal. While in your login terminal (or another window in your screen session), you can observe that Nextflow has submitted a job to the cluster for each pipeline process execution. Your output will differ, but it should show some pipeline jobs whose job names begin with nf-NFCORE.

(base) [tu_iioba01@login03 ~]$ qstat -u $USER

mgmt02: 
                                                                                  Req'd       Req'd       Elap
Job ID                  Username    Queue    Jobname          SessID  NDS   TSK   Memory      Time    S   Time
----------------------- ----------- -------- ---------------- ------ ----- ------ --------- --------- - ---------
11626226                tu_iioba01  short    nf-NFCORE_RNASE   19779   --     --        6gb  04:00:00 C       -- 
11626227                tu_iioba01  short    nf-NFCORE_RNASE   19788     1      2       6gb  06:00:00 C       -- 
11626228                tu_iioba01  short    nf-NFCORE_RNASE   19805     1      2       6gb  06:00:00 C       -- 
11626229                tu_iioba01  short    nf-NFCORE_RNASE   19819   --     --        6gb  04:00:00 C       -- 
11626230                tu_iioba01  short    nf-NFCORE_RNASE   19839   --     --        6gb  04:00:00 C       -- 
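
If you prefer not to reattach to the screen session, you can also follow Nextflow's own log file, which is written to the directory from which the pipeline was launched:

# follow the Nextflow log (CTRL+c stops tail, not the pipeline)
tail -f .nextflow.log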

Now let us return to the Nextflow process in the screen session where the pipeline is running. You can list your screen sessions and their IDs with screen -ls:

(nf-core) [tu_iioba01@login03 nextflow_tests]$ screen -ls
There is a screen on:
        <screen session ID>.pts-2.login03     (Detached)
1 Socket in /var/run/screen/S-tu_iioba01.

If there is only one screen session, you can reattach with:

screen -r

Otherwise, you will need to specify the screen session ID:

screen -r <screen session ID>

You can observe the pipeline's execution progress. In the end, it should look like this:

-[nf-core/rnaseq] Pipeline completed successfully -
Completed at: 13-Aug-2024 16:35:24
Duration    : 17m 37s
CPU hours   : 0.6
Succeeded   : 194

The test run was successful. Now you can run the pipeline with your own data.

Run pipeline with your own data

Usually, you specify the input files for nf-core pipelines in a samplesheet. An example samplesheet is located at assets/samplesheet.csv in the pipeline directory; you can use it as a template for your own datasets.

$ cat ${pipeline_dir}/assets/samplesheet.csv
sample,fastq_1,fastq_2,strandedness
control_REP1,/path/to/fastq/files/AEG588A1_S1_L002_R1_001.fastq.gz,/path/to/fastq/files/AEG588A1_S1_L002_R2_001.fastq.gz,forward
control_REP2,/path/to/fastq/files/AEG588A2_S2_L002_R1_001.fastq.gz,/path/to/fastq/files/AEG588A2_S2_L002_R2_001.fastq.gz,forward
control_REP3,/path/to/fastq/files/AEG588A3_S3_L002_R1_001.fastq.gz,/path/to/fastq/files/AEG588A3_S3_L002_R2_001.fastq.gz,forward
treatment_REP1,/path/to/fastq/files/AEG588A4_S4_L003_R1_001.fastq.gz,,forward
treatment_REP2,/path/to/fastq/files/AEG588A5_S5_L003_R1_001.fastq.gz,,forward
treatment_REP3,/path/to/fastq/files/AEG588A6_S6_L003_R1_001.fastq.gz,,forward
treatment_REP3,/path/to/fastq/files/AEG588A6_S6_L004_R1_001.fastq.gz,,forward

You can run the pipeline with your own samplesheet using:

nextflow run ${pipeline_dir} \
  -profile binac \
  -work-dir ${nxf_work_dir} \
  --input <path to your samplesheet.csv> \
  --outdir ${nxf_output_dir}

Please note that we cannot cover every possible parameter for each nf-core pipeline here. For detailed information, check the pipeline documentation before using a pipeline productively for the first time.
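
For pipelines with many parameters, it can be more convenient to collect them in a parameter file instead of passing everything on the command line. The following is a minimal sketch; my_params.yaml is just an example file name, and the parameters shown are the ones used for rnaseq above, so check your pipeline's documentation for the parameters it actually accepts:

# write the pipeline parameters to a YAML file
cat > my_params.yaml << 'EOF'
input: /path/to/your/samplesheet.csv
outdir: /path/to/your/output
EOF

# pass the parameter file to Nextflow with -params-file
nextflow run ${pipeline_dir} \
  -profile binac \
  -work-dir ${nxf_work_dir} \
  -params-file my_params.yaml

The nf-core tools can also help here: nf-core launch can build such a parameter file interactively by asking you about each pipeline parameter.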

As usual, you can contact BinAC support if you have any problems or questions.