snakemake-workflows/rna-seq-star-deseq2
RNA-seq workflow using STAR and DESeq2
Overview
Latest release: v3.0.1, Last update: 2025-09-05
Linting: passed, Formatting: passed
Topics: snakemake sciworkflows reproducibility gene-expression-analysis deseq2
Wrappers: bio/bwa/index bio/fastp bio/multiqc bio/reference/ensembl-annotation bio/reference/ensembl-sequence bio/samtools/faidx bio/sra-tools/fasterq-dump bio/star/align bio/star/index
Deployment
Step 1: Install Snakemake and Snakedeploy
Snakemake and Snakedeploy are best installed via the Mamba package manager (a drop-in replacement for Conda). If you have neither Conda nor Mamba, it is recommended to install Miniforge. More details regarding Mamba can be found here.
When using Mamba, run
mamba create -c conda-forge -c bioconda --name snakemake snakemake snakedeploy
to install both Snakemake and Snakedeploy in an isolated environment. For all following commands, ensure that this environment is activated via
conda activate snakemake
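To verify that both tools are available in the activated environment (an optional sanity check), you can print their version and help output:
snakemake --version
snakedeploy --help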
Step 2: Deploy workflow
With Snakemake and Snakedeploy installed, the workflow can be deployed as follows. First, create an appropriate project working directory on your system and enter it:
mkdir -p path/to/project-workdir
cd path/to/project-workdir
In all following steps, we will assume that you are inside of that directory. Then run
snakedeploy deploy-workflow https://github.com/snakemake-workflows/rna-seq-star-deseq2 . --tag v3.0.1
Snakedeploy will create two folders, workflow and config. The former contains the deployment of the chosen workflow as a Snakemake module; the latter contains configuration files which will be modified in the next step in order to configure the workflow to your needs.
Step 3: Configure workflow
To configure the workflow, adapt config/config.yaml to your needs following the instructions below.
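For orientation, the top-level layout typically looks like the sketch below. The key names samples: and units: are assumptions based on the defaults described in the Configuration section further down, so always follow the comments in the shipped file itself.
# config/config.yaml (illustrative sketch)
samples: config/samples.tsv   # sample sheet, see "sample sheet" below
units: config/units.tsv       # unit sheet, see "unit sheet" below
diffexp:
  # differential expression setup, see "DESeq2 differential expression analysis configuration" below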
Step 4: Run workflow
The deployment method is controlled using the --software-deployment-method (short --sdm) argument. To run the workflow with automatic deployment of all required software via conda/mamba, use
snakemake --cores all --sdm conda
Snakemake will automatically detect the main Snakefile in the workflow subfolder and execute the workflow module that has been defined by the deployment in step 2.
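If you first want to inspect which jobs Snakemake plans to run, without executing anything, you can add Snakemake's --dry-run flag, for example:
snakemake --cores all --sdm conda --dry-run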
For further options such as cluster and cloud execution, see the docs.
Step 5: Generate report
After finalizing your data analysis, you can automatically generate an interactive visual HTML report for inspection of results together with parameters and code inside of the browser using
snakemake --report report.zip
Configuration
The following section is imported from the workflow’s config/README.md.
General configuration
To configure this workflow, modify config/config.yaml according to your needs, following the explanations provided in the file.
DESeq2 differential expression analysis configuration
To successfully run the differential expression analysis, you will need to tell DESeq2 which sample annotations to use (annotations are columns in the samples.tsv file described below). This is done in the config.yaml file with the entries under diffexp:. The comments for those entries should provide all the necessary information and links. But if in doubt, please also consult the DESeq2 manual.
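For illustration, a diffexp: entry could look roughly like the following sketch. The variable name condition, its base_level, and the empty batch_effects value are hypothetical, and the exact sub-keys may differ between workflow versions, so the comments in the shipped config/config.yaml remain authoritative.
diffexp:
  variables_of_interest:
    condition:               # hypothetical samples.tsv column to test
      base_level: untreated  # level that the other levels are compared against
  batch_effects: ""          # optional samples.tsv column(s) to correct for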
Sample and unit setup
The sample and unit setup is specified via tab-separated tabular files (.tsv). Missing values can be specified by empty columns or by writing NA.
sample sheet
The default sample sheet is config/samples.tsv (as configured in config/config.yaml).
Each sample refers to an actual physical sample, and replicates (both biological and technical) may be specified as separate samples.
For each sample, you will always have to specify a sample_name. In addition, all variables_of_interest and batch_effects specified in config/config.yaml under the diffexp: entry will have to have corresponding columns in config/samples.tsv.
Finally, the sample sheet can contain any number of additional columns.
So if in doubt about whether you might at some point need some metadata you already have at hand, just put it into the sample sheet already—your future self will thank you.
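As a hypothetical illustration (columns are tab-separated; condition and batch stand in for whatever variables_of_interest and batch_effects you configured), a minimal config/samples.tsv could look like this:
sample_name	condition	batch
A1	treated	batch1
A2	treated	batch2
B1	untreated	batch1
B2	untreated	batch2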
unit sheet
The default unit sheet is config/units.tsv (as configured in config/config.yaml).
For each sample, add one or more sequencing units (for example if you have several runs or lanes per sample).
.fastq file source
For each unit, you will have to define a source for your .fastq files. This can be done via the columns fq1, fq2 and sra, with either of:
- A single .fastq file for single-end reads (fq1 column only; fq2 and sra columns present, but empty). The entry can be any path on your system, but we suggest something like a raw/ data directory within your analysis directory.
- Two .fastq files for paired-end reads (columns fq1 and fq2; column sra present, but empty). As for the fq1 column, the fq2 column can also point to anywhere on your system.
- A sequence read archive (SRA) accession number (sra column only; fq1 and fq2 columns present, but empty). The workflow will automatically download the corresponding .fastq data (currently assumed to be paired-end). The accession numbers usually start with SRR or ERR and you can find accession numbers for studies of interest with the SRA Run Selector. If both local files and an SRA accession are specified for the same unit, the local files will be used.
strandedness of library preparation protocol
To get the correct geneCounts from STAR output, you can provide information on the strandedness of the library preparation protocol used for a unit. STAR can produce counts for unstranded (none - this is the default), forward oriented (yes) and reverse oriented (reverse) protocols. Enter the respective value into a strandedness column in the units.tsv file.
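Putting the unit sheet columns described so far together, a config/units.tsv sketch might look like this (columns are tab-separated; the unit_name column name, the file paths and the SRA accession are placeholders for illustration, so check the shipped config/units.tsv for the exact layout):
sample_name	unit_name	fq1	fq2	sra	strandedness
A1	lane1	raw/A1_L1_R1.fastq.gz	raw/A1_L1_R2.fastq.gz		reverse
A1	lane2	raw/A1_L2_R1.fastq.gz	raw/A1_L2_R2.fastq.gz		reverse
B1	run1			SRR0000001	none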
adapter trimming and read filtering
Finally, you can provide settings for the adapter trimming with fastp (see the fastp documentation) via the units.tsv columns fastp_adapters and fastp_extra. However, if you leave those two columns empty (no whitespace!), fastp will auto-detect adapters and the workflow will set sensible defaults for trimming of RNA-seq data.
If you use this automatic inference, make sure to double-check the Detected read[12] adapter: entries in the resulting fastp HTML report. This is part of the final snakemake report of the workflow, or can be found in the sample-specific folders under results/trimmed/, once a sample has been processed this far. If the auto-detection didn’t work at all (empty Detected read[12] adapter: entries), or the Occurrences in the Adapters section are lower than you would expect, please ensure that you find out which adapters were used and configure the adapter trimming manually:
In the column fastp_adapters, you can specify known adapter sequences to be trimmed off by fastp, including the command-line argument for the trimming. For example, specify the following string in this column:
--adapter_sequence=AGATCGGAAGAGCACACGTCTGAACTCCAGTCA --adapter_sequence_r2=AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT
If you don’t know the adapters used, leave this empty (an empty string, containing no whitespace), and fastp will auto-detect the adapters that need to be trimmed. If you want to make the auto-detection explicit for paired-end samples, you can also specify --detect_adapter_for_pe.
In the column fastp_extra, you can specify further fastp command-line settings. If you leave this empty (an empty string, containing no whitespace), the workflow will set its default:
- --trim_poly_x --poly_x_min_len 7: This poly-X trimming removes polyA tails if they are 7 nucleotides or longer. It is run after adapter trimming.
- --trim_poly_g --poly_g_min_len 7: This poly-G trimming removes erroneous G basecalls at the tails of reads, if they are 7 nucleotides or longer. These Gs are artifacts in Illumina data from machines with a one channel or two channel color chemistry. We currently set this by default, because the auto-detection for the respective machines is lacking the latest machine types. When the above-linked pull request is updated and merged, we can remove this and rely on the auto-detection.
If you want to specify additional command line options, we recommend always including those parameters in your units.tsv as well. Here’s the full concatenation for copy-pasting:
--trim_poly_x --poly_x_min_len 7 --trim_poly_g --poly_g_min_len 7
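To make the effect of these two columns concrete, here is a rough sketch of the kind of fastp call they translate into for a paired-end unit. The file names are placeholders and the actual invocation is assembled by the workflow's fastp wrapper, but the flags themselves are standard fastp options: the adapter_sequence arguments correspond to the fastp_adapters column, and the poly-X/poly-G flags to the fastp_extra defaults listed above.
fastp \
  --in1 A1_R1.fastq.gz --in2 A1_R2.fastq.gz \
  --out1 A1_R1.trimmed.fastq.gz --out2 A1_R2.trimmed.fastq.gz \
  --adapter_sequence=AGATCGGAAGAGCACACGTCTGAACTCCAGTCA \
  --adapter_sequence_r2=AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT \
  --trim_poly_x --poly_x_min_len 7 --trim_poly_g --poly_g_min_len 7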
Lexogen 3’ QuantSeq adapter trimming
For this data, adapter trimming should automatically work as expected with the use of fastp. The above-listed defaults are equivalent to an adaptation of the Lexogen read preprocessing recommendations for 3’ FWD QuantSeq data with cutadapt. The only difference is that we don’t do any length filtering with these defaults. If you want to exactly mirror the Lexogen recommendations, please use this for the fastp_extra column in your units.tsv:
--length_required 20 --trim_poly_x --poly_x_min_len 7 --trim_poly_g --poly_g_min_len 7
The fastp equivalents, including minimal deviations from the recommendations, are motivated as follows:
- -m: In cutadapt, this is the short version of --minimum-length. The fastp equivalent is --length_required.
- -O: Here, fastp doesn’t have an equivalent option, so we currently have to live with the suboptimal default of 4. This is greater than the min_overlap=3 used here, but smaller than the value of 7, a threshold that we have found avoids removing randomly matching sequences when combined with the typical Illumina max_error_rate=0.005.
- -a "polyA=A{20}": This can be replaced by fastp’s dedicated option for --trim_poly_x tail removal (which is run after adapter trimming).
- -a "QUALITY=G{20}": This can be replaced by fastp’s dedicated option for the removal of artifactual trailing Gs in Illumina data from machines with a one channel or two channel color chemistry: --trim_poly_g. This is automatically activated for earlier Illumina machine models with this chemistry, but we recommend activating it manually in the fastp_extra column of your config/units.tsv file for now, as there are newer models that are not auto-detected yet. Also, we recommend setting --poly_g_min_len 7, to avoid trimming spurious matches of G-only stretches at the end of reads.
- -n: With the dedicated fastp options applied in the right order, this option is not needed any more.
- -a "r1adapter=A{18}AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC;min_overlap=3;max_error_rate=0.100000": We remove A{18}, as this is handled by --trim_poly_x. fastp uses the slightly higher min_overlap equivalent of 4, which is currently hard-coded (and not exposed as a command-line argument). Because of this, we cannot set the max_error_rate to the Illumina error rate of about 0.005.
- -g "r1adapter=AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC;min_overlap=20": This is not needed any more, as fastp searches the read sequence for adapter sequences from the start of the read (see the fastp adapter search code).
- --discard-trimmed: We omit this, as adapter sequence removal early in the read will leave short remaining read sequences that are subsequently filtered by --length_required.
Linting and formatting
Linting results
All tests passed!
Formatting results
All tests passed!