snakemake-workflows/rna-seq-star-deseq2
RNA-seq workflow using STAR and DESeq2
Overview
Topics: snakemake sciworkflows reproducibility gene-expression-analysis deseq2
Latest release: v2.1.2, Last update: 2024-09-13
Linting: passed, Formatting: passed
Deployment
Step 1: Install Snakemake and Snakedeploy
Snakemake and Snakedeploy are best installed via the Mamba package manager (a drop-in replacement for Conda). If you have neither Conda nor Mamba, we recommend installing Miniforge. More details on Mamba can be found in its documentation.
When using Mamba, run
mamba create -c conda-forge -c bioconda --name snakemake snakemake snakedeploy
to install both Snakemake and Snakedeploy in an isolated environment. For all following commands ensure that this environment is activated via
conda activate snakemake
Step 2: Deploy workflow
With Snakemake and Snakedeploy installed, the workflow can be deployed as follows. First, create an appropriate project working directory on your system and enter it:
mkdir -p path/to/project-workdir
cd path/to/project-workdir
In all following steps, we will assume that you are inside of that directory. Then run
snakedeploy deploy-workflow https://github.com/snakemake-workflows/rna-seq-star-deseq2 . --tag v2.1.2
Snakedeploy will create two folders, workflow and config. The former contains the deployment of the chosen workflow as a Snakemake module; the latter contains configuration files, which will be modified in the next step to configure the workflow to your needs.
Step 3: Configure workflow
To configure the workflow, adapt config/config.yaml to your needs, following the instructions below.
Step 4: Run workflow
The deployment method is controlled using the --software-deployment-method (short --sdm) argument. To run the workflow with automatic deployment of all required software via conda/mamba, use
snakemake --cores all --sdm conda
Snakemake will automatically detect the main Snakefile in the workflow subfolder and execute the workflow module that was defined by the deployment in step 2.
For further options such as cluster and cloud execution, see the docs.
Step 5: Generate report
After finalizing your data analysis, you can automatically generate an interactive visual HTML report for inspection of results together with parameters and code inside of the browser using
snakemake --report report.zip
Configuration
The following section is imported from the workflow's config/README.md.
To configure this workflow, modify config/config.yaml according to your needs, following the explanations provided in the file.
To successfully run the differential expression analysis, you will need to tell DESeq2 which sample annotations to use (annotations are columns in the samples.tsv file described below). This is done in the config.yaml file via the entries under diffexp:. The comments for the entries should give all the necessary information and links. If in doubt, please also consult the DESeq2 manual.
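For illustration, a diffexp: entry might look like the following sketch. The condition column and its levels are hypothetical, and the exact key layout may differ between workflow versions; the authoritative structure is documented in the comments of config/config.yaml itself.

```yaml
diffexp:
  # samples.tsv columns whose levels you want to compare
  variables_of_interest:
    condition:
      # level that the other levels of this column are compared against
      base_level: untreated
  # further samples.tsv columns to control for (leave empty if none)
  batch_effects: ""
```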
The sample and unit setup is specified via tab-separated tabular files (.tsv). Missing values can be specified by empty columns or by writing NA.
The default sample sheet is config/samples.tsv (as configured in config/config.yaml). Each sample refers to an actual physical sample; replicates (both biological and technical) may be specified as separate samples. For each sample, you will always have to specify a sample_name. In addition, all variables_of_interest and batch_effects specified in config/config.yaml under the diffexp: entry will have to have corresponding columns in config/samples.tsv. Finally, the sample sheet can contain any number of additional columns. So if in doubt about whether you might at some point need some metadata you already have at hand, just put it into the sample sheet now; your future self will thank you.
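For illustration, a minimal sample sheet for a two-condition experiment might look like this. The condition and batch column names are hypothetical and would have to match the variables_of_interest and batch_effects in your diffexp: configuration.

```
sample_name	condition	batch
A	treated	b1
B	untreated	b1
C	treated	b2
D	untreated	b2
```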
The default unit sheet is config/units.tsv (as configured in config/config.yaml). For each sample, add one or more sequencing units (for example, if you have several runs or lanes per sample). For each unit, you will have to define a source for your .fastq files. This can be done via the columns fq1, fq2 and sra, with one of:
- A single .fastq file for single-end reads (fq1 column only; fq2 and sra columns present, but empty). The entry can be any path on your system, but we suggest something like a raw/ data directory within your analysis directory.
- Two .fastq files for paired-end reads (columns fq1 and fq2; column sra present, but empty). As for the fq1 column, the fq2 column can also point to anywhere on your system.
- A sequence read archive (SRA) accession number (sra column only; fq1 and fq2 columns present, but empty). The workflow will automatically download the corresponding .fastq data (currently assumed to be paired-end). The accession numbers usually start with SRR or ERR and you can find accession numbers for studies of interest with the SRA Run Selector. If both local files and an SRA accession are specified for the same unit, the local files will be used.
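The rules above can be sketched as a small helper that classifies each unit by its read source. This is a Python sketch, not workflow code; the units.tsv rows shown, including the unit_name column, are illustrative assumptions.

```python
import csv
import io

def unit_source(row):
    """Classify a units.tsv row by its read source.

    Local files take precedence over an SRA accession when both are
    given, mirroring the behaviour described above.
    """
    fq1 = (row.get("fq1") or "").strip()
    fq2 = (row.get("fq2") or "").strip()
    sra = (row.get("sra") or "").strip()
    if fq1 and fq2:
        return "paired-end"
    if fq1:
        return "single-end"
    if sra:
        return "sra"
    raise ValueError(f"unit {row.get('sample_name')} has no read source")

# Hypothetical units.tsv content for illustration:
units_tsv = (
    "sample_name\tunit_name\tfq1\tfq2\tsra\n"
    "A\tlane1\traw/A.1.fastq\traw/A.2.fastq\t\n"
    "B\tlane1\traw/B.fastq\t\t\n"
    "C\tlane1\t\t\tSRR1234567\n"
)

for row in csv.DictReader(io.StringIO(units_tsv), delimiter="\t"):
    print(row["sample_name"], unit_source(row))
# A paired-end
# B single-end
# C sra
```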
If you set trimming: activate: in config/config.yaml to True, you will have to provide at least one cutadapt adapter argument for each unit in the adapters column of the units.tsv file. You will need to find out the adapters used in the sequencing protocol that generated a unit: from your sequencing provider, or, for published data, from the study's metadata (or its authors). Then enter the adapter sequences into the adapters column of that unit, preceded by the correct cutadapt adapter argument.
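For example, for a paired-end Illumina TruSeq library, an adapters entry might read as follows. The sequences shown are the widely used TruSeq adapter prefixes and -a/-A are cutadapt's 3' adapter arguments for read 1 and read 2; verify both against your own protocol.

```
-a AGATCGGAAGAGCACACGTCTGAACTCCAGTCA -A AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT
```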
To get the correct geneCounts from the STAR output, you can provide information on the strandedness of the library preparation protocol used for a unit. STAR can produce counts for unstranded (none, the default), forward oriented (yes) and reverse oriented (reverse) protocols. Enter the respective value into a strandedness column in the units.tsv file.
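As an illustration of what the strandedness value selects: STAR's ReadsPerGene.out.tab conventionally holds one count column per orientation. The sketch below assumes that layout (gene ID, then unstranded, forward and reverse columns) and is not taken from the workflow code; the example counts are invented.

```python
# Column of STAR's ReadsPerGene.out.tab selected by each strandedness
# value (column 0 is the gene ID). This mapping is the common convention
# for STAR's --quantMode GeneCounts output, stated here as an assumption.
STRAND_COLUMN = {"none": 1, "yes": 2, "reverse": 3}

def gene_counts(star_lines, strandedness="none"):
    """Extract gene -> count pairs from ReadsPerGene.out.tab lines."""
    col = STRAND_COLUMN[strandedness]
    counts = {}
    for line in star_lines:
        fields = line.rstrip("\n").split("\t")
        if fields[0].startswith("N_"):  # skip summary rows like N_unmapped
            continue
        counts[fields[0]] = int(fields[col])
    return counts

# Invented two-gene excerpt:
example = [
    "N_unmapped\t100\t100\t100",
    "GeneA\t50\t30\t20",
    "GeneB\t10\t4\t6",
]
print(gene_counts(example, "reverse"))  # {'GeneA': 20, 'GeneB': 6}
```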