snakemake-workflows/dna-seq-mtb
A flavor of https://github.com/snakemake-workflows/dna-seq-varlociraptor preconfigured for molecular tumor boards
Overview
Latest release: v1.10.0, Last update: 2025-03-06
Linting: passed, Formatting: passed
Deployment
Step 1: Install Snakemake and Snakedeploy
Snakemake and Snakedeploy are best installed via the Mamba package manager (a drop-in replacement for Conda). If you have neither Conda nor Mamba, it is recommended to install Miniforge. More details regarding Mamba can be found in its documentation.
When using Mamba, run
mamba create -c conda-forge -c bioconda --name snakemake snakemake snakedeploy
to install both Snakemake and Snakedeploy in an isolated environment. For all following commands ensure that this environment is activated via
conda activate snakemake
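As an optional sanity check (not part of the original instructions), you can confirm that both tools are available inside the activated environment:

snakemake --version
snakedeploy --help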
Step 2: Deploy workflow
With Snakemake and Snakedeploy installed, the workflow can be deployed as follows. First, create an appropriate project working directory on your system and enter it:
mkdir -p path/to/project-workdir
cd path/to/project-workdir
In all following steps, we will assume that you are inside that directory. Then run
snakedeploy deploy-workflow https://github.com/snakemake-workflows/dna-seq-mtb . --tag v1.10.0
Snakedeploy will create two folders, workflow and config. The former contains the deployment of the chosen workflow as a Snakemake module, the latter contains configuration files which will be modified in the next step in order to configure the workflow to your needs.
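The resulting layout typically looks as follows (a sketch; exact file names may differ slightly between releases):

path/to/project-workdir/
├── workflow/
│   └── Snakefile        (declares the deployed workflow as a module)
└── config/
    ├── config.yaml
    ├── samples.tsv
    └── units.tsv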
Step 3: Configure workflow
To configure the workflow, adapt config/config.yaml to your needs, following the instructions below.
Step 4: Run workflow
The deployment method is controlled using the --software-deployment-method (short --sdm) argument.
To run the workflow with automatic deployment of all required software via conda/mamba, use
snakemake --cores all --sdm conda
Snakemake will automatically detect the main Snakefile in the workflow subfolder and execute the workflow module that has been defined by the deployment in step 2.
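Before launching the full analysis, a dry run (using Snakemake's standard --dry-run flag) shows the jobs that would be executed without running them:

snakemake --cores all --sdm conda --dry-run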
For further options such as cluster and cloud execution, see the docs.
Step 5: Generate report
After finalizing your data analysis, you can automatically generate an interactive visual HTML report for inspection of results, together with parameters and code, in the browser, using
snakemake --report report.zip
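The resulting archive is self-contained; a sketch for inspecting it locally (the name of the contained HTML file may vary by Snakemake version):

unzip report.zip -d report

Then open the extracted HTML file in your browser.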
Configuration
The following section is imported from the workflow’s config/README.md.
To configure this workflow, modify config/config.yaml according to your needs, following the explanations provided in the file.
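For orientation, the sample and unit sheet locations are typically declared at the top of the config, along the following lines (a hypothetical excerpt; the shipped file documents the authoritative keys):

samples: config/samples.tsv
units: config/units.tsv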
Add samples to config/samples.tsv. For each sample, the columns sample_name, alias, platform, and group have to be defined.

- Samples within the same group will be called jointly.
- Aliases represent the name of the sample within its group (they can be the same as the sample name, or something simpler, e.g. tumor or normal).
- The platform column needs to contain the used sequencing platform (one of 'CAPILLARY', 'LS454', 'ILLUMINA', 'SOLID', 'HELICOS', 'IONTORRENT', 'ONT', 'PACBIO').
- The ffpe column specifies whether a sample is an FFPE substrate (1) or not (0). FFPE-treated normal samples are not supported.

Missing values can be specified by empty columns or by writing NA. Lines can be commented out with #.
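A minimal sketch of config/samples.tsv (tab-separated; the sample, alias, and group values are hypothetical):

sample_name      alias   platform  group     ffpe
patientA_tumor   tumor   ILLUMINA  patientA  1
patientA_normal  normal  ILLUMINA  patientA  0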
For each sample, add one or more sequencing units (runs, lanes or replicates) to the unit sheet config/units.tsv.

- Each unit has a unit_name, which can be e.g. a running number, or an actual run, lane or replicate id.
- Each unit has a sample_name, which associates it with the biological sample it comes from.
- For each unit, define either one (column fq1) or two (columns fq1, fq2) FASTQ files (these can point to anywhere in your system).
- Alternatively, you can define an SRA (sequence read archive) accession (starting with e.g. ERR or SRR) by using a column sra. In the latter case, the pipeline will automatically download the corresponding paired-end reads from SRA. If both local files and an SRA accession are available, the local files will be preferred.
- Define adapters in the adapters column, by putting cutadapt arguments in quotation marks (e.g. "-a ACGCGATCG -A GCTAGCGTACT").

Missing values can be specified by empty columns or by writing NA. Lines can be commented out with #.
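A minimal sketch of config/units.tsv (tab-separated; the file paths and the SRA accession are hypothetical):

sample_name      unit_name  fq1                      fq2                      sra         adapters
patientA_tumor   lane1      reads/tumor_R1.fastq.gz  reads/tumor_R2.fastq.gz  NA          "-a ACGCGATCG -A GCTAGCGTACT"
patientA_normal  lane1      NA                       NA                       SRR0000000  NA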
For panel data, the pipeline allows trimming of amplicon primers on both ends of a fragment, but also on a single end only.
In the case of single-end primers, these are supposed to be located at the left end of a read.
When primer trimming is enabled, primers have to be defined either directly in the config.yaml or in a separate TSV file.
Defining primers directly in the config file is preferred when all samples come from the same primer set.
In the case of different panels, primers have to be set panel-wise in a separate TSV file (the path to that file can be set in the config under primers/trimming/tsv).
For each panel, the following columns need to be set: panel, fa1 and fa2 (optional).
Additionally, for each sample the corresponding panel must be defined in samples.tsv (column panel).
For single-primer trimming, only the first entry in the config (or in the TSV file, respectively) needs to be defined.
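A hedged sketch of a panel-wise primer sheet (tab-separated; panel names and FASTA paths are hypothetical), together with the corresponding config entry under primers/trimming/tsv:

panel   fa1                    fa2
panelX  primers/panelX_fwd.fa  primers/panelX_rev.fa
panelY  primers/panelY_fwd.fa

primers:
  trimming:
    tsv: config/primers.tsv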