MPUSP/snakemake-simple-mapping

A minimalistic and simple Snakemake workflow for mapping reads to reference genomes.

Overview

Latest release: v1.4.1, Last update: 2025-09-25

Linting: passed, Formatting: passed

Topics: bowtie2 bwa-mem2 genomics mapping next-generation-sequencing snakemake snakemake-workflow star-alignment variant-calling

Wrappers: bio/bcftools/call bio/bcftools/filter bio/bcftools/mpileup bio/bcftools/stats bio/bcftools/view bio/bowtie2/align bio/bowtie2/build bio/bwa-mem2/index bio/bwa-mem2/mem bio/deeptools/bamcoverage bio/fastp bio/fastqc bio/freebayes bio/gffread bio/minimap2/aligner bio/minimap2/index bio/multiqc bio/rseqc/bam_stat bio/rseqc/infer_experiment bio/samtools/index bio/samtools/sort bio/snpeff/annotate bio/star/align bio/star/index bio/vep/annotate bio/vep/plugins

Deployment

Step 1: Install Snakemake and Snakedeploy

Snakemake and Snakedeploy are best installed via the Conda package manager; it is recommended to install Conda via Miniforge. Run

conda create -c conda-forge -c bioconda -c nodefaults --name snakemake snakemake snakedeploy

to install both Snakemake and Snakedeploy in an isolated environment. For all following commands ensure that this environment is activated via

conda activate snakemake

For other installation methods, refer to the Snakemake and Snakedeploy documentation.
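A quick check that both tools are available in the activated environment:

snakemake --version
conda list snakedeploy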

Step 2: Deploy workflow

With Snakemake and Snakedeploy installed, the workflow can be deployed as follows. First, create an appropriate project working directory on your system and enter it:

mkdir -p path/to/project-workdir
cd path/to/project-workdir

In all following steps, we will assume that you are inside that directory. Then run

snakedeploy deploy-workflow https://github.com/MPUSP/snakemake-simple-mapping . --tag v1.4.1

Snakedeploy will create two folders, workflow and config. The former contains the deployment of the chosen workflow as a Snakemake module, the latter contains configuration files which will be modified in the next step in order to configure the workflow to your needs.

Step 3: Configure workflow

To configure the workflow, adapt config/config.yml to your needs following the instructions below.
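As a minimal sketch of this step (the key names below are taken from the parameter table further down in this document; check them against your deployed config/config.yml):

nano config/config.yml   # or any other editor
# entries that typically need adjustment:
#   samplesheet  - path to your sample sheet in TSV format
#   get_genome   - 'ncbi' retrieval with a RefSeq assembly ID, or 'manual' with fasta/gff paths
#   mapping      - choice of mapping tool and its extra arguments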

Step 4: Run workflow

The deployment method is controlled using the --software-deployment-method (short --sdm) argument.

To run the workflow with automatic deployment of all required software via conda/mamba, use

snakemake --cores all --sdm conda
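Before committing to a full run, Snakemake's dry-run mode is a useful check that the configuration and sample sheet are picked up as intended:

snakemake --cores all --sdm conda --dry-run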

Snakemake will automatically detect the main Snakefile in the workflow subfolder and execute the workflow module that has been defined by the deployment in step 2.

For further options such as cluster and cloud execution, see the docs.

Step 5: Generate report

After finalizing your data analysis, you can automatically generate an interactive HTML report for inspecting results together with parameters and code in the browser using

snakemake --report report.zip
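The resulting archive can be unpacked and viewed locally (a sketch; the main page inside the archive is assumed to be report.html):

unzip report.zip -d report
# then open report/report.html in a web browser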

Configuration

The following section is imported from the workflow’s config/README.md.

Workflow overview

This is a minimalistic and simple best-practice workflow for mapping reads to reference genomes.

It will attempt to map reads to the reference using one of the included mappers, report read and experiment statistics, create coverage profiles, quantify variants (such as SNPs) using two different tools, and predict the effect of these variants. All of this is performed with minimal input and without lookups to external databases (e.g. for variant effects), which makes the workflow ideal for bacteria and other low-complexity non-model organisms.

The workflow is built using snakemake and consists of the following steps:

  1. Download genome reference from NCBI (ncbi tools), or use manual input (fasta, gff format)

  2. Check quality of input read data (FastQC)

  3. Trim adapters and apply quality filtering (fastp)

  4. Map reads to reference genome using:

    1. [Bowtie2](http://bowtie-bio.sourceforge.net/bowtie2/manual.shtml) or

    2. [BWA-MEM2](https://github.com/bwa-mem2/bwa-mem2) or

    3. [STAR](https://github.com/alexdobin/STAR) or

    4. [minimap2](https://github.com/lh3/minimap2)

  5. Determine experiment type, get mapping stats (rseqc)

  6. Generate bigwig or bedgraph coverage profiles (deeptools)

  7. Quantify variations and SNPs (bcftools, freebayes)

  8. Predict effect of variants such as premature stop codons (VEP or SnpEff)

  9. Create a consensus of variants and generate a visual report (R Markdown)

  10. Collect statistics from tool output (MultiQC)
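To see how these steps connect for your own samples, Snakemake can render the workflow graph from a deployed and configured working directory (a sketch, assuming Graphviz's dot is installed in addition to the environment from step 1):

snakemake --dag | dot -Tsvg > dag.svg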

Running the workflow

Input data

The workflow requires sequencing data in *.fastq.gz format, and a reference genome to map to. The sample sheet listing read input files needs to have the following layout:

| sample  | description | read1               | read2               |
| ------- | ----------- | ------------------- | ------------------- |
| sample1 | strain XY   | sample1_R1.fastq.gz | sample1_R2.fastq.gz |
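A minimal sketch of creating such a sample sheet on the command line (the path config/samples.tsv is only an example and has to match the samplesheet entry in config/config.yml):

printf 'sample\tdescription\tread1\tread2\n' > config/samples.tsv
printf 'sample1\tstrain XY\tsample1_R1.fastq.gz\tsample1_R2.fastq.gz\n' >> config/samples.tsv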

Parameters

This table lists all parameters that can be used to run the workflow.

| parameter | type | details | default |
| --- | --- | --- | --- |
| samplesheet | string | path to the sample sheet in TSV format | |
| get_genome | | | |
| database | string | database to use for genome retrieval, 'ncbi' or 'manual' | ncbi |
| assembly | string | RefSeq ID to use for genome retrieval | GCF_000307535.1 |
| fasta | string | path to a custom FASTA file (optional) | |
| gff | string | path to a custom GFF file (optional) | |
| gff_source_type | array | mapping of GFF source types to feature types | |
| fastp | | | |
| extra | string | additional arguments to fastp | |
| mapping | | | |
| tool | string | mapping tool to use, one of 'bowtie2', 'bwa_mem2' | bwa_mem2 |
| bowtie2 | | | |
| index | string | additional arguments to bowtie2 build | |
| extra | string | additional arguments to bowtie2 align | |
| bwa_mem2 | | | |
| extra | string | additional arguments to bwa-mem2 | |
| sort | string | sorting tool to use | samtools |
| sort_order | string | sorting order to use | coordinate |
| sort_extra | string | additional arguments to the sorting tool | |
| samtools_sort | | | |
| extra | string | additional arguments to Samtools sort | -m 4G |
| index | object | Samtools index options | |
| extra | string | additional arguments to Samtools index | |
| star | | | |
| index | string | additional arguments to STAR index | |
| extra | string | additional arguments to STAR align | |
| minimap2 | | | |
| index | string | additional arguments to minimap2 index | |
| extra | string | additional arguments to minimap2 align | -ax map-ont |
| sorting | string | sorting order to use | coordinate |
| sort_extra | string | additional arguments to the sorting tool | |
| mapping_stats | | | |
| gffread | | | |
| extra | string | additional arguments to GFFread | |
| rseqc_infer_experiment | | | |
| extra | string | additional arguments to RSeQC infer_experiment | |
| rseqc_bam_stat | | | |
| extra | string | additional arguments to RSeQC bam_stat | |
| deeptools_coverage | | | |
| genome_size | integer | genome size in base pairs | 1000 |
| extra | string | additional arguments to DeepTools bamCoverage | |
| variant_calling | | | |
| bcftools_pileup | | | |
| uncompressed | boolean | whether to output uncompressed BCF files | False |
| extra | string | additional arguments to BCFtools pileup | |
| bcftools_call | | | |
| uncompressed | boolean | whether to output uncompressed VCF files | False |
| caller | string | use '-c' for consensus or '-m' for multiallelic | -c |
| extra | string | additional arguments to BCFtools call | |
| bcftools_view | | | |
| extra | string | additional arguments to BCFtools view | |
| bcftools_filter | | | |
| filter | string | expression by which to filter the BCF/VCF result | -e 'ALT="."' |
| extra | string | additional arguments to BCFtools filter | |
| freebayes | | | |
| extra | string | additional arguments to FreeBayes | |
| variant_annotation | | | |
| tool | string | annotation tool to use, one of 'vep', 'snpeff' | vep |
| vep | | | |
| convert_gff | boolean | whether to convert NCBI GFF to Ensembl-style GFF | True |
| plugins | array | VEP plugins to use | [] |
| extra | string | additional arguments to VEP | see config.yml |
| snpeff | | | |
| extra | string | additional arguments to SnpEff | see config.yml |
| qc | | | |
| fastqc | | | |
| extra | string | additional arguments to FastQC | |
| multiqc | | | |
| extra | string | additional arguments to MultiQC | |
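Individual values do not necessarily have to be edited in config/config.yml itself; Snakemake's standard --configfile option can layer an override file on top of the deployed defaults (my_config.yml is a hypothetical file containing only the keys you want to change):

snakemake --cores all --sdm conda --configfile my_config.yml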

Linting and formatting

Linting results

All tests passed!

Formatting results

All tests passed!
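The same checks can be reproduced locally on a deployed copy of the workflow (a sketch; snakefmt is a separate package, assumed to be installed, e.g. from Bioconda):

# lint the workflow definition
snakemake --lint
# check formatting of the Snakemake files
snakefmt --check workflow/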