aryazand/fastq-process-align

snakemake based pipeline for processing and mapping fastq files

Overview

Latest release: None, Last update: 2026-04-08

Share link: https://snakemake.github.io/snakemake-workflow-catalog?wf=aryazand/fastq-process-align

Quality control: linting failed, formatting failed

Wrappers: bio/bowtie2/align bio/bowtie2/build bio/bwa-mem2/index bio/bwa-mem2/mem bio/deeptools/bamcoverage bio/fastp bio/fastqc bio/gffread bio/minimap2/aligner bio/minimap2/index bio/multiqc bio/rseqc/bam_stat bio/rseqc/infer_experiment bio/samtools/faidx bio/samtools/index bio/samtools/sort bio/samtools/view bio/star/align bio/star/index bio/trim_galore/pe

Workflow Rule Graph

This visualization of the workflow's rule graph was automatically generated using Snakevision.

Rule Graph light

Deployment

Step 1: Install Snakemake and Snakedeploy

Snakemake and Snakedeploy are best installed via the Conda package manager. It is recommended to install conda via Miniforge. Run

conda create -c conda-forge -c bioconda -c nodefaults --name snakemake snakemake snakedeploy

to install both Snakemake and Snakedeploy in an isolated environment. For all following commands ensure that this environment is activated via

conda activate snakemake

For other installation methods, refer to the Snakemake and Snakedeploy documentation.

Step 2: Deploy workflow

With Snakemake and Snakedeploy installed, the workflow can be deployed as follows. First, create an appropriate project working directory on your system and enter it:

mkdir -p path/to/project-workdir
cd path/to/project-workdir

In all following steps, we will assume that you are inside of that directory. Then run

snakedeploy deploy-workflow https://github.com/aryazand/fastq-process-align . --tag None

Snakedeploy will create two folders, workflow and config. The former contains the deployment of the chosen workflow as a Snakemake module, the latter contains configuration files which will be modified in the next step in order to configure the workflow to your needs.

Step 3: Configure workflow

To configure the workflow, adapt config/config.yml to your needs following the instructions below.
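As an illustration, a config/config.yml following the parameters documented in the Configuration section below might look like the sketch here. The key nesting and the sample sheet file name are assumptions inferred from the parameter tables; the deployed config folder contains the authoritative template.

```yaml
# Illustrative sketch only -- key nesting inferred from the parameter tables.
samplesheet: "samples.tsv"        # hypothetical path to your sample sheet (tsv)

get_genome:
  database: "ncbi"                # 'ncbi' or 'manual'
  assembly: "GCF_000307535.1"     # RefSeq ID, used when database is 'ncbi'
  fasta: null                     # custom FASTA file, only for 'manual'
  gff: null                       # custom GFF file, only for 'manual'

mapping:
  tool: "bwa_mem2"                # e.g. 'bowtie2' or 'bwa_mem2'
  bwa_mem2:
    extra: ""                     # additional bwa-mem2 arguments
    sort: "samtools"
    sort_order: "coordinate"
    sort_extra: ""
  samtools_sort:
    extra: "-m 4G"
```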

Step 4: Run workflow

The deployment method is controlled using the --software-deployment-method (short --sdm) argument.

To run the workflow using a combination of conda and apptainer/singularity for software deployment, use

snakemake --cores all --sdm conda apptainer

To run the workflow with automatic deployment of all required software via conda/mamba, use

snakemake --cores all --sdm conda

Snakemake will automatically detect the main Snakefile in the workflow subfolder and execute the workflow module that has been defined by the deployment in step 2.

For further options such as cluster and cloud execution, see the docs.

Step 5: Generate report

After finalizing your data analysis, you can automatically generate an interactive visual HTML report for inspection of results together with parameters and code inside of the browser using

snakemake --report report.zip

Configuration

The following section is imported from the workflow’s config/README.md.

Workflow overview

This is a minimalistic, best-practice workflow for mapping reads to reference genomes. It maps reads to the reference using one of the included mappers and reports read and experiment statistics.

The workflow is built using snakemake and consists of the following steps:

  1. Download genome reference from NCBI (ncbi tools), or use manual input (fasta, gff format)

  2. Check quality of input read data (FastQC)

  3. Trim adapters and apply quality filtering (fastp)

  4. Map reads to reference genome using:

    1. [Bowtie2](http://bowtie-bio.sourceforge.net/bowtie2/manual.shtml),

    2. [BWA-MEM2](https://github.com/bwa-mem2/bwa-mem2),

    3. [STAR](https://github.com/alexdobin/STAR), or

    4. [minimap2](https://github.com/lh3/minimap2)

  5. Determine experiment type, get mapping stats (rseqc)

  6. Generate bigwig or bedgraph coverage profiles (deeptools)

  7. Collect statistics from tool output (MultiQC)
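For one paired-end sample mapped with the default bwa-mem2 tool, the steps above can be sketched as the following manual commands. This is an illustration only (file names are hypothetical, and the workflow itself wires these tools together via Snakemake rules and wrappers rather than running them directly):

```
fastqc sample1_R1.fastq.gz sample1_R2.fastq.gz           # step 2: read QC
fastp -i sample1_R1.fastq.gz -I sample1_R2.fastq.gz \
      -o trimmed_R1.fastq.gz -O trimmed_R2.fastq.gz      # step 3: trim and filter
bwa-mem2 index genome.fa                                 # step 4: build index
bwa-mem2 mem genome.fa trimmed_R1.fastq.gz trimmed_R2.fastq.gz \
  | samtools sort -o sample1.sorted.bam -                # step 4: map and sort
samtools index sample1.sorted.bam
bam_stat.py -i sample1.sorted.bam                        # step 5: mapping stats
bamCoverage -b sample1.sorted.bam -o sample1.bw          # step 6: coverage profile
multiqc .                                                # step 7: collect statistics
```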

Running the workflow

Input data

The workflow requires sequencing data in *.fastq.gz format, and a reference genome to map to. The sample sheet listing read input files needs to have the following layout:

| sample  | description | read1               | read2               |
|---------|-------------|---------------------|---------------------|
| sample1 | strain XY   | sample1_R1.fastq.gz | sample1_R2.fastq.gz |
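The sample sheet can be read with any TSV parser. A minimal Python sketch (the column names follow the layout above; the inline file content is a hypothetical example):

```python
import csv
import io

# Hypothetical sample sheet content matching the layout above (tab-separated).
samplesheet_tsv = (
    "sample\tdescription\tread1\tread2\n"
    "sample1\tstrain XY\tsample1_R1.fastq.gz\tsample1_R2.fastq.gz\n"
)

# Parse rows into dictionaries keyed by the header columns.
rows = list(csv.DictReader(io.StringIO(samplesheet_tsv), delimiter="\t"))
for row in rows:
    print(row["sample"], row["read1"], row["read2"])
```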

Parameters

This table lists all parameters that can be used to run the workflow.

| parameter | type | details | default |
|---|---|---|---|
| samplesheet | string | path to the sample sheet in tsv format | |
| get_genome | | | |
| . database | string | database to use for genome retrieval, 'ncbi' or 'manual' | ncbi |
| . assembly | string | Refseq ID to use for genome retrieval | GCF_000307535.1 |
| . fasta | string | path to a custom FASTA file (optional) | |
| . gff | string | path to a custom GFF file (optional) | |
| . gff_source_type | array | mapping of GFF source types to feature types | |
| fastp | | | |
| . extra | string | additional arguments to Fastp | |
| mapping | | | |
| . tool | string | mapping tool to use, one of 'bowtie2', 'bwa_mem2' | bwa_mem2 |
| . bowtie2 | | | |
| . . index | string | additional arguments to bowtie build | |
| . . extra | string | additional arguments to bowtie align | |
| . bwa_mem2 | | | |
| . . extra | string | additional arguments to bwa-mem2 | |
| . . sort | string | sorting tool to use | samtools |
| . . sort_order | string | sorting order to use | coordinate |
| . . sort_extra | string | additional arguments to the sorting tool | |
| . samtools_sort | | | |
| . . extra | string | additional arguments to Samtools sort | -m 4G |
| . index | object | Samtools index options | |
| . . extra | string | additional arguments to Samtools index | |
| . star | | | |
| . . index | string | additional arguments to STAR index | |
| . . extra | string | additional arguments to STAR align | |
| . minimap2 | | | |
| . . index | string | additional arguments to minimap2 index | |
| . . extra | string | additional arguments to minimap2 align | -ax map-ont |
| . . sorting | string | sorting order to use | coordinate |
| . . sort_extra | string | additional arguments to the sorting tool | |
| mapping_stats | | | |
| . gffread | | | |
| . . extra | string | additional arguments to GFFread | |
| . rseqc_infer_experiment | | | |
| . . extra | string | additional arguments to RSeQC infer_experiment | |
| . rseqc_bam_stat | | | |
| . . extra | string | additional arguments to RSeQC bam_stat | |
| . deeptools_coverage | | | |
| . . genome_size | integer | genome size in base pairs | 1000 |
| . . extra | string | additional arguments to DeepTools bamCoverage | |
| qc | | | |
| . fastqc | | | |
| . . extra | string | additional arguments to FastQC | |
| . multiqc | | | |
| . . extra | string | additional arguments to MultiQC | |

Workflow parameters

The following table is automatically parsed from the workflow’s config.schema.y(a)ml file.

| Parameter | Type | Description | Required | Default |
|---|---|---|---|---|
| samplesheet | string | path to the sample sheet in tsv format | yes | |
| get_genome | | | yes | |
| . database | string | database to use for genome retrieval, 'ncbi' or 'manual' | yes | |
| . assembly | string | assembly version to use for genome retrieval, e.g. 'GCF_000307535.1' | yes | |
| . fasta | ['string', 'null'] | path to a custom FASTA file (optional) | | |
| . gff | ['string', 'null'] | path to a custom GFF file (optional) | | |
| . gff_source_type | array | mapping of GFF source types to feature types | yes | |
| processing | | | yes | |
| . tool | string | which tool is being used to process fastq files | | |
| . fastp | | | | |
| . . extra | string | additional arguments to pass to Fastp | | |
| . trim_galore | | | | |
| . . extra | string | additional arguments to pass to trim_galore | | |
| . umi_tools_extract | | | | |
| . . enabled | boolean | whether to extract UMIs with umi_tools extract | | |
| . . extra | string | additional arguments to pass to umi_tools extract | | |
| mapping | | | yes | |
| . tool | string | mapping tool to use, one of 'bowtie2', 'bwa_mem2', 'star' | | |
| . bowtie2 | | | | |
| . . index | string | additional arguments to bowtie build | | |
| . . extra | string | additional arguments to bowtie align | | |
| . bwa_mem2 | | | | |
| . . extra | string | additional arguments to bwa-mem2 | | |
| . . sort | string | sorting tool to use, e.g. 'samtools' | | |
| . . sort_order | string | sorting order to use | | |
| . . sort_extra | string | additional arguments to pass to the sorting tool | | |
| . star | | | | |
| . . index | string | additional arguments to STAR index | | |
| . . extra | string | additional arguments to STAR align | | |
| . minimap2 | | | | |
| . . index | string | additional arguments to minimap2 index | | |
| . . extra | string | additional arguments to minimap2 align | | |
| . . sorting | string | sorting order to use | | |
| . . sort_extra | string | additional arguments to pass to the sorting tool | | |
| . samtools_sort | | | | |
| . . extra | string | additional arguments to pass to Samtools sort | | |
| . samtools_index | | | | |
| . . extra | string | additional arguments to pass to Samtools index | | |
| . umi_tools_dedup | | | | |
| . . enabled | boolean | whether to dedup with umi_tools dedup | | |
| . . extra | string | additional arguments to pass to umi_tools dedup | | |
| mapping_stats | | | yes | |
| . gffread | | | yes | |
| . . extra | string | additional arguments to pass to GFFread | | |
| . rseqc_infer_experiment | | | yes | |
| . . extra | string | additional arguments to pass to RSeQC infer_experiment.py | | |
| . rseqc_bam_stat | | | yes | |
| . . extra | string | additional arguments to pass to RSeQC bam_stat.py | | |
| . deeptools_coverage | | | yes | |
| . . genome_size | integer | genome size in base pairs | | |
| . . extra | string | additional arguments to pass to DeepTools bamCoverage | | |
| qc | | | yes | |
| . multiqc | | | yes | |
| . . extra | string | additional arguments to pass to MultiQC | | |
| . fastqc | | | yes | |
| . . extra | string | additional arguments to pass to FastQC | | |

Linting and formatting

Linting results
Lints for rule add_overhang_for_circular_chromosomes (line 1, /tmp/tmphjjs0c87/workflow/rules/overhang.smk):
    * No log directive defined:
      Without a log directive, all output will be printed to the terminal. In
      distributed environments, this means that errors are harder to discover.
      In local environments, output of concurrent jobs will be mixed and become
      unreadable.
      Also see:
      https://snakemake.readthedocs.io/en/stable/snakefiles/rules.html#log-files

Lints for rule remove_overhang (line 32, /tmp/tmphjjs0c87/workflow/rules/process_bam.smk):
    * Specify a conda environment or container for each rule.:
      This way, the used software for each specific step is documented, and the
      workflow can be executed on any machine without prerequisites.
      Also see:
      https://snakemake.readthedocs.io/en/latest/snakefiles/deployment.html#integrated-package-management
      https://snakemake.readthedocs.io/en/latest/snakefiles/deployment.html#running-jobs-in-containers
Formatting results
[DEBUG] In file "/tmp/tmphjjs0c87/workflow/rules/process_bam.smk":  Formatted content is different from original
[INFO] 1 file(s) would be changed 😬
[INFO] 12 file(s) would be left unchanged 🎉

snakefmt version: 0.11.5