tdayris/fair_macs2_calling

Snakemake workflow used to call peaks with Macs2

Overview

Topics: bowtie2 homer macs2 sambamba snakemake snakemake-workflow snakemake-wrappers

Latest release: 3.2.2, Last update: 2025-03-07

Linting: passed. Formatting: passed.

Deployment

Step 1: Install Snakemake and Snakedeploy

Snakemake and Snakedeploy are best installed via the Mamba package manager (a drop-in replacement for Conda). If you have neither Conda nor Mamba, we recommend installing Miniforge. More details on Mamba are available in its documentation.

When using Mamba, run

mamba create -c conda-forge -c bioconda --name snakemake snakemake snakedeploy

to install both Snakemake and Snakedeploy in an isolated environment. For all following commands ensure that this environment is activated via

conda activate snakemake

Step 2: Deploy workflow

With Snakemake and Snakedeploy installed, the workflow can be deployed as follows. First, create an appropriate project working directory on your system and enter it:

mkdir -p path/to/project-workdir
cd path/to/project-workdir

In all following steps, we will assume that you are inside of that directory. Then run

snakedeploy deploy-workflow https://github.com/tdayris/fair_macs2_calling . --tag 3.2.2

Snakedeploy will create two folders, workflow and config. The former contains the deployment of the chosen workflow as a Snakemake module, the latter contains configuration files which will be modified in the next step in order to configure the workflow to your needs.

Step 3: Configure workflow

To configure the workflow, adapt config/config.yml to your needs following the instructions below.
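As a reference point, here is a minimal sketch of config/config.yml, assuming the optional params entries can be left out entirely (the full list of accepted keys is documented in the Configuration section below):

```yaml
# Minimal config/config.yml sketch: point `samples` at your sample sheet;
# every `params` entry is optional and may be omitted.
samples: config/samples.csv
```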

Step 4: Run workflow

The deployment method is controlled using the --software-deployment-method (short --sdm) argument.

To run the workflow with automatic deployment of all required software via conda/mamba, use

snakemake --cores all --sdm conda

Snakemake will automatically detect the main Snakefile in the workflow subfolder and execute the workflow module that has been defined by the deployment in step 2.

For further options such as cluster and cloud execution, see the docs.

Step 5: Generate report

After finalizing your data analysis, you can automatically generate an interactive visual HTML report for inspecting results, together with parameters and code, in the browser using

snakemake --report report.zip

Configuration

The following section is imported from the workflow’s config/README.md.

This pipeline requires two configuration files:

config.yaml

A standard, YAML-formatted Snakemake configuration file containing the parameters accepted by this workflow:

  • samples: Path to the file linking each sample to its fastq file(s)
  • params: Per-tool list of optional command line parameters

Example:

samples: config/samples.csv

# Optional parameters
params:
  # Optional parameters for FastQC
  fastqc: ""
  # Optional parameters for FastP
  fastp:
    # Optional command line adapters
    adapters: ""
    # Optional command line parameters
    extra: ""
  bowtie2:
    # Optional parameters for bowtie2-build
    build: ""
    # Optional parameters for bowtie2-align
    align: ""
  sambamba:
    # Optional parameters for sambamba view
    view: "--format 'bam' --filter 'mapping_quality >= 30 and not (unmapped or mate_is_unmapped)'"
    # Optional parameters for sambamba markdup
    markdup: "--remove-duplicates --overflow-size 500000"
  picard:
    # Mapping QC optional parameters
    metrics: ""
  # Optional parameters for samtools stats
  samtools: ""
  macs2:
    # Optional parameters for MACS2 callpeak
    callpeak: ""
  homer:
    # Optional parameters for Homer annotatePeaks
    annotatepeaks: "-homer2"
  deeptools:
    # Optional parameters for DeepTools bamCoverage
    bamcoverage: "--ignoreDuplicates --minMappingQuality 30 --samFlagExclude 4 --ignoreForNormalization X Y MT"
    # Optional parameters for DeepTools plotCoverage
    plot_coverage: "--skipZeros --ignoreDuplicates --minMappingQuality 30 --samFlagExclude 4"
    # Optional parameters for DeepTools plotFingerprint
    plot_fingerprint: "--skipZeros --ignoreDuplicates --minMappingQuality 30 --samFlagExclude 4"
    # Optional parameters for DeepTools plotPCA
    plot_pca: "--ntop 1000"
    # Optional parameters for DeepTools plotEnrichment
    plot_enrichment: "--ignoreDuplicates --minMappingQuality 30 --samFlagExclude 4 --smartLabels"
    # Optional parameters for DeepTools plotCorrelation
    plot_correlation: "--whatToPlot heatmap --corMethod spearman --skipZeros --plotNumbers --colorMap RdYlBu"
  # Optional parameters for MultiQC
  multiqc: "--module deeptools --module macs2 --module picard --module fastqc --module fastp --module samtools --module bowtie2 --module sambamba --zip-data-dir --verbose --no-megaqc-upload --no-ansi --force"

samples.csv

A CSV-formatted text file containing the following mandatory columns:

  • sample_id: Unique name of the sample
  • upstream_file: Path to upstream fastq file
  • species: The species name, according to Ensembl standards
  • build: The corresponding genome build, according to Ensembl standards
  • release: The corresponding genome release, according to Ensembl standards
  • downstream_file: Optional path to downstream fastq file
  • input: Sample id of the corresponding input signal

Example:

sample_id,upstream_file,downstream_file,species,build,release,input
sac_a,data/reads/a.scerevisiae.1.fq,data/reads/a.scerevisiae.2.fq,saccharomyces_cerevisiae,R64-1-1,110,sac_a_input
sac_a_input,data/reads/a.scerevisiaeI.1.fq,data/reads/a.scerevisiaeI.2.fq,saccharomyces_cerevisiae,R64-1-1,110,
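To illustrate the expected layout, the sketch below (plain Python, standard library only; the column names come from the list above, and check_samples is a hypothetical helper, not part of the workflow) parses the example sheet and verifies the mandatory columns:

```python
import csv
import io

# The samples.csv example from above, inlined for the sketch
SAMPLES_CSV = """\
sample_id,upstream_file,downstream_file,species,build,release,input
sac_a,data/reads/a.scerevisiae.1.fq,data/reads/a.scerevisiae.2.fq,saccharomyces_cerevisiae,R64-1-1,110,sac_a_input
sac_a_input,data/reads/a.scerevisiaeI.1.fq,data/reads/a.scerevisiaeI.2.fq,saccharomyces_cerevisiae,R64-1-1,110,
"""

# Columns the documentation lists as mandatory
MANDATORY = {"sample_id", "upstream_file", "species", "build", "release"}


def check_samples(text: str) -> list:
    """Parse a sample sheet and ensure all mandatory columns are present."""
    rows = list(csv.DictReader(io.StringIO(text)))
    missing = MANDATORY - set(rows[0].keys())
    if missing:
        raise ValueError(f"Missing mandatory column(s): {sorted(missing)}")
    return rows


rows = check_samples(SAMPLES_CSV)
print(len(rows))         # 2
print(rows[0]["input"])  # sac_a_input
```

Note how the second row leaves input empty: input samples have no input of their own.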

While the CSV format is the tested and recommended one, this workflow uses Python's csv.Sniffer() to detect the column separator, so tab- and semicolon-separated files are also accepted. Remember that only the comma separator is tested.
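The separator detection mentioned above can be reproduced with the standard library alone; a small sketch (not the workflow's own code) of what csv.Sniffer does on comma-, semicolon- and tab-separated headers:

```python
import csv

# Same header, written with three different field separators
header = "sample_id,upstream_file,species,build,release"

for sep in (",", ";", "\t"):
    sample = header.replace(",", sep)
    # Sniffer guesses the dialect from a sample of the file content
    dialect = csv.Sniffer().sniff(sample, delimiters=",;\t")
    print(repr(dialect.delimiter))
```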

Linting and formatting

Linting results

None

Formatting results

None