regulatory-genomics/ATAC-sm

Snakemake pipeline for ATAC-seq

Overview

Latest release: None, Last update: 2025-11-24

Linting: failed, Formatting: failed

Wrappers: bio/fastqc

Deployment

Step 1: Install Snakemake and Snakedeploy

Snakemake and Snakedeploy are best installed via the Conda package manager. It is recommended to install Conda via Miniforge. Run

conda create -c conda-forge -c bioconda -c nodefaults --name snakemake snakemake snakedeploy

to install both Snakemake and Snakedeploy in an isolated environment. For all following commands ensure that this environment is activated via

conda activate snakemake

For other installation methods, refer to the Snakemake and Snakedeploy documentation.
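
To verify that the installation worked, you can print the versions of both tools:

snakemake --version
snakedeploy --version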

Step 2: Deploy workflow

With Snakemake and Snakedeploy installed, the workflow can be deployed as follows. First, create an appropriate project working directory on your system and enter it:

mkdir -p path/to/project-workdir
cd path/to/project-workdir

In all following steps, we will assume that you are inside of that directory. Then run

snakedeploy deploy-workflow https://github.com/regulatory-genomics/ATAC-sm . --branch main

Since this workflow has no tagged release yet, it is deployed from a branch here; main is assumed to be the default branch, so adjust the --branch value if your copy of the repository uses a different one.

Snakedeploy will create two folders, workflow and config. The former contains the deployment of the chosen workflow as a Snakemake module, the latter contains configuration files which will be modified in the next step in order to configure the workflow to your needs.

Step 3: Configure workflow

To configure the workflow, adapt config/config.yaml to your needs following the instructions below.
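
As a purely hypothetical sketch (the actual keys are documented in the comments of the shipped config/config.yaml and in the Configuration section below), such a configuration mainly points the workflow to the sample annotation sheet and the resource files, for example:

# hypothetical illustration only - consult the shipped config/config.yaml for the real keys
annotation: config/annotation.csv   # sample annotation sheet (see Configuration section below)
resources:                          # pre-filled with relative locations from the Zenodo download
  genome: resources/genome.fa.gz    # hypothetical resource entry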

Step 4: Run workflow

The deployment method is controlled using the --software-deployment-method (short --sdm) argument.

To run the workflow with automatic deployment of all required software via conda/mamba, use

snakemake --cores all --sdm conda
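
If Apptainer/Singularity is available on your system and the workflow provides container definitions, conda can additionally be combined with containerized execution (a sketch, not required for this workflow):

snakemake --cores all --sdm conda apptainer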

Snakemake will automatically detect the main Snakefile in the workflow subfolder and execute the workflow module that has been defined by the deployment in step 2.

For further options such as cluster and cloud execution, see the Snakemake documentation.
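
For example, executing on a SLURM cluster typically requires installing the corresponding executor plugin into the same environment and selecting it at runtime (a sketch; the account and partition values are placeholders to adapt to your cluster):

pip install snakemake-executor-plugin-slurm
snakemake --sdm conda --executor slurm --jobs 100 --default-resources slurm_account=<your_account> slurm_partition=<your_partition>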

Step 5: Generate report

After finalizing your data analysis, you can automatically generate an interactive visual HTML report for inspecting results together with parameters and code in the browser, using

snakemake --report report.zip

Configuration

The following section is imported from the workflow’s config/README.md.


You need one configuration file and one annotation file to run the complete workflow. You can use the provided examples as a starting point. If in doubt, read the comments in the config file and/or try the default values.

  • project configuration (config/config.yaml): different for every project/dataset and configures the processing and quantification. The fields are described within the file. Resources are pre-filled with relative locations from the respective Zenodo download.

  • annotation.csv: CSV file consisting of one technical sequencing unit per row (i.e., one sample can include multiple sequencing units and hence span multiple rows) and 4 mandatory columns (see the example sketch after this list):

    • sample_name (first column!)

    • read_type: “single” or “paired”.

    • bam_file: path to the raw/unaligned/unmapped uBAM file.

    • pass_qc: number between 0 and 1. Samples with pass_qc of 0 are not used for downstream steps (e.g., quantification); every sample with pass_qc>0 is included in the downstream quantification and annotation steps.

    • (optional) additional sample metadata columns can be added and indicated for inclusion in the report.
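
A minimal annotation.csv could look like the following sketch (sample names, file paths, and the optional batch column are made up for illustration):

sample_name,read_type,bam_file,pass_qc,batch
sampleA,paired,data/sampleA_run1.bam,1,batch1
sampleA,paired,data/sampleA_run2.bam,1,batch1
sampleB,single,data/sampleB_run1.bam,0,batch2

Here sampleA consists of two sequencing units (two rows), while sampleB has pass_qc set to 0 and is therefore excluded from downstream quantification and annotation.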

Set workflow-specific resources or command line interface (CLI) arguments in the workflow profile workflow/profiles/default/config.yaml, which supersedes global Snakemake profiles.
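
As a sketch, such a profile contains regular Snakemake command line options in YAML form; the rule names and values below are hypothetical and need to be adapted to the rules defined under workflow/rules/:

default-resources:
  mem_mb: 4000        # hypothetical default memory per job
  runtime: 60         # hypothetical default runtime in minutes
set-threads:
  some_rule: 8        # hypothetical rule name
retries: 1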

Linting and formatting

Linting results

Using workflow specific profile workflow/profiles/default for setting default command line arguments.
usage: snakemake [-h] [--dry-run] [--profile PROFILE]
                 [--workflow-profile WORKFLOW_PROFILE] [--cache [RULE ...]]
                 [--snakefile FILE] [--cores N] [--jobs N] [--local-cores N]
                 [--resources NAME=INT [NAME=INT ...]]
                 [--set-threads RULE=THREADS [RULE=THREADS ...]]
                 [--max-threads MAX_THREADS]
                 [--set-resources RULE:RESOURCE=VALUE [RULE:RESOURCE=VALUE ...]]
                 [--set-scatter NAME=SCATTERITEMS [NAME=SCATTERITEMS ...]]
                 [--set-resource-scopes RESOURCE=[global|local]
                 [RESOURCE=[global|local] ...]]
                 [--default-resources [NAME=INT ...]]
                 [--preemptible-rules [PREEMPTIBLE_RULES ...]]
                 [--preemptible-retries PREEMPTIBLE_RETRIES]
                 [--configfile FILE [FILE ...]] [--config [KEY=VALUE ...]]
                 [--replace-workflow-config] [--envvars VARNAME [VARNAME ...]]
                 [--directory DIR] [--touch] [--keep-going]
                 [--rerun-triggers {code,input,mtime,params,software-env} [{code,input,mtime,params,software-env} ...]]
                 [--force] [--executor {local,dryrun,touch}] [--forceall]
                 [--forcerun [TARGET ...]]
                 [--consider-ancient RULE=INPUTITEMS [RULE=INPUTITEMS ...]]
                 [--prioritize TARGET [TARGET ...]]
                 [--batch RULE=BATCH/BATCHES] [--until TARGET [TARGET ...]]
                 [--omit-from TARGET [TARGET ...]] [--rerun-incomplete]
                 [--shadow-prefix DIR]
                 [--strict-dag-evaluation {cyclic-graph,functions,periodic-wildcards} [{cyclic-graph,functions,periodic-wildcards} ...]]
                 [--scheduler [{greedy,ilp}]]
                 [--conda-base-path CONDA_BASE_PATH] [--no-subworkflows]
                 [--precommand PRECOMMAND] [--groups GROUPS [GROUPS ...]]
                 [--group-components GROUP_COMPONENTS [GROUP_COMPONENTS ...]]
                 [--report [FILE]] [--report-after-run]
                 [--report-stylesheet CSSFILE] [--report-metadata FILE]
                 [--reporter PLUGIN] [--draft-notebook TARGET]
                 [--edit-notebook TARGET] [--notebook-listen IP:PORT]
                 [--lint [{text,json}]] [--generate-unit-tests [TESTPATH]]
                 [--containerize] [--export-cwl FILE] [--list-rules]
                 [--list-target-rules] [--dag [{dot,mermaid-js}]]
                 [--rulegraph [{dot,mermaid-js}]] [--filegraph] [--d3dag]
                 [--summary] [--detailed-summary] [--archive FILE]
                 [--cleanup-metadata FILE [FILE ...]] [--cleanup-shadow]
                 [--skip-script-cleanup] [--unlock]
                 [--list-changes {params,input,code}] [--list-input-changes]
                 [--list-params-changes] [--list-untracked]
                 [--delete-all-output | --delete-temp-output]
                 [--keep-incomplete] [--drop-metadata] [--version]
                 [--printshellcmds] [--debug-dag] [--nocolor]
                 [--quiet [{all,host,progress,reason,rules} ...]]
                 [--print-compilation] [--verbose] [--force-use-threads]
                 [--allow-ambiguity] [--nolock] [--ignore-incomplete]
                 [--max-inventory-time SECONDS] [--trust-io-cache]
                 [--max-checksum-file-size SIZE] [--latency-wait SECONDS]
                 [--wait-for-free-local-storage WAIT_FOR_FREE_LOCAL_STORAGE]
                 [--wait-for-files [FILE ...]] [--wait-for-files-file FILE]
                 [--runtime-source-cache-path PATH]
                 [--queue-input-wait-time SECONDS]
                 [--omit-flags OMIT_FLAGS [OMIT_FLAGS ...]] [--notemp]
                 [--all-temp] [--unneeded-temp-files FILE [FILE ...]]
                 [--keep-storage-local-copies] [--not-retrieve-storage]
                 [--target-files-omit-workdir-adjustment]
                 [--allowed-rules ALLOWED_RULES [ALLOWED_RULES ...]]
                 [--max-jobs-per-timespan MAX_JOBS_PER_TIMESPAN]
                 [--max-status-checks-per-second MAX_STATUS_CHECKS_PER_SECOND]
                 [--seconds-between-status-checks SECONDS_BETWEEN_STATUS_CHECKS]
                 [--retries RETRIES] [--wrapper-prefix WRAPPER_PREFIX]
                 [--default-storage-provider DEFAULT_STORAGE_PROVIDER]
                 [--default-storage-prefix DEFAULT_STORAGE_PREFIX]
                 [--local-storage-prefix LOCAL_STORAGE_PREFIX]
                 [--remote-job-local-storage-prefix REMOTE_JOB_LOCAL_STORAGE_PREFIX]
                 [--shared-fs-usage {input-output,persistence,software-deployment,source-cache,sources,storage-local-copies,none} [{input-output,persistence,software-deployment,source-cache,sources,storage-local-copies,none} ...]]
                 [--scheduler-greediness SCHEDULER_GREEDINESS]
                 [--scheduler-subsample SCHEDULER_SUBSAMPLE] [--no-hooks]
                 [--debug] [--runtime-profile FILE]
                 [--local-groupid LOCAL_GROUPID] [--attempt ATTEMPT]
                 [--show-failed-logs] [--logger {} [{} ...]]
                 [--job-deploy-sources] [--benchmark-extended]
                 [--container-image IMAGE] [--immediate-submit]
                 [--jobscript SCRIPT] [--jobname NAME]
                 [--software-deployment-method {apptainer,conda,env-modules} [{apptainer,conda,env-modules} ...]]
                 [--container-cleanup-images] [--use-conda]
                 [--conda-not-block-search-path-envvars] [--list-conda-envs]
                 [--conda-prefix DIR] [--conda-cleanup-envs]
                 [--conda-cleanup-pkgs [{tarballs,cache}]]
                 [--conda-create-envs-only] [--conda-frontend {conda,mamba}]
                 [--use-apptainer] [--apptainer-prefix DIR]
                 [--apptainer-args ARGS] [--use-envmodules]
                 [--deploy-sources QUERY CHECKSUM]
                 [--target-jobs TARGET_JOBS [TARGET_JOBS ...]]
                 [--mode {subprocess,default,remote}]
                 [--scheduler-solver-path SCHEDULER_SOLVER_PATH]
                 [--max-jobs-per-second MAX_JOBS_PER_SECOND]
                 [--report-html-path VALUE]
                 [--report-html-stylesheet-path VALUE]
                 [--scheduler-greedy-greediness VALUE]
                 [--scheduler-greedy-omit-prioritize-by-temp-and-input]
                 [--scheduler-ilp-solver VALUE]
                 [--scheduler-ilp-solver-path VALUE]
                 [targets ...]
snakemake: error: argument --executor/-e: invalid choice: 'slurm' (choose from local, dryrun, touch)

Formatting results

[DEBUG] 
[DEBUG] In file "/tmp/tmpnuedn3_l/workflow/rules/region_annotation.smk":  Formatted content is different from original
[DEBUG] 
[DEBUG] In file "/tmp/tmpnuedn3_l/workflow/rules/export.smk":  Formatted content is different from original
[DEBUG] 
[DEBUG] In file "/tmp/tmpnuedn3_l/workflow/Snakefile":  Formatted content is different from original
[DEBUG] 
[DEBUG] In file "/tmp/tmpnuedn3_l/workflow/rules/processing.smk":  Formatted content is different from original
[DEBUG] 
[DEBUG] In file "/tmp/tmpnuedn3_l/workflow/rules/resources.smk":  Formatted content is different from original
[DEBUG] 
[DEBUG] In file "/tmp/tmpnuedn3_l/workflow/rules/quantification.smk":  Formatted content is different from original
[DEBUG] 
[DEBUG] In file "/tmp/tmpnuedn3_l/workflow/rules/common.smk":  Formatted content is different from original
[DEBUG] 
[DEBUG] In file "/tmp/tmpnuedn3_l/workflow/rules/qc.smk":  Formatted content is different from original
[DEBUG] 
[DEBUG] In file "/tmp/tmpnuedn3_l/workflow/rules/genome_prepare.smk":  Formatted content is different from original
[DEBUG] 
[DEBUG] In file "/tmp/tmpnuedn3_l/workflow/rules/report.smk":  Formatted content is different from original
[INFO] 10 file(s) would be changed 😬

snakefmt version: 0.11.2