regulatory-genomics/hic_sm


Overview

Latest release: None, Last update: 2026-01-03

Linting: failed, Formatting: failed

Wrappers: bio/bowtie2/build bio/bwa-memx/index

Deployment

Step 1: Install Snakemake and Snakedeploy

Snakemake and Snakedeploy are best installed via Conda, which we recommend installing via Miniforge. Run

conda create -c conda-forge -c bioconda -c nodefaults --name snakemake snakemake snakedeploy

to install both Snakemake and Snakedeploy in an isolated environment. For all following commands ensure that this environment is activated via

conda activate snakemake

For other installation methods, refer to the Snakemake and Snakedeploy documentation.

Step 2: Deploy workflow

With Snakemake and Snakedeploy installed, the workflow can be deployed as follows. First, create an appropriate project working directory on your system and enter it:

mkdir -p path/to/project-workdir
cd path/to/project-workdir

In all following steps, we will assume that you are inside that directory. Then run

snakedeploy deploy-workflow https://github.com/regulatory-genomics/hic_sm . --branch main

(This workflow has no tagged release yet, so it is deployed from the default branch, assumed here to be main; once a release exists, pass --tag <release> instead.)

Snakedeploy will create two folders, workflow and config. The former contains the deployment of the chosen workflow as a Snakemake module, the latter contains configuration files which will be modified in the next step in order to configure the workflow to your needs.

Step 3: Configure workflow

To configure the workflow, adapt config/config.yaml to your needs following the instructions below.

Step 4: Run workflow

The deployment method is controlled using the --software-deployment-method (short --sdm) argument.

To run the workflow with automatic deployment of all required software via conda/mamba, use

snakemake --cores all --sdm conda

Snakemake will automatically detect the main Snakefile in the workflow subfolder and execute the workflow module that has been defined by the deployment in step 2.

For further options such as cluster and cloud execution, see the docs.

Step 5: Generate report

After finalizing your data analysis, you can automatically generate an interactive HTML report that lets you inspect results together with parameters and code in the browser:

snakemake --report report.zip

Configuration

The following section is imported from the workflow’s config/README.md.

General configuration

To configure this workflow, modify config/config.yaml according to your needs, following the explanations provided in the file.

Input

For each sample, the input is typically a pair of .fastq.gz files, one with forward and one with reverse reads. Alternatively, the input can be specified as an accession in the SRA database, in which case the reads will be downloaded automatically.

For each biological sample, multiple technical replicates ("lanes") can be provided; they are then merged at the stage of pairs.

Biological samples (e.g. biological replicates) can also be grouped into "library groups", which are then merged at the level of coolers.
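As an illustration of this lane/sample/library-group hierarchy, a sample sheet might be laid out as follows (the key names and file names here are hypothetical; the actual schema is defined in config/config.yaml):

```yaml
# Hypothetical sketch only -- check config/config.yaml for the real key names.
samples:
  rep1:
    lane1: [rep1_L001_R1.fastq.gz, rep1_L001_R2.fastq.gz]  # forward + reverse reads
    lane2: [rep1_L002_R1.fastq.gz, rep1_L002_R2.fastq.gz]  # merged with lane1 at the pairs stage
  rep2:
    lane1: SRR0000000  # an SRA accession: reads are downloaded automatically
library_groups:
  tissueA: [rep1, rep2]  # biological replicates, merged at the cooler stage
```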

You need to provide the name of the genome assembly, the path to the bwa index with a wildcard, and the path to the chromsizes file. The index does not need to exist yet, as long as the provided path matches the fasta file with the reference genome exactly (e.g. for a sequence in mm10.fa.gz, provide mm10.fa.gz*). If the index does not exist, it will be created.
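Following the rule above, the genome section of the config might look like this (the key names are illustrative assumptions, not the workflow's actual schema; the wildcard path matches the fasta file plus any existing index files next to it):

```yaml
# Illustrative sketch -- actual key names are defined in config/config.yaml.
genome:
  assembly: mm10
  bwa_index_wildcard: resources/mm10.fa.gz*  # matches resources/mm10.fa.gz; index is built here if absent
  chromsizes: resources/mm10.chrom.sizes
```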

Mapping

Mapping can be done with bwa-mem, bwa-mem2, bwa-meme (all produce identical or near-identical results), or chromap. Chromap outputs .pairs directly and is very fast, but you lose the flexibility of custom parsing options.
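The mapper is selected in the map section of the config: the snakefmt error in the formatting log below quotes a config["map"]["mapper"] lookup, so that key exists; the accepted values are assumed to be the mapper names listed above.

```yaml
map:
  mapper: bwa-mem2  # presumably one of: bwa-mem, bwa-mem2, bwa-meme, chromap
```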

Linting and formatting

Linting results

Using workflow specific profile workflow/profiles/default for setting default command line arguments.
usage: snakemake [-h] [--dry-run] [--profile PROFILE]
                 [--workflow-profile WORKFLOW_PROFILE] [--cache [RULE ...]]
                 [--snakefile FILE] [--cores N] [--jobs N] [--local-cores N]
                 [--resources NAME=INT [NAME=INT ...]]
                 [--set-threads RULE=THREADS [RULE=THREADS ...]]
                 [--max-threads MAX_THREADS]
                 [--set-resources RULE:RESOURCE=VALUE [RULE:RESOURCE=VALUE ...]]
                 [--set-scatter NAME=SCATTERITEMS [NAME=SCATTERITEMS ...]]
                 [--set-resource-scopes RESOURCE=[global|local]
                 [RESOURCE=[global|local] ...]]
                 [--default-resources [NAME=INT ...]]
                 [--preemptible-rules [PREEMPTIBLE_RULES ...]]
                 [--preemptible-retries PREEMPTIBLE_RETRIES]
                 [--configfile FILE [FILE ...]] [--config [KEY=VALUE ...]]
                 [--replace-workflow-config] [--envvars VARNAME [VARNAME ...]]
                 [--directory DIR] [--touch] [--keep-going]
                 [--rerun-triggers {code,input,mtime,params,software-env} [{code,input,mtime,params,software-env} ...]]
                 [--force] [--executor {local,dryrun,touch}] [--forceall]
                 [--forcerun [TARGET ...]]
                 [--consider-ancient RULE=INPUTITEMS [RULE=INPUTITEMS ...]]
                 [--prioritize TARGET [TARGET ...]]
                 [--batch RULE=BATCH/BATCHES] [--until TARGET [TARGET ...]]
                 [--omit-from TARGET [TARGET ...]] [--rerun-incomplete]
                 [--shadow-prefix DIR]
                 [--strict-dag-evaluation {cyclic-graph,functions,periodic-wildcards} [{cyclic-graph,functions,periodic-wildcards} ...]]
                 [--scheduler [{greedy,ilp}]]
                 [--conda-base-path CONDA_BASE_PATH] [--no-subworkflows]
                 [--precommand PRECOMMAND] [--groups GROUPS [GROUPS ...]]
                 [--group-components GROUP_COMPONENTS [GROUP_COMPONENTS ...]]
                 [--report [FILE]] [--report-after-run]
                 [--report-stylesheet CSSFILE] [--report-metadata FILE]
                 [--reporter PLUGIN] [--draft-notebook TARGET]
                 [--edit-notebook TARGET] [--notebook-listen IP:PORT]
                 [--lint [{text,json}]] [--generate-unit-tests [TESTPATH]]
                 [--containerize] [--export-cwl FILE] [--list-rules]
                 [--list-target-rules] [--dag [{dot,mermaid-js}]]
                 [--rulegraph [{dot,mermaid-js}]] [--filegraph] [--d3dag]
                 [--summary] [--detailed-summary] [--archive FILE]
                 [--cleanup-metadata FILE [FILE ...]] [--cleanup-shadow]
                 [--skip-script-cleanup] [--unlock]
                 [--list-changes {code,input,params}] [--list-input-changes]
                 [--list-params-changes] [--list-untracked]
                 [--delete-all-output | --delete-temp-output]
                 [--keep-incomplete] [--drop-metadata] [--version]
                 [--printshellcmds] [--debug-dag] [--nocolor]
                 [--quiet [{all,host,progress,reason,rules} ...]]
                 [--print-compilation] [--verbose] [--force-use-threads]
                 [--allow-ambiguity] [--nolock] [--ignore-incomplete]
                 [--max-inventory-time SECONDS] [--trust-io-cache]
                 [--max-checksum-file-size SIZE] [--latency-wait SECONDS]
                 [--wait-for-free-local-storage WAIT_FOR_FREE_LOCAL_STORAGE]
                 [--wait-for-files [FILE ...]] [--wait-for-files-file FILE]
                 [--runtime-source-cache-path PATH]
                 [--queue-input-wait-time SECONDS]
                 [--omit-flags OMIT_FLAGS [OMIT_FLAGS ...]] [--notemp]
                 [--all-temp] [--unneeded-temp-files FILE [FILE ...]]
                 [--keep-storage-local-copies] [--not-retrieve-storage]
                 [--target-files-omit-workdir-adjustment]
                 [--allowed-rules ALLOWED_RULES [ALLOWED_RULES ...]]
                 [--max-jobs-per-timespan MAX_JOBS_PER_TIMESPAN]
                 [--max-status-checks-per-second MAX_STATUS_CHECKS_PER_SECOND]
                 [--seconds-between-status-checks SECONDS_BETWEEN_STATUS_CHECKS]
                 [--retries RETRIES] [--wrapper-prefix WRAPPER_PREFIX]
                 [--default-storage-provider DEFAULT_STORAGE_PROVIDER]
                 [--default-storage-prefix DEFAULT_STORAGE_PREFIX]
                 [--local-storage-prefix LOCAL_STORAGE_PREFIX]
                 [--remote-job-local-storage-prefix REMOTE_JOB_LOCAL_STORAGE_PREFIX]
                 [--shared-fs-usage {input-output,persistence,software-deployment,source-cache,sources,storage-local-copies,none} [{input-output,persistence,software-deployment,source-cache,sources,storage-local-copies,none} ...]]
                 [--scheduler-greediness SCHEDULER_GREEDINESS]
                 [--scheduler-subsample SCHEDULER_SUBSAMPLE] [--no-hooks]
                 [--debug] [--runtime-profile FILE]
                 [--local-groupid LOCAL_GROUPID] [--attempt ATTEMPT]
                 [--show-failed-logs] [--logger {} [{} ...]]
                 [--job-deploy-sources] [--benchmark-extended]
                 [--container-image IMAGE] [--immediate-submit]
                 [--jobscript SCRIPT] [--jobname NAME]
                 [--software-deployment-method {apptainer,conda,env-modules} [{apptainer,conda,env-modules} ...]]
                 [--container-cleanup-images] [--use-conda]
                 [--conda-not-block-search-path-envvars] [--list-conda-envs]
                 [--conda-prefix DIR] [--conda-cleanup-envs]
                 [--conda-cleanup-pkgs [{tarballs,cache}]]
                 [--conda-create-envs-only] [--conda-frontend {conda,mamba}]
                 [--use-apptainer] [--apptainer-prefix DIR]
                 [--apptainer-args ARGS] [--use-envmodules]
                 [--deploy-sources QUERY CHECKSUM]
                 [--target-jobs TARGET_JOBS [TARGET_JOBS ...]]
                 [--mode {remote,subprocess,default}]
                 [--scheduler-solver-path SCHEDULER_SOLVER_PATH]
                 [--max-jobs-per-second MAX_JOBS_PER_SECOND]
                 [--report-html-path VALUE]
                 [--report-html-stylesheet-path VALUE]
                 [--scheduler-greedy-greediness VALUE]
                 [--scheduler-greedy-omit-prioritize-by-temp-and-input]
                 [--scheduler-ilp-solver VALUE]
                 [--scheduler-ilp-solver-path VALUE]
                 [targets ...]
snakemake: error: argument --executor/-e: invalid choice: 'slurm' (choose from local, dryrun, touch)

Formatting results

[DEBUG]
[DEBUG] In file "/tmp/tmp82den0gh/workflow/rules/downstream.smk":  Formatted content is different from original
[DEBUG]
[DEBUG] In file "/tmp/tmp82den0gh/workflow/rules/qc.smk":  Formatted content is different from original
[DEBUG]
[DEBUG] In file "/tmp/tmp82den0gh/workflow/rules/bowtie2_rescue.smk":  Formatted content is different from original
[DEBUG]
[DEBUG] In file "/tmp/tmp82den0gh/workflow/rules/align.smk":  Formatted content is different from original
[DEBUG]
[ERROR] In file "/tmp/tmp82den0gh/workflow/Snakefile":  InvalidPython: Black error:

Cannot parse for target version Python 3.13: 1:0: elif config[“map”][“mapper”] == “bowtie2”:

(Note reported line number may be incorrect, as snakefmt could not determine the true line number)


[DEBUG] In file "/tmp/tmp82den0gh/workflow/Snakefile":  
[DEBUG] In file "/tmp/tmp82den0gh/workflow/rules/cooltools.smk":  Formatted content is different from original
[DEBUG] 
[DEBUG] In file "/tmp/tmp82den0gh/workflow/rules/pairtools.smk":  Formatted content is different from original
[DEBUG] 
[DEBUG] In file "/tmp/tmp82den0gh/workflow/rules/hicpro_style.smk":  Formatted content is different from original
[DEBUG] 
[DEBUG] In file "/tmp/tmp82den0gh/workflow/rules/common.smk":  Formatted content is different from original
[INFO] 1 file(s) raised parsing errors 🤕
[INFO] 8 file(s) would be changed 😬

snakefmt version: 0.11.2