SilvanCodes/1001_genomes_analysis


Overview

Latest release: None, Last update: 2025-12-03

Linting: failed, Formatting: passed

Wrappers: bio/fastqc bio/multiqc

Deployment

Step 1: Install Snakemake and Snakedeploy

Snakemake and Snakedeploy are best installed via Conda. It is recommended to install Conda via Miniforge. Run

conda create -c conda-forge -c bioconda -c nodefaults --name snakemake snakemake snakedeploy

to install both Snakemake and Snakedeploy in an isolated environment. For all following commands, ensure that this environment is activated via

conda activate snakemake

For other installation methods, refer to the Snakemake and Snakedeploy documentation.

Step 2: Deploy workflow

With Snakemake and Snakedeploy installed, the workflow can be deployed as follows. First, create an appropriate project working directory on your system and enter it:

mkdir -p path/to/project-workdir
cd path/to/project-workdir

In all following steps, we will assume that you are inside that directory. Then run

snakedeploy deploy-workflow https://github.com/SilvanCodes/1001_genomes_analysis . --tag None

Snakedeploy will create two folders, workflow and config. The former contains the deployment of the chosen workflow as a Snakemake module; the latter contains configuration files, which will be modified in the next step to configure the workflow to your needs.

Step 3: Configure workflow

To configure the workflow, adapt config/config.yml to your needs following the instructions below.
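As a minimal sketch of what config/config.yml might look like, assuming the keys documented in the parameter table under the Configuration section (the NCBI FTP value is a placeholder, not a real link; verify against the deployed config file before editing):

```yaml
# Hypothetical sketch of config/config.yml; keys follow the
# parameters documented in the Configuration section.
samplesheet:
  path: "config/samples.tsv"   # mandatory sample sheet (TSV)

get_genome:
  ncbi_ftp: "<link to a genome on NCBI's FTP server>"

simulate_reads:
  read_length: 100     # length of target reads in bp
  read_number: 10000   # number of total reads to be simulated
```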

Step 4: Run workflow

The deployment method is controlled using the --software-deployment-method (short --sdm) argument.

To run the workflow using apptainer/singularity, use

snakemake --cores all --sdm apptainer

To run the workflow using a combination of conda and apptainer/singularity for software deployment, use

snakemake --cores all --sdm conda apptainer

To run the workflow with automatic deployment of all required software via conda/mamba, use

snakemake --cores all --sdm conda

Snakemake will automatically detect the main Snakefile in the workflow subfolder and execute the workflow module that has been defined by the deployment in step 2.

For further options such as cluster and cloud execution, see the docs.

Step 5: Generate report

After finalizing your data analysis, you can automatically generate an interactive HTML report that bundles results, parameters, and code for inspection in the browser, using

snakemake --report report.zip

Configuration

The following section is imported from the workflow’s config/README.md.

Workflow overview

This workflow is a best-practice workflow for <detailed description>. The workflow is built using Snakemake and consists of the following steps:

  1. Download genome reference from NCBI

  2. Validate downloaded genome (python script)

  3. Simulate short read sequencing data on the fly (dwgsim)

  4. Check quality of input read data (FastQC)

  5. Collect statistics from tool output (MultiQC)

Running the workflow

Input data

This template workflow creates artificial sequencing data in *.fastq.gz format. It does not contain actual input data. The simulated input files are nevertheless created based on a mandatory table linked in the config.yaml file (default: .test/samples.tsv). The sample sheet has the following layout:

| sample  | condition | replicate | read1                      | read2                      |
|---------|-----------|-----------|----------------------------|----------------------------|
| sample1 | wild_type | 1         | sample1.bwa.read1.fastq.gz | sample1.bwa.read2.fastq.gz |
| sample2 | wild_type | 2         | sample2.bwa.read1.fastq.gz | sample2.bwa.read2.fastq.gz |
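The sample sheet is a plain tab-separated file. As a minimal sketch of how it can be parsed, using only Python's standard library (the sheet is inlined here, mirroring the table above, rather than read from the path configured under `samplesheet`):

```python
import csv
import io

# In-memory sample sheet mirroring the layout documented above.
SAMPLES_TSV = """\
sample\tcondition\treplicate\tread1\tread2
sample1\twild_type\t1\tsample1.bwa.read1.fastq.gz\tsample1.bwa.read2.fastq.gz
sample2\twild_type\t2\tsample2.bwa.read1.fastq.gz\tsample2.bwa.read2.fastq.gz
"""

def load_samples(text):
    """Parse the TSV sample sheet into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text), delimiter="\t"))

samples = load_samples(SAMPLES_TSV)
print([row["sample"] for row in samples])  # → ['sample1', 'sample2']
```

For the real file, replace the inlined string with `open("config/samples.tsv")` passed to `csv.DictReader`.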

Parameters

This table lists all parameters that can be used to run the workflow.

| parameter          | type | details                                | default                      |
|--------------------|------|----------------------------------------|------------------------------|
| **samplesheet**    |      |                                        |                              |
| path               | str  | path to samplesheet, mandatory         | "config/samples.tsv"         |
| **get_genome**     |      |                                        |                              |
| ncbi_ftp           | str  | link to a genome on NCBI's FTP server  | link to S. cerevisiae genome |
| **simulate_reads** |      |                                        |                              |
| read_length        | num  | length of target reads in bp           | 100                          |
| read_number        | num  | number of total reads to be simulated  | 10000                        |
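The parameter types above can be checked before a run. A minimal stand-alone sketch in Python (the config dict is inlined with placeholder values rather than loaded from config/config.yml, which in practice would be done with `yaml.safe_load`):

```python
# Sketch: validate parameter types against the table above.
CONFIG = {
    "samplesheet": {"path": "config/samples.tsv"},
    "get_genome": {"ncbi_ftp": "<link to a genome on NCBI's FTP server>"},
    "simulate_reads": {"read_length": 100, "read_number": 10000},
}

def validate(config):
    """Return a list of type errors; an empty list means the config looks sane."""
    errors = []
    if not isinstance(config["samplesheet"]["path"], str):
        errors.append("samplesheet.path must be a string")
    if not isinstance(config["get_genome"]["ncbi_ftp"], str):
        errors.append("get_genome.ncbi_ftp must be a string")
    for key in ("read_length", "read_number"):
        if not isinstance(config["simulate_reads"][key], int):
            errors.append(f"simulate_reads.{key} must be an integer")
    return errors

print(validate(CONFIG))  # → []
```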

Linting and formatting

Linting results

Lints for snakefile /tmp/tmp0tc2hl61/workflow/rules/ncbi.smk:
    * Absolute path "/tmp/ncbi_dataset/data" in line 24:
      Do not define absolute paths inside of the workflow, since this renders
      your workflow irreproducible on other machines. Use path relative to the
      working directory instead, or make the path configurable via a config
      file.
      Also see:
      https://snakemake.readthedocs.io/en/latest/snakefiles/configuration.html#configuration

Lints for rule download_ncbi_dataset (line 6, /tmp/tmp0tc2hl61/workflow/rules/ncbi.smk):
    * No log directive defined:
      Without a log directive, all output will be printed to the terminal. In
      distributed environments, this means that errors are harder to discover.
      In local environments, output of concurrent jobs will be mixed and become
      unreadable.
      Also see:
      https://snakemake.readthedocs.io/en/stable/snakefiles/rules.html#log-files

Lints for rule unpack_ncbi_dataset (line 17, /tmp/tmp0tc2hl61/workflow/rules/ncbi.smk):
    * Do not access input and output files individually by index in shell commands:
      When individual access to input or output files is needed (i.e., just
      writing '{input}' is impossible), use names ('{input.somename}') instead
      of index based access.
      Also see:
      https://snakemake.readthedocs.io/en/latest/snakefiles/rules.html#rules
    * No log directive defined:
      Without a log directive, all output will be printed to the terminal. In
      distributed environments, this means that errors are harder to discover.
      In local environments, output of concurrent jobs will be mixed and become
      unreadable.
      Also see:
      https://snakemake.readthedocs.io/en/stable/snakefiles/rules.html#log-files
    * Specify a conda environment or container for each rule.:
      This way, the used software for each specific step is documented, and the
      workflow can be executed on any machine without prerequisites.
      Also see:
      https://snakemake.readthedocs.io/en/latest/snakefiles/deployment.html#integrated-package-management
      https://snakemake.readthedocs.io/en/latest/snakefiles/deployment.html#running-jobs-in-containers

Lints for rule get_1001_genome_snps (line 56, /tmp/tmp0tc2hl61/workflow/Snakefile):
    * No log directive defined:
      Without a log directive, all output will be printed to the terminal. In
      distributed environments, this means that errors are harder to discover.
      In local environments, output of concurrent jobs will be mixed and become
      unreadable.
      Also see:
      https://snakemake.readthedocs.io/en/stable/snakefiles/rules.html#log-files

Lints for rule create_annotation_db (line 70, /tmp/tmp0tc2hl61/workflow/Snakefile):
    * No log directive defined:
      Without a log directive, all output will be printed to the terminal. In
      distributed environments, this means that errors are harder to discover.
      In local environments, output of concurrent jobs will be mixed and become
      unreadable.
      Also see:
      https://snakemake.readthedocs.io/en/stable/snakefiles/rules.html#log-files

Lints for rule add_sequence_window (line 81, /tmp/tmp0tc2hl61/workflow/Snakefile):
    * No log directive defined:
      Without a log directive, all output will be printed to the terminal. In
      distributed environments, this means that errors are harder to discover.
      In local environments, output of concurrent jobs will be mixed and become
      unreadable.
      Also see:
      https://snakemake.readthedocs.io/en/stable/snakefiles/rules.html#log-files

Formatting results

All tests passed!