Albinam1/bioinformatics_school


Overview

Latest release: None, Last update: 2022-07-13

Linting: failed, Formatting: failed

Deployment

Step 1: Install Snakemake and Snakedeploy

Snakemake and Snakedeploy are best installed via the Mamba package manager (a drop-in replacement for Conda). If you have neither Conda nor Mamba, it is recommended to install Miniforge. More details regarding Mamba can be found in the Mamba documentation.

When using Mamba, run

mamba create -c conda-forge -c bioconda --name snakemake snakemake snakedeploy

to install both Snakemake and Snakedeploy in an isolated environment. For all following commands, ensure that this environment is activated via

conda activate snakemake

Step 2: Deploy workflow

With Snakemake and Snakedeploy installed, the workflow can be deployed as follows. First, create an appropriate project working directory on your system and enter it:

mkdir -p path/to/project-workdir
cd path/to/project-workdir

In all following steps, we will assume that you are inside of that directory. Then run

snakedeploy deploy-workflow https://github.com/Albinam1/bioinformatics_school . --tag None

Note that this workflow has no tagged release yet (hence --tag None); if the command fails, deploying from a branch via Snakedeploy's --branch flag is an alternative.

Snakedeploy will create two folders, workflow and config. The former contains the deployment of the chosen workflow as a Snakemake module; the latter contains configuration files, which will be modified in the next step to adapt the workflow to your needs.
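A typical layout after deployment looks roughly like this (the sheet file names are illustrative; the exact contents depend on the workflow):

```
project-workdir/
├── workflow/
│   └── Snakefile        # includes the deployed workflow as a module
└── config/
    ├── config.yaml
    ├── samples.tsv
    └── units.tsv
```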

Step 3: Configure workflow

To configure the workflow, adapt config/config.yaml to your needs following the instructions below.

Step 4: Run workflow

The deployment method is controlled using the --software-deployment-method (short --sdm) argument.

To run the workflow with automatic deployment of all required software via conda/mamba, use

snakemake --cores all --sdm conda
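Optionally, you can first preview the jobs Snakemake would schedule by adding its standard dry-run flag (a general Snakemake option, not specific to this workflow):

snakemake -n --sdm conda

This builds the job graph and prints the planned jobs without executing anything.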

Snakemake will automatically detect the main Snakefile in the workflow subfolder and execute the workflow module that has been defined by the deployment in step 2.

For further options such as cluster and cloud execution, see the docs.

Step 5: Generate report

After finalizing your data analysis, you can automatically generate an interactive HTML report that lets you inspect results together with parameters and code in the browser, using

snakemake --report report.zip

Configuration

The following section is imported from the workflow’s config/README.md.

General settings

To configure this workflow, modify config/config.yaml according to your needs, following the explanations provided in the file.
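As an illustration only (the actual keys are documented inside config/config.yaml itself), a config for a GATK-style variant-calling workflow commonly points at the sample/unit sheets and a reference genome; all keys and values below are hypothetical:

```yaml
# Hypothetical excerpt -- consult config/config.yaml for the real keys.
samples: config/samples.tsv
units: config/units.tsv
ref:
  species: homo_sapiens
  build: GRCh38
```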

Sample and unit sheet

  • Add samples to config/samples.tsv. Only the column sample is mandatory, but any additional columns can be added.
  • For each sample, add one or more sequencing units (runs, lanes or replicates) to the unit sheet config/units.tsv. For each unit, define the sequencing platform and either one (column fq1) or two (columns fq1 and fq2) FASTQ files (these can point to anywhere on your system).
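The sheets might look like this for a hypothetical two-sample project (all sample names and paths are invented; only the column names follow the description above):

```shell
# Hypothetical sheets for two made-up samples, A and B; sample B is
# single-end, so its fq2 field is left empty.
mkdir -p config

printf 'sample\nA\nB\n' > config/samples.tsv

{
  printf 'sample\tunit\tplatform\tfq1\tfq2\n'
  printf 'A\tlane1\tILLUMINA\traw/A.R1.fastq.gz\traw/A.R2.fastq.gz\n'
  printf 'B\tlane1\tILLUMINA\traw/B.R1.fastq.gz\t\n'
} > config/units.tsv
```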

The pipeline will jointly call variants across all defined samples, following GATK best practices.

Linting and formatting

Linting results

Workflow defines that rule get_genome is eligible for caching between workflows (use the --cache argument to enable this).
Workflow defines that rule genome_faidx is eligible for caching between workflows (use the --cache argument to enable this).
Workflow defines that rule genome_dict is eligible for caching between workflows (use the --cache argument to enable this).
Workflow defines that rule get_known_variation is eligible for caching between workflows (use the --cache argument to enable this).
Workflow defines that rule remove_iupac_codes is eligible for caching between workflows (use the --cache argument to enable this).
Workflow defines that rule tabix_known_variants is eligible for caching between workflows (use the --cache argument to enable this).
Workflow defines that rule bwa_index is eligible for caching between workflows (use the --cache argument to enable this).
SyntaxError:
Not all output, log and benchmark files of rule map_reads_with_minimap contain the same wildcards. This is crucial though, in order to avoid that two or more jobs write to the same file.
  File "/tmp/tmpn0wv3lfq/workflow/Snakefile", line 19, in <module>
  File "/tmp/tmpn0wv3lfq/workflow/rules/mapping.smk", line 44, in <module>
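To illustrate the class of error reported above (with a hypothetical rule body, not the workflow's actual code): if an output file uses a {sample} wildcard but the log file does not, jobs for different samples would all write to the same log file, which Snakemake rejects.

```
rule map_reads_with_minimap:
    output:
        "mapped/{sample}.bam",
    log:
        "logs/minimap.log",            # broken: {sample} wildcard missing
        # fixed: "logs/minimap/{sample}.log"
```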

Formatting results

[DEBUG] In file "/tmp/tmpn0wv3lfq/workflow/rules/stats.smk":  Formatted content is different from original
[DEBUG] In file "/tmp/tmpn0wv3lfq/workflow/rules/filtering.smk":  Formatted content is different from original
[DEBUG] In file "/tmp/tmpn0wv3lfq/workflow/rules/qc.smk":  Formatted content is different from original
[DEBUG] In file "/tmp/tmpn0wv3lfq/workflow/rules/mapping.smk":  Formatted content is different from original
[INFO] 4 file(s) would be changed 😬
[INFO] 5 file(s) would be left unchanged 🎉

snakefmt version: 0.6.1