MathiasEskildsen/ONT-AmpSeq

Snakemake workflow to generate OTU tables from barcoded ONT data

Overview

Latest release: v1.1.2, Last update: 2025-01-28

Linting: passed, Formatting: failed

Deployment

Step 1: Install Snakemake and Snakedeploy

Snakemake and Snakedeploy are best installed via the Mamba package manager (a drop-in replacement for conda). If you have neither Conda nor Mamba, it is recommended to install Miniforge. More details regarding Mamba can be found in the Mamba documentation.

When using Mamba, run

mamba create -c conda-forge -c bioconda --name snakemake snakemake snakedeploy

to install both Snakemake and Snakedeploy in an isolated environment. For all following commands, ensure that this environment is activated via

conda activate snakemake

Step 2: Deploy workflow

With Snakemake and Snakedeploy installed, the workflow can be deployed as follows. First, create an appropriate project working directory on your system and enter it:

mkdir -p path/to/project-workdir
cd path/to/project-workdir

In all following steps, we will assume that you are inside that directory. Then run

snakedeploy deploy-workflow https://github.com/MathiasEskildsen/ONT-AmpSeq . --tag v1.1.2

Snakedeploy will create two folders, workflow and config. The former contains the deployment of the chosen workflow as a Snakemake module, the latter contains configuration files which will be modified in the next step in order to configure the workflow to your needs.

Step 3: Configure workflow

To configure the workflow, adapt config/config.yml to your needs following the instructions below.

Step 4: Run workflow

The deployment method is controlled using the --software-deployment-method (short --sdm) argument.

To run the workflow with automatic deployment of all required software via conda/mamba, use

snakemake --cores all --sdm conda

Snakemake will automatically detect the main Snakefile in the workflow subfolder and execute the workflow module that has been defined by the deployment in step 2.

For further options such as cluster and cloud execution, see the docs.
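If you run the workflow repeatedly, the command-line options above can be stored in a Snakemake profile instead of being retyped. Below is a minimal sketch; the profile directory name profiles/default is an arbitrary choice for illustration, not part of this workflow:

```yaml
# profiles/default/config.yaml -- hypothetical location, invoke with:
#   snakemake --workflow-profile profiles/default
cores: all
software-deployment-method: conda
```

Profile keys mirror the long option names of the Snakemake CLI, so any regularly used flag can be added here.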

Step 5: Generate report

After finalizing your data analysis, you can automatically generate an interactive visual HTML report for inspection of results, together with parameters and code, in the browser using

snakemake --report report.zip

Configuration

The following section is imported from the workflow’s config/README.md.


  • input_dir: Path to the input folder containing fastq files, compressed or uncompressed. The pipeline expects the input files to conform to one of two directory structures. (1) input_dir contains a subfolder for each sample ID/barcode; in that case, all fastq files in each subfolder are concatenated and the subfolder name is used as the sample ID downstream. This is usually the "fastq_pass" folder from nanopore sequencing and basecalling output (at least when using Guppy). (2) input_dir contains already concatenated fastq files located directly in input_dir; in that case, the pipeline uses the filename as the sample ID downstream. This is usually the output from Dorado re-basecalling with demultiplexing enabled.
  • output_dir: Output directory with the final results and a few intermediate files that can be reused for other downstream purposes if desired.
  • tmp_dir: Directory for temporary files.
  • log_dir: Directory for log files for all invoked rules.
  • db_path_sintax: Database to infer taxonomy using the SINTAX algorithm. Contains sequenceID, taxonomy string and fasta sequence.
  • db_path_blast: BLAST-formatted nucleotide database to infer taxonomy using the BLASTn algorithm.
  • evalue: E-value cutoff for blast. Default = 1e-10.
  • length_lower_limit: Argument passed on to chopper for filtering reads. Appropriate values depend on amplicon length, which can be checked by running the helper script scripts/nanoplot.sh.
  • length_upper_limit: Argument passed on to chopper for filtering reads. Appropriate values depend on amplicon length, which can be checked by running the helper script scripts/nanoplot.sh.
  • quality_cut_off: Argument passed on to chopper for filtering reads. The appropriate value depends on the quality of your sequencing data, which can be checked by running the helper script scripts/nanoplot.sh. It is recommended to pick a Q-score >20, if your data permits it.
  • max_threads: Maximum number of threads that can be used for any rule.
  • include_blast_output: Default = True. If true, Snakemake will output a final OTU-table with taxonomy inferred from a BLASTn search against an nt BLAST database.
  • include_sintax_output: Default = True. If true, Snakemake will output a final OTU-table with taxonomy inferred from a SINTAX-formatted database.
  • ids: Clustering identities for OTUs. Default is 97% and 99%. Use "." as the decimal separator, e.g. 99.9.
  • primer_f: Forward primer length to trim off. Default = 22.
  • primer_r: Reverse primer length to trim off. Default = 22.
  • f: Minimap2 mapping option. Default = 0.0002. More information in the minimap2 documentation.
  • K: Minimap2 mapping option. Default = 500M. More information in the minimap2 documentation.
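Putting the options above together, a config/config.yml might look like the following sketch. All paths, the length limits, and the exact YAML types (e.g. whether ids is written as a list) are placeholder assumptions for illustration; check them against the config file created by Snakedeploy and adjust to your data. The values shown for evalue, quality_cut_off, primer_f, primer_r, f, and K are the defaults stated above:

```yaml
# Sketch of config/config.yml -- paths and length limits are placeholders
input_dir: "path/to/fastq_pass"
output_dir: "results"
tmp_dir: "tmp"
log_dir: "logs"
db_path_sintax: "path/to/sintax_db.fasta"
db_path_blast: "path/to/blast_db/nt"
evalue: 1e-10
length_lower_limit: 400   # placeholder -- set from scripts/nanoplot.sh output
length_upper_limit: 600   # placeholder -- set from scripts/nanoplot.sh output
quality_cut_off: 20
max_threads: 8
include_blast_output: True
include_sintax_output: True
ids: [97, 99]             # list syntax is an assumption; see the deployed config
primer_f: 22
primer_r: 22
f: 0.0002
K: "500M"
```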

Linting and formatting

Linting results

None

Formatting results

[DEBUG] In file "/tmp/tmpnzqufpzt/MathiasEskildsen-ONT-AmpSeq-4f0694d/workflow/rules/02-filtering.smk": Formatted content is different from original
[DEBUG] In file "/tmp/tmpnzqufpzt/MathiasEskildsen-ONT-AmpSeq-4f0694d/workflow/rules/common.smk": Formatted content is different from original
[DEBUG] In file "/tmp/tmpnzqufpzt/MathiasEskildsen-ONT-AmpSeq-4f0694d/workflow/rules/04-mapping.smk": Formatted content is different from original
[DEBUG] In file "/tmp/tmpnzqufpzt/MathiasEskildsen-ONT-AmpSeq-4f0694d/workflow/rules/08-taxonomy.smk": Formatted content is different from original

... (truncated)