mynameisdidit/UTS_Bioinformatika_SV-Callers

Overview

Latest release: None, Last update: 2024-05-31

Linting: failed, Formatting: failed

Deployment

Step 1: Install Snakemake and Snakedeploy

Snakemake and Snakedeploy are best installed via the Mamba package manager (a drop-in replacement for Conda). If you have neither Conda nor Mamba, it is recommended to install Miniforge. More details regarding Mamba can be found in its documentation.
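
For reference, installing Miniforge typically amounts to downloading and running its installer script (the URL below is the one published in the conda-forge/miniforge README):

# download and run the Miniforge installer for the current platform
wget "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
bash "Miniforge3-$(uname)-$(uname -m).sh"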

When using Mamba, run

mamba create -c conda-forge -c bioconda --name snakemake snakemake snakedeploy

to install both Snakemake and Snakedeploy in an isolated environment. For all following commands, ensure that this environment is activated via

conda activate snakemake
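
To verify the installation, both tools respond to a version query:

snakemake --version
snakedeploy --version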

Step 2: Deploy workflow

With Snakemake and Snakedeploy installed, the workflow can be deployed as follows. First, create an appropriate project working directory on your system and enter it:

mkdir -p path/to/project-workdir
cd path/to/project-workdir

In all following steps, we will assume that you are inside that directory. Then run

snakedeploy deploy-workflow https://github.com/mynameisdidit/UTS_Bioinformatika_SV-Callers . --tag None

Snakedeploy will create two folders, workflow and config. The former contains the deployment of the chosen workflow as a Snakemake module; the latter contains configuration files which will be modified in the next step in order to configure the workflow to your needs.

Step 3: Configure workflow

To configure the workflow, adapt config/config.yml to your needs following the instructions below.
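
The exact schema is defined by the workflow itself, but based on the settings described in the Configuration section below, a sketch could look like this (treat the key names as assumptions and check the deployed file for the authoritative ones):

echo_run: 1        # 1 (default): write dummy VCFs only; 0: perform real SV calling
mode: p            # workflow mode: s (single-sample) or p (paired-sample, default)
enable_callers:    # one or more SV callers to run
  - manta
  - delly
  - lumpy
  - gridss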

Step 4: Run workflow

The deployment method is controlled using the --software-deployment-method (short --sdm) argument.

To run the workflow with automatic deployment of all required software via conda/mamba, use

snakemake --cores all --sdm conda
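
Before launching the full run, a dry run lists the jobs that would be executed without running any of them:

snakemake -n --sdm conda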

Snakemake will automatically detect the main Snakefile in the workflow subfolder and execute the workflow module that has been defined by the deployment in step 2.

For further options such as cluster and cloud execution, see the docs.

Step 5: Generate report

After finalizing your data analysis, you can automatically generate an interactive visual HTML report for inspecting results together with parameters and code in the browser using

snakemake --report report.zip
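
The resulting archive is self-contained and can be inspected offline; for example:

unzip report.zip -d report
# then open the extracted HTML file (typically report.html) in a browser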

Configuration

The following section is imported from the workflow’s config/README.md.

sv-callers

Structural variants (SVs) are an important class of genetic variation implicated in a wide array of genetic diseases. sv-callers is a Snakemake-based workflow that combines several state-of-the-art tools for detecting SVs in whole-genome sequencing (WGS) data. The workflow is easy to use and deploy on any Linux-based machine. In particular, it supports automated software deployment, easy configuration, and the addition of new analysis tools, and it scales from a single computer to different HPC clusters with minimal effort.

Dependencies

  • Python
  • Conda - package/environment management system
  • Snakemake - workflow management system
  • Xenon CLI - command-line interface to compute and storage resources
  • jq - command-line JSON processor (optional)
  • YAtiML - library for YAML type inference and schema validation

The workflow includes the following bioinformatics tools:

  • Manta
  • DELLY
  • LUMPY
  • GRIDSS
  • SURVIVOR - for merging the callers' VCF output

The software dependencies can be found in the conda environment files: [1],[2],[3].

1. Clone this repo.

git clone https://github.com/GooglingTheCancerGenome/sv-callers.git
cd sv-callers

2. Install dependencies.

# download Miniconda3 installer
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh
# install Conda (respond with 'yes')
bash miniconda.sh
# update Conda
conda update -y conda
# install Mamba
conda install -n base -c conda-forge -y mamba
# create a new environment with dependencies & activate it
mamba env create -n wf -f environment.yaml
conda activate wf

3. Configure the workflow.

  • config files:

    • analysis.yaml - analysis-specific settings (e.g., workflow mode, I/O files, SV callers, post-processing, resources used, etc.)
    • samples.csv - list of (paired) samples (see the sketch after this list)
  • input files:

    • example data in workflow/data directory
    • reference genome in .fasta (incl. index files)
    • excluded regions in .bed (optional)
    • WGS samples in .bam (incl. index files)
  • output files:

    • (filtered) SVs per caller and merged calls in .vcf (incl. index files)
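
For illustration, a paired-sample samples.csv could look like the following; the column layout shown here is a hypothetical example, so check the file shipped with the workflow for the actual header:

# hypothetical layout: one tumor/normal pair per row (paired-sample mode)
PATH,SAMPLE1,FILE1,SAMPLE2,FILE2
data,tumor1,tumor1.bam,normal1,normal1.bam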

4. Execute the workflow.

cd workflow

Locally

# 'dry' run only checks I/O files
snakemake -np

# 'vanilla' run: if echo_run is set to 1 (default) in analysis.yaml,
# it merely mimics the execution of SV callers by writing (dummy) VCF files;
# SV calling if echo_run is set to 0
snakemake --use-conda --jobs
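
To switch from the echo run to real SV calling, set echo_run to 0 in analysis.yaml, for example (this assumes the key appears as echo_run: 1 on its own line; adjust the path if the file lives elsewhere):

# flip echo_run from 1 to 0 in the analysis config
sed -i 's/^echo_run: 1/echo_run: 0/' analysis.yaml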

Submit jobs to Slurm or GridEngine cluster

SCH=slurm   # or gridengine
snakemake  --use-conda --latency-wait 30 --jobs \
--cluster "xenon scheduler $SCH --location local:// submit --name smk.{rule} --inherit-env --cores-per-task {threads} --max-run-time 1 --max-memory {resources.mem_mb} --working-directory . --stderr stderr-%j.log --stdout stdout-%j.log" &>smk.log&

Note: One sample or a tumor/normal pair generates a total of 18 SV calling and post-processing jobs. See the workflow instance of single-sample (germline) or paired-sample (somatic) analysis.

To perform SV calling:

  • edit (default) parameters in analysis.yaml

    • set echo_run to 0
    • choose between two workflow modes: single- (s) or paired-sample (p - default)
    • select one or more callers using enable_callers (default all)
  • use xenon CLI to set:

    • --max-run-time of workflow jobs (in minutes)
    • --temp-space (optional, in MB)
  • adjust compute requirements per SV caller according to the system used (see the sketch after this list):

    • the number of threads,
    • the amount of memory (in MB),
    • the amount of temporary disk space or tmpspace (path in the TMPDIR environment variable); it can be used for intermediate files by LUMPY and GRIDSS only.
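
A sketch of what these per-caller resource settings in analysis.yaml could look like (the key names are assumptions based on the list above; the defaults shipped with the workflow take precedence):

manta:
  threads: 12          # number of threads
  memory: 16384        # memory in MB
gridss:
  threads: 12
  memory: 32768
  tmpspace: 10240      # temporary disk space in MB (LUMPY and GRIDSS only)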

Query job accounting information

SCH=slurm   # or gridengine
xenon --json scheduler $SCH --location local:// list --identifier [jobID] | jq ...
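
For instance, piping through jq '.' pretty-prints the full JSON, after which you can drill into specific fields once the structure is visible (Xenon's exact field names are not shown here, so inspect the output first):

xenon --json scheduler $SCH --location local:// list --identifier [jobID] | jq '.'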

Linting and formatting

Linting results

Traceback (most recent call last):

  File "/home/runner/micromamba-root/envs/snakemake-workflow-catalog/lib/python3.12/site-packages/snakemake/cli.py", line 1986, in args_to_api
    any_lint = workflow_api.lint()
               ^^^^^^^^^^^^^^^^^^^

  File "/home/runner/micromamba-root/envs/snakemake-workflow-catalog/lib/python3.12/site-packages/snakemake/api.py", line 337, in _handle_no_dag
    return method(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/home/runner/micromamba-root/envs/snakemake-workflow-catalog/lib/python3.12/site-packages/snakemake/api.py", line 354, in lint
    workflow.include(

  File "/home/runner/micromamba-root/envs/snakemake-workflow-catalog/lib/python3.12/site-packages/snakemake/workflow.py", line 1398, in include
    exec(compile(code, snakefile.get_path_or_uri(), "exec"), self.globals)

  File "/tmp/tmp8m0dp20x/workflow/Snakefile", line 7, in <module>
    from helper_functions import *

  File "/tmp/tmp8m0dp20x/workflow/helper_functions.py", line 5, in <module>

... (truncated)

Formatting results

<unknown>:1: SyntaxWarning: invalid escape sequence '\s'
[DEBUG] In file "/tmp/tmp8m0dp20x/workflow/rules/manta.smk":  Formatted content is different from original
[INFO] 1 file(s) would be changed 😬
[INFO] 6 file(s) would be left unchanged 🎉

snakefmt version: 0.10.2