snakemake-workflows/star-arriba-fusion-calling

A standardized snakemake workflow to map RNA-seq reads with STAR and call fusions on the resulting alignment files with Arriba.

Overview

Latest release: v1.0.2, Last update: 2026-04-09

Share link: https://snakemake.github.io/snakemake-workflow-catalog?wf=snakemake-workflows/star-arriba-fusion-calling

Quality control: linting passed, formatting passed

Wrappers: bio/arriba, bio/reference/ensembl-annotation, bio/reference/ensembl-sequence, bio/samtools/index, bio/star/align, bio/star/index

Deployment

Step 1: Install Snakemake and Snakedeploy

Snakemake and Snakedeploy are best installed via the Conda package manager. It is recommended to install conda via Miniforge. Run

conda create -c conda-forge -c bioconda --name snakemake snakemake snakedeploy

to install both Snakemake and Snakedeploy in an isolated environment. For all following commands, ensure that this environment is activated via

conda activate snakemake

For other installation methods, refer to the Snakemake and Snakedeploy documentation.

Step 2: Deploy workflow

With Snakemake and Snakedeploy installed, the workflow can be deployed as follows. First, create an appropriate project working directory on your system and enter it:

mkdir -p path/to/project-workdir
cd path/to/project-workdir

In all following steps, we will assume that you are inside of that directory. Then run

snakedeploy deploy-workflow https://github.com/snakemake-workflows/star-arriba-fusion-calling . --tag v1.0.2

Snakedeploy will create two folders, workflow and config. The former contains the deployment of the chosen workflow as a Snakemake module, the latter contains configuration files which will be modified in the next step in order to configure the workflow to your needs.

Step 3: Configure workflow

To configure the workflow, adapt config/config.yaml to your needs following the instructions below.

Step 4: Run workflow

The deployment method is controlled using the --software-deployment-method (short --sdm) argument.

To run the workflow using apptainer/singularity, use

snakemake --cores all --sdm apptainer

To run the workflow using a combination of conda and apptainer/singularity for software deployment, use

snakemake --cores all --sdm conda apptainer

To run the workflow with automatic deployment of all required software via conda/mamba, use

snakemake --cores all --sdm conda

Snakemake will automatically detect the main Snakefile in the workflow subfolder and execute the workflow module that has been defined by the deployment in step 2.

For further options such as cluster and cloud execution, see the docs.

Step 5: Generate report

After finalizing your data analysis, you can automatically generate an interactive visual HTML report for inspection of results together with parameters and code inside of the browser using

snakemake --report report.zip

Configuration

The following section is imported from the workflow’s config/README.md.

workflow overview

This workflow is a best-practice workflow for calling fusions using Arriba. The workflow is built using snakemake and consists of the following steps:

  1. Download the genome reference from Ensembl.

  2. Generate STAR index of the reference genome (STAR).

  3. Align reads (STAR).

  4. Call and filter fusions (Arriba).

  5. Create fusion plots for all fusions that pass the filters (Arriba’s draw_fusions.R).
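At the command level, the steps above correspond roughly to the following sketch. This is illustrative pseudocode only: the actual rules use the Snakemake wrappers listed above, reference files downloaded from Ensembl, and options taken from the configuration, so all file names and most flags shown here are placeholders.

```shell
# Illustrative sketch only -- the real rules run via Snakemake wrappers
# with configured options; file names are placeholders.

# 2. Build the STAR index from the downloaded reference
STAR --runMode genomeGenerate --genomeDir star_index \
     --genomeFastaFiles genome.fa --sjdbGTFfile annotation.gtf

# 3. Align the reads of one sample, keeping chimeric alignments
#    in the BAM so that Arriba can see them
STAR --runMode alignReads --genomeDir star_index \
     --readFilesIn sample.R1.fastq.gz sample.R2.fastq.gz \
     --readFilesCommand zcat --outSAMtype BAM Unsorted \
     --chimSegmentMin 10 --chimOutType WithinBAM

# 4. Call and filter fusions on the resulting alignment
arriba -x Aligned.out.bam -a genome.fa -g annotation.gtf \
       -o fusions.tsv -O fusions.discarded.tsv

# 5. Plot the fusions that passed the filters
draw_fusions.R --fusions=fusions.tsv \
       --alignments=Aligned.sortedByCoord.out.bam \
       --annotation=annotation.gtf --output=fusions.pdf
```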

workflow setup

There are three things that you need to set up to run this workflow:

  1. In the unit sheet config/units.tsv, specify where to find raw FASTQ files and which units belong to which sample.

  2. In the sample sheet config/samples.tsv, specify which samples belong to which group of samples and what type (alias) of sample they are.

  3. Go through the whole workflow configuration file config/config.yaml and adjust it for your analysis. The options are explained in detailed comments within the file.

sample sheet

Add samples to config/samples.tsv. For each sample, the columns sample_name, group, alias and platform have to be defined.

  • The sample_name clearly identifies an individual biological sample.

  • Multiple samples sharing the same group indicate that they belong together in some way, for example that they come from the same patient.

  • aliases represent the type of the sample within its group. They are meant to be some abstract description of the sample type, and should thus be used consistently across groups. A classic example would be a combination of the tumor and normal aliases.

  • The platform column needs to contain the used sequencing platform (one of ‘CAPILLARY’, ‘LS454’, ‘ILLUMINA’, ‘SOLID’, ‘HELICOS’, ‘IONTORRENT’, ‘ONT’, ‘PACBIO’).

  • The same sample_name entry can be used multiple times within a samples.tsv sample sheet, with only the value in the group column differing between repeated rows. This way, you can use the same sample for variant calling in different groups, for example if you use a panel of normal samples when you don’t have matched normal samples for tumor variant calling.

In addition, the optional sv_file column can be filled with the path for sample-specific structural variant calls, meant to improve the fusion calling and filtering by Arriba. The provided files need to be in one of the formats that Arriba accepts.

Missing values can be specified by empty columns or by writing NA. Lines can be commented out with #.
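As a hypothetical example, a samples.tsv for one tumor/normal pair could look like the following (sample and file names are placeholders; the file itself must be tab-separated, the columns are merely space-aligned here for readability):

```tsv
sample_name   group     alias   platform  sv_file
patient1_t    patient1  tumor   ILLUMINA  sv/patient1.vcf
patient1_n    patient1  normal  ILLUMINA  NA
```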

unit sheet

For each sample, add one or more sequencing units (runs, lanes or replicates) to the unit sheet config/units.tsv.

  • Each unit has a unit_name. This can be a running number, or an actual run, lane or replicate id.

  • Each unit has a sample_name, which associates it with the biological sample it comes from. This information is used to merge all the units of a sample before read mapping and duplicate marking.

  • For each unit, you need to specify one of the following column combinations:

    • fq1 only, for single-end reads. This can point to any FASTQ file on your system.

    • fq1 and fq2, for paired-end reads. These can point to any FASTQ files on your system.

Missing values can be specified by empty columns or by writing NA. Lines can be commented out with #.
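Continuing the hypothetical example from above (file and sample names are placeholders; the file must be tab-separated, columns are space-aligned here for readability), a units.tsv with two lanes for the tumor sample and one single-end run for the normal sample could look like:

```tsv
sample_name   unit_name  fq1                       fq2
patient1_t    lane1      raw/p1_t.L1.R1.fastq.gz   raw/p1_t.L1.R2.fastq.gz
patient1_t    lane2      raw/p1_t.L2.R1.fastq.gz   raw/p1_t.L2.R2.fastq.gz
patient1_n    run1       raw/p1_n.R1.fastq.gz      NA
```

Here the two lanes of patient1_t would be merged before read mapping, while patient1_n is treated as single-end.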

Workflow parameters

The following table is automatically parsed from the workflow’s config.schema.y(a)ml file.

Parameter                  Type      Description                                         Required   Default
samples                    string    path to sample-sheet TSV file                       yes
units                      string    path to unit-sheet TSV file                         yes
params                                                                                   yes
. arriba                                                                                 yes
. . custom_blacklist       string    custom blacklist of known false positive fusions    yes
. . custom_known_fusions   string    custom list of known / common fusions               yes
. . extra                  string                                                        yes
. star                                                                                   yes
. . index                                                                                yes
. . . extra                string                                                        yes
. . . sjdbOverhang         integer                                                       yes
. . align                                                                                yes
. . . extra                string                                                        yes

Linting and formatting

Linting results
All tests passed!
Formatting results
All tests passed!