IMS-Bio2Core-Facility/single_snake_sequencing
A Snakemake workflow for standardised sc/snRNAseq analysis
Overview
Topics: bioinformatics snakemake scrna-seq reproducible-science conda singularity
Latest release: v0.1.2, Last update: 2021-11-02
Linting: passed, Formatting: passed
Deployment
Step 1: Install Snakemake and Snakedeploy
Snakemake and Snakedeploy are best installed via the Mamba package manager (a drop-in replacement for conda). If you have neither Conda nor Mamba, it is recommended to install Miniforge. More details regarding Mamba can be found here.
When using Mamba, run
mamba create -c conda-forge -c bioconda --name snakemake snakemake snakedeploy
to install both Snakemake and Snakedeploy in an isolated environment. For all following commands ensure that this environment is activated via
conda activate snakemake
Step 2: Deploy workflow
With Snakemake and Snakedeploy installed, the workflow can be deployed as follows. First, create an appropriate project working directory on your system and enter it:
mkdir -p path/to/project-workdir
cd path/to/project-workdir
In all following steps, we will assume that you are inside of that directory. Then run
snakedeploy deploy-workflow https://github.com/IMS-Bio2Core-Facility/single_snake_sequencing . --tag v0.1.2
Snakedeploy will create two folders, workflow and config. The former contains the deployment of the chosen workflow as a Snakemake module; the latter contains configuration files, which will be modified in the next step to configure the workflow to your needs.
Step 3: Configure workflow
To configure the workflow, adapt config/config.yml to your needs following the instructions below.
Step 4: Run workflow
The deployment method is controlled using the --software-deployment-method (short --sdm) argument.
To run the workflow with automatic deployment of all required software via conda/mamba, use
snakemake --cores all --sdm conda
To run the workflow using apptainer/singularity, use
snakemake --cores all --sdm apptainer
To run the workflow using a combination of conda and apptainer/singularity for software deployment, use
snakemake --cores all --sdm conda apptainer
Snakemake will automatically detect the main Snakefile in the workflow subfolder and execute the workflow module that has been defined by the deployment in step 2.
For further options such as cluster and cloud execution, see the docs.
Step 5: Generate report
After finalizing your data analysis, you can automatically generate an interactive visual HTML report for inspecting results, together with parameters and code, in the browser using
snakemake --report report.zip
Configuration
The following section is imported from the workflow’s config/README.md.
The expected configuration keys are given below. Don't worry about typos or missing keys: these are all enforced by Snakemake's schema validation.
config.yaml
samplesheet
Path to the samplesheet. This defaults to config/samples.yaml.
get_cellranger
- url: str, required. URL from which to retrieve Cellranger
get_reference
- url: str, required. URL from which to retrieve the Cellranger reference
counts
- introns: bool, required. Whether introns should be included (i.e. sn- vs. sc-RNAseq)
- n_cells: int, required. The expected number of cells to recover per sample
- mem: int, required. The memory, in GB, available to each instance of Cellranger
filter_empty
- niters: int, required. The number of iterations for which to run DropletUtils::emptyDrops
qc
- pct_counts_mt: int, required. Cells with a percentage of mitochondrial counts above this value will be discarded
- total_counts: int, required. Cells with total counts above this value will be discarded
- n_genes_by_counts: int, required. Cells with fewer detected genes than this value will be discarded
dim_reduc
This top level key is optional.
- nHVGs: int, optional. The number of variable genes to use. Defaults to 1/2 the number of cells or 10,000, whichever is less, but will never be below 1000.
- var_thresh: float, optional. Keep PCs explaining up to this fraction of the variance. Between 0 and 1; defaults to 0.85.
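The nHVGs default stated above can be sketched as follows. This is a hypothetical illustration of the stated rule, not the workflow's actual code:

```python
def default_nhvgs(n_cells: int) -> int:
    """Illustrative default for nHVGs: half the number of cells
    or 10,000, whichever is less, but never below 1,000."""
    return max(min(n_cells // 2, 10_000), 1_000)

# e.g. 30,000 cells -> 10,000 HVGs; 4,000 cells -> 2,000; 1,200 cells -> 1,000
```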
cluster
- res: float, required. The resolution to use for Leiden clustering.
- markers: list of strings, optional. Markers to plot expression for. No default.
densities
This top level key is optional.
- features: list of strings, optional. Categorical features to calculate embeddings for. Defaults to "sample".
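Putting the keys above together, a config/config.yml might look like the following. All values are illustrative, and the URLs are placeholders, not real download links:

```yaml
samplesheet: config/samples.yaml

get_cellranger:
  url: https://example.com/cellranger.tar.gz   # placeholder URL
get_reference:
  url: https://example.com/refdata.tar.gz      # placeholder URL

counts:
  introns: true        # snRNAseq
  n_cells: 5000        # expected cells per sample
  mem: 64              # GB per Cellranger instance

filter_empty:
  niters: 10000

qc:
  pct_counts_mt: 10
  total_counts: 50000
  n_genes_by_counts: 500

dim_reduc:             # optional
  nHVGs: 2000
  var_thresh: 0.85

cluster:
  res: 1.0
  markers:
    - CD4
    - CD8A

densities:             # optional
  features:
    - sample
```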
samples.yaml
The top level keys are the lanes from the sequencer. The second level keys are the samples from that lane. The third level keys are the paths to the R1 and R2 data. This schema is also validated.
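For example, a samples.yaml for two samples on one lane might look like this. All lane, sample, and path names here are hypothetical:

```yaml
L001:                  # lane
  sample_A:            # sample on that lane
    R1: data/L001/sample_A_R1.fastq.gz
    R2: data/L001/sample_A_R2.fastq.gz
  sample_B:
    R1: data/L001/sample_B_R1.fastq.gz
    R2: data/L001/sample_B_R2.fastq.gz
```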