epigen/unsupervised_analysis
A general purpose Snakemake workflow and MrBiomics module to perform unsupervised analyses (dimensionality reduction & cluster analysis) and visualizations of high-dimensional data.
Overview
Latest release: v3.0.3, Last update: 2026-03-13
Share link: https://snakemake.github.io/snakemake-workflow-catalog?wf=epigen/unsupervised_analysis
Quality control: linting failed, formatting failed
Topics: data-science high-dimensional-data snakemake workflow unsupervised-learning principal-component-analysis umap pca visualization clustering data-visualization dimensionality-reduction heatmap densmap cluster-analysis cluster-validation clustering-algorithm clustree leiden-algorithm
Deployment
Step 1: Install Snakemake and Snakedeploy
Snakemake and Snakedeploy are best installed via the Conda package manager. It is recommended to install conda via Miniforge. Run
conda create -c conda-forge -c bioconda -c nodefaults --name snakemake snakemake snakedeploy
to install both Snakemake and Snakedeploy in an isolated environment. For all following commands ensure that this environment is activated via
conda activate snakemake
For other installation methods, refer to the Snakemake and Snakedeploy documentation.
Step 2: Deploy workflow
With Snakemake and Snakedeploy installed, the workflow can be deployed as follows. First, create an appropriate project working directory on your system and enter it:
mkdir -p path/to/project-workdir
cd path/to/project-workdir
In all following steps, we will assume that you are inside of that directory. Then run
snakedeploy deploy-workflow https://github.com/epigen/unsupervised_analysis . --tag v3.0.3
Snakedeploy will create two folders, workflow and config. The former contains the deployment of the chosen workflow as a Snakemake module, the latter contains configuration files which will be modified in the next step in order to configure the workflow to your needs.
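For orientation, the generated workflow/Snakefile typically contains little more than a module declaration pointing at the released workflow; the exact contents depend on the snakedeploy version, so treat this as a sketch:

```
configfile: "config/config.yaml"

# declare the released workflow as a Snakemake module
module unsupervised_analysis:
    snakefile:
        github("epigen/unsupervised_analysis", path="workflow/Snakefile", tag="v3.0.3")
    config:
        config

# re-export all rules of the module so they can be run from this project
use rule * from unsupervised_analysis
```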
Step 3: Configure workflow
To configure the workflow, adapt config/config.yaml to your needs following the instructions below.
Step 4: Run workflow
The deployment method is controlled using the --software-deployment-method (short --sdm) argument.
To run the workflow with automatic deployment of all required software via conda/mamba, use
snakemake --cores all --sdm conda
Snakemake will automatically detect the main Snakefile in the workflow subfolder and execute the workflow module that has been defined by the deployment in step 2.
For further options such as cluster and cloud execution, see the docs.
Step 5: Generate report
After finalizing your data analysis, you can automatically generate an interactive visual HTML report for inspecting results, together with parameters and code, in the browser using
snakemake --report report.zip
Configuration
The following section is imported from the workflow’s config/README.md.
You need one configuration file to configure the analyses and one annotation file describing the data to run the complete workflow. If in doubt, read the comments in the config and/or try the default values. We provide a full example including data and configuration in test/ as a starting point.
project configuration (config/config.yaml): Different for every project; configures the analyses to be performed.
sample annotation (annotation): CSV file consisting of four mandatory columns:
name: A unique name for the dataset (tip: keep it short but descriptive).
data: Path to the tabular data as a comma-separated table (CSV).
metadata: Path to the metadata as a comma-separated table (CSV), with the first column being the index/identifier of each observation/sample and every other column being metadata for the respective observation (either numeric or categorical, not mixed). No NaN or empty values are allowed, and no special characters (anything other than a-z, 0-9 and _) in the index.
samples_by_features: Boolean indicator of whether the data matrix is observations/samples (rows) x features (columns): 0 == no, 1 == yes.
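A minimal annotation file with the four mandatory columns could be written as follows; the dataset name and file paths are illustrative, not taken from the workflow's test data:

```shell
# write a minimal example annotation file (name and paths are hypothetical)
cat > annotation.csv <<'EOF'
name,data,metadata,samples_by_features
digits,data/digits_data.csv,data/digits_metadata.csv,1
EOF

# sanity-check that the four mandatory columns are present
head -n 1 annotation.csv
```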
Set workflow-specific resources or command line arguments (CLI) in the workflow profile workflow/profiles/default/config.yaml, which supersedes global Snakemake profiles.
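As an illustration, a workflow profile could set default resources and retries like this; the values are hypothetical, not the workflow's shipped defaults:

```yaml
# hypothetical contents of the workflow profile (config.yaml inside workflow/profiles/default)
default-resources:
  mem_mb: 16000   # must be an integer, not a quoted string
  runtime: 120    # in minutes
retries: 1
```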
Linting and formatting
Linting results
Using workflow specific profile workflow/profiles/default for setting default command line arguments.
RuleException in file "/tmp/tmpdbl79n3l/epigen-unsupervised_analysis-9e600f9/workflow/rules/dimred.smk", line 2:
Standard resource specified with invalid type, got error:
Resource 'mem_mb' must be assigned an int. Got '32000' (type <class 'str'>)
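The lint error above indicates that mem_mb was given as a string. Standard resources must be integers; a hypothetical rule showing the difference:

```
rule example:                 # illustrative rule, not from this workflow
    output:
        "results/example.txt"
    resources:
        mem_mb=32000          # correct: an int; mem_mb="32000" raises the error above
    shell:
        "touch {output}"
```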
Formatting results
[DEBUG] In file "/tmp/tmpdbl79n3l/epigen-unsupervised_analysis-9e600f9/workflow/rules/cluster_validation.smk": Formatted content is different from original
[DEBUG] In file "/tmp/tmpdbl79n3l/epigen-unsupervised_analysis-9e600f9/workflow/rules/dimred.smk": Formatted content is different from original
[DEBUG] In file "/tmp/tmpdbl79n3l/epigen-unsupervised_analysis-9e600f9/workflow/rules/common.smk": Formatted content is different from original
[DEBUG] In file "/tmp/tmpdbl79n3l/epigen-unsupervised_analysis-9e600f9/workflow/Snakefile": Formatted content is different from original
[DEBUG] In file "/tmp/tmpdbl79n3l/epigen-unsupervised_analysis-9e600f9/workflow/rules/envs_export.smk": Formatted content is different from original
[DEBUG] In file "/tmp/tmpdbl79n3l/epigen-unsupervised_analysis-9e600f9/workflow/rules/visualization.smk": Formatted content is different from original
[DEBUG] In file "/tmp/tmpdbl79n3l/epigen-unsupervised_analysis-9e600f9/workflow/rules/clustering.smk": Formatted content is different from original
[INFO] 7 file(s) would be changed 😬

snakefmt version: 0.11.4