SchlossLab/mikropml-snakemake-workflow

Snakemake template for building reusable and scalable machine learning pipelines with mikropml

Overview

Topics: snakemake machine-learning rstats

Latest release: v1.3.0, Last update: 2025-02-26

Linting: passed, Formatting: passed

Deployment

Step 1: Install Snakemake and Snakedeploy

Snakemake and Snakedeploy are best installed via the Mamba package manager (a drop-in replacement for Conda). If you have neither Conda nor Mamba, it is recommended to install Miniforge. More details regarding Mamba can be found in the Mamba documentation.

When using Mamba, run

mamba create -c conda-forge -c bioconda --name snakemake snakemake snakedeploy

to install both Snakemake and Snakedeploy in an isolated environment. For all following commands ensure that this environment is activated via

conda activate snakemake
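
As an optional sanity check, you can confirm that both tools are available in the activated environment:

snakemake --version
snakedeploy --help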

Step 2: Deploy workflow

With Snakemake and Snakedeploy installed, the workflow can be deployed as follows. First, create an appropriate project working directory on your system and enter it:

mkdir -p path/to/project-workdir
cd path/to/project-workdir

In all following steps, we will assume that you are inside of that directory. Then run

snakedeploy deploy-workflow https://github.com/SchlossLab/mikropml-snakemake-workflow . --tag v1.3.0

Snakedeploy will create two folders, workflow and config. The former contains the deployment of the chosen workflow as a Snakemake module, the latter contains configuration files which will be modified in the next step in order to configure the workflow to your needs.

Step 3: Configure workflow

To configure the workflow, adapt config/config.yaml to your needs following the instructions below.

Step 4: Run workflow

The deployment method is controlled using the --software-deployment-method (short --sdm) argument.

To run the workflow with automatic deployment of all required software via conda/mamba, use

snakemake --cores all --sdm conda

To run the workflow using apptainer/singularity, use

snakemake --cores all --sdm apptainer

To run the workflow using a combination of conda and apptainer/singularity for software deployment, use

snakemake --cores all --sdm conda apptainer

Snakemake will automatically detect the main Snakefile in the workflow subfolder and execute the workflow module that has been defined by the deployment in step 2.
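
Before committing to a full run, you can preview the jobs Snakemake would schedule with a standard dry run, for example:

snakemake --cores all --sdm conda --dry-run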

For further options such as cluster and cloud execution, see the docs.

Step 5: Generate report

After finalizing your data analysis, you can automatically generate an interactive HTML report for inspecting results together with parameters and code in the browser using

snakemake --report report.zip
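
If you prefer a single self-contained HTML file over a zip archive, Snakemake also accepts an .html report target:

snakemake --report report.html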

Configuration

The following section is imported from the workflow’s config/README.md.

General configuration

To configure this workflow, modify config/config.yaml according to your needs.

Configuration options:

  • dataset_csv: the path to the dataset as a csv file.
  • dataset_name: a short name to identify the dataset.
  • outcome_colname: column name of the outcomes or classes for the dataset. If blank, the first column of the dataset will be used as the outcome and all other columns are features.
  • ml_methods: list of machine learning methods to use. Must be supported by mikropml or caret.
  • kfold: k number for k-fold cross validation during model training.
  • ncores: the number of cores to use for preprocess_data(), run_ml(), and get_feature_importance(). Do not exceed the number of cores you have available.
  • nseeds: the number of different random seeds to use for training models with run_ml(). This will result in nseeds different train/test splits of the dataset.
  • find_feature_importance: whether to calculate feature importances with permutation tests (true or false). If false, the plot in the report will be blank.
  • hyperparams: override the default model hyperparameters set by mikropml for each ML method (optional). Leave this blank if you'd like to use the defaults. You will have to set these if you wish to use an ML method from caret that we don't officially support.
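
As an illustration, a filled-in config/config.yaml might look like the sketch below; the file path and parameter values are hypothetical examples, not the workflow's defaults.

dataset_csv: data/my_dataset.csv    # hypothetical path to your dataset
dataset_name: my_dataset
outcome_colname: diagnosis          # example outcome column; leave blank to use the first column
ml_methods:
  - glmnet
  - rf
kfold: 5
ncores: 4
nseeds: 100
find_feature_importance: false
hyperparams:                        # leave blank to use the mikropml defaults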

We also provide config/test.yaml, which uses a smaller dataset so you can first make sure the workflow runs without error on your machine before using your own dataset and custom parameters.

The default and test config files are suitable for initial testing, but we recommend using more cores (if available) and more seeds for model training. A more robust configuration is provided in config/robust.yaml.
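
For that initial test, you can point Snakemake at the test configuration with the standard --configfile flag, for example:

snakemake --cores all --sdm conda --configfile config/test.yaml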

SLURM

  1. If you plan to run the workflow on an HPC with Slurm, you will need to edit your email (YOUR_EMAIL_HERE), Slurm account (YOUR_ACCOUNT_HERE), and other Slurm parameters as needed in config/slurm/config.yaml.

  2. Create a slurm submission script (workflow/scripts/submit_slurm.sh) with the following contents:

    #!/bin/bash
    #SBATCH --job-name=mikropml # sbatch options here only affect the overall job
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=1
    #SBATCH --cpus-per-task=1
    #SBATCH --mem-per-cpu=100MB
    #SBATCH --time=96:00:00
    #SBATCH --output=log/hpc/slurm-%j_%x.out 
    #SBATCH --account=YOUR_ACCOUNT_HERE      # your account name
    #SBATCH --partition=standard             # the partition
    #SBATCH --mail-user=YOUR_EMAIL_HERE      # your email
    #SBATCH --mail-type=BEGIN,END,FAIL
    

    # Load any required modules for your HPC.
    module load singularity

    # Run snakemake
    snakemake --profile config/slurm --latency-wait 90 --use-singularity --use-conda --configfile config/test.yaml

    Edit the slurm options as needed. Run snakemake --help to see descriptions of snakemake's command line arguments.

  3. Submit the snakemake workflow with:

    sbatch workflow/scripts/submit_slurm.sh

    The main job will submit all other snakemake jobs using the default resources specified in config/slurm/config.yaml. This allows independent steps of the workflow to run on different nodes in parallel. Slurm output files will be written to log/hpc/.
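
    While the workflow runs, you can monitor the submitted jobs and follow the main job's log with standard Slurm and shell commands; <jobid> below is a placeholder for the ID printed by sbatch:

    squeue -u $USER
    tail -f log/hpc/slurm-<jobid>_mikropml.out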

Out of memory or walltime

When using Slurm, if a job fails because it ran out of memory, you can increase the memory for that rule with the resources directive. For example, if the combine_hp_performance rule fails, you can increase its memory from 16 GB to, say, 24 GB in workflow/rules/combine.smk:

rule combine_hp_performance:
    input:
        ...
    resources:
        mem_mb = MEM_PER_GB * 24
    ...

The new mem_mb value then gets passed on to the slurm configuration.

To specify more cores for a rule, use the threads directive:

rule combine_hp_performance:
    input:
        ...
    resources:
        mem_mb = MEM_PER_GB * 24
    threads: 8
    ...

You can also change other Slurm parameters defined in config/slurm/config.yaml.
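
Depending on your Snakemake version, you can also override threads and resources from the command line without editing the rule files, using Snakemake's --set-threads and --set-resources options; a sketch:

snakemake --cores all --sdm conda \
    --set-threads combine_hp_performance=8 \
    --set-resources combine_hp_performance:mem_mb=24000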
