epigen/fetch_ngs
Workflow to Fetch Public Sequencing Data and Metadata Using iSeq and MrBiomics Module.
Overview
Topics: bam database fastq genomics next-generation-sequencing ngs repository
Latest release: v1.0.5, Last update: 2025-04-03
Linting: failed, Formatting: failed
Deployment
Step 1: Install Snakemake and Snakedeploy
Snakemake and Snakedeploy are best installed via the Mamba package manager (a drop-in replacement for Conda). If you have neither Conda nor Mamba, it is recommended to install Miniforge. More details regarding Mamba can be found in the Mamba documentation.
When using Mamba, run
mamba create -c conda-forge -c bioconda --name snakemake snakemake snakedeploy
to install both Snakemake and Snakedeploy in an isolated environment. For all following commands, ensure that this environment is activated via
conda activate snakemake
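To check that the installation succeeded, you can print the Snakemake version from within the activated environment:
snakemake --version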
Step 2: Deploy workflow
With Snakemake and Snakedeploy installed, the workflow can be deployed as follows. First, create an appropriate project working directory on your system and enter it:
mkdir -p path/to/project-workdir
cd path/to/project-workdir
In all following steps, we will assume that you are inside that directory. Then run
snakedeploy deploy-workflow https://github.com/epigen/fetch_ngs . --tag v1.0.5
Snakedeploy will create two folders, workflow and config. The former contains the deployment of the chosen workflow as a Snakemake module; the latter contains configuration files, which will be modified in the next step in order to configure the workflow to your needs.
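The generated workflow/Snakefile is a thin stub that declares the released workflow as a Snakemake module; it should look roughly like the following sketch (the exact content written by Snakedeploy may differ):
configfile: "config/config.yaml"

# declare the released workflow as a module pinned to the chosen tag
module fetch_ngs:
    snakefile:
        github("epigen/fetch_ngs", path="workflow/Snakefile", tag="v1.0.5")
    config:
        config

# reuse all rules from the module
use rule * from fetch_ngs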
Step 3: Configure workflow
To configure the workflow, adapt config/config.yaml to your needs following the instructions below.
Step 4: Run workflow
The deployment method is controlled using the --software-deployment-method (short --sdm) argument.
To run the workflow with automatic deployment of all required software via conda/mamba, use
snakemake --cores all --sdm conda
Snakemake will automatically detect the main Snakefile in the workflow subfolder and execute the workflow module that has been defined by the deployment in step 2.
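Before launching the full run, a dry run is a quick way to verify that the configuration is valid and all targets resolve as expected:
snakemake --cores all --sdm conda --dry-run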
For further options such as cluster and cloud execution, see the Snakemake documentation.
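As one example, assuming the snakemake-executor-plugin-slurm executor plugin is installed in the same environment, a cluster submission could look like the following sketch:
snakemake --executor slurm --jobs 100 --sdm conda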
Step 5: Generate report
After finalizing your data analysis, you can automatically generate an interactive visual HTML report for inspection of results, together with parameters and code, in the browser using
snakemake --report report.zip
Configuration
The following section is imported from the workflow’s config/README.md.
You only need one configuration file to run the complete workflow. You can use the provided example as a starting point. If in doubt, read the comments in the configuration file, the documentation of the respective methods, and/or try the default values.
configuration (config/config.yaml): Different for every project/dataset and configures the datasets to be fetched and how they should be processed. The fields are described within the file.
Set workflow-specific resources or command line arguments (CLI) in the workflow profile workflow/profiles/default/config.yaml, which supersedes global Snakemake profiles.
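As a minimal sketch of such a workflow profile (the keys mirror Snakemake CLI options; the values are illustrative assumptions, and iseq_download is one of the rules named in the linting output below):
# workflow/profiles/default/config.yaml -- illustrative values only
software-deployment-method: conda
default-resources:
  mem_mb: 8000
set-threads:
  iseq_download: 4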
Example Configurations
Metadata-only
project_name: ExploratoryProject
result_path: results/
metadata_only: 1
accession_ids:
  - GSE122139
Full download with BAM output
threads: 16
mem: 32000
project_name: BAMProject
result_path: results/
metadata_only: 0
output_format: bam
accession_ids:
  - GSE122139
  - SRP123456
Full download with FASTQ output
threads: 16
mem: 32000
project_name: FastqProject
result_path: results/
metadata_only: 0
output_format: fastq
accession_ids:
  - ERS5684710
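Any of these examples can be saved as config/config.yaml and run as shown in step 4. If you keep several variants side by side, e.g. a hypothetical config/config_bam.yaml, a specific one can be selected with the --configfile argument:
snakemake --cores all --sdm conda --configfile config/config_bam.yaml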
Linting and formatting
Linting results
Using workflow specific profile workflow/profiles/default for setting default command line arguments.
Lints for rule iseq_download (line 3, /tmp/tmpyseuv5jq/epigen-fetch_ngs-257f432/workflow/rules/fetch.smk):
* No log directive defined:
Without a log directive, all output will be printed to the terminal. In
distributed environments, this means that errors are harder to discover.
In local environments, output of concurrent jobs will be mixed and become
unreadable.
Also see:
https://snakemake.readthedocs.io/en/stable/snakefiles/rules.html#log-files
Lints for rule fastq_to_bam (line 39, /tmp/tmpyseuv5jq/epigen-fetch_ngs-257f432/workflow/rules/fetch.smk):
* No log directive defined:
Without a log directive, all output will be printed to the terminal. In
distributed environments, this means that errors are harder to discover.
In local environments, output of concurrent jobs will be mixed and become
unreadable.
Also see:
https://snakemake.readthedocs.io/en/stable/snakefiles/rules.html#log-files
Lints for rule fetch_file (line 55, /tmp/tmpyseuv5jq/epigen-fetch_ngs-257f432/workflow/rules/fetch.smk):
... (truncated)
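The fix suggested by the linter is the same for every flagged rule: add a log directive and redirect the command's output to it. A minimal illustrative rule (not taken from this workflow) would look like:
rule example_rule:
    output:
        "results/example.txt"
    log:
        "logs/example_rule.log"
    shell:
        "echo done > {output} 2> {log}"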
Formatting results
[DEBUG]
[DEBUG] In file "/tmp/tmpyseuv5jq/epigen-fetch_ngs-257f432/workflow/Snakefile": Formatted content is different from original
[DEBUG]
[DEBUG] In file "/tmp/tmpyseuv5jq/epigen-fetch_ngs-257f432/workflow/rules/fetch.smk": Formatted content is different from original
[DEBUG]
[DEBUG] In file "/tmp/tmpyseuv5jq/epigen-fetch_ngs-257f432/workflow/rules/metadata.smk": Formatted content is different from original
[DEBUG]
[DEBUG] In file "/tmp/tmpyseuv5jq/epigen-fetch_ngs-257f432/workflow/rules/export.smk": Formatted content is different from original
[INFO] 4 file(s) would be changed 😬
snakefmt version: 0.10.2
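To apply the suggested formatting locally, snakefmt can be run directly on the workflow sources, for example:
snakefmt workflow/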