tdayris/fair_fastqc_multiqc
Perform basic QC over sequenced data
Overview
Topics: fair fastqc fastqscreen reproducible-workflows snakemake
Latest release: 2.5.1, Last update: 2024-12-13
Linting: failed, Formatting: passed
Deployment
Step 1: Install Snakemake and Snakedeploy
Snakemake and Snakedeploy are best installed via the Mamba package manager (a drop-in replacement for conda). If you have neither Conda nor Mamba, it is recommended to install Miniforge. More details regarding Mamba can be found here.
When using Mamba, run
mamba create -c conda-forge -c bioconda --name snakemake snakemake snakedeploy
to install both Snakemake and Snakedeploy in an isolated environment. For all following commands ensure that this environment is activated via
conda activate snakemake
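To confirm that both tools were installed correctly, you can print their versions from within the activated environment (each command should report a version number):
snakemake --version
snakedeploy --version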
Step 2: Deploy workflow
With Snakemake and Snakedeploy installed, the workflow can be deployed as follows. First, create an appropriate project working directory on your system and enter it:
mkdir -p path/to/project-workdir
cd path/to/project-workdir
In all following steps, we will assume that you are inside that directory. Then run
snakedeploy deploy-workflow https://github.com/tdayris/fair_fastqc_multiqc . --tag 2.5.1
Snakedeploy will create two folders, workflow and config. The former contains the deployment of the chosen workflow as a Snakemake module, the latter contains configuration files which will be modified in the next step in order to configure the workflow to your needs.
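As a quick sanity check, you can list the contents of both folders right after deployment; exact file names may vary between releases, but you should at least see a Snakefile inside workflow and the configuration file inside config:
ls workflow config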
Step 3: Configure workflow
To configure the workflow, adapt config/config.yml to your needs following the instructions below.
Step 4: Run workflow
The deployment method is controlled using the --software-deployment-method (short --sdm) argument.
To run the workflow with automatic deployment of all required software via conda/mamba, use
snakemake --cores all --sdm conda
To run the workflow using a combination of conda and apptainer/singularity for software deployment, use
snakemake --cores all --sdm conda apptainer
Snakemake will automatically detect the main Snakefile in the workflow subfolder and execute the workflow module that has been defined by the deployment in step 2.
For further options such as cluster and cloud execution, see the docs.
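Before launching a full run, it can be useful to perform a dry run, which resolves the job graph and lists the planned jobs without executing anything. For example, with conda-based deployment:
snakemake --cores all --sdm conda --dry-run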
Step 5: Generate report
After finalizing your data analysis, you can automatically generate an interactive visual HTML report for inspection of results, together with parameters and code, in the browser using
snakemake --report report.zip
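Snakemake infers the report format from the file extension, so if you prefer a single standalone HTML file over a zip archive, you can instead run
snakemake --report report.html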
Configuration
The following section is imported from the workflow’s config/README.md.
This pipeline requires two configuration files:
A standard Snakemake configuration, YAML-formatted file containing a list of all parameters accepted in this workflow:
- samples: Path to the file containing the link between samples and their fastq file(s)
- params: Per-tool list of optional parameters
Example:
samples: config/samples.csv

# Optional parameters
params:
  # Path to configuration file
  fair_fastqc_multiqc_fastq_screen_config: "/mnt/beegfs/database/bioinfo/Index_DB/Fastq_Screen/0.14.0/fastq_screen.conf"
A complete list of accepted keys is available in schemas, with their default value, expected type, and human readable description.
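If you want to verify that your edited configuration is still syntactically valid YAML before launching the workflow, a minimal check from the Snakemake environment (assuming the file is located at config/config.yml as described above; it prints nothing on success and raises an error otherwise) is
python -c "import yaml; yaml.safe_load(open('config/config.yml'))"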
A CSV-formatted text file containing the following mandatory columns:
- sample_id: Unique name of the sample
- upstream_file: Path to upstream fastq file
- species: The species name, according to Ensembl standards
- build: The corresponding genome build, according to Ensembl standards
- release: The corresponding genome release, according to Ensembl standards
- downstream_file: Path to downstream fastq file; leave empty in case of a single-ended library
A complete list of accepted keys is available in schemas, with their default value, expected type, and human readable description.
Example:
sample_id,upstream_file,downstream_file,species,build,release
sac_a,data/reads/a.scerevisiae.1.fq,data/reads/a.scerevisiae.2.fq,saccharomyces_cerevisiae,R64-1-1,110
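For a single-ended library, the downstream_file field is simply left empty. A hypothetical additional row (sample name and path invented for illustration) would then look like
sac_b,data/reads/b.scerevisiae.1.fq,,saccharomyces_cerevisiae,R64-1-1,110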
While the CSV format is tested and recommended, this workflow uses Python's csv.Sniffer() to detect the column separator. Tabulation and semicolon are also accepted as field separators. Remember that only the comma separator is tested.
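If you are unsure which separator will be detected for your own sheet, you can reproduce the detection with a one-liner (assuming the sheet is located at config/samples.csv as in the example configuration above; it prints the detected field separator, e.g. a comma):
python -c "import csv; print(csv.Sniffer().sniff(open('config/samples.csv').read(2048)).delimiter)"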
A third file, describing the reference genomes, is fully optional. When it is missing, the genome sequences will be downloaded from Ensembl and indexed.
A CSV-formatted text file containing the following mandatory columns:
- species: The species name, according to Ensembl standards
- build: The corresponding genome build, according to Ensembl standards
- release: The corresponding genome release, according to Ensembl standards
Example:
species,build,release
homo_sapiens,GRCh38,105
mus_musculus,GRCm38,99
mus_musculus,GRCm39,110
A complete list of accepted keys is available in schemas, with their default value, expected type, and human readable description.
Note: While the CSV format is tested and recommended, this workflow uses Python's csv.Sniffer() to detect the column separator. Tabulation and semicolon are also accepted as field separators. Remember that only the comma separator is tested.
Linting and formatting
Linting results
FileNotFoundError in file /tmp/tmpjju01v5q/tdayris-fair_fastqc_multiqc-5d7c2a0/workflow/rules/common.smk, line 362:
Could not find fqscreen_config='/mnt/beegfs/database/bioinfo/Index_DB/Fastq_Screen/0.14.0/fastq_screen.conf'
File "/tmp/tmpjju01v5q/tdayris-fair_fastqc_multiqc-5d7c2a0/workflow/rules/multiqc.smk", line 63, in <module>
File "/tmp/tmpjju01v5q/tdayris-fair_fastqc_multiqc-5d7c2a0/workflow/rules/common.smk", line 362, in use_fqscreen
Formatting results
None