Snakemake storage plugin: deeporigin
Warning
No documentation found in repository https://github.com/formiclabs/snakemake-storage-plugin-deeporigin. The plugin should provide a docs/intro.md with some introductory sentences and optionally a docs/further.md file with details beyond the auto-generated usage instructions presented in this catalog.
Installation
Install this plugin with pip or mamba, e.g.:
```
pip install snakemake-storage-plugin-deeporigin
```
Usage
Queries
Queries to this storage should have the following format:
| Query type | Query | Description |
|---|---|---|
| any |  | A file in an S3 bucket |
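For illustration, such a query could be used directly in a rule. The following sketch assumes a hypothetical object path (`mybucket/data/input.txt`); the exact query format accepted by this plugin is not documented here:

```
rule example:
    input:
        # hypothetical query string; adapt to your bucket and object layout
        storage.deeporigin("mybucket/data/input.txt")
    output:
        "results/output.txt"
    shell:
        "cp {input} {output}"
```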
As default provider
If you want all your input and output (which is not explicitly marked to come from another storage) to be written to and read from this storage, you can use it as a default provider via:
```
snakemake --default-storage-provider deeporigin --default-storage-prefix ...
```
with `...` being the prefix of a query under which you want to store all your results.
You can also pass custom settings via command line arguments:
```
snakemake --default-storage-provider deeporigin --default-storage-prefix ... \
    --storage-deeporigin-max-requests-per-second ... \
    --storage-deeporigin-endpoint-url ... \
    --storage-deeporigin-access-key ... \
    --storage-deeporigin-secret-key ... \
    --storage-deeporigin-token ... \
    --storage-deeporigin-signature-version ... \
    --storage-deeporigin-retries ...
```
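Rather than repeating these flags on every invocation, the same options can typically be placed in a Snakemake profile's `config.yaml`, where keys mirror the long CLI option names without the leading dashes. The values below are placeholders, not defaults defined by this plugin:

```yaml
# hypothetical profile config.yaml (placeholder values)
default-storage-provider: deeporigin
default-storage-prefix: mybucket/results           # placeholder prefix
storage-deeporigin-endpoint-url: https://s3.example.com  # placeholder endpoint
storage-deeporigin-retries: 5                      # placeholder value
```

Invoke it with `snakemake --profile <profile-dir>` so the options are picked up automatically.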
Within the workflow
If you want to use this storage plugin only for specific items, you can register it inside of your workflow:
```
# register storage provider (not needed if no custom settings are to be defined here)
storage:
    provider="deeporigin",
    # optionally add custom settings here if needed
    # alternatively they can be passed via command line arguments
    # starting with --storage-deeporigin-..., see
    # snakemake --help
    # Maximum number of requests per second for this storage provider. If nothing is specified, the default implemented by the storage plugin is used.
    max_requests_per_second=...,
    # S3 endpoint URL (if omitted, AWS S3 is used)
    endpoint_url=...,
    # S3 access key (if omitted, credentials are taken from .aws/credentials as e.g. created by aws configure)
    access_key=...,
    # S3 secret key (if omitted, credentials are taken from .aws/credentials as e.g. created by aws configure)
    secret_key=...,
    # S3 token (usually not required)
    token=...,
    # S3 signature version
    signature_version=...,
    # S3 API retries
    retries=...,

rule example:
    input:
        storage.deeporigin(
            # define query to the storage backend here
            ...
        ),
    output:
        "example.txt"
    shell:
        "..."
```
Using multiple entities of the same storage plugin
In case you have to use this storage plugin multiple times, but with different settings (e.g. to connect to different storage servers), you can register it multiple times, each time providing a different tag:
```
# register shared settings
storage:
    provider="deeporigin",
    # optionally add custom settings here if needed
    # alternatively they can be passed via command line arguments
    # starting with --storage-deeporigin-..., see below
    # Maximum number of requests per second for this storage provider. If nothing is specified, the default implemented by the storage plugin is used.
    max_requests_per_second=...,
    # S3 endpoint URL (if omitted, AWS S3 is used)
    endpoint_url=...,
    # S3 access key (if omitted, credentials are taken from .aws/credentials as e.g. created by aws configure)
    access_key=...,
    # S3 secret key (if omitted, credentials are taken from .aws/credentials as e.g. created by aws configure)
    secret_key=...,
    # S3 token (usually not required)
    token=...,
    # S3 signature version
    signature_version=...,
    # S3 API retries
    retries=...,

# register multiple tagged entities
storage foo:
    provider="deeporigin",
    # optionally add custom settings here if needed
    # alternatively they can be passed via command line arguments
    # starting with --storage-deeporigin-..., see below.
    # To only pass a setting to this tagged entity, prefix the given value with
    # the tag name, i.e. foo:max_requests_per_second=...
    # Maximum number of requests per second for this storage provider. If nothing is specified, the default implemented by the storage plugin is used.
    max_requests_per_second=...,
    # S3 endpoint URL (if omitted, AWS S3 is used)
    endpoint_url=...,
    # S3 access key (if omitted, credentials are taken from .aws/credentials as e.g. created by aws configure)
    access_key=...,
    # S3 secret key (if omitted, credentials are taken from .aws/credentials as e.g. created by aws configure)
    secret_key=...,
    # S3 token (usually not required)
    token=...,
    # S3 signature version
    signature_version=...,
    # S3 API retries
    retries=...,

rule example:
    input:
        storage.foo(
            # define query to the storage backend here
            ...
        ),
    output:
        "example.txt"
    shell:
        "..."
```
Settings
The storage plugin has the following settings, which can be passed via the command line, the workflow, or environment variables, where the respective columns are filled:
| CLI setting | Workflow setting | Envvar setting | Description | Default | Choices | Required | Type |
|---|---|---|---|---|---|---|---|
| `--storage-deeporigin-max-requests-per-second` | `max_requests_per_second` |  | Maximum number of requests per second for this storage provider. If nothing is specified, the default implemented by the storage plugin is used. |  |  | ✗ | str |
| `--storage-deeporigin-endpoint-url` | `endpoint_url` |  | S3 endpoint URL (if omitted, AWS S3 is used) |  |  | ✗ | str |
| `--storage-deeporigin-access-key` | `access_key` |  | S3 access key (if omitted, credentials are taken from `.aws/credentials` as e.g. created by `aws configure`) |  |  | ✓ | str |
| `--storage-deeporigin-secret-key` | `secret_key` |  | S3 secret key (if omitted, credentials are taken from `.aws/credentials` as e.g. created by `aws configure`) |  |  | ✓ | str |
| `--storage-deeporigin-token` | `token` |  | S3 token (usually not required) |  |  | ✗ | str |
| `--storage-deeporigin-signature-version` | `signature_version` |  | S3 signature version |  |  | ✗ | str |
| `--storage-deeporigin-retries` | `retries` |  | S3 API retries |  |  | ✗ | int |