Command-Line
High-level components of the functionality described in the Usage section are also accessible via the esp command-line entry point. Using the entry point, you can import workflows/protocols, seed ESP with new data, create samples, and update metadata for entries in the ESP database.
Status
To check the status of a running ESP instance, you can use the status entry point:
~$ python -m esp status
Connecting to esp with following credentials:
  host: localhost
  port: 8002
  email: admin@localhost
  cookies: None
Connection established!
Import
As mentioned in the Usage section of the documentation, you can import YAML configuration for each model defined in the Python client. Below is an example of how to import a Workflow via the command-line (see the Usage section for an example of the config file format):
~$ python -m esp import workflow /path/to/workflow.yml
You can also use the same entry point for importing experiment data into the ESP application. Here’s an example config definition for a project and related experiments (with data):
Miseq Sequencing:
  desc: Project container for MiSeq runs.
  tags:
    - illumina
    - miseq
  experiments:
    - MS001:
        submit: true
        workflow: Illumina Sequencing
        tags:
          - illumina
          - miseq
        samples:
          - ESP000001
          - ESP000002
        protocols:
          - Set Samples:
              complete: true
              data:
                Note: Ready to go!
          - Create Illumina Library:
              complete: true
              cols:
                - Index Type
                - I7 Index ID
                - I5 Index ID
              data:
                - ['Nextera DNA', 'N701', 'N501']
                - ['Nextera DNA', 'N702', 'N502']
          - Analyze Sequencing Results:
              run: true
              complete: true
              data:
                - ESP000001:
                    Reference: GRCh38
                - ESP000002:
                    Reference: GRCh37
    - MS002:
        submit: false
        workflow: Illumina Sequencing
        tags:
          - illumina
          - miseq
        samples:
          - ESP000001
To import this project, use an entry-point call of the same flavor:
~$ esp import project /path/to/project.yml
This will import the entire Project with related Experiment, Sample, and SampleSheet objects.
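Once parsed (e.g. by a YAML loader), a project config like the one above is just nested dicts and lists. As an illustrative sketch (not part of the esp client; the structure here is abbreviated from the example above), you can walk it to summarize the experiments it defines:

```python
# Abbreviated form of the parsed project config above.
project = {
    'Miseq Sequencing': {
        'desc': 'Project container for MiSeq runs.',
        'experiments': [
            {'MS001': {'submit': True, 'workflow': 'Illumina Sequencing'}},
            {'MS002': {'submit': False, 'workflow': 'Illumina Sequencing'}},
        ],
    },
}

def experiment_summary(project):
    """Yield (experiment name, submitted?) for each experiment in a project config."""
    for config in project.values():
        for entry in config.get('experiments', []):
            for name, details in entry.items():
                yield name, details.get('submit', False)

summary = list(experiment_summary(project))
# summary is [('MS001', True), ('MS002', False)]
```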
For more information on the import entry point, you can use the -h flag:
~$ python -m esp import -h
Seed
To do a bulk import, you can use the seed entry point:
~$ python -m esp seed /path/to/content.yml
For this type of import, all of your content can be defined in the same file if the models are explicitly declared:
---
- model: workflow
  data:
    My Workflow:
      desc: Definition for workflow.
      ...
- model: sample_type
  data: /path/to/sample_type.yml
- model: user
  data: ${CWD}/users.yml
Any type of importable model from the client can be included in this config. To see the list of models supported by the client, use the python -m esp import -h command.
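Conceptually, the seed entry point walks these model entries and imports each one, where data may be either an inline definition or a path to another config file. Below is a minimal sketch of that dispatch loop (hypothetical helper names, standard library only; ESP's actual implementation differs):

```python
import os

def resolve_data(data):
    """Inline dicts pass through; string values are treated as file paths,
    with the ${CWD} placeholder expanded to the current working directory."""
    if isinstance(data, str):
        return data.replace('${CWD}', os.getcwd())
    return data

def seed(entries, importer):
    """Dispatch each {'model': ..., 'data': ...} entry to an importer callback."""
    for entry in entries:
        importer(entry['model'], resolve_data(entry['data']))

# The seed config above, already parsed into Python objects:
entries = [
    {'model': 'workflow',
     'data': {'My Workflow': {'desc': 'Definition for workflow.'}}},
    {'model': 'sample_type', 'data': '/path/to/sample_type.yml'},
    {'model': 'user', 'data': '${CWD}/users.yml'},
]
imported = []
seed(entries, lambda model, data: imported.append((model, data)))
```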
Watch
During development, it’s useful to have real-time updates of changes you make to content config files. This is possible with the watch entry point. To use it, include watch before your seed or import command:
~$ # with seed
~$ python -m esp watch seed /path/to/content.yml

~$ # with import
~$ python -m esp watch import workflow /path/to/workflow.yml
This will monitor the directory containing the seed file for changes and re-run the command whenever a change occurs. Notably, this shortens the develop/test cycle for content in the SDK: instead of running make import to re-seed an instance after every edit, the watch entry point picks up changes made by developers and updates their ESP instance in real time.
You can also add directories to the search path for the watch entry point. For example, to watch a content directory in the SDK and run seed each time a change is made in that directory, you can use:
~$ python -m esp watch --include=./content seed ./roles/content.yml
This makes the process of iterating on multi-part configuration for an instance more manageable (especially since you can comment out items in the seed file to cut down on imports during development/testing).
Export
Along with imports, you can also export specific data. To export individual models, you can use:
~$ esp export workflow 'Illumina Sequencing' > Illumina-Sequencing.yml
Similarly, to export all data for a project, use:
~$ esp export project -n 'My Project' > My-Project.yml
For more information on the export entry point, you can use the -h flag:
~$ python -m esp export -h
Note
Exported files won’t contain any information about UUIDs created in the system; only information that can be used to seed the system from scratch will be exported. This is intentional, to keep unnecessary history from propagating across installs. For a full export, use the export utilities from within the application.
Dump
Along with exporting a specific model, you can also do a full dump (export) of all content models in your instance. For example, if you use the client to run:
~$ esp dump
You will find a folder called content/ in your current directory, with content models organized like the following:
.
├── chains
│   └── Whole-Genome-Sequencing.yml
├── files
│   ├── Spectramax_Template_field_descriptions.pdf
│   └── pn_010156.pdf
├── inventory
│   ├── Item-Types.yml
│   └── Sample-Types.yml
├── pipelines
│   ├── ABI-Zip-Process.yml
│   ├── Generate-NovaSeq-Runsheet.yml
├── reports
│   ├── flow_cell_layout.html
│   ├── held_samples.html
├── tasks
│   └── import_nanodrop.py
└── workflows
    ├── Bioanalyzer.yml
    ├── DNA-Isolation.yml
    ├── DNA-Quantification.yml
    ├── Library-Prep.yml
    ├── Whole-Genome-Bioinformatics.yml
This process will also create a seed file (see above) with a manifest of content definitions in content/seed/content.yml. You can change the default seed file location using the --seed argument. The root folder for the content dump can be set with the -r argument.
Similarly, you can dump all content for a specific model using the --model command-line argument:
~$ esp dump --model SampleType
Behavior is controlled by properties of conf, where conf has properties root_directory, model, seed, overwrite, and modeldest.
root_directory: all other output paths are relative to this path. Default: content
model: A list of model names to export. Default:
config
vendor
container_type
container
item_type
item
sample_type
sample
task
pipeline
signatureflow
protocol
workflow
workflow_chain
applet
report
workflowable_class
execution_plan
seed: A path (relative to the caller’s working directory) to the seed file to create. Default: seed/content.yml
overwrite: True/False. If False, model configs are appended to existing files; otherwise, existing files are overwritten. In both cases, models dumped to the same file in a single call to dump (CLI) or dump_models (API) will be appended to that file. I.e., if “samples” and “containers” are both set to write to “inventory/things.yml”, all samples and all containers will be written to “things.yml”.
modeldest: a dictionary mapping from a model type to an output location for the model data. The mapped value is a Python format string with the following available format variables:
model - the model type
model_plural - plural form of the model type
name - the name of the model
name_normalized - the normalized name of the model
type - For models with corresponding “type”, the type name
type_normalized - the normalized name of the type
class - For entity-type models (Sample), the entity class name.
class_normalized - For entity-type models (Sample), the normalized entity class name.
The normalized version of a string is the value, lower-cased, with any character not in the set [.] replaced with _. Multiple adjacent invalid characters are collapsed into a single _.
For instance, when exporting a sample “SAM000001” of sample type “Whole Blood”, the format variables would be:
model: sample
model_plural: samples
name: SAM000001
name_normalized: sam000001
type: Whole Blood
type_normalized: whole_blood
class: Sample
class_normalized: sample
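As a sketch, the normalization rule could be implemented as below. This assumes the valid character set is letters, digits, and '.'; the exact set in the description above appears truncated, so treat the regex as an assumption:

```python
import re

def normalize(value):
    """Lower-case the value and collapse each run of invalid characters
    (assumed: anything outside letters, digits, and '.') into a single '_'."""
    return re.sub(r'[^a-z0-9.]+', '_', value.lower())

# Matches the example above:
# normalize('Whole Blood') -> 'whole_blood'
# normalize('SAM000001')   -> 'sam000001'
```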
Note that all values are relative to “root_directory”.
The default modeldest values are as follows:

Sample: inventory/{class_normalized}.yml
Sample Type: inventory/entity_types.yml
Container Type: inventory/container_types.yml
Container: inventory/containers.yml
Vendor: inventory/vendors.yml
Pipeline: {model_plural_normalized}/{name_normalized}.yml
Protocol: {model_plural_normalized}/{name_normalized}.yml
Applet: {model_plural_normalized}/{name_normalized}.yml
Applet content: reports/{applet name_normalized}.html
Report: {model_plural_normalized}/{name_normalized}.yml
Report content: reports/{report name_normalized}.html
Task: {model_plural_normalized}/{name_normalized}.yml
Workflow: {model_plural_normalized}/{name_normalized}.yml
User Role: admin/userroles.yml
Users: admin/users.yml
Config: admin/{model_plural_normalized}/{name_normalized}.yml
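Each destination is a Python format string filled with the variables listed earlier. For the SAM000001 example, the default Sample destination resolves like this (a sketch using plain str.format, not the client's internal code):

```python
# Format variables for the SAM000001 / Whole Blood example above.
fields = {
    'model': 'sample',
    'model_plural': 'samples',
    'name': 'SAM000001',
    'name_normalized': 'sam000001',
    'type': 'Whole Blood',
    'type_normalized': 'whole_blood',
    'class': 'Sample',
    'class_normalized': 'sample',
}

# Default modeldest entry for Sample; the result is relative to root_directory.
path = 'inventory/{class_normalized}.yml'.format(**fields)
# path -> 'inventory/sample.yml'
```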
Note that applets and reports are handled specially. First, although applets are technically “reports”, the model name used to export them is “applet” rather than “report”. Second, by default, two outputs are produced: the report/applet YAML, and the report/applet content.
For more specific information on the dump command, you can use the -h flag:
~$ python -m esp dump -h