L7|ESP Backend API¶
L7|ESP extensions, such as custom endpoints and expressions, make use of the backend Python API. This API is imported from the `lab7` namespace. The backend API follows several conventions for consistency.
The public API methods are declared in `lab7.<module>.api`. For instance, API methods relating to projects are in `lab7.project.api`. Within the modules, each L7|ESP object generally has `create_<type>`, `update_<type>`, `get_<type>`, and `query_<type>s` functions available: for instance, `create_project`, `update_project`, `get_project`, and `query_projects`. Some modules (notably `lab7.lis.api`) have additional functions available.
Most API calls deal with "Resource" objects, which have a UUID and an entry in the `resource` table. Functions that operate on existing resource objects accept either an instantiated object or an object UUID. For instance, `lab7.sample.api.update_sample` accepts as a first argument either a `lab7.sample.models.Sample` object or the UUID of one.
Common arguments: most API calls share a common set of arguments; some have additional, API-specific options. Common arguments include:
- `return_dict`: A boolean that, if `True`, means the data will be returned in a JSON-serializable format, such as a dictionary (create/update/get) or a list of dictionaries (query). If `False`, the data will be returned as a SQLAlchemy object or a list of SQLAlchemy objects.
- `session`: The SQLAlchemy session, which should normally be the request-associated session.
- `agent`: A Resource or UUID of a resource that is the "actor" in the API call. This is usually the authenticated user for the request.
- `params`: A list of strings or a dictionary with string keys and list, dictionary, or None values. See below for more details.
- `deep_copy`: An older API argument preserved for backwards compatibility. It influences the default set of params.
- `ignore_deleted`: If true, archived objects will not be returned; if false, archived objects may be returned. For instance, if sample "X" has been archived, `get_sample("uuid of X")` will raise a `ResourceNotFound` exception, but `get_sample("uuid of X", ignore_deleted=False)` will return the object.
Note
When writing extensions that require use of the L7|ESP backend APIs, it is best to avoid importing the backend APIs at the extension module top-level. This is because extensions are initialized relatively early in the ESP process lifecycle. Instead, import at the top of an extension function, as:
```python
@expression
def my_expression():
    import lab7.project.api as project_api
```
The params argument¶
The `params` argument influences the query that is executed and the shape of the returned data. For instance, passing `params=["uuid"]` will return a minimal set of information, typically restricted to fields available on the core `resource` table in the database.
All endpoints and objects have a default set of params. `params` may be specified as:
- list: the list of properties of interest, such as `params=["uuid", "values"]`.
- dict: the keys are the properties of interest; the values may be None, list, or dict. For instance, `params={"uuid": None}` is equivalent to `params=["uuid"]`. A dict is used for nested parameters. For instance, a workflow accepts params, and so do the protocols within the workflow, so you might pass `params={"uuid": None, "steps": {"uuid": None}}` or `params={"uuid": None, "steps": ["uuid"]}`, which would fetch the minimal data for a workflow plus minimal data for the protocols contained by the workflow.
Note that `params` is not currently an exact descriptor of the return data structure (i.e., not like GraphQL). Rather, it provides a set of "hints" to the API endpoint about the type of data you will be accessing, so the data can be fetched more efficiently.
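The relationship between the list and dict forms can be sketched with a hypothetical normalizer (not a lab7 function): a list is shorthand for a dict whose values are all None, and nested lists normalize the same way.

```python
# Hypothetical helper showing how the two accepted `params` shapes relate.

def normalize_params(params):
    if isinstance(params, (list, tuple)):
        return {name: None for name in params}
    return {key: normalize_params(val) if isinstance(val, (list, dict)) else val
            for key, val in params.items()}

assert normalize_params(["uuid"]) == {"uuid": None}
# the two nested forms from the text are equivalent:
assert (normalize_params({"uuid": None, "steps": ["uuid"]})
        == normalize_params({"uuid": None, "steps": {"uuid": None}}))
```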
Note
New in ESP 3.0, some APIs now accept special forms of the property names, where the name is prefixed by either `-` or `~`. These prefixes specify a delta against the default properties rather than replacing the defaults outright, with `-` used to remove a property and `~` used to add one. (These also work at the REST API level; `+` is a reserved character in HTTP, hence the use of `~`.)
For instance, `params=["-steps"]` indicates "use the default properties except steps" and `params=["~steps"]` indicates "use the default properties plus steps".
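The delta semantics can be sketched with a hypothetical resolver (the real resolution happens inside the API): `-` removes a property from the defaults, `~` adds one, and an unprefixed list replaces the defaults outright.

```python
# Hypothetical sketch of "-"/"~" delta resolution against default params.

def resolve_params(requested, defaults):
    if not any(p[:1] in ("-", "~") for p in requested):
        return list(requested)  # plain list replaces the defaults outright
    result = list(defaults)
    for p in requested:
        if p.startswith("-") and p[1:] in result:
            result.remove(p[1:])
        elif p.startswith("~") and p[1:] not in result:
            result.append(p[1:])
    return result

defaults = ["uuid", "name", "steps"]
assert resolve_params(["-steps"], defaults) == ["uuid", "name"]
assert resolve_params(["~owner"], defaults) == ["uuid", "name", "steps", "owner"]
```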
Analysis¶
The analysis module handles generating pipeline reports.
- file_details_element(el, fmap, info, report)¶
Return html showing the details of the element’s file
- fixed_html_element(el, fmap, info, report)¶
Return element’s html content
- iframe_file_element(el, fmap, info, report)¶
Return the element’s file wrapped in an iframe
- image_file(el, fmap, info, report)¶
Return the element’s file as an image
- raw_file_element(el, fmap, info, report)¶
Dump raw data between <pre></pre> tags
- table_element(el, fmap, info, report)¶
Return html table reflecting the underlying col/row data
Concierge¶
The concierge module handles configuration objects and some ESP service-related functionality. Note that configuration objects are not resources.
- get_language_by_code(session, code)¶
Gets a dictionary for a given language code.
- get_language_id_by_code(session, code)¶
Gets the language_id for a given language code. Returns 0 if the code is not found.
- get_languages(session)¶
Gets all the language records.
Container¶
The container API handles L7|ESP containers/locations.
- add_item_to_container(container, item_uuid, loc, fields=None, wi_uuid=None, overwrite=False, session=None, agent=None, return_dict=True)¶
- apply_container_filters(query, session=None, **filters)¶
- container_descendants(uuid, depth=None, session=None, agent=None, uuids=None)¶
Retrieve a container’s descendants
- Parameters:
uuid – uuid of container
depth – depth of tree to fetch
- create_container(def_uuid, return_dict=True, session=None, agent=None, **values)¶
- create_container_type(session=None, agent=None, **values)¶
- delete_container(container, session=None, agent=None)¶
- delete_container_type(container_type, session=None, agent=None)¶
- export_container(uuid, session=None, agent=None)¶
Export configuration for model.
- Parameters:
uuid (str) – UUID for model.
Export Format:
```yaml
name: str
desc: str
tags: list
variables: [dict]
```
- export_container_type(uuid, session=None, agent=None, format: str = None)¶
Export configuration for model.
- Parameters:
uuid (str) – UUID for model.
Export Format:
```yaml
name: str
fixed_id: str
desc: str
tags: list
axes: [object]
contains: [string]
label_format: string
slot_capacity: string
variables: [dict]
```
- get_container(container, return_dict=True, ignore_deleted=True, session=None, agent=None)¶
- get_container_roots(session=None, agent=None, uuids=None)¶
Gets all containers in ESP which are not themselves contained in any other container resources.
- get_container_type(uuid, ignore_deleted=True, session=None, agent=None, params=None)¶
- get_container_type_definition(uuid, ignore_deleted=True, session=None, agent=None)¶
- get_generic_container_class(session)¶
- import_container(config, overwrite=False, session=None, agent=None)¶
Import configuration for sample.
- Parameters:
config (dict) – Configuration definition.
Configuration:
Create container:
```yaml
name: Freezer 1
```
Create container with note and tags:
```yaml
name: Freezer 1
desc: Freezer in lab A
tags: [freezer, lab]
type: Freezer
barcode: 12345
```
Container with samples to fill:
```yaml
name: Freezer 1
desc: Freezer in lab A
tags: [freezer, lab]
type: Freezer
barcode: 12345
fill:
  Shelf 1:
    - ESP0001
    - name: ESP0002
      desc: special new sample
      tags: [one, two]
```
Configuration Notes:
Upon creation, samples can be used to fill container slots via the `fill` parameter. Default container values can be set for samples using the `variables` parameter.
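A hedged usage sketch of `import_container`, with the lab7 import deferred into the function per the extension-import note above. The config mirrors the "Container with samples to fill" YAML example; `session` and `agent` are placeholders supplied by the surrounding request context.

```python
# Hedged sketch: config dict equivalent to the YAML example above.

CONFIG = {
    "name": "Freezer 1",
    "desc": "Freezer in lab A",
    "tags": ["freezer", "lab"],
    "type": "Freezer",
    "barcode": "12345",
    "fill": {
        "Shelf 1": [
            "ESP0001",
            {"name": "ESP0002", "desc": "special new sample",
             "tags": ["one", "two"]},
        ]
    },
}

def create_freezer(session, agent):
    # deferred import, as recommended for extensions
    from lab7.container.api import import_container
    return import_container(CONFIG, session=session, agent=agent)
```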
- import_container_type(config, overwrite=False, session=None, agent=None)¶
Import configuration for model.
- Parameters:
config (dict) – Configuration definition.
Configuration:
Single-element container:
```yaml
name: 96-Well Plate
desc: 96-Well Plate for sample aliquots
tags: [plate, lab]
label_format: '%s%s'
slot_capacity: element
axes:
  - Rows: [A, B, C, D, E, F, G, H]
  - Cols: [01, 02, 03, 04, 05, 06, 07, 08, 09, 10, 11, 12]
contains:
  - Sample
```
Multi-element container:
```yaml
name: Freezer
desc: Freezer for samples and racks
tags: [freezer, lab]
label_format: 'Shelf %s'
slot_capacity: list
axes:
  - Shelf: [1, 2, 3, 4, 5]
contains:
  - Sample
  - Container
```
Type with default containers:
```yaml
name: Freezer
desc: Freezer for samples and racks
tags: [freezer, lab]
label_format: 'Shelf %s'
slot_capacity: list
axes:
  - Shelf: [1, 2, 3, 4, 5]
contains:
  - Sample
  - Container
create:
  - Freezer 1
  - Freezer 2
```
Nested config with containers and samples to fill:
```yaml
name: Freezer
desc: Freezer for samples and racks
tags: [freezer, lab]
label_format: 'Shelf %s'
slot_capacity: list
axes:
  - Shelf: [1, 2, 3, 4, 5]
contains:
  - Sample
  - Container
create:
  - Freezer 1
  - name: Freezer 2
    barcode: 1234
    fill:
      Shelf 1: [ESP0001, ESP0002]
```
Configuration Notes:
Variables specified for samples can take the same format as variables defined for protocols within ESP. Container creation can be nested in a ContainerType configuration using the `create` parameter.
- migrate_containers(container_dicts: list[dict], new_container_type_def: ContainerTypeDefinition, session: Session = None, agent: Resource = None) list[str] ¶
Migrate containers to a different container type version.
- Parameters:
container_dicts – List of container dictionaries to migrate
new_container_type_def – the new version of container type to migrate to
session – SQLAlchemy session
agent – The actor performing this action, usually the request-bound user
- Returns:
A list of migrated container UUIDs.
- query_container_type_definitions(filters=None, deep_copy=False, params=None, ignore_deleted=True, return_dict=True, sort=None, session=None, agent=None)¶
- query_container_types(filters=None, deep_copy=False, params=None, ignore_deleted=True, return_dict=True, sort=None, session=None, agent=None)¶
- query_containers(filters=None, deep_copy=False, params=None, ignore_deleted=True, return_dict=True, sort=None, session=None, agent=None)¶
- remove_item_from_container(container, item=None, loc=None, session=None, agent=None)¶
- undelete_container(container, session=None, agent=None, return_dict=True)¶
- undelete_container_type(container_type, session=None, agent=None, return_dict=True)¶
- update_container(container, session=None, agent=None, return_dict=True, **values)¶
- update_container_type(ct, session=None, agent=None, **values)¶
TODO: Until we allow versioning of container types, name is the only value that can be updated safely.
Expression¶
The expression API handles L7|ESP expression functionality, notably evaluating expressions.
- evaluate_expression(expression, context=None, session=None, agent=None)¶
Evaluate an expression with a given context.
Inventory¶
The Inventory API handles inventory-related functionality, including:
Customers
Vendors
Services
Service Types
Inventory Item Types
Inventory Items
L7FS¶
The l7fs module handles ESP’s File registry.
Python API for the Lab7 file system.
- add_file_dependency(uuid, prior_uuid, label=None, session=None, agent=None)¶
Add a dependency to the file.
- add_file_to_group(fg_uuid, file_uuid, label, as_head=False, session=None, agent=None)¶
Add a File to a FileGroup.
- add_file_version(version, file, agent=None, session=None)¶
- b64_file(uuid, session=None, agent=None)¶
- create_file(filename, contents, name=None, desc=None, meta=None, augment=None, tags=None, deps=None, versions=None, session=None, agent=None)¶
- create_file_group(name, desc=None, meta=None, augment=None, tags=None, session=None, agent=None)¶
Create a new FileGroup.
FileGroup.name uniquely identifies a FileGroup.
Note that this uniqueness is enforced at the API level (here), not at the model level. This API call checks whether a FileGroup with the specified name already exists; if it does, that FileGroup is used.
- create_file_version_group(name, version_policy='append', desc=None, meta=None, augment=None, tags=None, session=None, agent=None)¶
Creates a new version history, represented by a FileVersionGroup. See FileVersionGroup documentation for details.
- delete_file(uuid, session=None, agent=None)¶
Delete a file.
As with all resources in ESP, deleting a file marks it as deleted, but leaves it in the database for referential integrity.
URL Pattern: DELETE /api/files/:uuid
- delete_file_version_group(version_uuid, session=None, agent=None)¶
- destroy_file(uuid, session=None, agent=None)¶
NOTE: You should probably use “delete_file” instead! Destroys the file, freeing up space in the filesystem and removing it from the database entirely. This is a dangerous function causing loss of data, unlike most “delete” operations in ESP. It should only be used in very specific situations, like the “overwrite” mode of file versioning, where it is necessary to prevent the system from quickly taking up large amounts of space (this can happen, for example, through frequent annotation of large image files).
- export_file(uuid, session=None, agent=None)¶
Export configuration for model.
- Parameters:
uuid (str) – UUID for model.
Export Format:
```yaml
name: string
desc: string
tags: [string]
uri: string
type: string
```
- file_from_uuid(uuid, session, ignore_deleted=True)¶
- get_file(uuid, ignore_deleted=True, session=None, agent=None)¶
Get a file from its uuid.
Supported query arguments: None
Supported params: None
- get_file_group(uuid, label=None, head=False, ignore_deleted=True, session=None, agent=None)¶
Return the complete FileGroup, including all of its Files, or:
If head is True, return the head of the FileGroup.
Otherwise, if label is not None, return the File that corresponds to the label, relative to the uuid of the FileGroup.
- get_file_version_group(uuid, session, agent)¶
- get_version_groups_by_uuid(uuids, session=None, agent=None)¶
Retrieves version group objects based on their uuids.
- import_file(config, overwrite=False, session=None, agent=None)¶
Import configuration for file.
- Parameters:
config (dict) – Configuration definition.
Configuration:
Create reference to relative file:
```yaml
name: My Protocol SOP
desc: An SOP file
tags: [sop, protocol]
uri: relative/path/to/instructions.pdf
upload: true
type: application/pdf
```
Create reference for explicit local file:
```yaml
name: Large Sequencing File
desc: An SOP file
tags: [sop, protocol]
uri: file:///path/to/local/file/instructions.bam
upload: true
type: text/plain
```
Configuration Notes:
Due to current backend limitations, uri inputs can only take the formats "relative/path", "/absolute/path", and "file:///absolute/path".
upload is not a component of the import format, because the uploading process must happen via an external client. This import scheme always references local files.
- list_file_actions(uuid, session=None, agent=None)¶
Get the list of actions performed against the file.
- list_file_dependencies(uuid, session=None, agent=None)¶
Get the file’s dependencies.
- list_tag_files(name, session=None, agent=None)¶
Given a tag, return all the files associated with that tag.
- open_file_from_uuid(uuid, session, agent, mode='r', buffering=None)¶
- query_file(filters=None, limit=0, ignore_deleted=True, session=None, agent=None)¶
DEPRECATED - use lab7.l7fs.api.query_files()
Find a file based on its path or name.
- query_file_actions(uuid, filters=None, session=None, agent=None)¶
Find actions based on agent, desc, and/or start/end times.
- query_file_dependecies(uuid, filters=None, session=None, agent=None)¶
- query_file_group(filters=None, ignore_deleted=True, session=None, agent=None)¶
- query_files(filters=None, ignore_deleted=True, return_dict=True, session=None, agent=None, sort=None, limit=None, offset=None)¶
Query for files
Supported filters:
- name
Type: String
Exact match to File name
Parameters: None
- read_binary_file(uuid, offset=0, session=None, agent=None)¶
Return the contents of the file, starting offset bytes into the file.
- read_file(uuid, offset=0, session=None, agent=None)¶
Return the contents of the file, starting offset bytes into the file.
- read_l7fs_config(session: Session)¶
Read L7 file system config.
To maintain backwards compatibility this function also reads and adds paths to allowed directories from l7fs_compat config.
- read_lines_file(uuid, start=0, nlines=1, session=None, agent=None)¶
Read nlines from the file starting at the start line.
- register_file(url, name=None, desc=None, meta=None, augment=None, tags=None, deps=None, uuid=None, versions=None, trusted=False, session=None, agent=None)¶
Register new file.
- remove_file_from_group(fg_uuid, file_uuid=None, label=None, session=None, agent=None)¶
Remove a file from a FileGroup. Either file_uuid or label must be present to determine which File to remove.
- stream_file(uuid, offset=0, buffer_size=10485760, session=None, agent=None)¶
Stream the contents of the file buffer_size bytes at a time starting offset bytes into the file.
Default buffer size is 10MB.
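The chunking behavior described for stream_file can be illustrated locally (with an in-memory stream rather than a registered file): yield buffer_size bytes at a time, starting offset bytes into the file. `stream_chunks` is an illustrative stand-in, not a lab7 function.

```python
# Local illustration of stream_file-style chunked reads.
import io

def stream_chunks(fileobj, offset=0, buffer_size=4):
    fileobj.seek(offset)           # start offset bytes into the file
    while True:
        chunk = fileobj.read(buffer_size)
        if not chunk:              # EOF
            return
        yield chunk

data = io.BytesIO(b"abcdefghij")
assert list(stream_chunks(data, offset=2)) == [b"cdef", b"ghij"]
```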
- stream_lines_file(uuid, start=0, nlines=1, session=None, agent=None)¶
Stream the contents of a file nlines at a time starting at the start line.
- stream_tabix_file(uuid, reference=None, start=None, end=None, file_type='vcf', session=None, agent=None)¶
Stream the contents of a vcf file that has been indexed using tabix.
If reference is None, all records are returned.
- tag_file(uuid, tags, session=None, agent=None)¶
DEPRECATED - use lab7.resource.api.tag_resource()
Add one or more tags to a file.
- undelete_file(uuid, session=None, agent=None, return_dict=True)¶
Restore (un-delete) a previously deleted file.
- untag_file(uuid, tags, session=None, agent=None)¶
DEPRECATED - use lab7.resource.api.untag_resource()
Remove one or more tags from a file.
- update_file(uuid, session=None, agent=None, return_dict=True, **values)¶
Update a file.
- validate_local_file_url(url: str, session: Session) bool ¶
Validate that a file is within the allowed directories from the l7fs config before registering it.
This function needs to be called when registering files on the server by a user through the /api/files endpoint, or when changing filenames, to prevent arbitrary file reads.
LIMS¶
The LIMS module handles the LIMS-related objects, including:
SampleSheet (front-end: Worksheet)
Workflow
WorkflowInstance (front-end: Experiment tied to a workflow)
WorkflowChain
WorkflowChainInstance (front-end: Experiment tied to a workflow chain)
Protocol
ProtocolInstance
Main¶
The main module handles common functionality that doesn’t fit other places, including executing named queries.
- exception HealthcheckFailedException¶
Healthcheck failed
- create_saved_view(name: str, route: str, settings: Dict[str, Any], shared: bool = False, default: bool = False, session: Session = None, agent=None) SavedView ¶
Creates a SavedView. Note that "shared" is currently ignored: all views created at this time are NOT shared, as view sharing will not be introduced until a future version of ESP.
- Parameters:
name – The user-defined name of this saved view
route – The application-defined “route” this view applies to.
settings – The view settings and configuration.
shared – whether this view is shared across all users.
session – SQLAlchemy session
agent – User creating the view.
- find_querydef(queryname)¶
Return the query definition for the provided queryname.
If no matching query is found, returns None. If a matching query is found, returns a dictionary where the value for "query_def" is the result of yaml.load on the query file.
- get_banner_message(session=None)¶
Get the site banner message.
- get_esp_config(session=None)¶
Get the value of the config column in the site_specific table.
- get_esp_license(session=None)¶
Get the ESP license.
- get_site_specific(key, default=None, session=None)¶
Get the value of the site specific identified by “key”; if the key doesn’t exist, return the value specified by “default”.
- is_User_Model_an_Admin(user)¶
Validates a User model (not a UUID) as being an admin.
- list_queries() list[dict[str, Union[int, str, float, dict, list]]] ¶
List all known external queries
- Returns:
A list of dicts with keys as below.
name: the name of the file that holds the query
path: the partial path on the FS where the file is stored
definition: the query definition: description, parameters, and name.
- run_named_query(queryname: str, session: Session, agent: Resource, parameters: dict[str] = None, commit: bool = False) dict[str, Any] ¶
Generic query execution facility.
Runs a named query, where the query is loaded from one of:
LAB7_DATA_DIR/content/queries
LAB7_DATA_DIR/common/queries
LAB7_DATA_DIR/queries
Paths are searched in turn for files matching “{queryname}.yaml” and the first discovered file is used to run the query.
- Parameters:
queryname – name of the query to run.
session – sqlalchemy session
agent – Instance of lab7.user.models.user
parameters – parameters to bind to the query, if any.
commit – If true, session.commit is called after the query is executed.
- Returns:
A dictionary with these keys:
path (str): path of the executed query
name (str): name of the query
roles (list[str]): query-specific role restrictions in place when the query was executed
description (str): query description
error (str): error message if an error occurred executing the query; if no error occurred, this key will not be present
parameters (dict[str, Any]): provided query parameters
results (list[dict]): query results
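The documented search order (the first matching "{queryname}.yaml" under LAB7_DATA_DIR wins) can be sketched with a hypothetical helper; the real lookup lives inside run_named_query.

```python
# Sketch of the run_named_query file-search order (hypothetical helper).
import os

QUERY_DIRS = ("content/queries", "common/queries", "queries")

def find_query_file(queryname, data_dir, exists=os.path.exists):
    for sub in QUERY_DIRS:  # searched in turn; first hit wins
        path = os.path.join(data_dir, sub, queryname + ".yaml")
        if exists(path):
            return path
    return None

# Simulate a query file present in two of the three locations: the
# earlier directory in the search order wins.
fake_fs = {"/data/common/queries/q.yaml", "/data/queries/q.yaml"}
assert find_query_file("q", "/data", fake_fs.__contains__) == "/data/common/queries/q.yaml"
```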
- set_banner_message(user, banner_message, session=None)¶
Set the site banner message.
- set_esp_config(user, esp_config, session=None)¶
Set the value of the config column in the site_specific table.
- set_esp_license(user, esp_license, session=None)¶
Set the ESP license.
- set_site_specific(key, value, session=None)¶
Set the site specific identified by "key" to the specified "value"; returns the old value of the site specific (None if there wasn't one).
Notification¶
The notification module handles sending L7|ESP notifications. Note that notifications are not resources.
- class Message¶
For static type checking of notification payloads.
- class MessageLink¶
For static type checking of notification payloads. Represents the notification link
- class MessageSection¶
For static type checking of notification payloads. Represents one notification section.
- class NotificationData¶
For static type checking of notification payloads.
- simple_message(title: str, body: str, userSpec: Dict[str, Union[List[str], str]], url: Optional[str] = None, anchortext: Optional[str] = None, type_='System', severity='info') NotificationData ¶
Utility function for constructing simple notification messages.
Param¶
The param API handles param groups.
- del_param(param_group, key, session=None, agent=None)¶
Delete a parameter in a ParamGroup.
- export_param_groups(filters=None, ignore_deleted=True, session=None, agent=None)¶
Export param_groups from the database.
By default, all (readable) param_groups in the system are exported. Specific param_groups can be exported by providing the “filters” argument, which will be passed to query_param_groups() to get the param_groups to export.
- get_param_value(param_group, key, session=None, agent=None)¶
Lookup a parameter from a ParamGroup.
param_group can be either a name or a uuid. TODO: This likely bypasses ACLs
- set_param_value(param_group, key, value, create=False, session=None, agent=None)¶
Set a parameter in a ParamGroup. If create is true, create the ParamGroup if it doesn’t already exist.
Format is: param_group:param_key
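A hedged usage sketch of the param API, splitting the "param_group:param_key" format noted above. The group/key names and `set_smtp_host` helper are placeholders; session/agent come from the request context.

```python
# Hedged sketch: setting a parameter via lab7.param.api (deferred import).

PARAM_REF = "notifications:smtp_host"    # "param_group:param_key" format
GROUP, KEY = PARAM_REF.split(":", 1)

def set_smtp_host(value, session, agent):
    from lab7.param.api import set_param_value
    # create=True creates the ParamGroup if it doesn't already exist
    return set_param_value(GROUP, KEY, value, create=True,
                           session=session, agent=agent)
```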
Pipeline¶
The pipeline API handles pipeline-related functionality including:
Pipeline
PipelineInstance
Task
TaskInstance
Python functions for manipulating Task, Pipeline, and their instances.
- create_pipeline(tasks=None, session=None, agent=None, return_dict=True, **values)¶
Create a new pipeline. `tasks` is a list of task UUIDs or (uuid, options) tuples, e.g. `[task1_uuid, (task2_uuid, {options…})]`.
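The two accepted entry shapes in the task list can be sketched as below; the UUIDs and the options key are placeholders, and `make_pipeline` is a hypothetical wrapper.

```python
# Hedged sketch of create_pipeline's task-list shapes (placeholder values).

TASKS = [
    "aaaa0000-0000-0000-0000-000000000001",                # bare task UUID
    ("aaaa0000-0000-0000-0000-000000000002", {"opt": 1}),  # (uuid, options)
]

def make_pipeline(session, agent):
    from lab7.pipeline.api import create_pipeline
    return create_pipeline(tasks=TASKS, name="Example Pipeline",
                           session=session, agent=agent)
```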
- export_pipeline(uuid, nested=False, session=None, agent=None)¶
Export configuration for model.
- Parameters:
uuid (str) – UUID for model.
Export Format:
```yaml
name: string
desc: string
tags: [string]
tasks: [object]
deps: object
report: object
failure_report: object
```
- export_pipeline_definition(uuid, nested=False, tasks_snapshot=None, session=None, agent=None)¶
Export configuration for model.
- Parameters:
uuid (str) – UUID for model.
Export Format:
```yaml
name: string
desc: string
tags: [string]
tasks: [object]
deps: object
report: object
failure_report: object
```
- export_task(uuid: Union[str, Task], session: Session = None, agent: Resource = None)¶
Export configuration for model.
- Parameters:
uuid – UUID for model for task object. UUID will be auto-converted to Task object.
Export Format:
```yaml
name: string
desc: string
tags: [string]
cmd: string
files:
  - name: string
    file_type: string
    filename_template: string
```
- export_task_definition(uuid, session=None, agent=None)¶
Export configuration for model.
- Parameters:
uuid (str) – UUID for model.
Export Format:
```yaml
name: string
desc: string
tags: [string]
cmd: string
files:
  - name: string
    file_type: string
    filename_template: string
```
- get_pipeline_definition(uuid, deep_copy=False, params=None, return_dict=True, session=None, agent=None)¶
Fetch a single Pipeline Definition by uuid.
- get_service_socket(context=None, host=None, port=None, linger=0)¶
Return a new socket
- import_pipeline(config, overwrite=False, session=None, agent=None)¶
Import configuration for pipeline.
- Parameters:
config (dict) – Configuration definition.
Configuration:
Create pipeline with report:
```yaml
name: Generate Illumina Runsheet
desc: Run script to generate illumina runsheet and generate runsheet report
report:
  Runsheet Report:
    desc: Report showing details of runsheet generation
    elements:
      - type: file_details
        depends:
          file: Illumina Runsheet
          tasknumber: 1
tasks:
  - Create Illumina Runsheet
```
Create multi-step pipeline with report:
```yaml
name: Generate Illumina Runsheet
desc: Run script to generate illumina runsheet and generate runsheet report
report:
  Runsheet Report:
    desc: Report showing details of runsheet generation
    elements:
      - type: file_details
        depends:
          file: Illumina Runsheet
          tasknumber: 1
      - type: raw_file
        depends:
          file: Runsheet Upload Report
          tasknumber: 2
      - type: html
        contents: |+
          <h1>Report Header</h1>
          <p>Report Body</p>
tasks:
  - Create Illumina Runsheet
  - Upload Illumina Runsheet
deps:
  Upload Illumina Runsheet: Create Illumina Runsheet
```
Configuration Notes:
Report configuration will generally happen in the context of a pipeline; accordingly, this documentation references report generation in that context.
- import_pipeline_definition(config, overwrite=False, pipeline_snapshot=None, session=None, agent=None)¶
Import configuration for pipeline definition.
- Parameters:
config (dict) – Configuration definition.
Configuration:
Create pipeline with report:
```yaml
name: Generate Illumina Runsheet
desc: Run script to generate illumina runsheet and generate runsheet report
report:
  Runsheet Report:
    desc: Report showing details of runsheet generation
    elements:
      - type: file_details
        depends:
          file: Illumina Runsheet
          tasknumber: 1
tasks:
  - Create Illumina Runsheet
```
Create multi-step pipeline with report:
```yaml
name: Generate Illumina Runsheet
desc: Run script to generate illumina runsheet and generate runsheet report
report:
  Runsheet Report:
    desc: Report showing details of runsheet generation
    elements:
      - type: file_details
        depends:
          file: Illumina Runsheet
          tasknumber: 1
      - type: raw_file
        depends:
          file: Runsheet Upload Report
          tasknumber: 2
      - type: html
        contents: |+
          <h1>Report Header</h1>
          <p>Report Body</p>
tasks:
  - Create Illumina Runsheet
  - Upload Illumina Runsheet
deps:
  Upload Illumina Runsheet: Create Illumina Runsheet
```
Configuration Notes:
Report configuration will generally happen in the context of a pipeline; accordingly, this documentation references report generation in that context.
- import_task(config, overwrite=False, session=None, agent=None)¶
Import configuration for task.
- Parameters:
config (dict) – Configuration definition.
Configuration:
Simple task (no variables):
```yaml
name: Ping Internal Server
desc: Run command to ping internal server.
cmd: curl http://internal-server
```
Simple task with variables:
```yaml
name: Ping Internal Server
desc: Run command to ping internal server.
cmd: curl '{{ server }}'
```
Create task and track outputs:
```yaml
name: Create Illumina Runsheet
desc: Run script to generate illumina runsheet.
cmd: /path/to/generate_runsheet.py {{project}} > runsheet.xml
files:
  - Illumina Runsheet:
      file_type: xml
      filename_template: '{{ "runsheet.xml" }}'
```
Create task with inline code:
```yaml
name: Report DNA Type
desc: Run bash script to generate simple report.
cmd: |+
  # print simple report based on parameter
  if [ "{{ type }}" = "DNA" ]; then
    echo "<font color='green'>DNA</font>" > result.html
  elif [ "{{ type }}" = "RNA" ]; then
    echo "<font color='green'>RNA</font>" > result.html
  fi
files:
  - Type Report:
      file_type: html
      filename_template: '{{ "report.html" }}'
```
- import_task_definition(config, overwrite=False, task_snapshot=None, session=None, agent=None)¶
Import configuration for task definition.
- Parameters:
config (dict) – Configuration definition.
Configuration:
Simple task (no variables):
```yaml
name: Ping Internal Server
desc: Run command to ping internal server.
cmd: curl http://internal-server
```
Simple task with variables:
```yaml
name: Ping Internal Server
desc: Run command to ping internal server.
cmd: curl '{{ server }}'
```
Create task and track outputs:
```yaml
name: Create Illumina Runsheet
desc: Run script to generate illumina runsheet.
cmd: /path/to/generate_runsheet.py {{project}} > runsheet.xml
files:
  - Illumina Runsheet:
      file_type: xml
      filename_template: '{{ "runsheet.xml" }}'
```
Create task with inline code:
```yaml
name: Report DNA Type
desc: Run bash script to generate simple report.
cmd: |+
  # print simple report based on parameter
  if [ "{{ type }}" = "DNA" ]; then
    echo "<font color='green'>DNA</font>" > result.html
  elif [ "{{ type }}" = "RNA" ]; then
    echo "<font color='green'>RNA</font>" > result.html
  fi
files:
  - Type Report:
      file_type: html
      filename_template: '{{ "report.html" }}'
```
- kill_pipeline(socket, pipeline, session=None, agent=None)¶
Kill the specified “pipeline” by sending a kill pipeline message to the pipeline service available through “socket”.
TODO: should we convert this to a full-blown @resource_op?
- pause_pipeline(socket, pipeline, session=None, agent=None)¶
Pause the specified “pipeline” by sending a pause pipeline message to the pipeline service available through “socket”.
TODO: should we convert this to a full-blown @resource_op?
- query_pipeline_definitions(filters=None, deep_copy=False, params=None, ignore_deleted=True, sort=None, return_dict=True, session=None, agent=None)¶
Query for Pipeline definitions.
- query_task_definitions(filters=None, deep_copy=False, params=None, ignore_deleted=True, sort=None, session=None, agent=None, return_dict=True)¶
Query for Task definitions.
- query_tasks(filters=None, ignore_deleted=True, deep_copy=False, params=None, sort=None, session=None, agent=None, return_dict=True)¶
These query methods should become fairly powerful over time. Right now, they're just simple wrappers around basic column-based queries. Some things to consider going forward:
Tag based queries
Full text search (for fields that support it)
Joins (?)
Lazy loading (e.g., return only uuid/name)
- restart_pipeline(socket, pipeline, session=None, agent=None)¶
Restart the specified “pipeline” by sending a restart pipeline message to the pipeline service available through “socket”.
TODO: should we convert this to a full-blown @resource_op?
- start_pipeline(socket: Socket, pipeline_id: str, env: Optional[dict[str, Any]] = None, id_type: Optional[str] = None, blocking: bool = True, dependencies=None, snapshot: Optional[dict[str, Any]] = None, agent: Resource = None, session: Session = None) → Optional[list[str]]¶
Start one or more pipeline instances using the pipeline service.
Note
id_type is deprecated and may be removed in future versions of ESP. If/when it is removed, the pipeline_id will be treated as a UUID. The version for removal has not been determined yet.
- Parameters:
socket – An already-connected DEALER socket for sending requests to the ESP pipeline service. The easiest way of configuring such a socket is to call this module’s get_service_socket function.
pipeline_id – Either the name or UUID of the Pipeline to run; see id_type and the deprecation warning above for details.
env – {“param”: <value>} key-value pairs defining the evaluation environment for expressions in the pipeline’s tasks.
id_type – If None, guess whether pipeline_id is a name or UUID based on its format. If not None, must be either “uuid” or “name”. Refer to the deprecation warning above regarding the removal of this parameter.
blocking – If True, wait until the pipeline service acknowledges it has started the pipeline instances before returning. If False, return immediately without waiting for such confirmation messages.
dependencies – Currently undocumented.
snapshot – Has keys “uuid” and “tasks”. Provides a “snapshot” of a pipeline to execute for pinned-process execution.
agent – User running the pipeline. The User must have a Role with Pipeline “execute” permissions, or a PermissionDeniedError will be raised.
- Returns:
List of UUIDs for the started pipeline instances if blocking is true, otherwise None.
None is returned when blocking is False because we did not wait for acknowledgement messages from the pipeline service. In this case, the caller is responsible for checking whether the pipeline instances were actually started, either by using this module’s query_pipeline_instances function or by subscribing to the pipeline service’s “status” port.
- update_pipeline(pipeline, session=None, agent=None, return_dict=True, **values)¶
Update a pipeline
Projects¶
The projects API handles project objects and related functionality.
Python functions for manipulating Projects.
- create_project(return_dict=True, session=None, agent=None, **values)¶
- delete_project(project, session=None, agent=None)¶
- expand_project(project_uuid, agent=None, session=None)¶
Retrieve Experiments and WorkflowChain data for Projects app main page when Project is expanded/clicked.
- export_projects(filters=None, ignore_deleted=True, as_json=True, session=None, agent=None)¶
Export Projects from the database.
By default, all (readable) Projects in the system are exported. Specific Projects can be exported by providing the “filters” argument, which will be passed to query_projects() to get the Projects to export.
If “as_json” is True (default), the return value will be a JSON-encoded dict that can be passed directly to import_projects(). If “as_json” is False, then the dict itself will be returned.
- get_project(project, return_dict=True, deep_copy=False, params=None, session=None, agent=None)¶
- get_projects(agent=None, filters=None, return_dict=True, session=None)¶
Retrieve Project data for Projects app main page.
- get_workflow_chain_rollups(project, ignore_deleted=True, session=None, agent=None)¶
- import_project(session=None, agent=None, **values)¶
Create a new Project based on a dict.
- query_projects(filters=None, return_dict=True, deep_copy=False, params=None, ignore_deleted=True, sort=None, session=None, agent=None)¶
- undelete_project(project, session=None, agent=None, return_dict=True)¶
- update_project(project, return_dict=True, session=None, agent=None, **values)¶
Update a Project.
Report¶
The report API handles reports and derivative object types, including Applets and Doclets.
- create_report(name, elements, uuid=None, desc=None, tags=None, deps=None, augment=None, report_type=None, icon_svg=None, report_groups=None, agent=None, session=None)¶
Create an ad-hoc report
- create_report_from_template(template, instances, name=None, desc=None, agent=None, session=None)¶
Generate a report from the specified template.
“instances” is a {template_uuid: instance_uuid} dict.
- delete_report(report, session=None, agent=None)¶
Delete the Report specified by the uuid.
- delete_report_template(report_template, session=None, agent=None)¶
Delete the ReportTemplate specified by the uuid.
- export_report(uuid, session=None, agent=None, format=None)¶
Export configuration for model.
- Parameters:
uuid (str) – UUID for model.
Export Format:
name: string, desc: string, tags: [string], contents: string
- export_report_template(uuid, session=None, agent=None)¶
Export configuration for model.
- Parameters:
uuid (str) – UUID for model.
Export Format:
name: string, desc: string, tags: [string], pipeline: string, report_type: string, elements: [object]
- generate_accessioning_form(form_config, db_session)¶
Generate accessioning form HTML based on the given form config.
- import_report(config, overwrite=False, session=None, agent=None)¶
Import configuration for ReportTemplate.
- Parameters:
config (dict) – Configuration definition.
Configuration:
Create report with embedded html:
name: My Report
desc: An example html report.
tags: [html, demo]
contents: <h1>My Report</h1>
Create report with local html file:
name: My Report
desc: An example html report.
tags: [html, demo]
contents: $LAB7DATA/contents/reports/my-report.html
Create applet from local html file:
name: My Report
desc: An example html applet.
tags: ['esp:applet']
contents: $LAB7DATA/contents/reports/my-report.html
Configuration Notes:
To promote an ESP report to an ESP applet, include esp:applet in the set of report tags.
The contents parameter can either take a local file or raw html contents to use as the report data. If no file is found from the string, the data are assumed to be raw html data to include as contents.
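The file-or-raw-HTML resolution described above can be sketched as follows. This is a hedged illustration under stated assumptions (resolve_contents is a hypothetical name, and environment-variable expansion of paths like $LAB7DATA is assumed); the real import code may differ:

```python
import os

def resolve_contents(contents: str) -> str:
    """Return report HTML: read from disk if `contents` is an existing
    file path (after environment-variable expansion), otherwise treat
    the string itself as raw HTML."""
    path = os.path.expandvars(contents)  # expands e.g. $LAB7DATA
    if os.path.isfile(path):
        with open(path) as fh:
            return fh.read()
    return contents
```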
- import_report_template(config, overwrite=False, session=None, agent=None)¶
Import configuration for ReportTemplate.
- Parameters:
config (dict) – Configuration definition.
Configuration:
Simple report:
name: Runsheet Report
desc: Report showing details of runsheet generation
elements:
  - type: file_details
    depends:
      file: Illumina Runsheet
      tasknumber: 1
Create multi-element report:
name: Runsheet Report
desc: Report showing details of runsheet generation
elements:
  - type: file_details
    depends:
      file: Illumina Runsheet
      tasknumber: 1
  - type: raw_file
    depends:
      file: Runsheet Upload Report
      tasknumber: 2
  - type: html
    contents: |+
      <h1>Report Header</h1>
      <p>Report Body</p>
Configuration Notes:
Report configuration will generally happen in the context of a pipeline. Accordingly, this documentation references report generation in that context.
- indent(s, shiftwidth=2)¶
Indent each line of string s by shiftwidth spaces.
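A minimal, behavior-compatible sketch of indent (assuming every line, including empty ones, is prefixed uniformly; the actual ESP implementation may differ):

```python
def indent(s, shiftwidth=2):
    """Indent each line of string `s` by `shiftwidth` spaces."""
    pad = " " * shiftwidth
    return "\n".join(pad + line for line in s.split("\n"))
```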
- js_var_name(s)¶
Take a string s and remove any characters unsuitable for JS var names.
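One plausible implementation, assumed rather than taken from the ESP source: strip characters outside the JS identifier set [A-Za-z0-9_$], and guard against a leading digit:

```python
import re

def js_var_name(s):
    """Remove characters unsuitable for JavaScript variable names."""
    cleaned = re.sub(r"[^A-Za-z0-9_$]", "", s)
    # A JS identifier cannot start with a digit; prefix if needed.
    if cleaned and cleaned[0].isdigit():
        cleaned = "_" + cleaned
    return cleaned
```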
- migrate_dashboard_config(config_location=None, session=None, agent=None)¶
Find legacy dashboard config files and convert them to >2.5 format
- mutex_lock(mutex_lock_filename)¶
Create a semaphore to lock down a directory while sensitive actions are being performed.
- Parameters:
mutex_lock_filename – (str) a fully-qualified file name to be used as mutex.
- mutex_unlock(mutex_lock_filename)¶
Unlocks a semaphore file.
- Parameters:
mutex_lock_filename – (str) a fully-qualified file name to be used as mutex.
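The lock/unlock pair can be sketched with an atomic exclusive file create. This is a simplified stand-in (no timeout or stale-lock handling) and not necessarily how ESP implements it:

```python
import os

def mutex_lock(mutex_lock_filename):
    """Atomically create the semaphore file; raises FileExistsError
    if the lock is already held."""
    fd = os.open(mutex_lock_filename, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    os.close(fd)

def mutex_unlock(mutex_lock_filename):
    """Release the lock by removing the semaphore file."""
    os.remove(mutex_lock_filename)
```

os.O_CREAT | os.O_EXCL makes creation fail if the file already exists, which is what makes the lock acquisition atomic on a local filesystem.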
- undelete_report(report, session=None, agent=None, return_dict=True)¶
Undelete/unarchive the Report specified by the uuid.
- undelete_report_template(report_template, session=None, agent=None, return_dict=True)¶
Undelete/unarchive the ReportTemplate specified by the uuid.
Resource¶
API¶
The Resource API handles core resource operations. Note that many (most) other modules rely on functionality provided by the core Resource module. Most, if not all, create_x, update_x, and query_x functions call into base Resource-level functions to handle common operations. For instance, filtering by an object’s name is normally handled by apply_resource_filters, offered by the core resource API.
API for performing “generic” resource manipulations.
With a few possible exceptions, these functions are meant to support other functions found in the Resource-subclass API modules and therefore should not be called directly from the web or CLI applications.
- add_dependency(resource, prior, label=None, system_generated=True, return_dict=False, session=None, agent=None, resource_message=None, prior_message=None)¶
Add “prior” as a dependency of “resource”, with provenance information
- apply_pagination(query: Query, limit: Optional[int] = None, offset: Optional[int] = None) → Query¶
Apply pagination to a query.
- Parameters:
query – query that the sort is applied to
limit – number of results to return
offset – index of first result to return
- Returns:
query with the pagination applied
- apply_resource_filters(query, target, session=None, **filters)¶
Apply the generic filters to a Resource query.
- apply_resource_group_filters(query, target, head, session=None, **filters)¶
Apply the generic filters to a ResourceGroup query where some of the filters should be run on the head element of the group. This is useful for versioned Resources that may not have their own name or desc attributes.
We assume that head has already been joined to the query and that the query provides us the link.
- apply_resource_pagination(query: Query, **filters) → Query¶
Apply pagination to a Resource query. Separate from apply_resource_filters() to accommodate sorting, which must occur before pagination. Pagination only really makes sense if the results are sorted.
- Parameters:
query – query that the sort is applied to
filters – dict (or kv-pairs) with valid keys:
limit: number of results to return
offset: index of first result to return
- Returns:
query with the pagination applied
- apply_resource_sort(query: Query, sort: str, direction: str = 'asc', group_head: Optional[AliasedClass] = None) → Query¶
Apply a generic sort to a Resource query.
- Parameters:
query – query that the sort is applied to
sort – string name of the Resource field used to sort
direction – direction for the sort: ‘asc’ for ascending, ‘desc’ for descending
group_head – AliasedClass alias of ResourceGroup member class for sorting on ResourceGroup head; query must have already set up the join to group_head
- Returns:
query with the sort applied
- apply_resource_sorts(query: Query, sorts: Union[str, list[str]], group_head: AliasedClass = None) → Query¶
Apply generic sorts to a Resource query.
- Parameters:
query – query that the sorts are applied to
sorts – string names of the Resource fields used to sort; default sort direction is ascending but descending direction can be specified by prepending ‘-’ to the sort string
group_head – alias of ResourceGroup member class for sorting on ResourceGroup head; query must have already set up the join to group_head
- Returns:
query with the sorts applied
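The “prepend ‘-’ for descending” convention can be illustrated with a small parser. parse_sorts is a hypothetical helper for illustration only; the real apply_resource_sorts operates on SQLAlchemy queries:

```python
def parse_sorts(sorts):
    """Normalize sort specs (a string or list of strings) into
    (field, direction) tuples. A leading '-' selects descending
    order; the default is ascending."""
    if isinstance(sorts, str):
        sorts = [sorts]
    parsed = []
    for spec in sorts:
        if spec.startswith("-"):
            parsed.append((spec[1:], "desc"))
        else:
            parsed.append((spec, "asc"))
    return parsed
```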
- build_resource_vars(resource_vars, session, agent)¶
Check ResourceVar list for existing and new ResourceVars, build any new ResourceVars that are needed, and return a complete list of the ResourceVar models.
- bulk_add_dependency(resource, prior, label=None, system_generated=True, session=None, agent=None, resource_message=None, prior_message=None)¶
Add “prior” as a dependency of “resource”, with provenance information
- bulk_insert_resource_actions(resource_action_inserts: list[dict[str, object]], session: Session = None)¶
Take a list of dicts containing Resource column data and execute a bulk DB INSERT for the corresponding ResourceActions
- Parameters:
resource_action_inserts – the list of actions (audit logs) to bulk-insert
- bulk_insert_resource_vals(resource_val_inserts, session=None)¶
Take a list of dicts containing ResourceVal column data, combine it with newly retrieved resource_ids, and execute a bulk DB INSERT for the corresponding Resources and ResourceVals.
- bulk_set_resource_val(resource_vals, session=None, agent=None)¶
Fast value setting. Use this for bulk updates. Note that this does not update sample provenance. The caller should make a note of this update directly.
resource_vals is a list of resource_val dicts: { ‘uuid’: uuid, ‘value’: new_value }
- get_dependencies(uuid, priors=True, dependents=True, label=None, uuids_only=False, params=None, return_dict=True, session=None, agent=None)¶
Get dicts for all “priors” and “dependents” of “resource”
- get_dependencies_by_class(uuid, classes=None, params=None, return_dict=True, session=None, agent=None)¶
For a given Resource, get dependencies that have the provided classes.
- get_resource(uuid, ignore_deleted=True, deep_copy=False, params=None, session=None, agent=None)¶
Get a resource from its UUID.
- get_resource_ids(count, session=None)¶
Given a count, get that many unused resource_ids. Returns an ordered list of IDs.
We give ourselves a buffer of 100 to allow for the possibility that we partially lost our race condition.
- get_resource_uuid_from_name(name, session=None, agent=None)¶
Find a resource UUID based on its name.
- next_hop(start, end)¶
Returns the next class on the path from start to end. Classes are referred to by their string names.
- query_resource(filters=None, limit=0, ignore_deleted=True, return_dict=True, deep_copy=False, params=None, session=None, agent=None)¶
Find a resource based on its path or name.
- remove_dependency(resource, prior, label=None, session=None, agent=None)¶
Remove “prior” as a dependency of “resource”, with provenance information
- set_resource_deps(resource, deps, session=None, agent=None)¶
Update resource’s dependency set to the provided set of dependencies; the dependencies can be supplied either as Resource objects or uuid strings.
- set_resource_tags(resource, tags, session=None, agent=None)¶
Update resource’s tag set to the provided group of tags.
- set_resource_val(resource_val_uuid, value, return_dict=False, session=None, agent=None)¶
Fast value setting. Use this for bulk updates. Note that this does not update sample provenance. The caller should make a note of this update directly.
- tag_resource(resource, tag, session=None, agent=None, record_action=True)¶
Add “tag” to “resource”, with provenance information
- untag_resource(resource, tag, session=None, agent=None, record_action=True)¶
Remove “tag” from “resource”, with provenance information
- update_resource(resource: Resource, insert_actions: bool = True, session: Session = None, agent: Resource = None, **values) → tuple[lab7.resource.models.Resource, list[str]]¶
Update basic Resource fields. This function provides a shared way of updating Resource-level info and is meant to be used within the update functions over child classes.
- Parameters:
resource – Resource to be updated
insert_actions – handle the insertion of new ResourceActions into the DB; if False, ResourceAction INSERT strings will be returned. This allows for higher-level operations to collect all of the resource action inserts and bulk insert them all at once.
session – sqlalchemy DB session
agent – acting User used for ACL checks and provenance
values –
Dictionary or kv-pairs with the following supported keys:
name (str): new value for Resource.name
desc (str): new value for Resource.desc
barcode_type (str): new value for Resource.barcode_type; must be one of lab7.resource.BarcodeType
barcode (str): new value for Resource.barcode
fixed_id (str): new value for Resource.barcode; overwrites barcode when both are provided
view_template (str): new value for Resource.view_template
meta (dict): new value for Resource.meta
augment (dict): new value for Resource.meta[‘augment’]; namespaced metadata that Users can modify without fear of clobbering meta fields used by ESP
- Returns:
A two-component tuple. The first component is the newly-updated Resource object. The second component is a list of resource_action_insert strings. This list will be empty if insert_actions is True.
- update_resource_val(uuid, session=None, agent=None, **values)¶
Update a ResourceVal
Utils¶
In addition to the standard api module, the lab7.resource package provides some generally useful utility functions, such as resource_from_uuid.
- class DirectedGraph(nodes: Iterable[NodeType], edges: Iterable[tuple[NodeType, NodeType]])¶
DirectedGraph is a graph with directed edges. Nodes can be any object, and their edges are held as upstream ‘parents’ and downstream ‘children’ lists.
- property heads: set[NodeType]¶
Nodes with no parents.
- property nodes: set[NodeType]¶
All nodes in the graph.
- shortest_path(node1: NodeType, node2: NodeType, directed: bool = True) → list[NodeType]¶
Get list of nodes in a shortest path from node1 to node2. If no path exists, returns []. If directed==True only directed paths are considered.
- Parameters:
node1 – Path start node.
node2 – Path end node
directed – Whether or not the returned path needs to respect edge direction (default: True).
- Returns:
The list of nodes connecting node1 to node2, inclusive. If no path exists, returns [].
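The shortest_path behavior can be illustrated with a self-contained breadth-first search over (parent, child) edge pairs. This is a simplified stand-in for the real DirectedGraph class, not its implementation:

```python
from collections import deque

def shortest_path(edges, node1, node2, directed=True):
    """BFS shortest path from node1 to node2 over (parent, child)
    edge tuples. Returns the inclusive node list, or [] if no path
    exists. With directed=False, edges are traversable both ways."""
    adjacency = {}
    for parent, child in edges:
        adjacency.setdefault(parent, set()).add(child)
        if not directed:
            adjacency.setdefault(child, set()).add(parent)
    queue = deque([[node1]])
    seen = {node1}
    while queue:
        path = queue.popleft()
        if path[-1] == node2:
            return path
        for nxt in adjacency.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []
```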
- class VariableFormatter¶
Container for formatting methods related to ResourceVar/Val import and export.
- static exporter(data)¶
Parse export configuration for resource var or val.
- Parameters:
data (dict) – as_dict result from var/val
- static importer(data)¶
Parse import configuration for resource var or val.
- Parameters:
data (dict) – user-friendly export format to parse
- check_user_perm(session, user, cls, perm, resource=None, force_exception=True)¶
Verify that “user” has the required “perm(ission)” to perform the corresponding action on Resource class “cls”.
- order_resources_by_uuid(resources: list[lab7.resource.models.Resource], uuids: list[str]) → list[lab7.resource.models.Resource]¶
- Parameters:
resources – List of Resources to reorder.
uuids – List of Resource UUIDs.
- Returns:
List of Resources in uuids order.
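The reordering contract can be sketched over plain objects with a uuid attribute. This is a hedged sketch of the behavior, not the ESP source; the real function operates on lab7.resource.models.Resource instances:

```python
def order_resources_by_uuid(resources, uuids):
    """Return `resources` reordered to match the order of `uuids`.
    Resources whose uuid is absent from `uuids` are dropped."""
    by_uuid = {resource.uuid: resource for resource in resources}
    return [by_uuid[uuid] for uuid in uuids if uuid in by_uuid]
```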
- resource_from_uuid(uuid, session, cls=None, ignore_deleted=True)¶
Get a Resource from the database using its UUID; relies on SQLAlchemy magic to correctly cast the object to the appropriate Resource sub-class.
An _optional_ check for the type correctness can be triggered by providing a value for the “cls” argument.
Returns Resource object or raises a ValueError if no Resource with the provided UUID is found.
- resource_op(*perms_spec)¶
Decorator for functions that access Lab7 ESP Resources.
The decorated function MUST have at least the following two parameters:
“agent”: Lab7 Resource (usually a User) invoking this function
“session”: SQLAlchemy session for handling the Lab7 Resources used by the decorated function.
“perms_spec” is a set of (argname, action, cls, query) tuples. The value of “action” governs how @resource_op behaves; if “action” is:
None: No ACL checks are performed (a security warning will be issued). If “argname” is provided (i.e., not None), the corresponding argument will be converted from a UUID to a Lab7 Resource object when the decorated function is called; if “cls” is provided, a type check will also be performed on the Resource object.
“create”, “import”: “argname” is ignored; instead, an ACL check will be performed to verify that “agent” has “create” or “import” permissions on the Resource type specified by “cls”.
“query”: Basic checks are performed and objects in queried lists that are not public or do not belong to a workgroup the user also belongs to are filtered from the result list.
Other string literal: “argname” will be converted into a Resource object (see notes for None “action” above). An ACL check will then be performed to verify that “agent” has “action” permissions on the Resource identified by “argname”.
The value of “argname” determines the function argument that will be passed the loaded model. cls is the type of model to load. query is an optional fourth argument. If supplied, it is a function that will be passed the uuid + session in order to load the object in question. This allows specific API calls to pre-fetch all the data they need as part of the resource_op action. The function must accept the arguments: uuid (str|UUID), session (SQLAlchemy session), cls (class to load), ignore_deleted (bool).
Sample¶
Provides entity-related functionality, including:
WorkflowableResource (UI: Entity)
WorkflowableResourceType (UI: EntityType)
WorkflowableResourceClass (UI: EntityClass)
Historically, WorkflowableResource was called Sample, and the APIs have retained the nomenclature for backwards compatibility.
Python API for manipulating Samples and SampleTypes.
- class BulkWorkflowableResourceCreator¶
This is a helper class for creating multiple WorkflowableResources through bulk insertion into the DB.
The logic has been collected here and broken down into several methods so that it can more easily be used as a base for other bulk creators. Children of WorkflowableResource (i.e. Item and Container) will need bulk creation endpoints so that they can use the same UI elements that Samples use. These include registration from Entities apps, registration through the New Experiment modal in the Projects app, and child creation in Sample Protocols. Children of this class will need to override __init__() and create_workflowable_resources().
To use:
bulk_creator = BulkWorkflowableResourceCreator()
workflowable_resources = bulk_creator.create_workflowable_resources(
    sample_specs, session, agent
)
- create_workflowable_resources(sample_specs: list[dict], session: Session, agent: Resource, generate_names: bool = False, lab7_id_sequence: str = None, columns: dict = None, return_dict: bool = True) → Union[list[dict[str, Union[int, str, float, dict, list]]], list[lab7.sample.models.Sample]]¶
- Parameters:
sample_specs – list of dicts of data to create
session – SQLAlchemy session object
agent – Actor for this action, typically the request-bound User.
generate_names – Whether or not to autogenerate names. If False, names are to be provided within sample_specs.
lab7_id_sequence – globally specify naming scheme to use for all samples.
columns – Custom field data.
return_dict – whether to return dicts or WorkflowableResources
- Returns:
list of newly created Samples in dict representation or as Sample objects.
- add_sample_dependencies(sample: Union[Sample, str], parents: Optional[Union[list[str], list[lab7.resource.models.Resource]]] = None, children: Optional[Union[list[str], list[lab7.resource.models.Resource]]] = None, return_dict: bool = True, session: Session = None, agent: Resource = None) → Sample¶
Add a set of parent and/or child dependencies to a given Sample.
- Parameters:
sample – The sample (or UUID of the sample) to add parents/children to.
parents – List of parent resources (or resource UUIDs) that will be added to the sample
children – List of child resources (or resource UUIDs) that will be added to the sample
return_dict – Whether to return the sample as a dictionary or as an object instance.
session – SQLAlchemy session object
agent – User object initiating the session.
- Returns:
The hydrated sample
Examples
To add a parent:
import lab7.sample.api as sample_api

sample_api.add_sample_dependencies(
    sample_uuid,
    parents=[parent_sample_uuid],
    agent=agent,
    session=session,
)
To add a child:
import lab7.sample.api as sample_api

sample_api.add_sample_dependencies(
    sample_uuid,
    children=[child_uuid],
    agent=agent,
    session=session,
)
- ancestor_tree(sample_uuid, depth, session=None, agent=None)¶
Resolves a particular generation relative to the provided sample UUIDs. Note that the return structure is just a list of dicts - it is up to downstream callers to resolve the list into sample objects.
- apply_sample_filters(query, target, session=None, **filters)¶
- apply_sample_sorts(query, sorts)¶
Apply sorts to a Sample query.
- apply_sample_type_filters(query, target, session=None, **filters)¶
- bulk_insert_samples(sample_inserts, session)¶
Take a list of dicts containing Sample column data, combine it with newly retrieved resource_ids, and execute a bulk DB INSERT for the corresponding Resources and Samples.
- create_generic_sample_type(session=None, agent=None)¶
- create_generic_workflowable_resource_class(session=None, agent=None)¶
- create_sample(session=None, agent=None, return_dict=True, columns=None, **values)¶
- create_sample_insert(generate_name=True, session=None, agent=None, **values)¶
- create_sample_type(session=None, agent=None, **values)¶
- create_samples(sample_specs: list[dict], generate_names: bool = False, lab7_id_sequence: Optional[str] = None, columns: Optional[dict] = None, return_dict: bool = True, agent: Resource = None, session: Session = None) → Union[list[dict[str, Union[int, str, float, dict, list]]], list[lab7.sample.models.Sample]]¶
Create multiple Samples.
- Parameters:
sample_specs – List of Sample specification dicts.
generate_names – Whether or not to autogenerate names. If False, names are to be provided within sample_specs.
lab7_id_sequence – Name of ID sequence to use if generate_names is True.
columns – Default ResourceVal values specified by a Sample ProtocolDefinition or its containing WorkflowDefinition. Used when creating child Samples from within a SampleSheet. If columns is not None, sample_specs should not include the resource_vals field.
return_dict – whether to return objects as dict or Sample.
agent – requesting User
session – SQLAlchemy session object
- Returns:
list of newly created Samples in dict representation or as Sample objects.
- create_workflowable_resource_class(return_dict=True, session=None, agent=None, **values)¶
- data_query_samples(filters=None, deep_copy=False, raw=False, params=None, ignore_deleted=True, session=None, agent=None)¶
- delete_sample(sample, message=None, session=None, agent=None)¶
- delete_sample_type(sample_type, session=None, agent=None)¶
- delete_workflowable_resource_class(workflowable_resource_class, session=None, agent=None)¶
- export_sample(uuid, session=None, agent=None)¶
Export configuration for model.
- Parameters:
uuid (str) – UUID for model.
Export Format:
name: str
desc: str
tags: list
variables: [dict]
- export_sample_type(uuid, session=None, agent=None, format: str = None)¶
Export configuration for model.
- Parameters:
uuid (str) – UUID for model.
Export Format:
name: str
desc: str
tags: list
sequences: [str]
variables: [dict]
- export_sample_type_definitions(filters=None, ignore_deleted=True, as_json=True, session=None, agent=None)¶
Export SampleTypeDefinitions from the database.
By default, all (readable) SampleTypeDefinitions in the system are exported. Specific SampleTypeDefinitions can be exported by providing the “filters” argument, which will be passed to query_sample_type_definitions() to get the SampleTypeDefinitions to export.
If “as_json” is True (default), the return value will be a JSON-encoded dict that can be passed directly to import_sample_types(). If “as_json” is False, then the dict itself will be returned.
- export_workflowable_resource_class(uuid, nested=False, session=None, agent=None, format: str = None)¶
Export configuration for model.
- Parameters:
uuid (str) – UUID for model.
Export Format:
name: str plural_name
- fetch_entity_experiment_data(entity_uuid: str, extended_format: bool, session: Session = None, agent: Resource = None)¶
Fetch experimental data for a single sample.
- Parameters:
entity_uuid – the entity (sample) UUID to fetch data for
extended_format – If true, additional data will be returned and the return format is different.
session – SQLAlchemy session
agent – Requesting user
- Returns:
If extended_format is False, the returned data structure is:
{
    "Experiment Name": {
        "Protocol Name": {
            "Column Name": Value
        }
    }
}
If extended_format is True, the returned data structure is:
[
    {
        "experiment": "Experiment Name",
        "experiment_uuid": "Experiment UUID",
        "experiment_created_date": "Experiment Created Date",
        "workflow": "Workflow Name",
        "worksheet": "Worksheet Name",
        "worksheet_uuid": "Worksheet UUID",
        "worksheet_created_date": "Worksheet Created Date",
        "protocol": "Protocol Name",
        "field": "Column Name",
        "value": "Stored Value",
        "modified_date": "most recent date modified for the value"
    }
]
- get_generic_sample_class(session)¶
- get_sample(sample, deep_copy=False, params=None, ignore_deleted=True, return_dict=True, session=None, agent=None)¶
- get_sample_dependencies(sample_uuid, parents=True, children=True, uuids_only=False, session=None, agent=None)¶
- get_sample_type(st: Union[str, SampleType], ignore_deleted=True, session=None, agent=None, params=None)¶
- get_sample_type_definition(sample_type_def: str, ignore_deleted: bool = True, return_dict: bool = True, session: Session = None, agent: Resource = None) → Union[SampleTypeDefinition, dict[str, Union[int, str, float, dict, list]]]¶
Get a sample type definition (sample type) by UUID.
- Parameters:
sample_type_def – UUID of the sample type definition to get.
ignore_deleted – if True, archived objects are not returned
return_dict – Determines the return type.
session – SQLAlchemy session
agent – Resource performing the action, usually the active-request-associated User.
- get_sample_uuid_by_name(name, session=None, agent=None)¶
- get_session()¶
- get_workflowable_resource_class(workflowable_resource_class, ignore_deleted=True, return_dict=True, session=None, agent=None)¶
- hid_for_uuid(uuid: str, sequence: str = 'lab7_sample_auto_id_seq', name: str = None, session: Session = None, agent: Resource = None)¶
Generate a human id (hid) for a resource. The hid is stored in resource.meta['lab7_hid_<sequence_name>']. If an hid already exists, return it.
- Parameters:
uuid – UUID to generate an HID for (e.g.: a sample sheet UUID).
sequence – name of a DB sequence.
name – Name of this HID. This allows users to generate multiple HIDs for a single UUID. For instance:
hid_for_uuid(sample_sheet_uuid, name='plate_id')
hid_for_uuid(sample_sheet_uuid, name='batch_id')
would generate two different HIDs.
session – SQLAlchemy Session
agent – Resource performing the action, almost always the active request User.
This function was developed to support IDs associated with samples in a sample sheet where the same hid is used for all samples in the sample sheet. For example, the Biolog Run ID is only generated once per sample sheet, but associated with each sample in the sheet:
hid_for_uuid(ss_uuid, sequence='biolog_batch_seq')
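The generate-once-then-reuse behavior can be sketched against an in-memory stand-in for the resource meta dict and the DB sequence. All names below (hid_for_meta, _next_val, the hid string format) are illustrative assumptions, not the ESP implementation:

```python
from itertools import count

# Stand-in for DB sequences: one monotonically increasing counter per name.
_sequences = {}

def _next_val(sequence):
    return next(_sequences.setdefault(sequence, count(1)))

def hid_for_meta(meta, sequence, name=None):
    """Return the hid cached under meta['lab7_hid_<key>'], generating
    it from the sequence on first use so repeat calls are stable."""
    key = "lab7_hid_%s" % (name or sequence)
    if key not in meta:
        meta[key] = "%s-%d" % (sequence, _next_val(sequence))
    return meta[key]
```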
- import_sample(config, overwrite=False, session=None, agent=None)¶
Import configuration for sample.
- Parameters:
config (dict) – Configuration definition.
Configuration:
Simple sample:
name: LIB0001
Sample of specific type with note and tags:
name: LIB0001
desc: Library sample
tags: [library, special-sample-1]
type: Library
Sample with variable defaults:
name: LIB0001
type: Library
variables:
  Sample Type: Illumina Library
  Numeric Value: 10
Configuration Notes:
Either single samples or batches of samples can be created with the Sample model create method.
Default sample values can be set for samples using the variables parameter.
- import_sample_type(config, overwrite=False, session=None, agent=None)¶
Import configuration for sample type.
- Parameters:
config (dict) – Configuration definition.
Configuration:
Simple sample type:
name: Library
desc: Sample type for Library
tags: [library, demo]
Sample type with auto-naming sequence:
name: Library
desc: Sample type for Library
tags: [library, demo]
sequences:
  - LIBRARY SEQUENCE
Sample type with variables and sequence:
name: Library
desc: Sample type for Library
tags: [library, demo]
sequences:
  - LIBRARY SEQUENCE
variables:
  - Sample Type:
      rule: string
      value: Illumina Library
  - Numeric Value:
      rule: numeric
      value: 0
Create Sample type and new samples:
name: Library
variables:
  - Sample Type:
      rule: string
      value: Illumina Library
  - Numeric Value:
      rule: numeric
      value: 0
create:
  - Library 1
  - name: Library 2
    desc: My special library.
    variables:
      Sample Type: Non-Illumina Library
      Numeric Value: 2
Configuration Notes:
Variables specified for samples can take the same format as variables defined for protocols within ESP.
Sample creation can be nested in SampleType configuration using the create parameter.
For any new SampleType object, corresponding information MUST be included in the lab7.conf configuration file. Here are the lines in that file that are relevant to the examples above:
lims:
  auto_sample_id_format: "ESP{sample_number:06}"
  sample_id_sequences:
    - name: "ESP SEQUENCE"
      format: ESP{sample_number:06}
    - name: "LIBRARY SEQUENCE"
      format: LIB{sample_number:03}
      sequence: library_seq
sequences:
  lab7_sample_auto_id_seq: 1
  library_seq: 1
- import_samples_bulk(config, overwrite=False, session=None, agent=None)¶
Import configuration for samples in bulk.
- Parameters:
config (dict) – Configuration definition.
Configuration:
Sample batch:
count: 10
type: Illumina Library
Sample batch with variable defaults:
count: 10
type: Illumina Library
variables:
  Sample Type: Illumina Library
  Numeric Value: 10
Configuration Notes:
Either single samples or batches of samples can be created with the Sample model create method.
Default sample values can be set for samples using the variables parameter.
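A sketch of the batch example above as the dict passed to import_samples_bulk. As before, the call itself is commented out because it needs a live session and agent (names assumed from the conventions above):

```python
# Hypothetical dict form of the "Sample batch with variable defaults" example
# above; this is the shape of import_samples_bulk's `config` argument.
bulk_config = {
    "count": 10,
    "type": "Illumina Library",
    "variables": {
        "Sample Type": "Illumina Library",
        "Numeric Value": 10,
    },
}

# In an extension context (sketch, not executed here):
# import lab7.sample.api as sample_api
# samples = sample_api.import_samples_bulk(bulk_config, session=session, agent=agent)

print(bulk_config["count"])  # → 10
```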
- import_samples_from_text(text, metadata, column_map, tags=None, session=None, agent=None)¶
Import Samples from a delimited-text stream.
- import_samples_with_pipeline(import_file, file_meta, pipeline_uuid, container_uuid=None, container_slot=None, tags=None, session=None, agent=None, project_uuid=None)¶
Import Samples from a file by passing the file to a Pipeline that is responsible for creating the Samples.
- import_workflowable_resource_class(config: dict[str, Union[int, str, float, dict, list]], overwrite: bool = False, session: Session = None, agent: Resource = None)¶
Import configuration for a workflowable resource class.
- Parameters:
config (dict) – Configuration definition.
Example Configuration for Item type for stock reagent:
DemoClass:
  desc: Entity type for kit.
  class: DemoClass Class
  plural_name: DemoClasses
  compiled: true
  view_template: |+
    <bootstrap />
    <grid />
    <panel header="Demo Info">
      <entity-variable-form mirror="Demo Number"/>
    </panel>
- lookup_generation(sample_uuids: list[str], generation: Union[int, str], session: Session = None, agent: Union[str, Resource] = None, labels: list[str] = None) dict[str, list[dict]] ¶
Resolves a particular generation relative to the provided sample UUIDs.
- Parameters:
sample_uuids – List of uuids that are the root of the lookup
generation – Generation to resolve. -1 is parents, 1 is children, etc. May also be a string descriptor matching the extended regex:
((all|closest|furthest)(up|down))?[^,]+(,[^,]+)*
For instance, closestup:Illumina Library would find the nearest Illumina Library.
session – SQLAlchemy session
agent – UUID or Resource (usually User) of the acting agent.
labels – list of valid resource dependency labels to traverse. If unspecified, [‘begat’] will be used.
- Returns:
A dictionary mapping from UUID -> list[dict], where each dict has properties that roughly correspond to Sample.as_dict(), with the differences of (1) having additional keys not in Sample.as_dict; (2) owner is a simple string (as was formerly the case for Sample.as_dict()) and (3) resource_vals is a simple key:value dictionary instead of a list of dictionaries.
Note
The return structure is just a list of dicts; it is up to downstream callers to resolve the list into sample objects. Note that if generation == 0, this call is roughly equivalent to a bulk query that creates sample JSON dicts from sample UUIDs.
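The descriptor grammar for the generation argument can be checked locally. This sketch simply validates strings against the regex documented above; the anchors (^ and $) are added here for a full-string match and are not part of the documented pattern:

```python
import re

# The extended regex for lookup_generation's string descriptors, copied from
# the docs above and anchored (an assumption) for a full-string match.
GEN_RE = re.compile(r"^((all|closest|furthest)(up|down))?[^,]+(,[^,]+)*$")

# "closestup:Illumina Library" is the documented example; the others are
# hypothetical descriptors built from the same grammar.
for descriptor in ("closestup:Illumina Library", "alldown:Extract,Library", "-1"):
    print(descriptor, bool(GEN_RE.match(descriptor)))
```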
- migrate_samples(sample_dicts: List[dict], new_sample_type_def: SampleTypeDefinition, session=None, agent=None)¶
- next_sequence_value(name, session=None, agent=None)¶
- profiled()¶
- query_sample_type_definitions(filters: dict[str, Any] = None, deep_copy: bool = False, params: Union[list, dict[str, Union[bool, str, dict]]] = None, ignore_deleted: bool = True, return_dict: bool = True, sort: Union[str, list[str]] = None, session: Session = None, agent: Resource = None) Union[list[dict[str, Union[int, str, float, dict, list]]], list[lab7.sample.models.SampleType]] ¶
Query a sample type definition (sample type).
- Parameters:
filters – Filters to apply
deep_copy – influences default params
params – Influences returned data shape
ignore_deleted – Whether to return archived objects
return_dict – Whether to return JSONifiable dictionaries or SampleType models
sort – result sorting criteria
session – SQLAlchemy session
agent – Resource performing the action, usually the active-request-associated User
Supported filters: see
lab7.resource.api.apply_resource_filters
. Supported sort: seelab7.resource.api.apply_resource_sorts
Supported params: None
- query_sample_types(filters: dict[str, Any] = None, deep_copy: bool = False, params: Optional[Union[list, dict[str, Union[bool, str, dict]]]] = None, ignore_deleted: bool = True, return_dict: bool = True, sort: Optional[Union[str, list[str]]] = None, session: Session = None, agent: Resource = None) Union[list[lab7.sample.models.SampleType], list[dict[str, Union[int, str, float, dict, list]]]] ¶
Query all sample types in ESP
- Parameters:
filters – Filters the sample type objects based on ESP’s filter criteria (e.g. name, uuid, tags).
deep_copy – Preserved for backwards compatibility; influences the default set of params.
params – Used to return resource vars and resource vals on the object.
ignore_deleted – If True, archived sample types are excluded from the results.
return_dict – If True, return the sample types as a list of dictionaries; if False, as SampleType model objects.
session – SQLAlchemy session.
agent – Resource (usually the authenticated User) performing the action.
- Returns:
A list of SampleType objects or SampleType dictionaries, depending on the value of return_dict.
Examples
Fetching a SampleType by name:
import lab7.sample.api as sample_api

sample_type = sample_api.query_sample_types(
    {'name': sample_type_name},
    return_dict=False,
    agent=agent,
    session=session
)
- query_samples(filters=None, deep_copy=False, raw=False, params=None, timer=None, ignore_deleted=True, return_dict=True, session=None, agent=None, sort=None, limit=None, offset=None)¶
- query_workflowable_resource_classes(filters=None, deep_copy=False, params=None, ignore_deleted=True, return_dict=True, sort=None, session=None, agent=None)¶
- undelete_sample(sample, session=None, agent=None, return_dict=True)¶
- undelete_sample_type(sample_type, session=None, agent=None, return_dict=True)¶
- undelete_workflowable_resource_class(workflowable_resource_class, session=None, agent=None, return_dict=True)¶
- update_sample(sample, session=None, agent=None, return_dict=True, should_flush=True, insert_actions=True, **values)¶
- update_sample_type(st, session=None, agent=None, **values)¶
- update_samples(sample_dicts, session=None, agent=None, return_dict=True)¶
- update_samples_meta(samples, agent=None, session=None, return_dict=True)¶
- update_workflowable_resource_class(workflowable_resource_class, return_dict=True, session=None, agent=None, **values)¶
Search¶
Provides access to ESP’s global search facility.
User¶
Provides access to ESP’s IAM functionality, including:
User
Role
Lab (UI: Workgroup)
- add_user_breadcrumb(user_uuid: str, session_id: str, session: Session = None, agent: Resource = None, **kwargs)¶
Add user breadcrumb information.
- Parameters:
user_uuid – UUID for current user.
session_id – string current user session.
session – Connection to the database
**kwargs – Validated payload for endpoint.
- authenticate_user(username, password, ip, expire_hours=None, expire_minutes=None, session=None, openid_code=None, openid_access_token=None, openid_verifier=None, openid_refresh_token=None, api_key=None, auto_create_sso_user=False, ignore_password_reset=False, browser=None, os=None, sso_esp_sites=None, sso_esp_roles=None, sso_name=None, sso_email=None, esp_site_names=None, tenant=None)¶
Authenticates a user using the provided username and (plaintext) password. Returns a 3-value tuple consisting of:
The dictionary representation of the user object. This will be None if the authentication failed.
A boolean indicating authentication success or failure
A boolean indicating whether the password needs to be reset.
This funky set of return values is needed because we (currently) can’t simply raise an exception, as doing so would roll back the database session and cause data loss in the user provenance log.
Above notwithstanding, this method will raise a PasswordResetRequiredError (with status code 205) if the user profile is explicitly set with “password_reset_required” and the system configuration is not set to ignore password reset. Note that this error will _not_ be raised for circumstances where the password is merely expired.
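A minimal sketch of consuming the 3-tuple described above. The tuple order (user dict, success flag, reset flag) is taken from the description; the call itself is commented out because it needs a live session, and the module path is an assumption based on the lab7.&lt;module&gt;.api convention:

```python
# Sketch: translating authenticate_user's documented 3-tuple
# (user_dict, success, needs_reset) into a single outcome string.
def auth_outcome(user_dict, success, needs_reset):
    """Map the 3-tuple return of authenticate_user to an outcome label."""
    if not success:
        return "failed"
    if needs_reset:
        return "password-reset-required"
    return "authenticated"

# In an extension context (not executed here; module path assumed from the
# lab7.<module>.api convention):
# import lab7.user.api as user_api
# user_dict, ok, reset = user_api.authenticate_user(username, password, ip, session=session)

print(auth_outcome({"name": "jdoe"}, True, False))  # → authenticated
print(auth_outcome(None, False, False))             # → failed
```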
- create_lab(name, desc=None, members=None, uuid=None, session=None, agent=None, **values)¶
Create a new Lab.
- create_role(name, desc=None, members=None, permissions=None, uuid=None, tags=None, applications=None, default_application=None, hidden_apps=None, session=None, agent=None, **values)¶
Create a new role.
- create_user(name, username, password, email=None, roles=None, labs=None, password_expires=None, uuid=None, session=None, agent=None, valid_hours=None, return_dict=True, **values)¶
Create a new user.
- deauthenticate_all_user_sessions(user_uuid, session)¶
Finds and deauthenticates all of a user’s sessions.
- delete_lab(lab, session=None, agent=None)¶
Delete a lab.
- delete_role(role, session=None, agent=None)¶
Delete a role.
- delete_user(user, session=None, agent=None)¶
Delete a user. This is a virtual delete, i.e. the record is marked ‘archived’, but not removed from the database.
- export_lab(uuid, session=None, agent=None)¶
Export configuration for model.
- Parameters:
uuid (str) – UUID for model.
Export Format:
name: str
desc: str
tags: list
permissions: dict
- export_role(uuid, session=None, agent=None)¶
Export configuration for model.
- Parameters:
uuid (str) – UUID for model.
Export Format:
name: str
desc: str
tags: list
permissions: dict
- export_user(uuid, session=None, agent=None)¶
Export configuration for model.
- Parameters:
uuid (str) – UUID for model.
Export Format:
name: str
desc: str
tags: list
email: str
roles: [str]
workgroups: [str]
- get_usersession(session_id, ip, session=None, expire_hours=None, expire_minutes=None, ignoreSessionTimeoutReset=False, is_sso_session=False, sso_enabled=False)¶
Get a UserSession by uuid (from a session token) & client IP.
- grant_all_perms(role, msg, allow_env=False)¶
Grant the selected role permissions across all Resources and record that fact in the Role’s provenance log with the supplied msg.
- import_lab(config, overwrite=False, session=None, agent=None)¶
Import configuration for model.
- Parameters:
config (dict) – Configuration definition.
Configuration:
Create workgroup:
name: Lab A
desc: Workgroup for all users in Lab A
Create workgroup with default users:
name: Lab A
desc: Workgroup for all users in Lab A
members:
  - User 1
  - User 2
- import_role(config, overwrite=False, session=None, agent=None)¶
Import configuration for role.
- Parameters:
config (dict) – Configuration definition.
Configuration:
Create role:
name: my-role
desc: My Role
permissions:
  Project: [create, update]
  Sample: [create, read, update, delete]
Create role with default users:
name: my-role
desc: My Role
members:
  - User 1
  - User 2
tags:
  - esp:__ADMIN__
  - esp:__LIMS__
Create role with specific app permissions:
name: my-role
desc: My Role
tags:
  - esp:__ADMIN__
  - esp:__LIMS__
Configuration Notes:
Default role permissions are access to all actions for each type of model. Permissions need to be overridden to restrict access.
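As a sketch, the "Create role" example above in the dict form that import_role's config argument would receive. The module path is an assumption based on the lab7.&lt;module&gt;.api convention, so the call is shown commented out:

```python
# Hypothetical dict form of the "Create role" YAML example above.
role_config = {
    "name": "my-role",
    "desc": "My Role",
    "permissions": {
        "Project": ["create", "update"],
        "Sample": ["create", "read", "update", "delete"],
    },
}

# In an extension context (not executed here; path assumed from the
# lab7.<module>.api convention):
# import lab7.user.api as user_api
# user_api.import_role(role_config, session=session, agent=agent)

print(role_config["permissions"]["Sample"])
```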
- import_user(config, overwrite=False, session=None, agent=None)¶
Import configuration for user.
- Parameters:
config (dict) – Configuration definition.
Configuration:
Create user:
name: New User
email: user@localhost
password: password
Create user with roles:
name: New Role User
email: user-role@localhost
password: password
roles:
  - Lab Manager
  - Admin
- lab_name_exists(labname, session)¶
Returns True if the provided “name” is the name of an existing lab (workgroup).
- query_lab(filters=None, deep_copy=False, params=None, sort=None, session=None, agent=None)¶
Search for a set of labs that meet the criteria in filters.
- query_role(filters=None, sort=None, session=None, agent=None)¶
Search for a set of roles that meet the criteria in filters.
- query_user(filters=None, sort=None, session=None, agent=None, as_dict=True)¶
Search for a set of users that meet the criteria in filters.
- query_user_breadcrumb(user_uuid, session=None, **kwargs)¶
Query user breadcrumb information.
- Parameters:
user_uuid (str) – UUID for current user.
**kwargs (str) – Validated payload for endpoint.
- retrieve_all_users_for_role(role_name, session)¶
Retrieves all the users for a supplied role and returns them as a list
- retrieve_all_users_for_workgroup(workgroup_name, session)¶
Retrieves all the users for a given workgroup and returns them as a list.
- role_name_exists(name, session, include_labs=True)¶
Returns True if provided “name” is the name of an existing role
- update_lab(lab, name=None, members=None, session=None, agent=None, **values)¶
Update attributes for a lab. Note that adding or removing members is a separate function.
- update_role(role, name=None, members=None, permissions=None, applications=None, hidden_apps=None, default_application=None, session=None, agent=None, **values)¶
Update attributes for a role. Note that adding or removing members is a separate function.
- update_user(user, name=None, username=None, email=None, roles=None, labs=None, last_seen=None, password=None, password_expires=None, password_reset_required=None, can_create=None, session=None, agent=None, sso_only=None, **values)¶
Update the values on a user.
- update_user_prefs(user, prefs, session=None, agent=None)¶
Add or update the user’s preferences, which should be provided as a dict using the “prefs” argument. Returns the user’s complete set of preferences.
- NOTE: This function cannot be used to delete a preference; i.e., current preferences that are not keys in “prefs” are left unmodified.
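The documented merge semantics can be sketched in plain Python. This mirrors the behavior described above (add or update, never delete); it is not the actual implementation:

```python
# Sketch of update_user_prefs' documented merge semantics: keys in `prefs`
# are added or updated, and existing keys absent from `prefs` are untouched.
def merge_prefs(current: dict, prefs: dict) -> dict:
    """Return the merged preference set without mutating `current`."""
    merged = dict(current)
    merged.update(prefs)
    return merged

print(merge_prefs({"theme": "dark", "rows": 25}, {"rows": 50}))
# → {'theme': 'dark', 'rows': 50}
```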
- username_exists(username, session)¶
Returns True if the provided string is the username of a user already in the DB.
- verify_password(user, password, session=None, agent=None)¶
Return True if the supplied password for the user is correct