
Managing Analyses

This guide covers submitting, monitoring, and managing pipeline analyses using the CLI.

Tip

The iflow runs command still works as an alias for iflow analyses.

Prerequisites

  • CLI installed and authenticated (iflow login)
  • Default project selected (iflow config select-project)
  • Pipeline available to run (iflow pipelines list)

Submitting Analyses

Basic Analysis

iflow analyses submit --pipeline wdl-minimal --watch

The --watch flag monitors progress until completion.

Analysis with Parameters

Relative file paths are resolved against your project bucket:

iflow analyses submit --pipeline hereditary-mock \
  -P case_id=case-001 \
  -P child_fastq=data/R1.fastq.gz \
  -P child_fastq=data/R2.fastq.gz \
  --watch

Analysis with Specific Pipeline Version

iflow analyses submit --pipeline hereditary-mock -V 1.0.0 --watch

Analysis Linked to Sample

Link an analysis to a sample so it appears in the sample's Analyses tab:

# By sample display ID (resolved to UUID via miner-service)
iflow analyses submit --pipeline hereditary-mock \
  --sample-id NA12878 \
  -P subject_id=Patient-001 \
  -P snv_vcf_gz=samples/na12878/sample.vcf.gz \
  --watch

# By sample UUID (bypasses resolution)
iflow analyses submit --pipeline hereditary-mock \
  --sample-uuid 550e8400-e29b-41d4-a716-446655440000 \
  -P subject_id=Patient-001 \
  -P snv_vcf_gz=samples/na12878/sample.vcf.gz \
  --watch

You can also pass --subject-id explicitly, though linking to a sample is usually sufficient.
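If you do pass --subject-id explicitly, it takes the subject's UUID. A sketch combining it with a sample link (the UUID shown is a placeholder):

```shell
iflow analyses submit --pipeline hereditary-mock \
  --sample-id NA12878 \
  --subject-id 7c9e6679-7425-40de-944b-e07fc1f90ae7 \
  -P snv_vcf_gz=samples/na12878/sample.vcf.gz \
  --watch
```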

Analysis Associated with Order

Link analyses to clinical orders for traceability:

iflow analyses submit --pipeline hereditary-mock \
  --order-id ORDER_ID \
  -P case_id=patient-001 \
  -P child_fastq=data/R1.fastq.gz \
  -t clinical -t hereditary \
  --watch

Monitoring Analyses

List Analyses

# List all analyses
iflow analyses list

# List analyses for specific order
iflow analyses list --order-id ORDER_ID

# Limit results
iflow analyses list --limit 10

Output:

ID                                    NAME                           STATUS     PIPELINE           CREATED
--------------------------------------------------------------------------------------------------------
abc123-def456                         hereditary-mock-20260113       succeeded  hereditary-mock    2026-01-13T10:00:00Z
def456-ghi789                         wdl-minimal-20260113           running    wdl-minimal        2026-01-13T11:00:00Z

Check Analysis Status

iflow analyses status RUN_ID

Output:

Analysis: hereditary-mock-20260113-100000
  Analysis ID: hereditary-mock-20260113-100000
  Status: succeeded
  Pipeline ID: pipeline-id-...
  Order ID: order-id-...
  Profile: local
  Created: 2026-01-13T10:00:00Z
  Started: 2026-01-13T10:01:00Z
  Finished: 2026-01-13T10:05:00Z
  Output: gs://bucket/outputs/hereditary-mock-20260113-100000/
  Parameters:
    case_id: case-001
    child_fastq: ['gs://bucket/R1.fastq.gz', 'gs://bucket/R2.fastq.gz']
  Tags: clinical, hereditary

Watch Analysis Progress

iflow analyses watch RUN_ID

Output:

[10:00:15] Status: queued
[10:01:00] Status: running
           Started: 2026-01-13T10:00:55Z
[10:05:00] Status: succeeded
           Finished: 2026-01-13T10:04:50Z

Analysis completed successfully!
Output: gs://bucket/outputs/hereditary-mock-20260113-100000/

Analysis Statuses

Status Description
pending Analysis created, waiting to be queued
queued Queued on GCP Batch
running Pipeline executing
succeeded Completed successfully
failed Completed with errors
cancelled Cancelled by user
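The last three states (succeeded, failed, cancelled) are terminal, which is what a custom polling loop should stop on. A minimal sketch, with a stub standing in for the real `iflow analyses status` call:

```shell
# Stub in place of `iflow analyses status RUN_ID`; swap in the real command.
get_status() { echo "succeeded"; }

while :; do
  status=$(get_status)
  case "$status" in
    succeeded|failed|cancelled) break ;;  # terminal states from the table above
    *) sleep 5 ;;                         # pending, queued, running: keep polling
  esac
done
echo "final status: $status"
```

For interactive use, `iflow analyses watch` already does this for you.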

Cancelling Analyses

iflow analyses cancel RUN_ID

Prompts for confirmation before cancelling.


File Paths

Relative Paths

Paths without a gs:// prefix are resolved relative to your project bucket:

# These are equivalent:
-P child_fastq=data/R1.fastq.gz
-P child_fastq=gs://your-project-bucket/data/R1.fastq.gz
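The resolution rule can be expressed as a small helper (a sketch; `your-project-bucket` is a placeholder for your actual bucket name):

```shell
PROJECT_BUCKET="your-project-bucket"

# Prefix relative paths with the project bucket; pass gs:// URLs through unchanged.
resolve_path() {
  case "$1" in
    gs://*) echo "$1" ;;
    *)      echo "gs://${PROJECT_BUCKET}/$1" ;;
  esac
}

resolve_path data/R1.fastq.gz               # → gs://your-project-bucket/data/R1.fastq.gz
resolve_path gs://public-references/hg38.fa # → unchanged
```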

Absolute Paths

Use the gs:// prefix for files in other buckets:

-P reference=gs://public-references/hg38.fa

Tags

Add tags to organize and filter analyses:

iflow analyses submit --pipeline hereditary-mock \
  -P case_id=case-001 \
  -t production \
  -t hereditary \
  -t urgent

LIS Integration

For automated integrations, use callback URLs to receive notifications:

iflow analyses submit --pipeline hereditary-panel \
  -P vcf_file=data/sample.vcf.gz \
  --callback-url https://lis.example.com/api/callback

The callback URL receives:

  • On success: PDF report and JSON results
  • On failure: Error code and message

See LIS Integration for details.
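On the LIS side, the handler typically branches on the outcome before fetching results. A sketch of that dispatch; the payload shape and field names (`status`, `report_pdf`) are hypothetical, so check the actual callback schema in the LIS Integration guide:

```shell
# Hypothetical callback payload for illustration only.
payload='{"status":"succeeded","report_pdf":"gs://bucket/outputs/report.pdf"}'

# Extract the (assumed) status field and dispatch on it.
status=$(printf '%s' "$payload" | sed -n 's/.*"status":"\([^"]*\)".*/\1/p')
case "$status" in
  succeeded) action="fetch report" ;;
  failed)    action="log error" ;;
  *)         action="ignore" ;;
esac
echo "$action"
```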


Scripting

Get Last Analysis ID

Use analyses last for scripting workflows:

# Get last analysis ID
RUN_ID=$(iflow analyses last --id)

# Get last analysis name
RUN_NAME=$(iflow analyses last --name)

# Get output path of last succeeded analysis
OUTPUT=$(iflow analyses last --output --status-filter succeeded)

# Full workflow script
iflow analyses submit --pipeline wdl-minimal --watch
RUN_ID=$(iflow analyses last --id)
iflow analyses outputs $RUN_ID

Command Reference

analyses submit

iflow analyses submit --pipeline SLUG [OPTIONS]
Option Description
--pipeline Pipeline slug (required)
-V, --pipeline-version Pipeline version (default: latest)
-p, --project Project ID (uses default if not specified)
-o, --order-id Order ID to associate
--sample-id Sample display ID to link (resolved to UUID via miner-service)
--sample-uuid Sample UUID to link (bypasses display_id resolution)
--subject-id Subject UUID to link (stored in run properties)
-P, --param Parameter KEY=VALUE (repeatable)
-t, --tag Tag (repeatable)
--profile Override Nextflow profile
--callback-url URL for completion webhook
--watch Watch status after submission
--curl Output curl command

analyses list

iflow analyses list [OPTIONS]
Option Description
-p, --project Project ID
-o, --order-id Filter by order ID
--limit Max results (default: 20)

analyses status

iflow analyses status <RUN_ID>

analyses watch

iflow analyses watch <RUN_ID> [--interval SECONDS]

analyses last

iflow analyses last [OPTIONS]
Option Description
-p, --project Project ID
--id Output only analysis ID (for scripting)
--name Output only analysis name
--output Output only output path
--status-filter Filter by status (succeeded, failed, etc.)

analyses outputs

Get semantic output names and paths for a completed analysis:

iflow analyses outputs <RUN_ID>

Example with hereditary-mock pipeline:

iflow analyses outputs 74e9ff75-7d99-41d2-8881-7ba23f5c215a

Output:

NAME                 TYPE       PATH
--------------------------------------------------------------------------------
reports_pdf          File       gs://bucket/wdl-results/.../reports_pdf
top20_tsv            File       gs://bucket/wdl-results/.../top20_tsv
top20_vcf_gz         File       gs://bucket/wdl-results/.../top20_vcf_gz
annotated_vcf_gz     File       gs://bucket/wdl-results/.../annotated_vcf_gz
reports_docx         File       gs://bucket/wdl-results/.../reports_docx

Download by name:

# Download a specific output by semantic name
iflow analyses outputs $RUN_ID -d annotated_vcf_gz

# Download with custom output filename
iflow analyses outputs $RUN_ID -d top20_tsv -o results.tsv
Option Description
-d, --download Download specific output by name
-o, --output Output file path (with --download)
--curl Output curl command

analyses cancel

iflow analyses cancel <RUN_ID>

Next Steps