LIS Integration

This guide covers integrating your Laboratory Information System (LIS) with the iFlow platform for automated genomic analysis workflows.

Integration Architecture

sequenceDiagram
    participant LIS as LIS System
    participant iFlow as iFlow Platform
    participant GCS as Cloud Storage
    participant Pipeline as Analysis Pipeline

    LIS->>GCS: 1. Upload input files (VCF/FASTQ)
    LIS->>iFlow: 2. Submit run with callback_url
    iFlow->>Pipeline: 3. Execute pipeline
    Pipeline->>GCS: 4. Write results
    iFlow->>LIS: 5. Callback with signed URLs
    LIS->>GCS: 6. Download results

Data Submission

Patient and sample metadata is submitted to iFlow via REST API or CLI. The platform stores metadata in a flexible properties system and automatically generates pipeline input files.

Step 1: Create Subject (Patient)

curl -X POST https://miner.flow.labpgx.com/api/v1/subjects \
  -H "Authorization: Bearer $TOKEN" \
  -H "X-Project-ID: $PROJECT_ID" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "SUB-2026-001",
    "sex": "female",
    "date_of_birth": "1985-03-15",
    "external_id": "ACC-12345",
    "properties": {
      "first_name": "Jane",
      "surname": "Doe",
      "ordering_physician": "Dr. Brown"
    }
  }'
Or using the iFlow CLI:

iflow subjects create -n "SUB-2026-001" \
  -P sex=female \
  -P date_of_birth=1985-03-15 \
  -P external_id=ACC-12345 \
  -P first_name=Jane \
  -P surname=Doe

Step 2: Create Sample (Specimen)

curl -X POST https://miner.flow.labpgx.com/api/v1/samples \
  -H "Authorization: Bearer $TOKEN" \
  -H "X-Project-ID: $PROJECT_ID" \
  -H "Content-Type: application/json" \
  -d '{
    "subject_id": "SUBJECT_ID",
    "name": "SAM-2026-001",
    "sample_type": "somatic",
    "properties": {
      "specimen_type": "FFPE",
      "specimen_id": "SPEC-001",
      "collection_date": "2026-01-15",
      "sequencing_date": "2026-01-20"
    }
  }'
Or using the iFlow CLI:

iflow samples create --subject-id SUBJECT_ID -n "SAM-2026-001" \
  -P sample_type=somatic \
  -P specimen_type=FFPE \
  -P specimen_id=SPEC-001

Step 3: Create Order and Add Samples

# Create order
curl -X POST https://miner.flow.labpgx.com/api/v1/orders \
  -H "Authorization: Bearer $TOKEN" \
  -H "X-Project-ID: $PROJECT_ID" \
  -H "Content-Type: application/json" \
  -d '{"name": "Case-2026-001", "priority": "urgent"}'

# Add sample to order
curl -X POST https://miner.flow.labpgx.com/api/v1/orders/ORDER_ID/samples/SAMPLE_ID \
  -H "Authorization: Bearer $TOKEN" \
  -H "X-Project-ID: $PROJECT_ID"
Or using the iFlow CLI:

iflow orders create -n "Case-2026-001" --priority urgent
iflow orders add-sample ORDER_ID SAMPLE_ID
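The three API calls above (subject, sample, order) can be chained from LIS middleware. Below is a minimal standard-library Python sketch; the `id` field in the responses is an assumption about the response shape, so check it against your actual API output before relying on it:

```python
import json
import urllib.request

BASE = "https://miner.flow.labpgx.com/api/v1"

def build_request(path, token, project_id, payload=None):
    """Build an authenticated JSON POST request for the iFlow API."""
    return urllib.request.Request(
        BASE + path,
        data=json.dumps(payload or {}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "X-Project-ID": project_id,
            "Content-Type": "application/json",
        },
        method="POST",
    )

def post(path, token, project_id, payload=None):
    """Send the request and return the parsed JSON response."""
    req = build_request(path, token, project_id, payload)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Chained flow (commented out; requires live credentials, and assumes the
# responses expose the created record's ID as "id"):
# subject = post("/subjects", TOKEN, PROJECT, {"name": "SUB-2026-001", "sex": "female"})
# sample = post("/samples", TOKEN, PROJECT,
#               {"subject_id": subject["id"], "name": "SAM-2026-001"})
# order = post("/orders", TOKEN, PROJECT, {"name": "Case-2026-001", "priority": "urgent"})
# post(f"/orders/{order['id']}/samples/{sample['id']}", TOKEN, PROJECT)
```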

Step 4: Generate Pipeline Input

The platform generates pipeline-ready JSON from subject + sample metadata:

curl https://miner.flow.labpgx.com/api/v1/samples/SAMPLE_ID/sampleinfo \
  -H "Authorization: Bearer $TOKEN" \
  -H "X-Project-ID: $PROJECT_ID"
Or using the iFlow CLI:

iflow samples sampleinfo SAMPLE_ID
# Output: {"name": "Jane", "surname": "Doe", "sex": "female", ...}

For nf-core compatible samplesheets across all samples in an order:

iflow orders samplesheet ORDER_ID > samplesheet.csv
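On the LIS side, the generated samplesheet can be consumed with a plain CSV reader. The actual column names depend on the target nf-core pipeline; `sample` and `vcf` below are illustrative assumptions only:

```python
import csv
import io

def rows(samplesheet_text):
    """Parse samplesheet CSV text into a list of dicts keyed by header."""
    return list(csv.DictReader(io.StringIO(samplesheet_text)))

# Hypothetical two-column samplesheet for illustration:
example = "sample,vcf\nSAM-2026-001,gs://bucket/s1.vcf.gz\n"
for row in rows(example):
    print(row["sample"], row["vcf"])
```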

Property Promotion

Model-column keys sent inside properties (e.g., {"properties": {"sex": "female"}}) are automatically promoted to the correct database field. This simplifies LIS integration — you can send all metadata in properties without worrying about the schema.
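The promotion behavior can be pictured with a small local simulation. This is not the platform's implementation, just a sketch of the rule: keys that match model columns are lifted out of properties, everything else stays put (the column subset here is taken from the field table below):

```python
# Subset of Patient model columns, per the Supported Metadata Fields table.
SUBJECT_COLUMNS = {"name", "sex", "date_of_birth", "external_id"}

def promote(payload, columns=SUBJECT_COLUMNS):
    """Simulate property promotion: lift model-column keys from "properties"
    to top-level fields, leaving unknown keys inside "properties"."""
    props = dict(payload.get("properties", {}))
    out = {k: v for k, v in payload.items() if k != "properties"}
    for key in list(props):
        if key in columns and key not in out:
            out[key] = props.pop(key)
    out["properties"] = props
    return out
```

So `{"name": "SUB-2026-001", "properties": {"sex": "female", "surname": "Doe"}}` becomes a record with a top-level `sex` column while `surname` remains a property.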

Supported Metadata Fields

Patient: name, sex, date_of_birth, external_id, diagnosis, clinical_notes, first_name, surname, gender, hpo_terms, phenotype_description
Sample: name, sample_type, tissue, barcode, collection_date, notes, specimen_type, specimen_id, sequencing_date, sequencing_type, source
Order: name, priority, accession_number, indication, test_type, ordering_provider, ordering_facility

For high-volume laboratories processing 500+ samples per month, a batch import capability can be enabled, allowing multiple samples to be submitted in a single CSV or Excel file. All records are validated before import, with detailed per-record error reporting.

Callback Workflow

iFlow supports push notifications via webhooks when pipeline runs complete or fail. This eliminates the need for polling and enables real-time integration with your LIS.

Submitting a Run with Callback

iflow analyses submit --pipeline hereditary-panel \
    -P vcf_file=gs://bucket/samples/patient001.vcf.gz \
    -P panel=cardio \
    --callback-url https://your-lis.example.com/api/iflow/callback
Or via the REST API:

curl -X POST "https://compute.iflow.intelliseq.com/api/v1/runs" \
    -H "Authorization: Bearer $TOKEN" \
    -H "X-Project-ID: $PROJECT_ID" \
    -H "Content-Type: application/json" \
    -d '{
        "pipeline_id": "pipe-abc123",
        "params": {
            "vcf_file": "gs://bucket/samples/patient001.vcf.gz",
            "panel": "cardio"
        },
        "callback_url": "https://your-lis.example.com/api/iflow/callback"
    }'

Callback Payload - Success

When a pipeline completes successfully, iFlow sends a POST request to your callback URL:

{
    "run_id": "run-abc123",
    "run_name": "run-20240115-143022",
    "status": "succeeded",
    "project_id": "proj-xyz",
    "order_id": "order-456",
    "pipeline": {
        "id": "pipe-789",
        "name": "Hereditary Panel",
        "slug": "hereditary-panel"
    },
    "outputs": {
        "report_pdf": "https://storage.googleapis.com/bucket/results/report.pdf?X-Goog-Signature=...",
        "results_json": "https://storage.googleapis.com/bucket/results/data.json?X-Goog-Signature=..."
    },
    "timestamps": {
        "created_at": "2024-01-15T14:30:22Z",
        "started_at": "2024-01-15T14:31:00Z",
        "finished_at": "2024-01-15T14:45:33Z"
    },
    "callback_timestamp": "2024-01-15T14:45:35Z"
}

Signed URLs

Output URLs are pre-signed and valid for 1 hour. Download files immediately upon receiving the callback.
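Since the URLs expire, a common pattern is to download each output as soon as the callback arrives, ideally from a background worker so the callback response itself stays fast. A minimal standard-library sketch:

```python
import urllib.request
from pathlib import Path

def download_output(url, dest):
    """Fetch a signed URL to a local file; returns the number of bytes written.

    Signed URLs are time-limited (1 hour here), so call this promptly after
    receiving the callback rather than queuing the URL for later.
    """
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    Path(dest).write_bytes(data)
    return len(data)
```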

Callback Payload - Failure

When a pipeline fails, the callback includes error details:

{
    "run_id": "run-abc123",
    "run_name": "run-20240115-143022",
    "status": "failed",
    "project_id": "proj-xyz",
    "order_id": "order-456",
    "pipeline": {
        "id": "pipe-789",
        "name": "Hereditary Panel",
        "slug": "hereditary-panel"
    },
    "error": {
        "code": "E001",
        "message": "Pipeline failed: Input VCF file not found"
    },
    "timestamps": {
        "created_at": "2024-01-15T14:30:22Z",
        "started_at": "2024-01-15T14:31:00Z",
        "finished_at": "2024-01-15T14:32:15Z"
    },
    "callback_timestamp": "2024-01-15T14:32:17Z"
}

Error Codes

E001: Input file not found or inaccessible
E002: Invalid input file format
E003: Pipeline execution error
E004: Insufficient resources (memory/disk)
E005: Timeout exceeded
E006: Authentication/permission error
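An LIS typically maps these codes to a next action. The transient/permanent split below is an assumption on our part, not platform guidance; adjust it to your laboratory's policy:

```python
# Assumed classification: resource and timeout errors may succeed on
# resubmission; input, pipeline, and auth errors need human review.
TRANSIENT = {"E004", "E005"}
PERMANENT = {"E001", "E002", "E003", "E006"}

def triage(error):
    """Return the LIS action for an iFlow callback error object."""
    code = error.get("code", "")
    if code in TRANSIENT:
        return "resubmit"
    # Permanent and unknown codes: be conservative and escalate.
    return "flag_for_review"
```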

Callback Response Requirements

Your endpoint must respond within 30 seconds:

2xx: Success, no retry
4xx: Client error, no retry
5xx: Server error, retried up to 3 times

Retries are scheduled 1 minute, 5 minutes, and 15 minutes after the failed attempt (increasing backoff).
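Because retries mean the same callback can be delivered more than once, the receiver should be idempotent: deduplicate on run_id and acknowledge quickly (within the 30-second limit), deferring slow work. A sketch of that pattern:

```python
# Idempotent callback handling: a retried delivery carries the same run_id,
# so record processed IDs and acknowledge duplicates without reprocessing.
# (In production this set would live in a database, not process memory.)
processed = set()

def handle_callback(payload):
    """Return the HTTP status to send back to iFlow for this delivery."""
    run_id = payload["run_id"]
    if run_id in processed:
        return 200  # duplicate delivery: acknowledge, do nothing
    processed.add(run_id)
    # enqueue_for_processing(payload)  # hypothetical async hand-off
    return 200
```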

Testing Your Callback Endpoint

Use the echo endpoint to verify connectivity:

curl -X POST "https://compute.iflow.intelliseq.com/api/v1/test/echo" \
    -H "Content-Type: application/json" \
    -d '{"test": "data"}'

This returns your request payload, confirming the connection works.

Example: GCP Cloud Function Receiver

For testing, you can deploy a simple Cloud Function to receive callbacks:

import functions_framework
from flask import Request

@functions_framework.http
def receive_callback(request: Request):
    payload = request.get_json()

    run_id = payload.get("run_id")
    status = payload.get("status")

    if status == "succeeded":
        # Download files from signed URLs
        outputs = payload.get("outputs", {})
        for name, url in outputs.items():
            print(f"Downloading {name} from {url}")
            # Your download logic here

    elif status == "failed":
        error = payload.get("error", {})
        print(f"Run failed: {error.get('message')}")
        # Your error handling here

    return {"status": "received", "run_id": run_id}, 200

Deploy with:

gcloud functions deploy iflow-callback-receiver \
    --gen2 --runtime python311 \
    --trigger-http --allow-unauthenticated \
    --entry-point receive_callback \
    --region us-central1

Security Recommendations

  1. Use HTTPS - All callback URLs must use HTTPS
  2. Validate Source - Check the X-iFlow-Signature header (coming soon)
  3. Authenticate Downloads - Signed URLs are time-limited; download immediately
  4. Network Security - Consider IP whitelisting or VPN for production