Workflows
Workflows are Flo’s durable orchestration layer. They compose Actions into multi-step business processes defined in YAML, with built-in resilience: retries, circuit breakers, health-weighted routing, signal handling, timeouts, polling, cron scheduling, stream triggers, and idempotency.
A workflow definition is a directed graph of steps. Each step either runs a target (an action, an inline plan, or a child workflow) or waits for an external signal. Every step declares transitions that map outcomes to the next step or to a terminal state. The engine walks the graph from the start step until it reaches a terminal.
```
Client ──start──▸ validate ──success──▸ charge ──success──▸ ship ──success──▸ flo.Completed
                      │                     │                   │
                   failure               failure             failure
                      ▼                     ▼                   ▼
                  flo.Failed          PaymentFailed         flo.Failed
```
Why Workflows?
| Concern | Without Flo Workflows | With Flo Workflows |
|---|---|---|
| State | Your app manages a state machine in an external DB, guarding against crashes | Flo persists every state transition to the UAL. Recovery replays the log. |
| Retries | Hand-written retry loops with ad-hoc backoff | Declarative `retry:` block with exponential, linear, constant, or jittered backoff |
| Failover | If the orchestrating process dies, work stalls | Another node picks up from the last committed state |
| Routing | Manual circuit breaking, health scoring, fallback chains | Inline plans with health-weighted selection, circuit breakers, and fallback values |
| Human-in-the-loop | Polling, webhooks, custom queues | `waitForSignal` with typed signals, timeouts, and approval flows |
| Scheduling | External cron daemon | Embedded `schedule:` block with 5-field cron or interval |
| Event-driven | Wiring consumer groups to trigger logic | `trigger:` block listens on a stream and starts runs per event |
Quick Start
1. Define the workflow in a YAML file:
```yaml
kind: Workflow
name: process-order
version: "1.0.0"
idempotency: required

start:
  run: "@actions/validate-order"
  transitions:
    success: charge
    failure: flo.Failed

steps:
  charge:
    run: "@actions/charge-payment"
    retry:
      max_attempts: 3
      backoff: exponential
      initial_delay_ms: 1000
      max_delay_ms: 30000
    transitions:
      success: ship
      failure: flo.Failed

  ship:
    run: "@actions/create-shipment"
    transitions:
      success: flo.Completed
      failure: flo.Failed
```
2. Register the actions the workflow calls:
```sh
flo action register validate-order --wasm ./validate.wasm
flo action register charge-payment --wasm ./charge.wasm
flo action register create-shipment --wasm ./ship.wasm
```
3. Deploy the workflow:
```sh
flo workflow create -f process-order.yaml
```
4. Start a run:
```sh
flo workflow start process-order '{"order_id": "ORD-123", "amount": 99.99}'
# → wfrun-1
```
5. Check status:
```sh
flo workflow status wfrun-1
# → {"run_id":"wfrun-1","workflow":"process-order","version":"1.0.0","status":"completed",...}
```
Core Concepts
Workflow Definition
Every workflow YAML file has this top-level structure:
| Field | Required | Description |
|---|---|---|
| `kind` | yes | Must be `Workflow` |
| `name` | yes | Unique workflow name |
| `version` | yes | Arbitrary version string (clients can request specific versions) |
| `idempotency` | no | `none` (default), `optional`, or `required` |
| `start` | yes | The entry-point step |
| `steps` | no | Named steps (map of step name → step definition) |
| `terminals` | no | Custom terminal states (map of name → `{status: ...}`) |
| `plans` | no | Inline plan definitions (map of plan name → plan config) |
| `schedule` | no | Cron or interval schedule for automatic runs |
| `trigger` | no | Stream trigger — starts a run per event |
| `searchAttributes` | no | Custom queryable fields extracted from input |
A step is the smallest unit of work. There are two kinds:
Run Step
Executes a target and transitions based on the outcome:
```yaml
charge:
  run: "@actions/charge-payment"                 # target
  inputMapping: '{"amount": "$.input.amount"}'   # optional JSONPath transform
  retry:                                         # optional retry policy
    max_attempts: 3
    backoff: exponential
  poll:                                          # optional pending-outcome polling
    maxAttempts: 10
    backoff: exponential
  transitions:
    success: ship
    failure: flo.Failed
    pending: check_status          # only if poll is configured
    target_not_found: flo.Failed   # execution-level outcome
```
Targets use a prefix to indicate what to invoke:
| Prefix | Invokes | Example |
|---|---|---|
| `@actions/` | A registered action (WASM or user-hosted) | `@actions/charge-stripe` |
| `@plan/` | An inline plan defined in the same workflow | `@plan/payment` |
| `@workflow/` | A child workflow (starts a nested run) | `@workflow/fulfillment` |
Wait For Signal Step
Pauses the workflow until an external signal arrives or a timeout expires:
```yaml
await_approval:
  waitForSignal:
    type: approval          # signal type to match
    timeoutMs: 3600000      # 1 hour timeout (optional)
    onTimeout: flo.Failed   # transition on timeout (optional)
  transitions:
    success: fulfill        # after signal received
```
When a signal of the matching type is delivered, the engine follows the `success` transition. If `timeoutMs` is configured and the timeout expires before a signal arrives, the engine either follows `onTimeout` (if set) or transitions the run to `timed_out`.
Step Outcomes
Each step produces an outcome string that maps to a transition target.
Business outcomes (from your action’s return value):
| Outcome | Meaning |
|---|---|
| `success` | Action completed successfully |
| `failure` | Action reported a failure |
| `timeout` | Action timed out |
| `pending` | Action is still running (async/polling) |
Execution-level outcomes (from the runtime, before your logic runs):
| Outcome | Meaning |
|---|---|
| `target_not_found` | The action or plan doesn’t exist |
| `target_disabled` | The action is disabled |
| `execution_failure` | Internal error during dispatch |
Execution-level outcomes cascade through a fallback chain: the engine tries `target_not_found` → `execution_failure` → `failure` in order, using the first transition that matches. Business outcomes match exactly — no fallback.
Transitions
Transitions map outcomes to the next step or to a terminal:
```yaml
transitions:
  success: next_step       # go to named step
  failure: flo.Failed      # go to built-in terminal
  timeout: PaymentFailed   # go to custom terminal
```
Terminal States
Terminals end the workflow run. Four are built-in:
| Terminal | Status | Description |
|---|---|---|
| `flo.Completed` | `completed` | Workflow succeeded |
| `flo.Failed` | `failed` | Workflow failed |
| `flo.Cancelled` | `cancelled` | Cancelled by user |
| `flo.TimedOut` | `timed_out` | Signal timeout |
You can define custom terminals that map to a base status:
```yaml
terminals:
  PaymentFailed:
    status: failed
  FraudDetected:
    status: failed
  OrderCompleted:
    status: completed
```
Custom terminals appear in history events, giving you finer-grained tracking than the four base statuses.
Run Lifecycle
A workflow run passes through these states:
```
pending ──▸ running ⇄ waiting
               │
               ▼
completed | failed | cancelled | timed_out
```
| Status | Description |
|---|---|
| `pending` | Created but not yet started |
| `running` | Actively executing steps |
| `waiting` | Blocked on a signal, async action, or poll timer |
| `completed` | Reached a terminal with `completed` status |
| `failed` | Reached a terminal with `failed` status |
| `cancelled` | Cancelled by a user via `flo workflow cancel` |
| `timed_out` | Signal wait timed out with no timeout target |
Retries
Any run step can have a `retry:` block:
```yaml
start:
  run: "@actions/flaky-service"
  retry:
    max_attempts: 5               # total attempts including first
    backoff: exponential_jitter   # constant | linear | exponential | exponential_jitter
    initial_delay_ms: 500         # delay before first retry
    max_delay_ms: 30000           # cap on backoff delay
    within_ms: 120000             # total time budget (optional)
  transitions:
    success: flo.Completed
    failure: flo.Failed
```
When the step outcome is `failure` or `execution_failure` and retries remain, the engine re-executes the same step after the computed backoff delay (the retry counter increments). Retries reset when the step transitions to a different step.
Backoff Strategies
Section titled “Backoff Strategies”| Strategy | Delay formula |
|---|---|
| `constant` | `initial_delay_ms` always |
| `linear` | `initial_delay_ms × (attempt + 1)` |
| `exponential` | `initial_delay_ms × 2^attempt`, capped at `max_delay_ms` |
| `exponential_jitter` | Exponential + up to 25% random jitter |
Inline Plans
Plans let you define multiple executors for the same logical step — a failover chain with smart routing. Plans are defined inline in the workflow YAML under the `plans:` key.
```yaml
plans:
  payment:
    selection: health-weighted

    executors:
      - name: stripe-primary
        action: "@actions/charge-stripe"
        priority: 100
        retry:
          max_attempts: 3
          backoff: exponential
          initial_delay_ms: 1000
          max_delay_ms: 30000
        breaker:
          failure_threshold: 5
          cooldown_ms: 30000
          half_open_max_calls: 3
        rate_limit:
          max_per_second: 100
          max_per_minute: 5000
        tracking:
          mode: async
          timeout_ms: 300000

      - name: adyen-fallback
        action: "@actions/charge-adyen"
        priority: 50
        retry:
          max_attempts: 2
          backoff: exponential_jitter
          initial_delay_ms: 500
          max_delay_ms: 10000

    health:
      window_ms: 300000
      decay: 0.9
      min_samples: 10

    cache:
      ttl_ms: 3600000
      key: "charge:{input.customer_id}:{input.idempotency_key}"
      invalidate_on: ["payment.refunded", "customer.deleted"]

    fallback:
      value: '{"status": "declined", "reason": "all_providers_unavailable"}'
      condition: exhausted

    errors:
      retryable: ["timeout", "rate_limited", "temporary_failure"]
      fatal: ["invalid_card", "fraud_detected", "insufficient_funds"]
```
Reference the plan from a step:
```yaml
start:
  run: "@plan/payment"
  transitions:
    success: notify
    failure: flo.Failed
```
Selection Strategies
| Strategy | Behavior |
|---|---|
| `static-order` | Always try executors in priority order (highest first) |
| `round-robin` | Rotate the starting executor across requests |
| `random` | Random selection for even load distribution |
| `health-weighted` | Prefer executors with higher health scores; requires `health:` config |
Circuit Breakers
Each executor can have an independent circuit breaker:
```yaml
breaker:
  failure_threshold: 5     # consecutive failures before opening
  cooldown_ms: 30000       # how long to stay open before half-open
  half_open_max_calls: 3   # probe requests allowed in half-open state
```
States:
- **Closed** — normal operation; requests flow through
- **Open** — executor is skipped (after `failure_threshold` consecutive failures)
- **Half-open** — after `cooldown_ms`, allows `half_open_max_calls` probe requests. If probes succeed → closed. If probes fail → open again.
```
closed ──(N consecutive failures)──▸ open ──(cooldown expires)──▸ half_open
   ▲                                   ▲                             │
   │                                   └────────(probe fails)────────┤
   └───────────────────(probe succeeds)──────────────────────────────┘
```
When a circuit breaker is open, the engine emits a `plan_breaker_skip` history event and moves to the next executor without attempting the action. This means a failing executor is taken out of rotation within seconds rather than consuming retries on every request.
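The state machine above is small enough to sketch in full. A minimal, illustrative model — the class and its `allow`/`record` API are assumptions, not Flo's internals; the constructor fields mirror the YAML keys:

```python
import time

class CircuitBreaker:
    """Illustrative closed/open/half-open cycle (not Flo's implementation)."""

    def __init__(self, failure_threshold=5, cooldown_ms=30000,
                 half_open_max_calls=3, clock=time.monotonic):
        self.failure_threshold = failure_threshold
        self.cooldown_s = cooldown_ms / 1000.0
        self.half_open_max_calls = half_open_max_calls
        self.clock = clock
        self.state = "closed"
        self.failures = 0
        self.opened_at = 0.0
        self.probes = 0

    def allow(self) -> bool:
        """Should the plan attempt this executor right now?"""
        if self.state == "open" and self.clock() - self.opened_at >= self.cooldown_s:
            self.state, self.probes = "half_open", 0   # cooldown expired
        if self.state == "closed":
            return True
        if self.state == "half_open" and self.probes < self.half_open_max_calls:
            self.probes += 1                           # spend one probe slot
            return True
        return False  # open (or probe budget spent): skip this executor

    def record(self, success: bool) -> None:
        if success:
            self.state, self.failures = "closed", 0    # a successful probe closes
        else:
            self.failures += 1
            if self.state == "half_open" or self.failures >= self.failure_threshold:
                self.state, self.opened_at = "open", self.clock()
```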
Rate Limiting
Section titled “Rate Limiting”rate_limit: max_per_second: 100 max_per_minute: 5000 max_per_hour: 50000 # all three are optionalAsync Tracking
Some executors (e.g., user-hosted HTTP actions) report their outcome asynchronously via webhook:
```yaml
tracking:
  mode: async          # sync (default) | async
  timeout_ms: 300000   # timeout for async outcome
```
When `mode: async`, the workflow parks in `waiting` until the action result arrives or the timeout expires.
Health Tracking
When `selection: health-weighted`, the engine tracks per-executor success rates and uses them to prefer healthier executors:
```yaml
health:
  window_ms: 300000   # rolling window for health calculation
  decay: 0.9          # exponential decay factor (0–1]
  min_samples: 10     # minimum samples before health-weighted kicks in
```
Until `min_samples` is reached, the engine falls back to priority-based ordering.
Health Score (0.0–1.0) is computed as:
```
score = (api_success_rate × 0.7) + (business_success_rate × 0.3)
```
With circuit breaker penalties:
- Open breaker → score × 0.1
- Half-open breaker → score × 0.5
At each plan invocation, executors are sorted by descending health score and tried in that order. A healthy executor with no failures scores 1.0; an executor with an open breaker scores ≤ 0.1.
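Combining the formula and the penalties, the selection order can be sketched as follows. `health_score` and `try_order` are illustrative helpers, not Flo APIs:

```python
def health_score(api_success_rate: float, business_success_rate: float,
                 breaker_state: str = "closed") -> float:
    """Score per the formula above, with circuit-breaker penalties applied."""
    score = api_success_rate * 0.7 + business_success_rate * 0.3
    if breaker_state == "open":
        score *= 0.1
    elif breaker_state == "half_open":
        score *= 0.5
    return score

def try_order(executors):
    """Sort (name, api_rate, business_rate, breaker_state) tuples by
    descending health score — the order executors would be attempted in."""
    return sorted(executors, key=lambda e: health_score(*e[1:]), reverse=True)
```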
Runtime Behavior
Health state is in-memory only — it resets to defaults on server restart. This is intentional:
- **Fast convergence** — with a `failure_threshold` of 5, the engine rediscovers a broken executor within a handful of requests.
- **No stale data** — an executor that was down before restart may have recovered. Starting from a clean slate avoids unnecessary penalization.
- **Restart = recovery** — every executor gets a fresh chance. This matches how circuit breaker libraries like Hystrix, resilience4j, and Polly behave.
Health metrics are tracked per executor and updated on every action completion (synchronous or async). The engine emits history events for observability:
| Event | Meaning |
|---|---|
| `plan_executor_start` | Beginning an attempt on this executor |
| `plan_executor_success` | Executor returned success |
| `plan_executor_retry` | Retrying the same executor |
| `plan_executor_exhausted` | Executor’s retries exhausted, moving to next |
| `plan_breaker_skip` | Skipped executor because circuit breaker is open |
Result Caching
Section titled “Result Caching”cache: ttl_ms: 3600000 # cache TTL key: "charge:{input.customer_id}:{input.idempotency_key}" invalidate_on: - payment.refunded - customer.deletedFallback Values
When all executors fail, the plan can return a static fallback:
```yaml
fallback:
  value: '{"status": "declined", "reason": "all_providers_unavailable"}'
  condition: exhausted   # exhausted (all executors failed) | any_error
```
Error Classification
Classify error codes to control retry behavior:
```yaml
errors:
  retryable:
    - timeout
    - rate_limited
    - temporary_failure
  fatal:
    - invalid_card
    - fraud_detected
```
Retryable errors trigger the executor’s retry policy. Fatal errors skip retries and immediately try the next executor (or return `failure`).
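How classification interacts with the retry loop can be sketched as a per-executor attempt loop. This is illustrative; `run_executor`, its return values, and treating unclassified codes as non-retryable are all assumptions:

```python
def run_executor(invoke, max_attempts: int, retryable: set, fatal: set) -> str:
    """Attempt loop for one executor. Retryable errors consume attempts;
    fatal errors abort immediately so the plan can move to the next executor."""
    for _ in range(max_attempts):
        outcome, error_code = invoke()  # e.g. ("failure", "timeout")
        if outcome == "success":
            return "success"
        if error_code in fatal:
            return "fatal"       # skip remaining retries
        if error_code not in retryable:
            return "failure"     # unclassified code: assumed non-retryable
    return "exhausted"           # plan moves on to the next executor
```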
Signals
Signals are typed external events delivered to a running workflow. They’re used for human-in-the-loop approvals, webhooks, callbacks, or any scenario where the workflow must wait for an asynchronous external event.
Defining a Signal Wait
Section titled “Defining a Signal Wait”steps: await_approval: waitForSignal: type: manager_approval # signal type to match timeoutMs: 86400000 # 24 hour timeout onTimeout: OrderRejected # step or terminal on timeout transitions: success: fulfill_order # after signal receivedSending a Signal
Section titled “Sending a Signal”flo workflow signal <run-id> --type manager_approval '{"decision": "approved"}'When the signal arrives:
- It’s stored in the run’s signal history
- If the run is `waiting` for this signal type, the engine resumes and follows the `success` transition
- If the run is not waiting, the signal is stored for future matching
Signal Timeout Behavior
If `timeoutMs` is set and the timeout expires:
- If `onTimeout` is set → transition to that step or terminal
- If `onTimeout` is not set → the run transitions to `timed_out` status
The engine periodically checks waiting runs for timeouts and resumes them automatically.
Polling
When an action returns a `pending` outcome (e.g., a payment that’s still processing), you can configure the step to poll with backoff until a terminal outcome arrives:
```yaml
steps:
  check_status:
    run: "@actions/check-payment-status"
    poll:
      initialDelayMs: 2000   # wait before first poll
      maxAttempts: 30        # max poll attempts
      backoff: exponential   # constant | linear | exponential | exponential_jitter
      baseDelayMs: 2000      # base delay between polls
      maxDelayMs: 30000      # cap on poll delay
    transitions:
      success: notify
      failure: payment_failed
      timeout: payment_timeout   # max attempts exceeded
```
Each poll re-executes the action. If the outcome is still `pending`, the engine parks the run and schedules the next poll. If the outcome becomes `success` or `failure`, the engine follows the corresponding transition.
JSONPath Data Flow
Steps can transform their input using JSONPath expressions. This lets you wire data from the workflow input or from previous step outputs into the current step’s input.
Input Mapping
Section titled “Input Mapping”steps: enrich: run: "@actions/enrich-customer" inputMapping: '{"customer_id": "$.input.customer_id", "domain": "$.input.company_domain"}' transitions: success: charge failure: flo.Failed
charge: run: "@actions/charge-payment" inputMapping: '{"email": "$.steps.enrich.output.email", "amount": "$.input.amount"}' transitions: success: flo.Completed failure: flo.FailedAvailable Paths
| Path Pattern | Resolves To |
|---|---|
| `$.input` | The workflow run’s input JSON |
| `$.input.field.subfield` | A nested field from the input |
| `$.steps.{name}.output` | The full output of a previously completed step |
| `$.steps.{name}.output.field` | A nested field from a step’s output |
| `$.steps.{name}.outcome` | The outcome string of a step (`success`, `failure`, etc.) |
| `$.flo.run_id` | The current workflow run ID |
| `$.flo.timestamp` | Current epoch timestamp in milliseconds |
String values in the input mapping that start with `$.` are treated as path references and resolved at runtime. Non-path values are passed through as-is.
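That resolution rule is easy to model. A simplified sketch (dotted-path lookup only — full JSONPath supports far more; the function names and context shape are illustrative):

```python
def resolve_mapping(mapping: dict, ctx: dict) -> dict:
    """Resolve an inputMapping: string values starting with '$.' are path
    references into the context; anything else passes through unchanged."""
    def lookup(path: str):
        node = ctx
        for part in path.removeprefix("$.").split("."):
            node = node[part]  # descend one key at a time
        return node
    return {k: lookup(v) if isinstance(v, str) and v.startswith("$.") else v
            for k, v in mapping.items()}
```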
Idempotency
Workflows support idempotency keys to prevent duplicate processing:
```yaml
kind: Workflow
name: process-order
version: "1.0.0"
idempotency: required   # none | optional | required
```
| Mode | Behavior |
|---|---|
| `none` | No idempotency checking (default) |
| `optional` | Idempotency key accepted but not required |
| `required` | Every workflow start must include an idempotency key |
When an idempotency key is provided and a run with the same key already exists for this workflow, the existing run ID is returned instead of creating a new run.
```sh
flo workflow start process-order '{"order_id":"ORD-123"}' --idempotency-key order-123
# → wfrun-1

flo workflow start process-order '{"order_id":"ORD-123"}' --idempotency-key order-123
# → wfrun-1 (same run, no duplicate)
```
Scheduling
Workflows can run on a recurring schedule using cron expressions or fixed intervals:
Cron Schedule
Section titled “Cron Schedule”kind: Workflowname: reconcile-accountsversion: "1.0.0"
schedule: cron: "0 */6 * * *" # every 6 hours max_concurrent: 1 # at most 1 run at a time input: '{"mode": "full"}' # input for scheduled runs
start: run: "@actions/reconcile" transitions: success: generate_report failure: flo.Failed
steps: generate_report: run: "@actions/reconcile-report" transitions: success: flo.Completed failure: flo.FailedInterval Schedule
Section titled “Interval Schedule”schedule: interval: 30000 # every 30 seconds max_concurrent: 1 input: '{"mode": "incremental"}'Cron Expression Syntax
Standard 5-field cron: `minute hour day-of-month month day-of-week`
| Field | Range | Special Characters |
|---|---|---|
| Minute | 0–59 | * , - / |
| Hour | 0–23 | * , - / |
| Day of month | 1–31 | * , - / |
| Month | 1–12 | * , - / |
| Day of week | 0–7 (0 and 7 = Sunday) | * , - / |
Examples:
| Expression | Meaning |
|---|---|
| `*/5 * * * *` | Every 5 minutes |
| `0 */6 * * *` | Every 6 hours |
| `0 9 * * 1-5` | 9 AM weekdays |
| `0 0 1 * *` | Midnight on the 1st of each month |
| `30 2 * * 0` | 2:30 AM every Sunday |
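A matcher for this 5-field syntax fits in a few lines. A simplified sketch — it uses AND semantics across all five fields, whereas classic cron implementations OR day-of-month with day-of-week when both are restricted; function names are illustrative:

```python
def expand(field: str, lo: int, hi: int) -> set:
    """Expand one cron field (* , - /) into the set of matching values."""
    out = set()
    for part in field.split(","):
        step = 1
        if "/" in part:
            part, step_str = part.split("/")
            step = int(step_str)
        if part == "*":
            start, end = lo, hi
        elif "-" in part:
            a, b = part.split("-")
            start, end = int(a), int(b)
        else:
            start = end = int(part)
        out.update(range(start, end + 1, step))
    return out

def cron_matches(expr: str, minute: int, hour: int, dom: int, month: int, dow: int) -> bool:
    m, h, d, mo, w = expr.split()
    return (minute in expand(m, 0, 59) and hour in expand(h, 0, 23)
            and dom in expand(d, 1, 31) and month in expand(mo, 1, 12)
            and dow % 7 in {x % 7 for x in expand(w, 0, 7)})  # 0 and 7 = Sunday
```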
Schedule Options
| Field | Default | Description |
|---|---|---|
| `cron` | — | Cron expression (mutually exclusive with `interval`) |
| `interval` | — | Interval in milliseconds (mutually exclusive with `cron`) |
| `max_concurrent` | 1 | Maximum concurrent runs from this schedule |
| `input` | `"{}"` | Input JSON override for scheduled runs |
| `paused` | `false` | Whether the schedule starts paused |
Disabling a workflow (`flo workflow disable`) pauses the schedule. Re-enabling resumes it.
Stream Triggers
Stream triggers start a workflow run for each event on a Flo stream. This turns a workflow into an event-driven processor.
```yaml
kind: Workflow
name: order-processor
version: "1.0.0"

trigger:
  stream: orders              # source stream name
  namespace: prod             # source namespace (optional)
  consumer_group: wf-orders   # consumer group (optional, auto-generated)
  mode: shared                # shared | exclusive | key_shared
  batch_size: 1               # events per run (1 = single, >1 = array)

start:
  run: "@actions/process-order"
  transitions:
    success: flo.Completed
    failure: flo.Failed
```
The event payload becomes `$.input` inside the workflow and is accessible via JSONPath.
| Field | Default | Description |
|---|---|---|
| `stream` | — | Source stream name (required) |
| `namespace` | workflow’s namespace | Source stream namespace |
| `consumer_group` | `wf-{workflow_name}` | Consumer group name |
| `mode` | `shared` | `shared` (competing consumers), `exclusive` (single consumer), `key_shared` (partition-key affinity) |
| `batch_size` | 1 | Events per workflow run |
Search Attributes
Search attributes let you extract queryable fields from workflow input for filtering and discovery:
```yaml
searchAttributes:
  - name: customer_id
    type: string
    from: input.customer_id
  - name: order_amount
    type: number
    from: input.amount
  - name: created_at
    type: timestamp
    from: input.timestamp
```
| Type | Description |
|---|---|
| `string` | Text value |
| `number` | Numeric value |
| `timestamp` | Epoch timestamp |
Child Workflows
A step can start a child workflow using the `@workflow/` prefix:
```yaml
start:
  run: "@workflow/payment-flow"
  transitions:
    success: flo.Completed
    failure: flo.Failed
```
The parent workflow waits for the child to reach a terminal state, then follows the corresponding transition. The child’s completion maps to `success`; the child’s failure maps to `failure`.
You can also specify a version:
```yaml
start:
  run: "@workflow/payment-flow:2.0.0"
  transitions:
    success: flo.Completed
    failure: flo.Failed
```
Disable / Enable
Workflows can be disabled at runtime to prevent new runs from starting:
```sh
flo workflow disable reconcile-accounts
```
When disabled:
- The cron schedule is paused
- Manual starts via `flo workflow start` are blocked
- Existing running instances are not affected
To re-enable:
```sh
flo workflow enable reconcile-accounts
```
Validation
Workflow definitions are validated on create. The validator checks:
| Category | Checks |
|---|---|
| Structure | `kind`, `name`, `version`, and `start` are present |
| References | All `@actions/*`, `@plan/*`, and `@workflow/*` targets are well-formed |
| Transitions | Every transition target is a valid step name, custom terminal, or built-in terminal |
| Reachability | All steps are reachable from `start` (warnings for unreachable steps) |
| Duplicates | No duplicate step names, terminal names, or executor names within a plan |
| Plans | Each plan has at least one executor; executor configs are valid |
| Health | `decay` in (0, 1], `min_samples` > 0, `window_ms` > 0 |
| Signals | `waitForSignal` steps warn if no timeout is configured |
Validation errors are reported with error codes (E1xx–E5xx) and human-readable messages.
Execution Model
How Steps Execute
1. The engine starts at the `start` step
2. For `run` steps:
   - If `inputMapping` is set, resolve JSONPath references against `$.input` and `$.steps.*`
   - Invoke the target (action, plan, or child workflow)
   - WASM actions complete synchronously; user-hosted actions may complete asynchronously
   - If the action returns `pending` and `poll:` is configured, schedule the next poll
   - On outcome, check retries (if `failure` and retries remain, re-execute)
   - Follow the matching transition to the next step or terminal
3. For `waitForSignal` steps:
   - Check if a matching signal was already received before parking
   - If yes, follow the `success` transition immediately
   - If no, set status to `waiting` and record the timeout deadline
4. Repeat until a terminal state is reached or `MAX_ADVANCE_STEPS` (256) is hit
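The advance loop can be condensed into a sketch. This is illustrative only — `advance`, the step representation, and the return values are assumptions; the real engine persists state and parks runs rather than looping in memory:

```python
MAX_ADVANCE_STEPS = 256  # engine safety cap mentioned above

def advance(steps: dict, start: str = "start",
            terminals=("flo.Completed", "flo.Failed",
                       "flo.Cancelled", "flo.TimedOut")) -> str:
    """Walk the step graph until a terminal is reached.
    steps[name] is a pair: (execute() -> outcome string, transitions dict)."""
    current = start
    for _ in range(MAX_ADVANCE_STEPS):
        execute, transitions = steps[current]
        outcome = execute()
        nxt = transitions.get(outcome)
        if nxt is None or nxt in terminals:
            return nxt  # terminal name (or None for an unmapped outcome)
        current = nxt
    raise RuntimeError("MAX_ADVANCE_STEPS exceeded")
```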
Async Action Handling
When a user-hosted action doesn’t complete immediately, the workflow parks:
- Run status → `waiting`
- The action run ID and step name are stored
- The engine periodically checks (`checkPendingActions`) for completed action results
- When the action completes, the engine resumes the workflow from that step and continues advancing
History Events
Every significant state change is recorded as a history event:
| Event Type | Detail |
|---|---|
| `workflow_started` | Input JSON |
| `step_started` | Step name |
| `step_completed` | Step name |
| `step_retry` | Step name |
| `waiting_for_signal` | Signal type |
| `signal_received` | Signal type |
| `signal_matched` | Signal type |
| `signal_timeout` | Timeout target |
| `action_not_found` | Action name |
| `action_disabled` | Action name |
| `awaiting_action` | Action name |
| `action_completed` | Outcome |
| `workflow_completed` | Terminal name |
| `workflow_failed` | Terminal name |
| `workflow_cancelled` | Reason |
| `workflow_timed_out` | Detail |
Persistence
Workflow definitions and run state are persisted to the Unified Append Log (UAL). On node restart, the engine replays persisted entries to rebuild in-memory state:
- `workflow_create` entries restore the definition registry
- `workflow_start` entries restore the run registry
This ensures workflows survive node restarts without external databases.
Complete Example: Payment Processing
This example demonstrates inline plans, health-weighted routing, circuit breakers, polling, and caching:
```yaml
kind: Workflow
name: payment-processing
version: "1.0.0"
idempotency: required

plans:
  charge-payment:
    selection: health-weighted
    executors:
      - name: stripe-primary
        action: "@actions/charge-stripe"
        priority: 100
        retry:
          max_attempts: 3
          backoff: exponential
          initial_delay_ms: 1000
          max_delay_ms: 30000
          within_ms: 120000
        breaker:
          failure_threshold: 5
          cooldown_ms: 30000
          half_open_max_calls: 3
        rate_limit:
          max_per_second: 100
        tracking:
          mode: async
          timeout_ms: 300000
      - name: adyen-fallback
        action: "@actions/charge-adyen"
        priority: 50
        retry:
          max_attempts: 2
          backoff: exponential_jitter
    health:
      window_ms: 300000
      decay: 0.9
      min_samples: 10
    cache:
      ttl_ms: 3600000
      key: "charge:{input.customer_id}:{input.idempotency_key}"
      invalidate_on: ["payment.refunded"]
    fallback:
      value: '{"status": "declined", "reason": "all_providers_unavailable"}'
      condition: exhausted
    errors:
      retryable: ["timeout", "rate_limited"]
      fatal: ["invalid_card", "fraud_detected"]

start:
  run: "@plan/charge-payment"
  inputMapping: '{"customer_id": "$.input.customer_id", "amount": "$.input.amount"}'
  transitions:
    success: notify_customer
    pending: check_payment_status
    failure: flo.Failed

steps:
  check_payment_status:
    run: "@actions/check-payment-status"
    inputMapping: '{"payment_id": "$.steps._start.output.payment_id"}'
    poll:
      initialDelayMs: 2000
      maxAttempts: 30
      backoff: exponential
      baseDelayMs: 2000
      maxDelayMs: 30000
    transitions:
      success: notify_customer
      failure: flo.Failed

  notify_customer:
    run: "@actions/send-notification"
    inputMapping: '{"customer_id": "$.input.customer_id", "template": "payment_success"}'
    transitions:
      success: flo.Completed
      failure: flo.Completed   # notification failure doesn't fail the workflow
```
Complete Example: Order Processing with Signals
Section titled “Complete Example: Order Processing with Signals”kind: Workflowname: order-processingversion: "1.0.0"idempotency: required
terminals: OrderCompleted: status: completed OrderRejected: status: failed PaymentFailed: status: failed
start: run: "@actions/validate-order" transitions: success: process_payment failure: flo.Failed
steps: process_payment: run: "@actions/charge-payment" retry: max_attempts: 3 backoff: exponential initial_delay_ms: 1000 transitions: success: check_approval failure: PaymentFailed
check_approval: run: "@actions/check-approval-needed" transitions: success: fulfill # no approval needed failure: await_approval # high-value order, needs approval
await_approval: waitForSignal: type: approval timeoutMs: 3600000 # 1 hour onTimeout: OrderRejected transitions: success: fulfill
fulfill: run: "@actions/ship-order" transitions: success: OrderCompleted failure: flo.FailedTrigger the approval:
```sh
flo workflow signal wfrun-42 --type approval '{"decision": "approved", "approver": "manager@co.com"}'
```
Related Docs
- Actions — Actions are the building blocks that workflows compose
- Stream Processing — For continuous, stateless data pipelines (filter/map/aggregate)
- Streams — Source data for stream triggers
- KV Store — Used by actions for state lookups