---
name: workflow-engine
description: Kinetic Platform workflow engine concepts, execution model, Task API v2 reference, observed response formats, run status derivation, tree type classification, stuck run repair, and lessons learned for building workflow UIs.
---
# Kinetic Platform Workflow Engine
## Overview
Workflows in Kinetic represent "everything that happens after a form is submitted." They are the automation engine for enforcing business rules, automating actions, and integrating with external systems. The workflow engine is called **Kinetic Task** and uses a visual, low-code builder.
**Key properties:**
- **Transparency** — complete visibility into process status and bottlenecks
- **Reusability** — shared routines and integrations across workflows
- **Modularity** — decoupled forms, workflows, and UI enabling independent evolution
- **Self-documentation** — process structure serves as inherent documentation
---
## Core Concepts
### Trees
A **tree** is a collective process of work units called nodes. Trees are top-level workflow definitions triggered by form submissions (via webhooks). Trees are identified by **title/name**, not slugs.
**Scope levels:**
| Scope | Application |
|-------|-------------|
| Space | Cross-Kapp logic, global notifications, system logging |
| Kapp | Department-specific shared processes |
| Form | Submission-specific logic |
**Tree metadata:**
- `name` — Tree name
- `title` — Tree title (used as identifier in API paths)
- `sourceName` — The source (kapp) this tree belongs to
- `sourceGroup` — The group within the source (typically the form slug)
- `type` — Tree type
- `status` — Active/Inactive
- `notes`, `ownerEmail`, `description`
### Nodes
A **node** is a unit of work within a workflow. Each node is created from a **handler** and accepts **parameters** as inputs. When the engine processes a node, it runs the handler's code with the given parameters.
**Pre-installed system handlers (built-in nodes):**
- Join, Junction, Echo, Loop Head, Loop Tail
- Create Trigger, Defer, No Operation, Wait
### Connectors
A **connector** links two nodes together. Each connector has a **type**, an optional **label** (human-readable description), and an optional **condition** (`value`) — a Ruby expression that must evaluate truthy for the path to execute (empty = unconditional).
**Three connector types:**
| Type | Visual | Fires When | Example Use Case |
|------|--------|------------|-----------------|
| **Complete** | Solid line | Source node finishes executing | Normal sequential flow — the default |
| **Create** | Dotted line | A deferrable node enters its deferral state (before completion) | Start an SLA timer when an email is sent, ensuring follow-up even if no response arrives |
| **Update** | Dashed line | A deferred node receives an update action | Log or process each reply to an outbound email — fires once per update, so may execute multiple times |
**Complete** is the standard connector for sequential workflows. **Create** and **Update** only apply to deferrable nodes (Email, Wait, or any handler that defers). A single node can have all three types going to different downstream nodes simultaneously.
**Timing:** The Create-connected node fires immediately (~12ms) when the deferrable node enters deferral. The Complete-connected node waits until the node actually completes (e.g., 60 seconds for a Wait node).
**Connector expressions** do NOT use ERB tags — the engine evaluates the expression directly as Ruby (e.g., `@results['Node']['Status'] == 'approved'`).
### Parameters
Each node accepts inputs called parameters. Parameters support:
- Plain text entries
- Values from preconfigured lists
- Ruby code expressions using ERB tags: `<%= ... %>`
**Variable access in parameters:**
- `@values` — Input data from forms (e.g., `@values['Status']`) — available in event-triggered trees
- `@results` — Output from previously executed nodes (e.g., `@results['Node Name']['Field Name']`)
- `@variables` — Same as `@results` (alias for accumulated node outputs)
- `@inputs` — Data passed to routines
**Full ERB context (verified by debug dump on live engine):**
Event-triggered workflows (Submission Created/Updated/Submitted):
| Variable | Type | Keys | Description |
|----------|------|------|-------------|
| `@submission` | Hash | `Created By`, `Submitted By`, `Updated By`, `Id`, `Core State`, `Handle`, `Created At`, `Submitted At`, `Updated At`, `Closed At`, `Closed By`, `Type`, `Origin Id`, `Parent Id` | The submission that triggered the workflow |
| `@form` | Hash | `Name`, `Slug`, `Description`, `Status`, `Type`, `Created At`, `Created By`, `Updated At`, `Updated By` | The form the submission belongs to |
| `@kapp` | Hash | `Name`, `Slug` | The kapp the form belongs to |
| `@space` | Hash | `Name`, `Slug` | The space |
| `@event` | Hash | `Action` (Created/Updated), `Type` (Submission), `Timestamp` | What triggered this workflow |
| `@values` | Hash | All form field names | Current field values |
| `@values_previous` | Hash | All form field names | Previous field values (empty on Created) |
| `@values_changes` | Hash | All form field names | Tracks which fields changed |
| `@submission_previous` | Hash | Same keys as `@submission` | Previous submission state |
| `@submission_changes` | Hash | Same keys as `@submission` | Tracks which submission properties changed |
| `@results` | Hash | Keyed by node name | Results from completed upstream tasks |
| `@variables` | Hash | Same as `@results` | Alias for `@results` |
| `@run` | Hash | `Id` | Current run |
| `@source` | Hash | `Name`, `Group`, `Id`, `Data` | Source metadata |
| `@task` | Hash | `Id`, `Status`, `Name`, `Deferral Token`, `Task Definition Id`, `Node Id`, `Tree Id`, `Tree Name`, `Source`, `Source Id`, `Return Variables`, `Deferred Variables`, `Loop Index`, `Parent Loop Index`, `Visible`, `Execution Duration` | Current node metadata |
| `@trigger` | Hash | `Id`, `Engine Identification`, `Status`, `Action`, `Execution Type`, `Tree Id`, `Node Id`, `Source`, `Source Id`, `Loop Index`, `Deferral Token`, `Deferred Variables`, `Message`, `Management Action`, `Selection Criterion`, `Flags` | Engine trigger metadata |
| `@kapp_attributes` | Hash | Kapp attribute names | Kapp-level attributes |
| `@form_attributes` | Hash | Form attribute names | Form-level attributes |
| `@space_attributes` | Hash | Space attribute names | Space-level attributes |
WebAPI trees have additional variables: `@request`, `@request_body_params`, `@request_headers`, `@request_query_params`, `@requested_by`.
Common expressions:
```ruby
# WHO created/submitted/updated
<%= @submission['Created By'] %>
<%= @submission['Submitted By'] %>
<%= @submission['Updated By'] %>
# WHAT form and kapp
<%= @form['Slug'] %> # e.g. "leads"
<%= @form['Name'] %> # e.g. "Leads"
<%= @kapp['Slug'] %> # e.g. "crm"
# FIELD values
<%= @values['Status'] %>
<%= @values['Priority'] %>
# ALL values as string
<%= @values.map{|k,v| "#{k}=#{v}"}.join(', ') %>
# DETECT field changes (Submission Updated only)
<%= @values['Status'] != @values_previous['Status'] %>
```
**Debugging technique — dump all ERB variables:**
Create an Echo node right after Start with this input to see every variable available:
```erb
<% vars = instance_variables.map { |v|
  name = v.to_s
  val  = instance_variable_get(v)
  keys = val.respond_to?(:keys) ? val.keys.join(', ') : val.to_s[0..200]
  "#{name} [#{val.class}] = #{keys}"
}; %><%= vars.join(' || ') %>
```
This dumps variable names, types, and hash keys. Check the Echo node's `output` result in Activity Monitor.
**ERB Hash access pitfall:** In the Task engine ERB context, Ruby Hash `[]` raises `IndexError` for missing keys (unlike standard Ruby, which returns `nil`). The standard Ruby `.dig()` safe-access method is **also unavailable** — calling it on `@results` or other Hash-like proxies raises `UnknownVariableError` at evaluation time, even in the simplest cases. Use `.fetch(key, default)` exclusively for safe missing-key access; for nested access, chain `.fetch` calls:
```ruby
# BAD — raises IndexError if key missing:
<%= @request_query_params['personId'] %>
# BAD — raises UnknownVariableError; .dig() is unavailable in this ERB context:
<%= @results.dig('Some Node', 'Some Field') %>
# GOOD — returns empty string if missing:
<%= @request_query_params.fetch('personId', '') %>
# GOOD — chain .fetch for nested access:
<%= @results.fetch('Some Node', {}).fetch('Some Field', nil) %>
```
Verified May 2026 (vendor-risk-test): `@results.dig('Create Compliance Approval', 'Decision') || @results.dig('Create Procurement Approval', 'Decision') || ''` in an echo node's `input` parameter raised `UnknownVariableError`. Switching to `@results.fetch('Create Compliance Approval', {}).fetch('Decision', nil) || ...` resolved the error.
**Note:** `@values['FieldName']` does NOT raise IndexError for missing fields — all form fields are present in `@values` (with empty string for unfilled fields). The `.fetch` pattern is needed for `@request_query_params`, `@request_headers`, and other hashes where keys are not guaranteed.
### Routines
A **routine** is a reusable workflow with explicitly defined inputs and outputs. Unlike trees (which get inputs from form submissions), routines can be embedded/called from multiple trees or other routines.
**Common uses:**
- Sending standardized notifications
- Computing due dates based on SLA attributes
- Executing standard data lookups
### Common Workflow Components
Customer workflows in production Kinetic spaces vary widely in size, complexity, and idiom. Some are three or four nodes that fire a single API call; others are dozens or hundreds of nodes orchestrating multi-stage approval, notification, and fulfillment. Demo spaces don't represent that full range — they tend to optimize for clarity and pedagogy over realism. The components below name building blocks frequently observed across multiple spaces, with brief notes on what each commonly handles. Treat this section as vocabulary for talking about workflows, not a prescription for shape.
**The standard `routine_kinetic_*` library.** New Kinetic environments ship with a library of Global Routines wrapping common Core API operations: `routine_kinetic_submission_retrieve_v1`, `routine_kinetic_submission_update_v1`, `routine_kinetic_submission_update_status_v1`, `routine_kinetic_email_template_notification_send_v1`, `routine_kinetic_user_create_v1`, `routine_kinetic_finish_v1`, and many more. Customer-built routines extend this library; spaces vary in how heavily they extend it. A workflow composed primarily of `routine_kinetic_*` calls (plus glue) is a frequently-observed style — see "Routine composition" below.
**Error-handling routine.** `routine_handler_failure_error_process_v1` is the building block invoked when a handler raises an error. In observed traffic, it is wired *inside* individual routines — not in the caller's code. A typical Core-API-wrapping routine has the API node connecting to three Complete connectors with mutually-exclusive Ruby conditions on `@results['API']['Handler Error Message']`: success path, real-error path (which routes to `routine_handler_failure_error_process_v1`, then a recursive retry, then return), and special-case 404 path. Form-attached workflows that compose the standard library inherit this error handling without wiring it themselves. See `concepts/workflow-xml` for the connector-level structure.
**`utilities_echo_v1` for value storage and computed results.** Beyond debugging, echo nodes are commonly used as named result-stash points: an echo node titled "Approval Task Id" with `input` set to a computed value exposes that value downstream as `@results['Approval Task Id']['output']`. Echo can also run Ruby in its `input` parameter and surface the evaluated string for downstream use. Treat echo as a flexible utility, not strictly a debugging aid.
**Parallel work — `system_join_v1` and `system_junction_v1`.** Both reconverge multiple branches into a single downstream path. Join evaluates only its immediate incoming connectors (with `type: All`/`Any`/`Some`); Junction traces back to a common parent node and proceeds when each branch is "complete as possible" (including branches that conditionally short-circuited). Junction is observed more often in routine-composed workflows that branch on submission state and rejoin; Join is more common when the branch count is fixed and known (parallel approvals). See `concepts/workflow-xml` for parameter and connector details.
**Callback workflows on deferred subforms.** When a workflow node defers (`defers: true, deferrable: true`) and creates a subform submission carrying a deferral token, the subform's own `Submission Submitted` workflow handles the resume. These callback trees are commonly small — three nodes is frequently sufficient: `start` → `utilities_create_trigger_v1` (which reads the token from `@values['Deferral Token']` and passes any decision data back via `deferred_variables`) → close-own-submission. The shape repeats across approval forms, fulfillment subtasks, and any other deferred-handoff pattern. See `recipes/add-approval-workflow` for a worked example.
**Routine composition as a workflow style.** A frequently-observed customer pattern is a form-attached workflow built almost entirely from `routine_kinetic_*` calls plus connectors with Ruby `value` expressions for branching, plus `utilities_echo_v1` nodes to stash IDs, plus `system_junction_v1` to converge after conditional branches. The error-handling routine and Core API calls live inside the routines being called, so the customer code stays readable. This is one approach among several — direct `system_integration_v1` workflows and mixed styles are equally valid depending on what each step needs.
---
## Triggering Workflows
### Webhooks
Trees are triggered through **webhooks** — HTTP callbacks fired when predetermined actions occur.
**Supported event types:**
- **User Events:** Created, Updated, Deleted
- **Team Events:** Created, Updated, Deleted
- **Form Events:** Form Created/Updated, Submission Created/Submitted/Updated/Closed/Deleted
### WebAPIs
Custom HTTP endpoints that enable external systems to call workflows. They support custom HTTP methods, security policies, and can return responses (max 30-second synchronous timeout).
### Programmatic Triggering
`POST /app/components/task/app/api/v2/runs` creates a new run of a specified tree directly via API.
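A minimal Ruby sketch of building that request. The credentials and tree name are placeholders, and the exact body fields accepted by `POST /runs` should be verified against your engine — only the path and Basic-auth scheme come from the API reference below.

```ruby
require 'json'
require 'base64'

# HTTP Basic auth header, per the Task API auth scheme (placeholder credentials).
auth = 'Basic ' + Base64.strict_encode64('admin@example.com:password')

# Component-proxied Task API path for creating a run.
path = '/app/components/task/app/api/v2/runs'

# Name the tree to run; 'My Space Tree' is a placeholder.
body = { 'tree' => { 'name' => 'My Space Tree' } }.to_json
```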
---
## Workflow Events and coreState
Workflows fire based on coreState transitions — not field value changes. The three coreStates are **Draft** (incomplete), **Submitted** (complete, ready for processing), and **Closed** (finalized, locked from edits).
| Workflow Event | When it fires | coreState after |
|----------------|---------------|-----------------|
| Submission Created | Any new submission is created (via POST) | Draft or Submitted (depends on whether `coreState:"Submitted"` was in the POST body) |
| Submission Submitted | A submission becomes Submitted — either POST with `coreState:"Submitted"` or PUT Draft → Submitted | Submitted |
| Submission Updated | Any PUT that modifies values on a Submitted record | Submitted |
| Submission Closed | coreState transitions to Closed (via PUT with `coreState:"Closed"`) | Closed |
**Both `Submission Created` and `Submission Submitted` fire on a single POST with `coreState:"Submitted"`.** Verified empirically (May 2026) — registering both event workflows on the same form and POSTing once produces one run of each. Earlier versions of this skill claimed that POSTing with `coreState:"Submitted"` only fired `Submission Created`, not `Submission Submitted`; that was wrong. If you need to suppress `Submission Submitted` until a deliberate approval action (for example, when creating an approval-form submission inside another workflow), POST with `coreState:"Draft"` and submit later via a separate PUT — both events still fire, but at the times you choose.
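The Draft-then-submit pattern above can be sketched as two request bodies. The kapp slug (`services`), form slug (`approval`), and field names are placeholders; the submission path follows the Core API convention.

```ruby
require 'json'

# Step 1: POST as Draft -- only Submission Created fires at this point.
create_path = '/app/api/v1/kapps/services/forms/approval/submissions'
create_body = {
  'values'    => { 'Status' => 'Pending' },
  'coreState' => 'Draft'
}.to_json

# Step 2 (later, on the deliberate approval action): PUT Draft -> Submitted,
# which fires Submission Submitted at a time you control.
submit_body = { 'coreState' => 'Submitted' }.to_json
```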
---
## Execution Model
### Runs
A **run** is an instance of a workflow execution. Runs record each workflow instance's input, trigger, node, and result.
### Tasks (within Runs)
Tasks represent units of work within a run with three states:
- **New** — not yet executed
- **Deferred** — awaiting external completion trigger
- **Closed** — execution complete
### Deferrals
Deferred nodes pause workflow execution while waiting for external processes to respond. Visually identified by a blue corner on the node.
**Resuming a deferred node requires:**
- **Token** — unique identifier locating the specific node instance
- **Action Type** — "Update" (fires Update connectors) or "Complete" (fires Complete connectors)
- **Results** — XML structure: `<results><result name="Key">Value</result></results>`
- **Messages** — plain text notifications
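Putting those four pieces together, a resume request might be assembled like this. The token is a placeholder, and the JSON framing of the body is an assumption — the results themselves use the XML structure shown above (`POST /runs/task/{token}` fires Complete connectors; `PUT` fires Update connectors, per the Run Endpoints table below).

```ruby
require 'json'

# Placeholder deferral token identifying the paused node instance.
token = 'example-deferral-token'

# POST here completes the deferred task; PUT updates it.
complete_path = "/app/components/task/app/api/v2/runs/task/#{token}"

# Results in the documented XML shape; message is plain text.
results_xml = '<results><result name="Decision">Approved</result></results>'
body = { 'results' => results_xml, 'message' => 'Approved by manager' }.to_json
```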
### Looping
Uses **Loop Head** and **Loop Tail** system handlers. Loop iterations execute in **parallel** (not sequentially). For sequential iteration, use recursive routines.
- **Loop Head params:** Data Source, Loop Path (XPath for XML, JSONPath for JSON), Variable Name
- **Loop Tail params:** Completion condition — All, Any, or Some
### Joins and Junctions
- **Joins** — evaluate only directly connected connectors. Types: All, Any, Some
- **Junctions** — look backward through branches to a common parent node, evaluate whether branches are "complete as possible"
### Error Management
Three strategies:
1. **Branching on error outputs** — handlers return errors as results for conditional routing
2. **Retry paths with external input** — wrap handlers in routines with error-handling
3. **Engine-level retry** — built-in resolution actions on failed nodes/connectors
**Error types and valid resolution actions:**
| Error Type | Valid Actions | Description |
|---|---|---|
| **Handler Error** | Retry Task, Skip Task, Do Nothing | Handler execution failed |
| **Node Parameter Error** | Retry Task, Skip Task, Do Nothing | ERB expression in node parameter failed to evaluate |
| **Connector Error** | Continue Branch, Cancel Branch, Do Nothing | Ruby condition on a connector failed to evaluate |
| **Missing Handler Error** | Retry Task, Skip Task, Do Nothing | Handler definition not found |
| **Source Error** | Do Nothing | Source system unavailable |
| **Tree Error** | Do Nothing | Tree definition problem |
| **Unidentified Error** | Do Nothing | Engine-level crash (e.g., java.lang.RuntimeException) |
**Connector Error resolution:**
- **Continue Branch** — treat the connector condition as `true` (proceed down this path)
- **Cancel Branch** — treat the connector condition as `false` (do not take this path)
- These are different from handler errors because a connector is not a task — it's a routing decision
**Bulk error resolution API:**
```
POST /errors/resolve
{ "ids": [1, 2, 3], "action": "Do Nothing", "resolution": "Description of fix" }
```
---
## Handlers
A **handler** is a small program that performs a unit of work. Handlers are Ruby + XML combinations that execute functions in workflows.
**Handler file structure (3 directories):**
1. **handler/init.rb** — Executable Ruby code
- `initialize` method — retrieves info from node.xml, assigns `@info_values` and `@parameters`
- `execute` method — performs API interactions, returns handler results
2. **process/node.xml, info.xml** — XML configuration
- Defines: config values (info values), parameters, results, XML input templates
3. **test/input.rb, output.xml** — Test cases
**Handler properties:**
- `definitionId` — Unique identifier (e.g., `kinetic_request_ce_submission_create_v1`)
- `definitionName` — Name without version
- `definitionVersion` — Version number
- `deferrable` — Boolean indicating deferral support
- `properties` — Configuration key-value pairs (info values)
- `parameters` — Input parameter definitions
- `results` — Output result definitions
---
## Sources
A **source** defines the application calling and getting results from a tree. The `type` field tells Kinetic Task which **consumer** file to use for that source.
Every kapp with workflows typically has a corresponding source in the task engine.
**Source properties:**
- `name` — Source identifier
- `adapter` — Source adapter class to use
- `properties` — Adapter-specific configuration key-value pairs
- `policyRules` — Access control rules
- `status` — Operational state
Available adapters discoverable via `GET /meta/sourceAdapters`.
---
## Connections (Modern Integration)
**Connections** are the newer, preferred integration method over legacy Bridges & Handlers:
- Store base URLs, credentials, and endpoint details for REST APIs or SQL databases
- **Operations** within connections define specific actions
- Support HTTP connections (REST APIs) and SQL Database connections (PostgreSQL, SQL Server)
---
## Task API v2 Reference
**Base URL:** `{serverUrl}/app/components/task/app/api/v2`
**Auth:** HTTP Basic Auth (`Authorization: Basic <base64(user:pass)>`)
**Pagination:** `limit` (default 100) + `offset` (default 0)
**Filtering:** `tree`, `source`, `start`, `end` query params on `/runs`
**Critical: Component Path vs Direct Path for Routine Creation**
| Path | Inputs/Outputs | treeJson |
|------|---------------|----------|
| `/app/components/task/app/api/v2/trees` | **Works** — saves `taskDefinition` | Works |
| `/kinetic-task/app/api/v2/trees` | **Silently dropped** | Works |
The component path (`/app/components/task/...`) is what the Kinetic Console uses internally. The direct path (`/kinetic-task/...`) does NOT support `inputs`/`outputs` on POST — they are silently ignored, producing a routine with no public interface. **Always use the component path for routine creation.**
### Tree Endpoints
| Method | Path | Description |
|--------|------|-------------|
| GET | `/trees` | Search trees |
| POST | `/trees` | Create tree (JSON fields, file upload, or URL import) |
| GET | `/trees/{title}` | Retrieve tree by title |
| PUT | `/trees/{title}` | Update tree (treeXml OR treeJson, not both) |
| DELETE | `/trees/{title}` | Delete tree |
| POST | `/trees/{title}/clone` | Clone/duplicate tree |
| GET | `/trees/{title}/export` | Export tree definition |
| POST | `/trees/{title}/restore` | Restore deleted tree |
**Tree Create** supports three methods:
1. JSON fields: `{ "sourceName": "...", "sourceGroup": "...", "name": "..." }`
2. File upload: multipart/form-data with `content` field
3. URL import: `{ "contentUrl": "..." }`
**Tree Update** accepts `treeXml` or `treeJson` (not both).
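The three create methods, sketched as request bodies. Source name, group, and URL are placeholders for your environment; the file-upload variant has no JSON body to build.

```ruby
require 'json'

# Method 1: plain JSON fields (placeholder values).
json_fields = {
  'sourceName'  => 'Example Source',
  'sourceGroup' => 'example-group',
  'name'        => 'Submission Submitted'
}.to_json

# Method 3: import from a URL (placeholder URL).
url_import = { 'contentUrl' => 'https://example.com/exports/my-tree.xml' }.to_json

# Method 2 (file upload) is multipart/form-data with the tree definition
# in a "content" field, so it is not shown as a JSON body here.
```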
### Run Endpoints
| Method | Path | Description |
|--------|------|-------------|
| GET | `/runs` | Search runs |
| POST | `/runs` | Create run (trigger a tree) |
| GET | `/runs/{id}` | Retrieve run |
| PUT | `/runs/{id}` | Update run |
| DELETE | `/runs/{id}` | Delete run |
| GET | `/runs/{id}/tasks` | List tasks in run |
| POST | `/runs/{id}/triggers` | Create root node trigger |
| POST | `/runs/task/{token}` | Complete deferred task |
| PUT | `/runs/task/{token}` | Update deferred task |
### Handler Endpoints
| Method | Path | Description |
|--------|------|-------------|
| GET | `/handlers` | List handlers |
| POST | `/handlers` | Import handler (ZIP or URL) |
| GET | `/handlers/{definitionId}` | Retrieve handler |
| PUT | `/handlers/{definitionId}` | Update handler |
| DELETE | `/handlers/{definitionId}` | Delete handler |
### Source Endpoints
| Method | Path | Description |
|--------|------|-------------|
| GET | `/sources` | List sources |
| POST | `/sources` | Create source |
| GET | `/sources/{name}` | Retrieve source |
| PUT | `/sources/{name}` | Update source |
| DELETE | `/sources/{name}` | Delete source |
| POST | `/sources/{name}/validate` | Test connection |
### Other Endpoints
- **Categories:** CRUD + handler/routine categorization
- **Triggers:** Search, retrieve, update, delete + backlogged/paused/scheduled
- **Errors:** Search, retrieve, delete, batch resolve
- **Policy Rules:** CRUD by type/name
- **Users/Groups:** CRUD + group membership
- **Config:** Auth, database, engine, identity store, session, encryption keys
- **Meta:** `GET /meta/sourceAdapters`, `GET /meta/version`
---
## Engine Configuration
| Setting | Description |
|---------|-------------|
| Sleep Delay | Pause intervals during processing |
| Max Threads | Maximum concurrent execution threads |
| Trigger Query | Selection criteria (default: `'Selection Criterion'=null`) |
---
## Programmatic Workflow Creation (Core API)
**Always use the Core API for creating/updating/deleting workflows.** The Task API v2 PUT silently ignores tree XML content.
### Core API Workflow Endpoints
| Method | Path | Description |
|--------|------|-------------|
| GET | `/app/api/v1/kapps/{kapp}/workflows` | List workflows + orphan diagnostics |
| POST | `/app/api/v1/kapps/{kapp}/workflows` | Create workflow (auto-registers with platform) |
| PUT | `/app/api/v1/kapps/{kapp}/workflows/{id}` | Update workflow / upload tree definition |
| DELETE | `/app/api/v1/kapps/{kapp}/workflows/{id}` | Soft-delete workflow |
**No standalone `GET /workflows/{id}` exists.** A 404 is returned for `GET /app/api/v1/workflows/{id}` (without the kapp/form scoping). To read a single workflow's metadata, either:
- Use the kapp- or form-nested list: `GET /app/api/v1/kapps/{kapp}/workflows` or `GET /app/api/v1/kapps/{kapp}/forms/{form}/workflows`, then filter by `id`, OR
- Use the Task API by tree title: `GET /app/components/task/app/api/v2/trees/{url-encoded-title}` (see `concepts/workflow-creation` for the title format).
### Two-Step Creation
1. **Create:** `POST /workflows` with `{name, event, type:"Tree", status:"Active"}`
- Returns `id` (UUID) — also used as `sourceGroup` in Task API
- Auto-sets `platformItemType`, `platformItemId`, `guid === sourceGroup`
2. **Upload definition:** `PUT /workflows/{id}` with `{"treeXml": "<taskTree>...</taskTree>"}`
- Must be ONLY the `<taskTree>` inner element — NOT the full `<tree>` wrapper
- Server adds the wrapper automatically
**Response shape on POST/PUT `/workflows`:** the response body is a flat workflow object — `{id, name, event, status, ...}` — NOT wrapped under a `workflow` key. Code that reads `response.workflow.id` will fail; read `response.id` directly. (Contrast with `/trees` and `/forms` endpoints, which often nest the entity under a top-level key.)
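The two-step flow, sketched as request bodies. The tree content is elided (`...`); note that step 2 sends only the inner `<taskTree>` element, and the step 1 response is flat, so the `id` is read directly.

```ruby
require 'json'

# Step 1: create the workflow shell via POST /workflows.
step1_body = {
  'name'   => 'On Submit',
  'event'  => 'Submission Submitted',
  'type'   => 'Tree',
  'status' => 'Active'
}.to_json

# Step 2: upload the definition via PUT /workflows/{id} -- inner element only.
step2_body = { 'treeXml' => '<taskTree>...</taskTree>' }.to_json

# Reading the id from a step 1 response (flat object, NOT response['workflow']['id']).
response = '{"id":"a03b7bb6-4766-486a-9944-ccbd40121241","name":"On Submit"}'
workflow_id = JSON.parse(response)['id']
```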
**`?include=treeJson` is silently ignored on the form-scoped workflow list.** `GET /app/api/v1/kapps/{kapp}/forms/{form}/workflows?include=treeJson` returns the workflows without the `treeJson` payload. To read a workflow's tree, either fetch via the Task API by title (`GET /app/components/task/app/api/v2/trees/{title}?include=treeJson`) or use the kapp-scoped workflow list, where the include parameter is honored.
### Kapp-Level vs Form-Level Workflows
- **Kapp-level:** `POST /kapps/{kapp}/workflows` — fires for ALL forms in the kapp
- **Form-level:** `POST /kapps/{kapp}/forms/{form}/workflows` — fires only for that form
- Both share the same tree infrastructure in the Task API
### Architecture: Core vs Task
The Workflow Engine (Task) is a **separate web app** that runs independently from Core (the forms engine). However, the Task API is proxied through Core at `/app/components/task/app/api/v2/...` — this is the recommended way to access Task because Core applies permissions. Never call the Task engine host directly in production.
**CRUD pattern:**
1. **Create** workflow via Core: `POST /app/api/v1/kapps/{kapp}/forms/{form}/workflows` — this creates the tree AND registers it with the form
2. **Update** workflow (including uploading tree definition) via Core: `PUT /app/api/v1/workflows/{id}` — use `treeXml` or `treeJson` in the body
3. **Read** tree details, triggers, runs via Core-proxied Task API: `/app/components/task/app/api/v2/trees/{title}`, `/runs`, `/triggers`
4. **Delete** workflow via Core: `DELETE /app/api/v1/workflows/{id}`
**IMPORTANT:** The kapp-level and form-level workflow lists are completely separate queries. `GET /kapps/{kapp}/workflows` returns **only kapp-level** workflows — form-level workflows are invisible there. To discover ALL workflows in a kapp, you must iterate each form with `GET /kapps/{kapp}/forms/{form}/workflows`. The `platformItemType` field distinguishes them: `"Kapp"` vs `"Form"`.
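Because form-level workflows are invisible to the kapp-level list, full discovery means one query per form. This hypothetical helper just builds the URL paths for a kapp given its form slugs; fetching and merging the responses is left to the caller.

```ruby
# Returns the kapp-level workflow list path plus one form-scoped list path
# per form slug. kapp_slug and form_slugs are placeholders from your space.
def workflow_list_paths(kapp_slug, form_slugs)
  ["/app/api/v1/kapps/#{kapp_slug}/workflows"] +
    form_slugs.map { |f| "/app/api/v1/kapps/#{kapp_slug}/forms/#{f}/workflows" }
end

# e.g. workflow_list_paths('services', %w[leads approval])
```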
### Why NOT Task API for Workflow Creation
- `PUT /trees/{title}` with XML content returns HTTP 200 and bumps `versionId` but does NOT persist the XML
- Trees created via `POST /trees` lack platform registration — flagged as "orphaned" and may be deleted
- `guid !== sourceGroup` when created via Task API — admin UI shows "Unable to retrieve tree by GUID"
### Supported Events
**Warning:** The API accepts ANY string as the event name without validation. Invalid event names (like typos) are silently accepted but the workflow will never fire. Always use one of the exact names below.
| Category | Valid Event Names |
|----------|------------------|
| **Space** | `Space Login Failure` |
| **User** | `User Login`, `User Logout`, `User Created`, `User Updated`, `User Deleted`, `User Membership Change` |
| **Submission** | `Submission Created`, `Submission Submitted`, `Submission Updated`, `Submission Saved`, `Submission Closed`, `Submission Deleted` |
| **Form** | `Form Created`, `Form Updated`, `Form Deleted`, `Form Restored` |
| **Team** | `Team Created`, `Team Updated`, `Team Deleted`, `Team Restored`, `Team Membership Change` |
**Scope determines which events are available:**
- **Form-level workflows** — Submission events only
- **Kapp-level workflows** — Submission + Form events (fires for all forms in the kapp)
- **Space-level workflows** — All events (Space, User, Team, plus Submission/Form across all kapps)
Note: `Submission Saved` fires on every save (including Draft saves). `Submission Submitted` fires when a submission becomes `Submitted` — either via a POST with `coreState:"Submitted"` or a PUT transitioning Draft → Submitted (see the callout in "Workflow Events and coreState" above). `Form Restored` and `Team Restored` fire when a soft-deleted entity is restored.
### Workflow Response Shape
```json
{
"id": "a03b7bb6-4766-486a-9944-ccbd40121241",
"name": "On Submit",
"event": "Submission Submitted",
"filter": "",
"sourceGroup": "a03b7bb6-4766-486a-9944-ccbd40121241",
"type": "Tree",
"status": "Active",
"platformItemType": "Form",
"platformItemId": "230bacf6-32f5-11f1-98c0-6599b94dbb50",
"ownerEmail": null,
"notes": null,
"versionId": "0",
"createdAt": "2026-04-08T02:49:17.340Z",
"createdBy": "admin@example.com",
"updatedAt": "2026-04-08T02:49:17.340Z",
"updatedBy": "admin@example.com"
}
```
The `filter` field accepts KSL expressions for conditional triggering. **Critical: use function-call syntax** `values('Field')`, NOT bracket syntax `values["Field"]`.
```
// CORRECT — KSL function syntax with double-quoted string literals
"filter": "values('Status') == \"Open\""
"filter": "form('name') == \"Approval\""
// ALSO WORKS — single-quoted string literals
"filter": "values('Status') == 'Open'"
// WRONG — bracket syntax silently fails, workflow never fires
"filter": "values[\"Status\"] == \"Open\""
```
**Filter scope depends on workflow level:**
- **Form-level workflows** (Submission events) — filter can use `values('Field')`, `identity('username')`, `form('slug')`, `kapp('slug')`, `submission('property')`
- **Kapp-level workflows** (Form events) — filter can use `form('slug')`, `kapp('slug')` but NOT `values()` (no submission context)
- **Space-level workflows** (User/Team events) — filter can use `identity()`, `space('slug')` but NOT `values()`, `form()`, or `kapp()` (no form/kapp context)
The filter is evaluated by the Core API before triggering the Task engine. If the filter returns false, the workflow is silently skipped — no run is created.
**Change-detection in filters is NOT supported.** `values_previous()` is NOT a valid KSL binding, even though `@values_previous` is available in node ERB. The filter accepts `values_previous('Status') != "X"` at registration, but the binding returns nil/empty at runtime, so a change-detection condition never matches. The workflow appears inert: no run is created and nothing is logged to `/errors`.
**Pattern: KSL filter + ERB connector guard.** Do the gross check in the filter on the current state, then guard the side-effect node inside the tree with a connector condition that uses `@values_previous`:
```json
// Workflow registration
{ "event": "Submission Updated",
"filter": "values('Status') == \"In Repair\"" }
```
```json
// Connector inside the tree (start → side-effect node)
{ "from": "start", "to": "n1", "type": "Complete",
"value": "@values_previous['Status'] != 'In Repair'" }
```
The KSL filter creates the run only when the current state matches; the connector blocks the side-effect node when it's a no-op update (Status was already "In Repair"). Empty runs (only `start` Closed) are produced for no-op PUTs but no external side effect fires.
The workflow list GET response also includes diagnostic arrays alongside the `workflows` array: `{ "migratable": [], "missing": [], "orphaned": [], "workflows": [...] }`
---
## Kinetic Agent
A lightweight web application for secure integration across network boundaries:
- Deployed in DMZ for hybrid-cloud integrations
- Agent handlers execute remotely on the agent rather than on the platform
- Uses shared secret authentication between Platform and Agent
---
## Observed API Response Formats (from live testing)
### Run Object (GET /runs)
**CRITICAL: `include=details` is required** to get `id`, `createdAt`, `updatedAt`, `createdBy`, `updatedBy` on run objects. Without it, runs only contain `status`, `sourceId`, `tree`, and `source` — the `id` field is **absent**, not null.
Without `include=details`:
```json
{
"status": "Started",
"sourceId": "b003cac6-...",
"tree": { "name": "test1", ... },
"source": { "name": "Kinetic Request CE", ... }
}
```
With `include=details`:
```json
{
"id": 2690,
"status": "Started",
"sourceId": "b003cac6-...",
"createdAt": "2026-02-12T19:04:42.612Z",
"createdBy": "SYSTEM",
"updatedAt": "2026-02-12T19:04:42.660Z",
"updatedBy": "SYSTEM",
"tree": {
"name": "test1",
"title": "Kinetic Request CE :: 2e238f41-... :: test1",
"sourceName": "Kinetic Request CE",
"sourceGroup": "2e238f41-...",
"status": "Active",
"type": "Tree",
"versionId": "1"
},
"source": {
"name": "Kinetic Request CE",
"status": "Active",
"type": "Kinetic Request CE"
}
}
```
**Key observations:**
- **Always use `include=details`** when you need run IDs or timestamps — without it you cannot identify or sort runs
- `id` is a numeric integer, not a string/UUID
- `status` values observed: `"Started"`, `"Complete"`, `"Error"`
- `tree.title` format: `"SourceName :: SourceGroup :: TreeName"` — the full title is used in API paths
- `tree.name` is the short/friendly name
- `createdBy` is often `"SYSTEM"` when triggered by webhooks
- The `count` field in list responses gives the **total matching record count** (useful for KPIs without loading all data)
- Runs are returned in **descending order** (most recent first by `id`)
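Since forgetting `include=details` produces runs with no `id` at all, it is worth baking the parameter into a query builder. A minimal sketch, assuming the `limit`/`offset`/`tree` parameter names shown elsewhere in this document (the `base` path is whatever Task API v2 prefix your deployment uses):

```typescript
// Build a GET /runs URL that always carries include=details — without
// it, run objects lack id/createdAt/updatedAt entirely (not null).
function buildRunsUrl(
  base: string,
  opts: { limit?: number; offset?: number; tree?: string } = {}
): string {
  const params = new URLSearchParams();
  if (opts.limit !== undefined) params.set("limit", String(opts.limit));
  if (opts.offset !== undefined) params.set("offset", String(opts.offset));
  if (opts.tree) params.set("tree", opts.tree); // server-side tree filter
  params.set("include", "details"); // non-negotiable, see above
  return `${base}/runs?${params.toString()}`;
}
```

Every call site then gets `include=details` for free instead of relying on each developer remembering to append it.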
### Tree Object (GET /trees with include=details)
```json
{
"id": 7,
"name": "test1",
"title": "Kinetic Request CE :: 2e238f41-... :: test1",
"sourceName": "Kinetic Request CE",
"sourceGroup": "2e238f41-...",
"status": "Active",
"type": "Tree",
"versionId": "1",
"guid": "2e238f41-...",
"event": "Submission Created",
"platformItemId": "92d17329-...",
"platformItemType": "Space",
"createdAt": "2026-02-12T17:54:49.056Z",
"createdBy": "second_admin"
}
```
**Key observations:**
- `event` values include: `"Submission Created"`, `"Submission Submitted"`, `"Submission Updated"`, `"Submission Closed"`, or `null` (for WebAPI/manual triggers)
- `platformItemType` indicates scope: `"Space"`, `"Kapp"`, `"Form"`
- `sourceGroup` may be a GUID (for webhook-triggered trees) or a path like `"WebApis > catalog"` (for WebAPI trees)
- Built-in trees like `"Notify on Run Error"` have `sourceName: "Kinetic Task"` and `sourceGroup: "Run Error"`
- **`run.tree` is an object, not a string** — use `run.tree?.name` or `(typeof run.tree === "object" ? run.tree?.name : run.tree)` to get the tree name
### Task Object (GET /runs/{id}/tasks)
```json
{
"branchId": 1,
"deferredResults": {},
"definitionId": "utilities_echo_v1",
"duration": 11,
"loopIndex": "/",
"nodeId": "utilities_echo_v1_1",
"nodeName": "a",
"results": { "output": "test" },
"status": "Closed",
"token": null,
"visible": true
}
```
**Key observations:**
- Tasks do **NOT** have `createdAt`/`updatedAt` — they have `duration` in **milliseconds**
- Task `status` values: `"New"`, `"Deferred"`, `"Closed"` (NOT "Complete" — tasks use "Closed")
- `results` is a flat key-value object (not nested)
- `deferredResults` is separate from `results` — populated when a deferred task receives results
- `visible: false` = system nodes (like Start); `visible: true` = user-defined nodes
- `token` is populated for deferrable nodes awaiting completion
- `definitionId` encodes handler info: `{category}_{handler}_{version}` (e.g., `utilities_echo_v1`)
- `nodeId` is unique within the tree definition; `nodeName` is the user-assigned display name
- `branchId` identifies which execution branch the task belongs to (relevant for parallel paths)
- `loopIndex` is `/` for non-loop tasks; loop iterations get indexed paths
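The `{category}_{handler}_{version}` encoding can be unpacked for display (e.g. grouping tasks by handler). A sketch that assumes the category is the first underscore-delimited segment and the version is a trailing `_v{N}` suffix — handler names with underscores are absorbed into the middle segment, and IDs that don't fit the pattern (like system nodes) yield `null`:

```typescript
// Split "utilities_echo_v1" → { category: "utilities", handler: "echo",
// version: "v1" }. Returns null when the ID doesn't match the pattern.
function parseDefinitionId(
  id: string
): { category: string; handler: string; version: string } | null {
  const m = id.match(/^([^_]+)_(.+)_(v\d+)$/);
  if (!m) return null;
  return { category: m[1], handler: m[2], version: m[3] };
}
```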
---
## Lessons Learned — Building Workflow UIs
### Respect the Server
- Never load all runs upfront — with thousands of workflow executions, this is slow and wasteful
- Use server-side `limit`/`offset` pagination: fetch 25 records at a time
- Use `count` from the API response to determine if "more" exist — don't show total page counts
- Use `limit=1` count-only queries for dashboard KPI numbers (total runs, today's runs)
### `include=details` is Non-Negotiable
- Without `include=details`, run objects lack `id`, `createdAt`, `updatedAt`, `createdBy`
- These fields are **absent** (not null) — code like `run.id` returns `undefined`
- Always add `&include=details` to every `/runs` request
### Server-Side vs Client-Side Filtering
- **Server-side** (use for filters that change the dataset): `tree` parameter works well
- **Client-side** (use for filtering within a loaded page): text search, status filtering within 25 loaded rows
- Changing a server-side filter should reset to offset 0 and re-fetch
- Changing a client-side filter should just re-render the current page
### Navigation Patterns
- **Prev/Next pagination** is better than numbered pages when you only load one page at a time
- Show "Showing 1–25 of 2,689" and `Previous` / `Next` buttons
- Don't show "Page 1 of 108" — you don't know how many pages exist without loading all data
- **Prev/Next within detail views**: when drilling into a run or task, provide prev/next buttons to navigate siblings without returning to the list
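The pagination rules above reduce to a small pure function over `offset`, the page length actually returned, and the API's `count`. A minimal sketch (the en dash and locale-grouped count match the "Showing 1–25 of 2,689" convention above):

```typescript
// Derive Prev/Next UI state from one loaded page plus the API count.
function pageState(
  offset: number,
  pageLen: number, // rows actually returned for this page
  count: number    // total matching records from the API response
): { label: string; hasPrev: boolean; hasNext: boolean } {
  const from = pageLen === 0 ? 0 : offset + 1;
  const to = offset + pageLen;
  return {
    label: `Showing ${from}\u2013${to} of ${count.toLocaleString("en-US")}`,
    hasPrev: offset > 0,
    hasNext: offset + pageLen < count,
  };
}
```

Note there is no "page X of Y" anywhere: `count` is enough to decide whether Next should be enabled, without pretending to know the full page list.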
### Run Status Is Misleading
`run.status` is almost always `"Started"` — even after the workflow has completed successfully. The engine does not reliably update run status to `"Complete"`. **Derive real status from triggers:**
```
GET /triggers?runId={id}&status=Failed&count=true
→ count > 0 = failed run
→ count = 0 = likely succeeded (check if all triggers are Closed)
```
For UI display, classify runs by checking their triggers rather than trusting `run.status`.
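The derivation reduces to two counts per run: failed triggers and still-open (non-Closed) triggers. A sketch of the classification step only — the failed count comes from the `GET /triggers?runId={id}&status=Failed&count=true` query above, while how you obtain the open count (e.g. a second count query per non-Closed status) is an assumption about your deployment:

```typescript
// Derive a display status from trigger counts instead of trusting
// run.status, which stays "Started" even after successful completion.
type DerivedStatus = "Failed" | "In Progress" | "Complete";

function deriveRunStatus(
  failedCount: number, // triggers with status=Failed
  openCount: number    // triggers not yet Closed
): DerivedStatus {
  if (failedCount > 0) return "Failed";     // any Failed trigger => failed run
  if (openCount > 0) return "In Progress";  // work still pending
  return "Complete";                        // all triggers Closed
}
```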
### Tree Type Classification via `sourceGroup`
The `sourceGroup` field on trees reveals the tree type without needing additional lookups:
| `sourceGroup` Pattern | Tree Type | Example |
|----------------------|-----------|---------|
| `"WebApis > {kapp-slug}"` | WebAPI tree | `"WebApis > services"` |
| UUID v4 format | Event-triggered tree | `"bee52c65-dbae-4959-894e-b659e59eaba1"` |
| Other (e.g., `"-"`) | Routine | `"-"` |
### Stuck Run Repair
When a run is stuck (Start node processed but downstream nodes never fire), manually create a trigger to advance past the stuck point:
```
POST /app/components/task/app/api/v2/runs/{runId}/triggers
{
"nodeId": "utilities_echo_v1_1",
"action": "Root",
"type": "Automatic",
"loopIndex": "/"
}
```
This creates a downstream trigger to resume execution from the specified node.
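Centralizing the repair payload keeps the magic values (`action: "Root"`, `loopIndex: "/"`) in one place. A sketch that builds the body shown above; the `fetch` usage and auth header in the comment are assumptions about your client, not part of the Task API spec:

```typescript
// Build the manual-trigger body for advancing a stuck run past the
// given node. Field values mirror the POST example above.
function repairTriggerBody(nodeId: string): {
  nodeId: string;
  action: "Root";
  type: "Automatic";
  loopIndex: "/";
} {
  return { nodeId, action: "Root", type: "Automatic", loopIndex: "/" };
}

// Usage sketch (network call, not executed here; auth is an assumption):
// await fetch(`${base}/runs/${runId}/triggers`, {
//   method: "POST",
//   headers: { "Content-Type": "application/json", Authorization: auth },
//   body: JSON.stringify(repairTriggerBody("utilities_echo_v1_1")),
// });
```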
### Task API PUT on Core API-Registered Workflows
The v7 docs warn that `PUT /trees/{title}` (Task API v2) on a workflow created via the Core API wipes `event`, `platformItemType`, and `platformItemId`, causing the workflow to disappear from the admin UI and stop firing. **This claim does not replicate on the current platform.** Verified May 2026: three different PUT body shapes (`{treeJson}` alone; `+event`; `+platformItemType+platformItemId`) all preserved registration metadata across consecutive PUTs, and the workflow continued to fire on subsequent submissions in each case.
For event-triggered workflows, the Core API path remains the recommended idiom: `PUT /kapps/{kapp}/workflows/{id}` with `{treeXml: "..."}` or `{treeJson: {...}}`. The Task API path is also reliable in current-platform tests but isn't the recommended idiom — leave it for WebAPI trees and routines, which don't have Core registration metadata to risk.
---
## Additional Task API Resources
### Engine Status
```bash
GET /engine
# Response: { "buildDate": "...", "status": "Running", "statusMessage": null, "version": "6.1.7" }
```
### Environment Info
```bash
GET /environment
# Response: { "System Information": { "Host": "...", "Java Version": "...", "Ruby Version": "..." }, "Server Information": { ... } }
```
### Categories (Handler Organization)
Handlers are organized into categories:
```bash
GET /categories
# Response: { "count": N, "categories": [{ "name": "System Controls", "description": "...", "type": "Integrated" }, ...] }
```
Types: `"Integrated"` (built-in system handlers), `"Stored"` (uploaded handlers).
### Sources (Workflow Trigger Sources)
Sources define where workflow triggers originate:
```bash
GET /sources?include=details
```
### Groups
Logical groupings for organizing trees and handlers.
### Policy Rules (Access Control)
Ruby-expression-based access rules for the Task API:
```bash
GET /policyRules
# Response: { "policyRules": [{ "name": "Admins", "rule": "@identity.get_property('spaceAdmin') == 'true'", "type": "API Access" }, ...] }
```
### Access Keys
API authentication keys for machine-to-machine access to the Task API (alternative to Basic Auth).
### Errors and System Errors
```bash
GET /errors?limit=10&include=details # Application-level errors (workflow failures)
GET /systemErrors?limit=10&include=details # System-level errors (infrastructure issues)
```
### Triggers
Triggers are execution records for workflow nodes — they represent scheduled or completed handler executions:
```bash
GET /triggers?include=details&limit=10
# Each trigger has: action, branchId, nodeId, nodeName, status, type, token, results, message
```