Testing and Iteration

Test workflows in the editor, work with Run History and Output, and iterate before creating a workflow deployment

Use the editor run flow to check a workflow before you save a version or create a workflow deployment. The workflow editor separates in-dashboard testing from live invocation, so you can iterate on steps and inputs without publishing changes first.

Overview

Workflow testing in the editor centers on Run Workflow, Run History, and Start a new Workflow Run. The run sheet shows recent completions and the latest output for the workflow you are editing.

You can also run an individual step from the canvas when you want a smaller test while you are still editing the workflow.

How do I start a workflow run in the editor?

Open the workflow in the editor.

Click Run Workflow in the header.

This opens Run History. It does not start the workflow by itself.

In Run History, click Start.

Fetch Hive opens Start a new Workflow Run. Click Start Workflow Run to begin the run.

If your Start step has inputs, the modal shows a field for each variable so you can enter test values before the run starts.

Where do I see results from a workflow run?

After the run starts, keep Run History open.

Use the History tab to review the completions associated with the latest run.

Use the Output tab to inspect the workflow's current final output.

The output view can render different result types: image outputs render as images, PDF outputs render as a download link, and other results render directly in the sheet.

If no output is available yet, the sheet shows a waiting state until the run finishes.
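The Output tab's behavior described above can be summarized as a simple dispatch on the result type. This is purely a reader's mental model, not Fetch Hive's implementation; the type labels are made up for illustration.

```python
def render_output(result_type):
    """Hypothetical sketch of the Output tab: images render inline,
    PDFs render as a download link, anything else renders directly,
    and a missing result shows the waiting state."""
    if result_type is None:
        return "waiting"            # run has not finished yet
    if result_type == "image":
        return "inline image"
    if result_type == "pdf":
        return "download link"
    return "rendered directly"
```

For example, a workflow whose final step produces a PDF would surface a download link rather than inline content.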

How do I test start inputs and repeat runs while iterating?

Define your variables on the Start step first.

Each time you open Start a new Workflow Run, Fetch Hive uses those Start inputs to build the run form.

Change the input values in the modal when you want to test the same workflow against a different case without changing the workflow structure itself.

This is the safest way to repeat runs while you are tuning prompt content, step settings, or failure behavior.
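The iteration loop above, one workflow run per test case with only the input values changing, can be pictured like this. The `run_workflow` function and the test-case values are stand-ins invented for illustration; they model clicking Start Workflow Run, not a real Fetch Hive API.

```python
# Hypothetical test cases; each dict plays the role of one pass
# through the Start a new Workflow Run modal with different values.
test_cases = [
    {"topic": "pricing page", "audience": "marketing"},
    {"topic": "changelog", "audience": "developers"},
]

def run_workflow(inputs):
    """Stand-in for clicking Start Workflow Run: the workflow
    structure stays fixed while only the input values vary."""
    return {"inputs": inputs, "status": "completed"}

results = [run_workflow(case) for case in test_cases]
```

Keeping the workflow structure fixed across cases is what makes the results comparable while you tune prompts and step settings.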

How do I test a single step while I edit?

Hover over a step node on the canvas.

Click Run Step to start a test for that step.

For AI Prompt steps, the step settings header also includes Run, and the right side of the sheet shows the prompt messages and model response for that step test.

Use step tests for quicker iteration when you do not need to run the full workflow every time.

What do the editor save states mean while I test?

The workflow editor header can show transient save states such as editing, saving, and saved while you change step settings.

Treat these as editor state indicators, not version checkpoints. Your changes can be saved in the editor without being saved as a named workflow version yet.

Use Save Version when you are happy with the workflow state you just tested and want a checkpoint you can compare or deploy later.

See also: Creating and Editing, and Publishing and Versioning
