AI Prompt

Configure a workflow step that runs an LLM prompt with messages and returns text or structured output.

Use AI Prompt when you want a workflow step to send a prompt to an LLM and store the assistant response for later steps.

Configuration

Option | Required | Description
Name | No | Label for the step in the workflow canvas.
Charge Type | No | Whether the step uses Hosted or Personal model billing.
LLM Model | Yes | The model that runs the prompt.
Temperature | No | Controls how predictable or creative the response is, for models that use temperature.
Max Thinking Tokens | No | Sets the thinking-token budget for Claude thinking models.
Reasoning Effort | No | Sets the reasoning effort for supported reasoning models.
Max Tokens | No | Sets the maximum response length for the step.
Prompt | Yes | Prompt messages in the Prompt section. Each message has a role and content.
Response Format | No | Lets you enable JSON Schema for supported providers.
JSON Schema | No | Structured response schema used when JSON Schema is enabled.
When the step fails | No | Whether the workflow should Terminate Workflow or Continue if this step fails.

The AI Prompt step uses a wider settings sheet than the other workflow step types. The left side holds the prompt configuration, and the right side shows prompt messages and model responses from test runs in the same view.

In Parameters, the exact controls change with the selected model:

  • Temperature is shown for standard models.

  • Max Thinking Tokens is shown for Claude thinking models.

  • Reasoning Effort is shown for supported reasoning models.

In Prompt, click Add Message to build the message list. The first new message is a system message. After that, new messages default to user. Each message supports variable insertion from the workflow with the Insert Variable button.
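The message list built this way is a simple role/content sequence. A minimal sketch, assuming a plain list-of-dicts shape (the step's actual internal format is not documented here):

```python
# Illustrative only: a hypothetical representation of the message list
# built with Add Message. The step's internal format is not documented
# here; this just shows the role/content structure described above.
messages = [
    {"role": "system", "content": "You are a concise summarizer."},
    {"role": "user", "content": "Summarize this article: {{step_1.output}}"},
]

# The first message added defaults to system; later ones default to user.
roles = [m["role"] for m in messages]
print(roles)  # ['system', 'user']
```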

If your selected provider supports structured output, you can enable JSON Schema in Response Format. This opens a schema editor where you can set a schema name, load an example, and save the JSON schema the model must follow.
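The editor's exact saved format isn't shown here, but JSON Schema itself is standard. A hedged sketch of a schema this step might use, written as a Python dict, with the schema name and field names (title, summary) chosen purely for illustration:

```python
# Hypothetical JSON Schema definition for a structured summary response.
# The "name"/"schema" wrapper and the field names are illustrative only;
# define whatever fields your workflow actually needs in the schema editor.
summary_schema = {
    "name": "article_summary",  # schema name set in the editor
    "schema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "summary": {"type": "string"},
        },
        "required": ["title", "summary"],
    },
}

# A structured response satisfying this schema would look like:
response = {"title": "Example", "summary": "A short summary."}
missing = [f for f in summary_schema["schema"]["required"] if f not in response]
print(missing)  # []
```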

The assistant role is available in the message editor, but the step still runs as a single prompt execution and stores the final assistant response as the step output.

Output

This step stores the final assistant response as the step output.

Use the variable picker to insert the exact reference path for a previous prompt step. In templates and later steps, the base reference follows the pattern shown in the picker, for example {{step_1.output}}.

If the response is plain text, reference the full value directly, for example {{step_1.output}}.

If JSON Schema is enabled and the model returns a structured object, you can reference nested fields, for example {{step_1.output.title}}.

The exact available keys depend on the schema and the model response shown in the variable picker.
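As an illustration of how a nested field reference maps onto a structured step output (the real resolution is done by the workflow engine; this sketch only assumes the {{...}} syntax used on this page):

```python
import re

# Illustrative resolver for {{step.path}}-style references.
# The workflow engine performs this internally; this only demonstrates
# how a dotted reference walks into a structured step output.
outputs = {
    "step_1": {"output": {"title": "Example", "summary": "A short summary."}}
}

def resolve(template: str) -> str:
    def lookup(match: re.Match) -> str:
        value = outputs
        for key in match.group(1).split("."):
            value = value[key]
        return str(value)
    return re.sub(r"\{\{(.+?)\}\}", lookup, template)

print(resolve("Title was: {{step_1.output.title}}"))  # Title was: Example
```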

Example

1. Add AI Prompt from the workflow step picker.

2. Set Name to something like Summarize article.

3. In Parameters, choose your Charge Type and LLM Model. Adjust Temperature, Reasoning Effort, or Max Thinking Tokens if those controls are shown for your model.

4. In Prompt, add a system message that explains the task and a user message that inserts earlier workflow data such as {{step_1.output}}.

5. If you need structured output, enable JSON Schema and define fields such as title and summary.

6. Click Run in the step header to test the step. The response appears in the right-hand panel, and later steps can reference that output with the variable picker.

Notes

  • Use the step identifier shown in the variable picker when you reference this step in later fields.

  • The Clear action removes previous model responses from the test panel. It does not remove your prompt messages.

  • The step output is the final assistant response, not the full request metadata.

  • When JSON Schema is enabled, later steps can reference the returned object by field instead of parsing raw text.

See also: Creating and Editing, Testing and Iteration, and Use your own API Key
