Testing and Iteration

Best practices for testing and iterating on your Workflows

Testing and iteration are fundamental steps in creating a production-ready Workflow. In this section, we outline best practices for testing and iterating on your Workflow.

How to Test Your Workflow

There are two ways to test your Workflow: one step at a time, or all steps at once.

When you test one step at a time, each step's output value is updated as soon as that step runs. When you test all steps at once, step values are updated only if the entire run succeeds; if any step errors, none of the values are updated. As a result, the best practice is to test one step at a time until your Workflow is production-ready.

Test One Step

  • Add example data: if you haven't already run the full Workflow, add example data for now so you can confirm the step runs successfully

  • Click Run Test: this executes the selected step

  • Use the logs to understand the step's output: logs are critical for understanding the data structure of the step's output, which you will need when referencing this step in subsequent steps (see the sketch after this list)

  • Design the next step(s): continue designing your Workflow by referencing previous steps with the yellow variable helper button

  • Continue testing each subsequent step: after designing a step, click Run Test to verify it. Repeat this process until you have a production-ready Workflow
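
Step outputs are often nested JSON, and it is easy to lose track of the exact path to the field you need. As a rough illustration (the payload below is invented for the example, not an actual Fetch Hive log), a small Python helper can list every leaf path in a log you paste in:

```python
import json

def leaf_paths(value, prefix=""):
    """Recursively yield (dotted_path, value) for every leaf in a JSON object."""
    if isinstance(value, dict):
        for key, child in value.items():
            yield from leaf_paths(child, f"{prefix}.{key}" if prefix else key)
    elif isinstance(value, list):
        for i, child in enumerate(value):
            yield from leaf_paths(child, f"{prefix}[{i}]")
    else:
        yield prefix, value

# Hypothetical step output pasted from the logs -- the real structure
# depends entirely on the step type you are inspecting.
log_output = json.loads("""
{
  "query": "fetch hive",
  "results": [
    {"title": "Example result", "url": "https://example.com"}
  ]
}
""")

for path, value in leaf_paths(log_output):
    print(f"{path} = {value!r}")
# query = 'fetch hive'
# results[0].title = 'Example result'
# results[0].url = 'https://example.com'
```

Knowing the exact path to each field makes it much easier to reference the right value in the next step.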

Test your entire Workflow

To get started, click Run at the top right of the navbar. From here you can view the history of the Workflow's most recent run, or start a new run.

Use the Run feature when your Workflow is ready:

  1. Click Start Run, which opens a popup where you can enter the "Start Inputs" for the Workflow.

  2. Input your test values: add the values you want the run to use.

  3. Once ready, click Start Workflow Run to begin a full Workflow run.

Error Handling Best Practices

When building a Workflow for production, there are edge cases you should test for and handle before running at scale.

1. Context Window Limit

Every LLM has a maximum context window, meaning there is a maximum number of tokens your prompt can contain before the model errors out.

One token generally corresponds to ~4 characters, or about 3/4 of a word.

To check that your prompt stays within the context window limit, you can approximate its token count by copying and pasting the prompt into OpenAI's token counter:

https://platform.openai.com/tokenizer
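
If you prefer to estimate token counts in code, a minimal sketch using OpenAI's tiktoken library (the same tokenizer behind the page above) might look like this; exact counts still vary by model:

```python
# pip install tiktoken -- recent versions include the gpt-4o encoding.
import tiktoken

def count_tokens(text: str, model: str = "gpt-4o") -> int:
    """Return the number of tokens `text` would use for the given model."""
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))

prompt = "Summarize the following article in three bullet points: ..."
print(count_tokens(prompt))  # compare this against the model's context window
```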

2. Prompt Formatting Errors

Many production Workflows depend on consistent output formatting from the large language model. To ensure consistency in your Workflow, consider the following methods for handling unexpected output from the prompt:

  1. Prompting Techniques:

    • To prevent the model from introducing its answer with a phrase such as "Sure, here is a...", add a command to both the System and User prompts such as "Output your answer in the following format, and do not include any additional commentary:"

    • If your output is HTML or JSON, to prevent the model from starting with ```html or ```json, tell the model "Start with #..." or "Start with {, and end with }, and do not include ```json in your response"

    • If you want to guarantee that a step outputs JSON, consider using the JSON Schema configuration option (only available for OpenAI models); see the sketch after this list

    • If you're struggling to build a consistent prompt framework initially, use the "Improve with AI" feature to get started with a strong prompt baseline for delivering consistent results
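
Two of these techniques are easy to see outside the Workflow builder. The sketch below assumes the official OpenAI Python SDK; the parse_model_json helper and the classification schema are invented for illustration, not part of Fetch Hive. It shows defensively stripping stray code fences before parsing a model's JSON output, and roughly what the JSON Schema option corresponds to at the OpenAI API level:

```python
import json
import re

from openai import OpenAI

def parse_model_json(raw: str) -> dict:
    """Defensively strip Markdown code fences before parsing model output."""
    cleaned = re.sub(r"^```(?:json|html)?\s*|\s*```$", "", raw.strip())
    return json.loads(cleaned)

print(parse_model_json('```json\n{"sentiment": "positive"}\n```'))
# -> {'sentiment': 'positive'}

# The JSON Schema option maps to OpenAI's structured outputs. Outside the
# builder, the raw API call looks roughly like this (schema is illustrative):
client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Classify the sentiment of: 'Great product!'"}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "classification",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {"sentiment": {"type": "string"}},
                "required": ["sentiment"],
                "additionalProperties": False,
            },
        },
    },
)
print(response.choices[0].message.content)  # valid JSON matching the schema
```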
