Add-on AI agents - Advanced

After you create or update an advanced AI agent, you can test that it functions as intended before making it live. From within the add-on, you can access a test widget to safely test the overall customer experience or specific dialogue flows.

Tip: If you’re part of the EAP for agentic AI agents, see Creating generative procedures for agentic AI agents (EAP) to learn how to test procedures.

This article contains the following topics:

  • Testing end-to-end AI agent conversations
  • Testing specific dialogues in AI agents
  • Reviewing predicted use cases or intents

Testing end-to-end AI agent conversations

You can use the Test AI agent tool to simulate the customer experience of interacting with the AI agent as it is currently configured. The test starts from the welcome reply and applies all currently active settings, including instructions. Test conversations are recorded in conversation logs for later review.

To test an AI agent

  1. In AI agents - Advanced, in the top-right corner, use the AI agent drop-down field to select the AI agent you want to test.
  2. Click Test AI agent. The test widget appears.

  3. Use the widget to start a conversation with the AI agent.
  4. To start the conversation over, click the options menu icon in the test widget and select Restart Chat.
  5. To go directly to the logged conversation, click the options menu icon in the test widget and select Open in Conversation Logs.

Testing specific dialogues in AI agents

While using the dialogue builder to create conversation flows for your AI agent, you can test the dialogue you’re working on. You can also test a branch of the dialogue starting from a specific block.

To test a dialogue

  1. Create a new dialogue, or open a dialogue for editing.
  2. In the top-right, click Test dialogue.
  3. In the Session parameters dialog, do one of the following:
    • To test the flow using a specific parameter, select a Parameter and enter a Value, then click Test.
    • To ignore all previously set parameters, click Test without parameters.
    Tip: To remember your selection for next time, deselect Ask me every time.

  4. In the test widget that appears, test the dialogue by sending messages to the AI agent.
  5. To change session parameters during a test, click the options menu icon in the test widget and select Change parameters.

  6. To start the conversation over, click the options menu icon in the test widget and select Restart Chat.
  7. To go directly to the logged conversation, click the options menu icon in the test widget and select Open in Conversation Logs.

To test a branch of a dialogue

  1. In the dialogue builder, hover over the block you want to start from and click Test branch.

    You can start from any type of block except for Customer message and Link to blocks.

Reviewing predicted use cases or intents

In any test mode, you can review the top use cases or intents that the AI agent predicted for a customer message, including confidence level. If you’re using a zero-training or agentic AI agent, you see use case predictions. If you’re using an expression-based AI agent, you see intent predictions.

To review top predicted intents or use cases
  1. In the test widget, hover over a customer message.
  2. Click the View predicted intents icon to the left of the message. The top predictions dialog appears.
