

The Playground is your testing environment. Chat with your agent, see how it uses tools, and identify areas for improvement before publishing.

Opening the Playground

  1. Open any agent in Agent Creator
  2. Go to the Playground section
  3. Start typing in the message input

The Chat Interface

The Playground provides a full chat experience:
  • Message input - Type your message and press Enter or click Send
  • Conversation history - See the full conversation thread
  • File attachments - Upload files for the agent to process (if supported)
  • New conversation - Start fresh with the button in the header

Understanding Responses

Agent responses may include:

Text

The agent’s direct response to your message. This is what users see.

Tool Calls

When the agent uses a capability, you’ll see:
  1. Tool name - Which capability was called
  2. Parameters - What inputs were provided
  3. Result - What the tool returned (expandable)
This transparency helps you understand how the agent reasons and whether it’s using tools correctly.
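The three parts of a tool call can be pictured as a small record. A minimal sketch, assuming a generic name/parameters/result shape (the field names are illustrative, not Prisme.ai's actual schema):

```python
# Hypothetical shape of a tool-call record as shown in the Playground.
# Field names are illustrative, not Prisme.ai's actual schema.
tool_call = {
    "tool": "search_knowledge_base",                        # 1. which capability was called
    "parameters": {"query": "refund policy"},               # 2. what inputs were provided
    "result": {"documents": 3, "top_match": "refunds.md"},  # 3. what the tool returned
}

def summarize(call: dict) -> str:
    """One-line summary, like the collapsed view before you expand the result."""
    return f"{call['tool']}({call['parameters']}) -> {call['result']}"

print(summarize(tool_call))
```

Reading these records in order shows the chain of reasoning: which tool the agent reached for, with what inputs, and what it got back.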

Thinking (for advanced profiles)

Full Agent and Orchestrator profiles show reasoning:
  • Planning - How the agent breaks down the task
  • Reflection - Self-evaluation of progress
  • Decision points - Why the agent chose certain actions

Testing Scenarios

Use the Playground to test:
  • Happy paths - Typical use cases where everything works as expected. Verify the agent handles common requests correctly.
  • Edge cases - Unusual requests or inputs. See how the agent handles ambiguity, incomplete information, or unexpected questions.
  • Error conditions - What happens when tools fail or information isn’t available? The agent should handle errors gracefully.
  • Out-of-scope requests - Ask about things the agent shouldn’t handle. It should politely decline or redirect rather than making things up.
  • Multi-turn conversations - Test conversations that span multiple messages. Does the agent maintain context and remember what was discussed?
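To keep track of which of these categories you have actually exercised, a simple checklist works. A minimal sketch (there is no Prisme.ai API involved here; the scenario names just paraphrase the list above):

```python
# Hypothetical manual-testing checklist mirroring the scenarios above.
# Purely local bookkeeping; not part of any Prisme.ai API.
SCENARIOS = [
    "happy path: common requests handled correctly",
    "edge cases: ambiguity, incomplete or unexpected input",
    "error conditions: tool failures, missing information",
    "out of scope: declines or redirects instead of inventing answers",
    "multi-turn: context maintained across messages",
]

def coverage(tested: set) -> float:
    """Fraction of scenario categories exercised at least once."""
    return len(tested & set(SCENARIOS)) / len(SCENARIOS)

print(f"{coverage({SCENARIOS[0], SCENARIOS[4]}):.0%}")  # two of five covered
```

Aiming for full coverage before publishing catches the failure modes users would otherwise find first.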

Tips for Effective Testing

Test incrementally

After making changes to instructions or capabilities, test immediately. This makes it easier to identify what caused any issues.

Use realistic inputs

Test with the kind of messages real users would send, including typos, incomplete sentences, and varying levels of detail.

Try to break it

Actively try to confuse the agent or get it to behave incorrectly. This reveals weaknesses before users find them.

Note what works

When the agent handles something well, note why. This helps you reinforce good patterns in your instructions.

Conversation History

Each Playground conversation is temporary - it’s cleared when you start a new conversation or leave the page. To preserve test conversations:
  1. Note down interesting exchanges manually
  2. Use the Evaluate feature to create test cases from good examples
  3. Export conversations if needed for documentation

File Handling

If your agent supports file attachments:
  1. Click the attachment icon in the message input
  2. Select files to upload
  3. Send your message
Supported file types depend on your agent’s capabilities:
  • Documents - PDF, Word, text files
  • Images - PNG, JPEG (if vision is enabled)
  • Data - CSV, JSON, Excel
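A quick way to pre-check an upload against these categories is an extension lookup. A minimal sketch; the extension lists are illustrative, and actual support depends on your agent's capabilities:

```python
# Hypothetical helper mapping uploads to the categories listed above.
# Extension sets are illustrative; real support depends on the agent.
SUPPORTED = {
    "documents": {".pdf", ".docx", ".txt"},
    "images": {".png", ".jpg", ".jpeg"},   # only if vision is enabled
    "data": {".csv", ".json", ".xlsx"},
}

def category_of(filename: str):
    """Return the matching category, or None if the type isn't listed."""
    suffix = "." + filename.rsplit(".", 1)[-1].lower()
    for cat, exts in SUPPORTED.items():
        if suffix in exts:
            return cat
    return None

print(category_of("report.PDF"))  # documents
print(category_of("notes.md"))    # None
```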

From Testing to Evaluation

When you find important scenarios in the Playground, turn them into test cases:
  1. Go to the Evaluate section
  2. Click Create Test Case
  3. Enter the user input and expected behavior
  4. Save the test case
This builds a regression test suite so you can verify your agent continues to work correctly as you make changes.
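A test case in this workflow pairs a user input with the behavior you expect. A minimal sketch of that structure, assuming the two fields from the Create Test Case form (not Prisme.ai's actual schema):

```python
# Hypothetical structure for Playground exchanges promoted to test cases.
# Fields mirror the Create Test Case form; not Prisme.ai's actual schema.
from dataclasses import dataclass

@dataclass
class TestCase:
    user_input: str         # the message you sent in the Playground
    expected_behavior: str  # what a correct response should do

suite = [
    TestCase("What is your refund policy?",
             "Cites the knowledge base instead of guessing"),
    TestCase("Book me a flight",
             "Politely declines: out of scope for this agent"),
]

for case in suite:
    print(f"- {case.user_input!r}: {case.expected_behavior}")
```

Accumulating cases like these after each Playground session is what turns ad-hoc testing into a repeatable regression suite.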

Debugging Issues

If the agent isn’t behaving as expected:
| Symptom | Likely Cause | Fix |
| --- | --- | --- |
| Ignores tools | Instructions don’t mention when to use them | Update instructions to specify tool usage |
| Wrong tool choice | Tool descriptions are unclear | Improve tool descriptions in Capabilities |
| Hallucinates facts | No knowledge base, or not using it | Add a knowledge base and instruct the agent to search it first |
| Too verbose | No guidance on response length | Add “be concise” to instructions |
| Too terse | Asked for brevity but overdid it | Specify a minimum detail level |
| Forgets context | Profile doesn’t support session memory | Upgrade to Light Agent or higher |

Next Steps

Create evaluations

Build test cases to systematically measure agent quality

Refine instructions

Improve behavior based on what you learned in testing