The Playground is your testing environment. Chat with your agent, see how it uses tools, and identify areas for improvement before publishing.
Opening the Playground
- Open any agent in Agent Creator
- Go to the Playground section
- Start typing in the message input
The Chat Interface
The Playground provides a full chat experience:
- Message input - Type your message and press Enter or click Send
- Conversation history - See the full conversation thread
- File attachments - Upload files for the agent to process (if supported)
- New conversation - Start fresh with the button in the header
Understanding Responses
Agent responses may include:
Text
The agent’s direct response to your message. This is what users see.
Tool Calls
When the agent uses a capability, you’ll see:
- Tool name - Which capability was called
- Parameters - What inputs were provided
- Result - What the tool returned (expandable)
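For example, a tool call surfaced in the Playground might carry data shaped like this. This is a minimal sketch: the field names and the searchKnowledgeBase tool are invented for illustration, not Prisme.ai's actual API.

```typescript
// Hypothetical shape of a tool call as displayed in the Playground.
// Field names and the example tool are illustrative assumptions.
interface ToolCall {
  tool: string;                         // which capability was called
  parameters: Record<string, unknown>;  // inputs the agent provided
  result: unknown;                      // what the tool returned (expandable)
}

const example: ToolCall = {
  tool: "searchKnowledgeBase",
  parameters: { query: "refund policy", topK: 3 },
  result: { matches: ["Refunds are accepted within 30 days..."] },
};
```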
Thinking (for advanced profiles)
Full Agent and Orchestrator profiles show reasoning:
- Planning - How the agent breaks down the task
- Reflection - Self-evaluation of progress
- Decision points - Why the agent chose certain actions
Testing Scenarios
Use the Playground to test:
Happy path
Typical use cases where everything works as expected. Verify the agent handles common requests correctly.
Edge cases
Unusual requests or inputs. See how the agent handles ambiguity, incomplete information, or unexpected questions.
Error handling
What happens when tools fail or information isn’t available? The agent should handle errors gracefully.
Out of scope
Ask about things the agent shouldn’t handle. It should politely decline or redirect rather than making things up.
Multi-turn conversations
Test conversations that span multiple messages. Does the agent maintain context and remember what was discussed?
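As a sketch, a scenario checklist for a hypothetical customer-support agent might pair one prompt with each category above. The agent and all prompts are invented for illustration:

```typescript
// Hypothetical test prompts, one per scenario category.
const scenarioPrompts = [
  { category: "happy path", prompt: "What is the status of order #12345?" },
  { category: "edge case", prompt: "my ordr never arived??" },            // typos, no order number
  { category: "error handling", prompt: "Track order #99999" },           // assumes the lookup fails
  { category: "out of scope", prompt: "Write me a poem about taxes." },
  { category: "multi-turn", prompt: "And what about the other one I mentioned?" },
];
```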
Tips for Effective Testing
Test incrementally
After making changes to instructions or capabilities, test immediately. This makes it easier to identify what caused any issues.
Use realistic inputs
Test with the kind of messages real users would send, including typos, incomplete sentences, and varying levels of detail.
Try to break it
Actively try to confuse the agent or get it to behave incorrectly. This reveals weaknesses before users find them.
Note what works
When the agent handles something well, note why. This helps you reinforce good patterns in your instructions.
Conversation History
Each Playground conversation is temporary - it’s cleared when you start a new conversation or leave the page. To preserve test conversations:
- Note down interesting exchanges manually
- Use the Evaluate feature to create test cases from good examples
- Export conversations if needed for documentation
File Handling
If your agent supports file attachments:
- Click the attachment icon in the message input
- Select files to upload
- Send your message
Supported file types include:
- Documents - PDF, Word, text files
- Images - PNG, JPEG (if vision is enabled)
- Data - CSV, JSON, Excel
From Testing to Evaluation
When you find important scenarios in the Playground, turn them into test cases:
- Go to the Evaluate section
- Click Create Test Case
- Enter the user input and expected behavior
- Save the test case
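A test case essentially pairs a user input with the behavior you expect. A minimal sketch, assuming a simple record shape (not Prisme.ai's actual schema):

```typescript
// Minimal sketch of a test case captured from a Playground exchange.
// The shape is an assumption for illustration; the real schema may differ.
interface TestCase {
  input: string;            // the user message to replay
  expectedBehavior: string; // what a correct response should do
}

const fromPlayground: TestCase = {
  input: "Can I get a refund after 45 days?",
  expectedBehavior: "Cites the 30-day refund policy and politely declines.",
};
```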
Debugging Issues
If the agent isn’t behaving as expected:

| Symptom | Likely Cause | Fix |
|---|---|---|
| Ignores tools | Instructions don’t mention when to use them | Update instructions to specify tool usage |
| Wrong tool choice | Tool descriptions are unclear | Improve tool descriptions in Capabilities |
| Hallucinates facts | No knowledge base or not using it | Add knowledge base and instruct to search first |
| Too verbose | No guidance on response length | Add “be concise” to instructions |
| Too terse | Asked for brevity but overdid it | Specify minimum detail level |
| Forgets context | Profile doesn’t support session memory | Upgrade to Light Agent or higher |
Next Steps
Create evaluations
Build test cases to systematically measure agent quality
Refine instructions
Improve behavior based on what you learned in testing