Documentation Index
Fetch the complete documentation index at: https://docs.prisme.ai/llms.txt
Use this file to discover all available pages before exploring further.
Analytics show how users interact with your published agents. Use these insights to improve agent quality and demonstrate value.
The Analytics Page
Open any agent and go to the Analytics section. You’ll see metrics for the selected time period.
Time Periods
Select a period to analyze:
- Today - Current day
- Last 7 days - Past week
- Last 30 days - Past month
- Last 90 days - Past quarter
- Custom - Specific date range
Key Metrics
Users & Engagement
| Metric | Description |
|---|---|
| Active Users | Unique users who chatted with the agent |
| Conversations | Total chat sessions started |
| Messages | Total messages exchanged (user-facing only, excludes tool-calling iterations) |
| LLM Calls | Total LLM invocations (includes tool-calling and async tasks) |
| Avg Messages/Conversation | Depth of conversations (shown as subtitle) |
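The engagement figures above can be derived from per-conversation message counts. A minimal sketch, assuming the input is simply a list of message counts, one per conversation (the input shape is an assumption, not the platform's actual schema):

```python
# Sketch: computing conversations, messages, and average depth from
# per-conversation message counts (assumed input shape).

def engagement_summary(messages_per_conversation):
    total = sum(messages_per_conversation)
    count = len(messages_per_conversation)
    return {
        "conversations": count,
        "messages": total,
        "avg_messages_per_conversation": total / count if count else 0.0,
    }
```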
Quality
| Metric | Description |
|---|---|
| Average Rating | User feedback score (1-5) |
| Ratings Count | Number of ratings received |
| Rating Distribution | Breakdown by star level |
| Resolution Rate | Percentage of resolved queries |
Performance
| Metric | Description |
|---|---|
| P50 Response Time | Median response latency |
| P95 Response Time | 95th percentile latency |
| Error Count | Failed requests |
| Error Rate | Percentage of failures |
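The performance metrics in this table can be sketched as a small aggregation over raw request records; the record shape (`duration_ms`, `failed`) is an assumption, not the actual schema:

```python
# Sketch: deriving P50/P95 latency, error count, and error rate from
# raw request records (field names are assumptions).

def percentile(sorted_values, p):
    """Simple percentile over a pre-sorted list (rounded rank)."""
    if not sorted_values:
        return None
    k = max(0, min(len(sorted_values) - 1, round(p / 100 * len(sorted_values)) - 1))
    return sorted_values[k]

def latency_metrics(records):
    durations = sorted(r["duration_ms"] for r in records)
    errors = sum(1 for r in records if r["failed"])
    return {
        "p50": percentile(durations, 50),
        "p95": percentile(durations, 95),
        "error_count": errors,
        "error_rate": errors / len(records) if records else 0.0,
    }
```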
Cost
| Metric | Description |
|---|---|
| Input Tokens | Tokens from user messages |
| Output Tokens | Tokens in agent responses |
| Total Cost | Estimated API cost |
| Carbon (kgCO2eq) | Estimated carbon footprint |
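Token-based cost estimation typically multiplies input and output token counts by per-model prices. A sketch with a purely illustrative pricing table (the model names and rates below are placeholders, not Prisme.ai's or any provider's actual pricing):

```python
# Illustrative per-model pricing in USD per 1M tokens; placeholder values only.
PRICING = {
    "model-a": {"input": 2.50, "output": 10.00},
    "model-b": {"input": 0.15, "output": 0.60},
}

def estimate_cost(model, input_tokens, output_tokens):
    # Total Cost = input tokens * input price + output tokens * output price
    price = PRICING[model]
    return (input_tokens * price["input"] + output_tokens * price["output"]) / 1_000_000
```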
Trend Charts
Visualize metrics over time:
- Users trend - Growth or decline in user base
- Conversations trend - Usage patterns by day
- Messages trend - Engagement over time
Tool Usage
See which capabilities are used most:
| Tool | Calls | Success Rate |
|---|---|---|
| Knowledge Search | 1,234 | 98% |
| Web Search | 567 | 95% |
| Calendar | 89 | 100% |
Use this to identify:
- Underutilized capabilities (consider removing)
- High-failure tools (investigate issues)
- Most valuable integrations
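A triage like the one above can be automated over the tool-usage breakdown. This sketch uses illustrative thresholds (`min_calls`, `min_success`) that you would tune to your own traffic:

```python
# Sketch: flagging underutilized and high-failure tools from usage stats.
# Thresholds are illustrative, not platform defaults.

def flag_tools(tool_stats, min_calls=50, min_success=0.97):
    """tool_stats maps tool name -> (calls, success_rate in 0..1)."""
    flags = {}
    for name, (calls, success_rate) in tool_stats.items():
        if calls < min_calls:
            flags[name] = "underutilized"
        elif success_rate < min_success:
            flags[name] = "high-failure"
    return flags
```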
Model Usage
If your agent uses multiple models, a breakdown is displayed showing:
- Model name - Each model used
- Calls - Number of LLM invocations per model
- Tokens - Total tokens consumed per model
- Cost - Estimated cost per model
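The per-model breakdown is a straightforward group-by over LLM call events; the event fields used here (`model`, `tokens`, `cost`) are assumptions about the real event shape:

```python
# Sketch: grouping LLM call events into a per-model usage breakdown.
from collections import defaultdict

def model_breakdown(events):
    agg = defaultdict(lambda: {"calls": 0, "tokens": 0, "cost": 0.0})
    for e in events:
        row = agg[e["model"]]
        row["calls"] += 1
        row["tokens"] += e["tokens"]
        row["cost"] += e["cost"]
    return dict(agg)
```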
Interpreting Results
Healthy Signs
- Steady or growing active users
- Ratings above 4.0
- Low error rates
- Good avg messages per conversation
Warning Signs
- Declining active users
- Ratings below 3.5
- Increasing error rates
- Very low messages per conversation (users giving up)
Action Items
Based on analytics, consider:
| Observation | Possible Action |
|---|---|
| Low engagement | Improve welcome message, add suggested prompts |
| High errors | Check tool configurations, review logs |
| Poor ratings | Review feedback, improve instructions |
| Slow responses | Consider faster model or simplify tools |
| Low tool usage | Update instructions to use tools more |
How It Works
Pipeline Overview
```
ES events                          agent_metrics (Mongo)         API response
─────────                          ─────────────────────         ────────────
agents.conversations.created ─┐                               ┌─ summary (cached)
analytics.agent.rated ────────┼─→ Bulk Aggregator ─→ hourly ──┤
analytics.llm.completion ─────┘       (cron)         daily    └─ series[]
                                                     summary
```
Analytics flow through three stages:
- Event emission - As agents are used, Elasticsearch events are emitted by agent-factory and llm-gateway
- Aggregation - A scheduled job reads these events and writes pre-aggregated rows into the agent_metrics collection
- Read - The analytics endpoint reads from agent_metrics, self-heals if needed, and returns the result
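The aggregation stage can be sketched as folding raw events into one pre-aggregated row per (agent, hour); all field names here are assumptions about the real schema:

```python
# Sketch: the aggregation stage, folding raw events into one row per
# (agent_id, hour). In production these rows would be written to the
# agent_metrics collection; field names are assumptions.
from collections import defaultdict
from datetime import datetime

def aggregate_hourly(events) -> dict[tuple[str, datetime], dict]:
    rows = defaultdict(lambda: {"llm_calls": 0, "tokens": 0, "cost": 0.0})
    for e in events:
        hour = e["timestamp"].replace(minute=0, second=0, microsecond=0)
        row = rows[(e["agent_id"], hour)]
        row["llm_calls"] += 1
        row["tokens"] += e["tokens"]
        row["cost"] += e["cost"]
    return dict(rows)
```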
Event Sources
| Event | Emitted by | Contains |
|---|---|---|
| agents.conversations.created | agent-factory | User ID, conversation ID, agent ID |
| analytics.agent.rated | agent-factory | Rating value (1-5) |
| analytics.llm.completion | llm-gateway | Model, token counts, cost, duration, tool names, call type |
Caching
Summary metrics are cached for 1 hour in the agent_metrics collection. With warm cache, response time drops from ~1.5s to ~100ms.
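A 1-hour TTL check of this kind might look like the following, assuming each cached row carries a `computed_at` timestamp (the field name is hypothetical):

```python
# Sketch: the 1-hour summary cache freshness check (field name assumed).
from datetime import datetime, timedelta, timezone

CACHE_TTL = timedelta(hours=1)

def is_summary_fresh(cached_row, now=None):
    now = now or datetime.now(timezone.utc)
    return now - cached_row["computed_at"] < CACHE_TTL
```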
Self-Healing
The analytics endpoint performs two checks:
- Current interval refresh: The current in-progress hour or day is re-aggregated if stale (15-minute threshold for hourly, 1-hour for daily)
- Gap detection: If fewer intervals exist than expected (e.g., cron missed a run), a full re-aggregation is triggered
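The two checks can be sketched as follows; the 15-minute and 1-hour thresholds come from the text above, while the function and parameter names are assumptions:

```python
# Sketch of the two self-healing checks described above.
from datetime import datetime, timedelta, timezone

STALE_AFTER = {"hourly": timedelta(minutes=15), "daily": timedelta(hours=1)}

def needs_refresh(granularity, last_aggregated_at, now=None):
    """Current-interval refresh: re-aggregate if the row is older than its threshold."""
    now = now or datetime.now(timezone.utc)
    return now - last_aggregated_at > STALE_AFTER[granularity]

def has_gaps(intervals_found, period_start, period_end, step):
    """Gap detection: fewer rows than expected intervals triggers full re-aggregation."""
    expected = int((period_end - period_start) / step)
    return intervals_found < expected
```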
Scheduled Jobs
| Job | Schedule | Description |
|---|---|---|
| Aggregate metrics | `30 1 * * *` (daily at 01:30 UTC) | Aggregates daily metrics for all active agents |
| Cleanup old metrics | `0 3 * * *` (daily at 03:00) | Deletes metric rows older than 30 days |
| Refresh agent counters | `*/15 * * * *` (every 15 min) | Updates per-agent conversation and message counters |
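To see when these expressions fire, here is a minimal matcher covering only the field forms used in the table (a number, `*`, or `*/n`); real cron syntax supports much more (ranges, lists, names):

```python
# Minimal cron matcher for the forms used above; not a full cron parser.
from datetime import datetime

def field_matches(field, value):
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return value == int(field)

def cron_matches(expr: str, dt: datetime) -> bool:
    # Only minute and hour are checked; the day fields in these schedules are all `*`.
    minute, hour, *_ = expr.split()
    return field_matches(minute, dt.minute) and field_matches(hour, dt.hour)
```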
Refreshing Data
Analytics are cached for performance. To see the latest data:
- Click the Refresh button
- Wait for metrics to update
Analytics may have a delay of up to 15 minutes for very recent activity.
You can also force a re-aggregation via the API: `POST /v1/agents/:agent_id/refresh-metrics` with an optional `hours_back` parameter (max 168 hours, i.e. one week).
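A stdlib-only sketch of that call; the base URL, bearer-token auth scheme, and leading `/v1` in the path are assumptions to substitute with your deployment's values:

```python
# Sketch: forcing a re-aggregation via the API. Base URL and auth are assumptions.
import json
import urllib.request

def build_refresh_request(base_url, agent_id, token, hours_back=None):
    payload = {"hours_back": min(hours_back, 168)} if hours_back is not None else {}
    return urllib.request.Request(
        f"{base_url}/v1/agents/{agent_id}/refresh-metrics",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
        method="POST",
    )

def refresh_metrics(base_url, agent_id, token, hours_back=None):
    req = build_refresh_request(base_url, agent_id, token, hours_back)
    with urllib.request.urlopen(req) as resp:  # network call
        return json.loads(resp.read())
```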
Exporting Data
Export analytics for reporting:
- Select your time period
- Click Export CSV
- Download the file
Privacy Considerations
Analytics are aggregated and anonymized:
- Individual conversations are not exposed
- User identities are not revealed in metrics
- Only owners and admins see analytics
For detailed conversation review, use Insights with appropriate permissions.
Next Steps
- Improve based on feedback - Use insights to refine your agent's instructions
- Insights - Deep dive into conversations and user feedback