Key Analytics Features
Usage Metrics
Track conversations, users, and document generation across all agents
Token Consumption
Monitor input and output token usage with detailed breakdowns
Cost Analysis
Track expenditure with detailed cost attribution by agent and model
Performance Trends
Identify usage patterns and performance changes over time
Agent Comparison
Compare effectiveness and efficiency across your AI agent portfolio
Custom Date Ranges
Analyze data across flexible time periods from 1 day to 12 months
Analytics Dashboards
AI Knowledge Analytics offers multiple dashboards to help you understand different aspects of your AI deployment:
- Usage Analytics
- Cost Dashboard
- Agent Analytics
- Performance Optimization
The Usage Analytics dashboard provides detailed insights into how your agents are being utilized:
Key Metrics
- Generated Responses: Total number of AI responses generated across all agents
- End Users: Number of unique users interacting with your agents
- Created Documents: Total documents generated by your agents
- Total Tokens Count: Aggregate token usage across all conversations
- Input Tokens: Tokens consumed by user queries and context
- Output Tokens: Tokens generated in AI responses
Using Analytics Effectively
1
Access Analytics
Navigate to the Analytics section from your AI Knowledge dashboard.
2
Select time period
Choose your desired timeframe for analysis using the date selectors. Options include standard periods (1 day, 7 days, 30 days, 12 months) or custom date ranges.
3
Review key metrics
Examine the main performance indicators for your agents. Pay special attention to significant changes or trends in usage and costs.
4
Drill down into specific agents
Click on individual agents to see detailed performance metrics. Compare agents to identify best practices and improvement opportunities.
Best Practices for Analytics
Regular Reviews
Schedule weekly or monthly analytics reviews to track performance trends
Benchmark Agents
Compare similar agents to establish performance benchmarks
Token Optimization
Identify and optimize high token consumption scenarios
User Feedback Correlation
Connect analytics data with user feedback for deeper insights
Cost Allocation
Use analytics to allocate AI costs to appropriate departments
Continuous Improvement
Implement regular optimizations based on analytics insights
Token Optimization Strategies
Based on analytics insights, consider these strategies to optimize token usage and costs:
1
Knowledge base refinement
Streamline knowledge bases to include only the most relevant information.
2
Prompt engineering
Refine system prompts and instructions to be more efficient.
3
Model selection
Choose the most cost-effective model for each use case.
4
Context window management
Optimize how much context is included in each interaction.
Custom usage analytics
Both LLM and embeddings usage are tracked by usage events, persisted with an aggPayload custom mapping to enable numeric aggregations in Elasticsearch/OpenSearch requests.
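As a rough sketch, a custom mapping along these lines would make the aggPayload fields aggregatable as numbers; the field names shown (inputTokens, outputTokens, cost) are illustrative assumptions, not the actual schema:

```json
{
  "mappings": {
    "properties": {
      "type": { "type": "keyword" },
      "agentId": { "type": "keyword" },
      "aggPayload": {
        "properties": {
          "inputTokens": { "type": "long" },
          "outputTokens": { "type": "long" },
          "cost": { "type": "double" }
        }
      }
    }
  }
}
```

Mapping token counts as `long` and costs as `double` (rather than leaving them as dynamically mapped text) is what allows sum, avg, and similar numeric aggregations in search requests.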
Example usage:
Search request body
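A search request body in this spirit might sum token usage per agent over usage events; this is a hedged sketch, and the aggPayload field names and the `type`/`agentId` fields are assumptions rather than the documented schema:

```json
{
  "size": 0,
  "query": { "term": { "type": "usage" } },
  "aggs": {
    "by_agent": {
      "terms": { "field": "agentId" },
      "aggs": {
        "input_tokens": { "sum": { "field": "aggPayload.inputTokens" } },
        "output_tokens": { "sum": { "field": "aggPayload.outputTokens" } },
        "total_cost": { "sum": { "field": "aggPayload.cost" } }
      }
    }
  }
}
```

Setting `"size": 0` skips returning raw event documents, so the response contains only the aggregated token and cost totals per agent.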