Learn how to track your AI agent's performance, user engagement, costs, and system health with Flutch's analytics tools.
Two Levels of Analytics
Flutch provides analytics at two distinct levels:
Agent-Level Analytics (Debugging Focus)
What: Detailed information about individual conversations and messages
Where: Message Audit page (/agents/{agentId}/message-audit)
Use for:
- Debugging specific issues
- Understanding individual user interactions
- Tracking conversation quality
- Optimizing prompts and responses
See: Debugging Guide for details
Company-Level Analytics (Business Focus)
What: Aggregated metrics across all agents and users
Where: Analytics Dashboard (/analytics)
Use for:
- Business performance monitoring
- Cost tracking and forecasting
- User engagement trends
- Agent comparison
- System health monitoring
This article focuses on company-level analytics.
Accessing Company Analytics
- Log in to console.flutch.ai
- Navigate to Analytics in the main menu
- Select time period (Day / Week / Month / Custom)
URL: https://console.flutch.ai/analytics
Key Metrics Overview
User Engagement Metrics
DAU (Daily Active Users)
- Users who sent at least one message today
- Tracks daily engagement
- Good for spotting trends
WAU (Weekly Active Users)
- Users who sent at least one message in last 7 days
- Tracks weekly engagement
- Smooths out daily fluctuations
MAU (Monthly Active Users)
- Users who sent at least one message in last 30 days
- Tracks monthly engagement
- Key business metric
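If you pull raw message events out of Flutch (for example via the export API described later in this article), the active-user counts above are easy to reproduce. A minimal Python sketch, assuming each event is a dict with a `user_id` and a `timestamp` (field names are illustrative):

```python
from datetime import datetime, timedelta

def active_users(events, now, window_days):
    """Count distinct users who sent at least one message in the window."""
    cutoff = now - timedelta(days=window_days)
    return len({e["user_id"] for e in events if e["timestamp"] >= cutoff})

# Hypothetical event log: one record per message sent by a user.
events = [
    {"user_id": "u1", "timestamp": datetime(2025, 1, 20, 9, 15)},
    {"user_id": "u2", "timestamp": datetime(2025, 1, 18, 14, 2)},
    {"user_id": "u1", "timestamp": datetime(2025, 1, 2, 11, 40)},
]

now = datetime(2025, 1, 20, 23, 59)
print(active_users(events, now, 1))    # DAU -> 1
print(active_users(events, now, 7))    # WAU -> 2
print(active_users(events, now, 30))   # MAU -> 2
```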
Total Sessions
- Number of conversation sessions
- Session = continuous interaction (ends after 30 minutes of inactivity; see the sketch below)
- Measures conversation frequency
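Sessions are derived rather than stored: messages are grouped using the 30-minute inactivity rule above. A minimal sketch of that grouping, assuming you have one user's message timestamps already sorted in ascending order:

```python
from datetime import datetime, timedelta

def count_sessions(timestamps, gap=timedelta(minutes=30)):
    """Group one user's sorted message timestamps into sessions.

    A new session starts whenever the silence since the previous
    message exceeds the inactivity threshold (30 minutes here).
    """
    sessions = 0
    last = None
    for ts in timestamps:
        if last is None or ts - last > gap:
            sessions += 1
        last = ts
    return sessions

msgs = [datetime(2025, 1, 20, 9, 0), datetime(2025, 1, 20, 9, 10),
        datetime(2025, 1, 20, 14, 0)]   # 4h50m gap -> new session
print(count_sessions(msgs))             # 2
```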
Average Session Duration
- How long users engage with the agent
- Longer = more engaging conversations
- Benchmark: 5-15 minutes for support agents
Messages per Session
- Average number of messages in a conversation
- Higher = more complex queries or multi-turn conversations
- Benchmark: 3-8 messages for typical support
Agent Performance Metrics
Active Agents
- Number of agents that received messages
- Tracks which agents are actually being used
- Helps identify unused agents
Agent Comparison
- Side-by-side performance comparison
- Messages, sessions, costs per agent
- Identify top performers
Response Time
- Average time to generate response
- Includes model latency + tool execution
- Target: < 3 seconds for good UX
Error Rate
- Percentage of messages that failed
- Target: < 1% error rate
- Spikes indicate system issues
Cost Metrics
Total Costs
- Sum of all LLM API costs
- Broken down by:
- Provider (OpenAI, Anthropic, Google, Azure)
- Agent
- Time period
Cost per Agent
- How much each agent costs to run
- Helps budget allocation
- Identify expensive agents
Cost per Message
- Average cost per message
- Benchmark: $0.01 - $0.05 for typical agents
- Higher for complex agents with tools
Cost per User
- Average cost per active user
- Key metric for business model viability
- Helps set pricing
Token Usage
- Total tokens consumed (prompt + completion)
- Broken down by agent and model
- Helps optimize costs
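Token usage is what drives the cost metrics: each model has a price per prompt token and per completion token, and estimated cost is usage multiplied by those rates. A simplified sketch of that calculation; the per-token prices below are placeholders, not Flutch's or any provider's actual rates:

```python
# Placeholder prices in USD per 1,000 tokens -- illustrative only,
# check your provider's current pricing.
PRICING = {
    "gpt-4-turbo":   {"prompt": 0.01,   "completion": 0.03},
    "gpt-3.5-turbo": {"prompt": 0.0005, "completion": 0.0015},
}

def message_cost(model, prompt_tokens, completion_tokens):
    """Estimate the LLM cost of a single message from its token counts."""
    rates = PRICING[model]
    return (prompt_tokens / 1000) * rates["prompt"] \
         + (completion_tokens / 1000) * rates["completion"]

# Example: 400 prompt tokens + 150 completion tokens on gpt-4-turbo
print(round(message_cost("gpt-4-turbo", 400, 150), 4))  # 0.0085
```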
System Health Metrics
Uptime
- Percentage of time system was available
- Target: > 99.9%
Request Success Rate
- Percentage of requests that succeeded
- Target: > 99%
P95 Response Time
- 95th percentile response time
- Measures "slow requests"
- Target: < 5 seconds
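P95 is a percentile, not an average: 95% of requests complete faster than this value, so a few very slow requests can push it up even when the median looks healthy. A quick sketch of computing it from raw response times:

```python
import math

def percentile(values, pct):
    """Nearest-rank percentile: the smallest value >= pct% of the samples."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

latencies = [1.2, 1.5, 1.8, 2.0, 2.1, 2.4, 2.9, 3.1, 4.8, 9.7]  # seconds
print(percentile(latencies, 50))  # 2.1 -- the median looks fine
print(percentile(latencies, 95))  # 9.7 -- one slow request dominates P95
```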
Queue Depth
- Number of pending requests
- High queue = system overload
- Should be near 0 most of the time
Time Period Selection
Predefined Periods
Today
- Last 24 hours
- Good for: Real-time monitoring, daily check-ins
This Week
- Last 7 days
- Good for: Weekly trends, week-over-week comparison
This Month
- Last 30 days
- Good for: Monthly reports, business reviews
Custom Range
- Select start and end date
- Good for: Specific analysis, comparing periods
Month Switcher
Navigate through historical data:
```
◄ November 2024 | December 2024 | January 2025 ►
```
Use for:
- Year-over-year comparison
- Seasonal pattern analysis
- Historical trend review
Analytics Dashboard Sections
Overview Section
Top-level KPIs:
```
Daily Active Users:  1,234    (↑ 12% vs last week)
Total Sessions:      5,678    (↑ 8% vs last week)
Active Agents:       12       (→ unchanged)
Total Costs:         $234.56  (↑ 15% vs last week)
```
Quick insights:
- Are users growing?
- Is usage increasing?
- Are costs under control?
User Engagement Section
Charts:
Daily Active Users Trend
```
1500│            ╭─╮
1200│       ╭───╯ ╰╮
 900│   ╭───╯      ╰─╮
 600│╭──╯             ╰──
 300│╯
    └────────────────────────────
     Mon  Tue  Wed  Thu  Fri  Sat  Sun
```
Session Duration Distribution
```
< 1 min:   ████████          20%
1-5 min:   ████████████████  40%
5-10 min:  ██████████        25%
10+ min:   ██████            15%
```
Messages per Session
```
Average: 5.2 messages

Distribution:
1-2:   ████          15%
3-5:   ████████████  45%
6-10:  ██████████    35%
10+:   █              5%
```
Agent Performance Section
Agent Comparison Table
| Agent Name | Sessions | Messages | Avg Duration | Cost | Error % |
|---|---|---|---|---|---|
| Support Agent | 2,345 | 12,456 | 8m 32s | $123.45 | 0.5% |
| Sales Agent | 1,234 | 5,678 | 6m 15s | $67.89 | 0.3% |
| Onboarding | 789 | 3,456 | 12m 45s | $45.67 | 1.2% |
| FAQ Bot | 456 | 1,234 | 2m 10s | $12.34 | 0.1% |
Click any agent to drill down into agent-specific analytics.
Response Time Chart
```
5s│        ╭╮
4s│       ╭╯╰╮
3s│    ╭──╯  ╰╮
2s│╭───╯      ╰──╮
1s│╯              ╰──
  └──────────────────────
   12am   6am   12pm   6pm
```
Insights:
- Which agents are most used?
- Which are most expensive?
- Which have quality issues (high error rate)?
Cost Analysis Section
Cost Breakdown by Provider
```
OpenAI:     $150.23  (64%)  ████████████████
Anthropic:  $65.12   (28%)  ███████
Google:     $15.45   (7%)   ██
Azure:      $3.76    (1%)   ▌
```
Cost Trend Over Time
```
$300│               ╭─
$250│           ╭───╯
$200│       ╭───╯
$150│   ╭───╯
$100│──╯
    └────────────────────────────────
     Week 1   Week 2   Week 3   Week 4
```
Cost per Agent (Top 5)
```
Support Agent:      $123.45  █████████████████
Sales Agent:        $67.89   ██████████
Onboarding:         $45.67   ███████
Technical Support:  $34.56   █████
FAQ Bot:            $12.34   ██
```
Cost Efficiency Metrics
- Cost per message: $0.042
- Cost per session: $0.21
- Cost per user: $1.85
Forecast
```
Current spending:   $234/week
Projected monthly:  ~$1,014  ($234 × 52 weeks ÷ 12 months)
Trend:              +15% week-over-week
```
Token Usage Section
Total Tokens Used
```
Total: 5,234,567 tokens

Breakdown:
Prompt tokens:      3,145,678  (60%)
Completion tokens:  2,088,889  (40%)
```
Tokens by Agent
| Agent | Prompt | Completion | Total | Cost |
|---|---|---|---|---|
| Support Agent | 1.2M | 800K | 2.0M | $123.45 |
| Sales Agent | 650K | 450K | 1.1M | $67.89 |
| Onboarding | 500K | 350K | 850K | $45.67 |
Tokens by Model
```
gpt-4-turbo:    2,345,678  (45%)  █████████
gpt-3.5-turbo:  1,234,567  (24%)  █████
claude-3:       1,000,000  (19%)  ████
gemini-pro:     654,322    (12%)  ███
```
Average Tokens per Message
```
Overall average: 523 tokens/message

By agent:
Support Agent:  612 tokens  (high context)
Sales Agent:    445 tokens  (medium context)
FAQ Bot:        123 tokens  (low context)
```
System Health Section
Uptime Dashboard
```
Last 24 hours:  99.98%  ████████████████████  ✅
Last 7 days:    99.95%  ████████████████████  ✅
Last 30 days:   99.92%  ████████████████████  ✅
```
Downtime incidents: 2 (total 4.5 minutes)
Error Rate
```
Current:        0.5%  ✅ Good
Last hour:      0.3%  ✅ Excellent
Last 24 hours:  0.7%  ✅ Good
Last 7 days:    1.2%  ⚠️ Acceptable
```
Response Time Distribution (P50/P95/P99)
```
P50 (median):           1.8s  ✅ Fast
P95 (95th percentile):  3.2s  ✅ Good
P99 (99th percentile):  5.1s  ⚠️ Acceptable
```
Request Volume
```
Current load:  45 req/min   (normal)
Peak today:    123 req/min  (during lunch)
Capacity:      500 req/min  (healthy headroom)
```
Using Analytics for Optimization
Scenario 1: High Costs
Symptom: Monthly costs are $2,000, higher than expected
Analysis:
- Go to Cost Analysis section
- Check "Cost per Agent"
- Find: Support Agent costs $1,200 (60% of total)
- Click into Support Agent details
- Check token usage: 850 tokens/message average
- Check Message Audit for sample conversations
Root cause: Agent includes full conversation history (100+ messages)
Solution:
- Limit conversation history to the last 20 messages (see the sketch below)
- Implement conversation summarization
- Switch to a cheaper model (e.g., gpt-3.5-turbo) for simple queries
Result: Cost drops to $800/month (60% reduction)
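What the history limit looks like depends on how you assemble prompts, but the idea is to stop sending the entire conversation on every turn. A hedged sketch, assuming you build the message list yourself before calling the model (names like `MAX_HISTORY` are illustrative):

```python
MAX_HISTORY = 20  # keep only the most recent turns

def build_messages(system_prompt, history, user_message):
    """Assemble the message list sent to the model, with history capped.

    `history` is the full list of prior {"role", "content"} dicts; only
    the last MAX_HISTORY are included, which bounds prompt tokens (and
    therefore cost) no matter how long the conversation runs.
    """
    recent = history[-MAX_HISTORY:]
    return (
        [{"role": "system", "content": system_prompt}]
        + recent
        + [{"role": "user", "content": user_message}]
    )
```

A further refinement is to summarize the dropped turns into a single system note instead of discarding them entirely.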
Scenario 2: Low Engagement
Symptom: Only 50 DAU, expected 200+
Analysis:
- Check User Engagement section
- See: Average session duration is 45 seconds
- See: 80% of sessions have only 1-2 messages
- Go to Message Audit
- Sample conversations show users leaving quickly
Root cause: Agent responses are generic and unhelpful
Solution:
- Improve system prompt with specific examples
- Add knowledge base with relevant docs
- Enable tools for better answers
- Add acceptance tests for quality
Result: DAU increases to 180, avg session grows to 4 minutes
Scenario 3: Poor Performance
Symptom: Users complain about slow responses
Analysis:
- Check System Health section
- See: P95 response time is 8.5 seconds
- Check Agent Performance section
- See: Support Agent has 12s average response time
- Check Message Audit for slow messages
- See: External API tool takes 10+ seconds
Root cause: Weather API tool is very slow
Solution:
- Cache weather data (5-minute TTL); see the sketch below
- Add a timeout to the tool (3 seconds max)
- Show a loading indicator in chat
- Consider removing the slow tool
Result: P95 drops to 2.8 seconds, user satisfaction improves
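The caching and timeout fixes can live in a thin wrapper around the slow tool. A sketch of the idea, assuming a `fetch_weather(city)` function that calls the external API (the function name, TTL, and timeout are illustrative):

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

CACHE_TTL = 300    # seconds (5-minute cache)
TOOL_TIMEOUT = 3   # seconds (hard cap on the external call)

_cache = {}        # city -> (fetched_at, data)
_executor = ThreadPoolExecutor(max_workers=4)

def cached_weather(city, fetch_weather):
    """Serve cached weather if fresh; otherwise fetch with a hard timeout."""
    now = time.time()
    hit = _cache.get(city)
    if hit and now - hit[0] < CACHE_TTL:
        return hit[1]                     # cache hit: no API call at all

    future = _executor.submit(fetch_weather, city)
    try:
        data = future.result(timeout=TOOL_TIMEOUT)
    except TimeoutError:
        # Fall back to stale data (or a graceful message) instead of hanging.
        return hit[1] if hit else {"error": "weather service timed out"}

    _cache[city] = (now, data)
    return data
```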
Scenario 4: Scaling Issues
Symptom: Error rate spikes to 5% during peak hours
Analysis:
- Check System Health section
- See: Errors concentrated between 12pm-2pm
- Check request volume: 450 req/min (near capacity)
- Check queue depth: spikes to 50+ pending requests
Root cause: Insufficient capacity during lunch rush
Solution:
- Scale up backend servers during peak hours
- Implement rate limiting (graceful degradation; see the sketch below)
- Add caching for common queries
- Consider async response mode for slow queries
Result: Error rate drops to 0.5%, smooth experience during peaks
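Graceful degradation usually means turning excess requests away quickly (with a friendly retry message) instead of letting them time out deep in the stack. A toy token-bucket limiter as a sketch of that idea; the rates are illustrative:

```python
import time

class TokenBucket:
    """Allow `rate` requests per second with short bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # caller responds "busy, try again" instead of erroring

bucket = TokenBucket(rate=8, capacity=20)   # ~480 req/min with bursts of 20
```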
Exporting Analytics Data
Export to CSV
```bash
# Export last 30 days
flutch analytics export --period 30d --format csv > analytics.csv

# Export a specific date range
flutch analytics export --start 2025-01-01 --end 2025-01-31 --format csv
```
CSV includes:
- Date
- DAU, WAU, MAU
- Total sessions
- Total messages
- Total costs
- Cost per message
- Average response time
- Error rate
Use for:
- Excel analysis
- Business reports
- Historical tracking
- Forecasting models
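Once exported, the CSV loads straight into pandas (or Excel) for week-over-week analysis. A small sketch; the column names are assumptions based on the field list above, so adjust them to match the actual header row of your export:

```python
import pandas as pd

# Column names are assumptions based on the field list above;
# adjust them to match the header row of your actual export.
df = pd.read_csv("analytics.csv", parse_dates=["date"])

weekly = df.set_index("date").resample("W").agg(
    {"dau": "mean", "total_sessions": "sum", "total_costs": "sum"}
)
weekly["cost_change_pct"] = weekly["total_costs"].pct_change() * 100
print(weekly.tail(4))
```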
Export to JSON
```bash
# Full data export
flutch analytics export --period 30d --format json > analytics.json
```
JSON includes:
- All metrics
- Agent-level breakdown
- Hourly/daily granularity
- Cost breakdown by provider
- Token usage details
Use for:
- Custom dashboards
- BI tool integration
- Data warehouse
- API integration
Scheduled Reports
Configure automated reports:
- Analytics → Settings → Reports
- Choose frequency: Daily / Weekly / Monthly
- Select recipients (email addresses)
- Choose format: PDF / CSV / Both
- Save
Example weekly report:
```
Subject: Flutch Weekly Analytics Report - Jan 15-21, 2025

Summary:
- DAU: 1,234 (+12% vs last week)
- Sessions: 5,678 (+8%)
- Costs: $234.56 (+15%)
- Error rate: 0.5% (stable)

Top agents:
1. Support Agent: 2,345 sessions
2. Sales Agent: 1,234 sessions
3. Onboarding: 789 sessions

[Full report attached as CSV]
```
Setting Up Alerts
Configure alerts for important events:
Cost Alerts
```
Alert:  Daily costs exceed $50
Notify: [email protected]
Action: Email + Slack notification
```
Performance Alerts
```
Alert:  P95 response time > 5s for 10 minutes
Notify: [email protected]
Action: Email + PagerDuty
```
Error Rate Alerts
```
Alert:  Error rate > 2% for 5 minutes
Notify: [email protected]
Action: Email + SMS
```
Usage Alerts
```
Alert:  DAU drops below 500
Notify: [email protected]
Action: Email
```
Best Practices
1. Check Analytics Daily
Establish routine:
- Every morning: Check overnight metrics
- Look for anomalies (spikes or drops)
- Review error rate
- Check costs vs budget
2. Weekly Deep Dive
Every week:
- Compare week-over-week trends
- Review agent performance
- Analyze cost efficiency
- Identify optimization opportunities
3. Monthly Business Review
Every month:
- Export full analytics
- Create executive summary
- Review against goals
- Plan next month's focus
4. Set Baselines and Goals
Establish targets:
- Target DAU: 1,000
- Target cost per user: $1.50
- Target error rate: < 1%
- Target response time: < 3s
Track progress toward goals.
5. Correlate with Changes
When making changes:
- Note date of deployment
- Monitor analytics closely
- Compare before/after metrics
- Document lessons learned
6. Use A/B Testing
Test improvements:
- Run two agent versions
- Split traffic 50/50
- Compare metrics after 1 week
- Deploy winning version
7. Monitor Cost Trends
Watch for cost creep:
- Set budget alerts
- Review token usage monthly
- Optimize expensive agents
- Consider cheaper models where appropriate
Troubleshooting Analytics
"Analytics not updating"
- Data updates every 5 minutes
- Refresh page to see latest
- Check if agent is receiving messages
- Verify time period selection
"Missing data for certain days"
- System maintenance windows (announced)
- Data pipeline issues (rare)
- Contact support if the issue persists
"Costs don't match provider bill"
- Flutch shows estimates in real-time
- Provider bills are monthly and exact
- Differences < 5% are normal
- Large differences indicate an issue - contact support
"Metrics seem incorrect"
- Verify time zone settings
- Check agent filter (all vs specific)
- Ensure comparing same time periods
- Clear browser cache and reload
Integration with BI Tools
Connect to Tableau/PowerBI
- Use the export API:

  ```bash
  curl https://api.flutch.ai/v1/analytics/export \
    -H "Authorization: Bearer your-token" \
    -d '{"period": "30d", "format": "json"}'
  ```

- Set up a scheduled job to fetch data daily (see the sketch below)
- Load the data into your BI tool
- Create custom dashboards
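The same endpoint can be called from a scheduled job (cron, Airflow, and so on) instead of curl. A minimal Python sketch mirroring the curl call above; it assumes your token is available in the `FLUTCH_API_TOKEN` environment variable:

```python
import json
import os
import requests

API_URL = "https://api.flutch.ai/v1/analytics/export"

def fetch_analytics(period="30d"):
    """Fetch the JSON analytics export, mirroring the curl example above."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['FLUTCH_API_TOKEN']}"},
        json={"period": period, "format": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    with open("analytics.json", "w") as fh:
        json.dump(fetch_analytics(), fh, indent=2)
```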
Webhooks for Real-Time Data
Configure a webhook to receive analytics events:
json{ "event": "analytics.daily_summary", "data": { "date": "2025-01-20", "dau": 1234, "sessions": 5678, "costs": 234.56, "error_rate": 0.005 } }
Send to:
- Slack
- Custom dashboard
- Data warehouse
- Monitoring system
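On the receiving side, the webhook is just an HTTP POST carrying the JSON payload shown above, so any small handler can accept it and forward the numbers wherever you need them. A sketch using Flask, with the forwarding step left as a stub:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/flutch-analytics", methods=["POST"])
def handle_analytics_event():
    payload = request.get_json(force=True)
    if payload.get("event") == "analytics.daily_summary":
        data = payload["data"]
        # Forward to Slack, a dashboard, or a warehouse table here.
        print(f"{data['date']}: DAU={data['dau']}, costs=${data['costs']}")
    return {"status": "ok"}, 200

if __name__ == "__main__":
    app.run(port=8000)
```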
Next Steps
- Debug Issues: see the Debugging Guide when analytics show problems
- Optimize Costs: Review agent settings and token usage
- Improve Engagement: Use session data to enhance user experience
- Scale Confidently: Monitor health metrics as you grow
Pro Tip: Set up a weekly analytics review meeting with your team to stay on top of trends!
Screenshots Needed
TODO: Add screenshots for:
- Analytics dashboard overview
- User engagement charts
- Cost breakdown section
- Agent comparison table
- System health dashboard