The Problem
On Monday you tested the 3 prompts in ChatGPT. You saw how the chain works: analyze user behavior → score health → generate recommendations. But here's the reality: your CS team can't manually review 500 user profiles per day. One CSM spending 3 hours a day running prompts by hand costs about $90/day in labor. Multiply that across your team and you're looking at $27,000+ a year just on manual health scoring. Worse, the lag means you miss critical intervention windows: users churn before you even notice they're struggling.
See It Work
Watch the 3 prompts chain together automatically. This is what you'll build.
The Code
Four levels: start simple, add reliability, then scale to production. Pick where you are.
When to Level Up
Simple API Calls
0-100 users/month
- Direct OpenAI/Claude API calls
- Manual trigger (script or cron job)
- Results logged to CSV or simple database
- Email alerts for critical issues
- ~5 minutes per user
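At this level the whole pipeline can live in one script fired by cron. A minimal sketch, assuming the official openai Python SDK; fetch_user_events is a placeholder for your own data pull:

import csv
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def fetch_user_events(user_id):
    """Placeholder: pull the user's recent events from your own store."""
    return [{'name': 'login', 'count': 12}, {'name': 'report_created', 'count': 1}]

def score_user(events):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Score this user's health 0-100 and explain why. Events: {events}"}],
    )
    return response.choices[0].message.content

with open("health_scores.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for user_id in ["u_001", "u_002"]:  # your user list here
        writer.writerow([user_id, score_user(fetch_user_events(user_id))])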
Add Reliability Layer
100-1,000 users/month
- Exponential backoff retries
- Redis caching for user data
- PostgreSQL for results storage
- Segment/Mixpanel integration
- Automated email/in-app triggers
- ~30 seconds per user
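The core of the reliability layer is retrying every external call. A minimal, provider-agnostic backoff sketch you can wrap around your OpenAI/Claude calls:

import random
import time

def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Retry a flaky API call with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise
            # Waits ~1s, 2s, 4s, 8s... with jitter to avoid retry stampedes
            time.sleep(base_delay * 2 ** attempt + random.random())

# e.g. call_with_backoff(lambda: client.chat.completions.create(...))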
Framework Orchestration
1,000-5,000 users/month
- LangGraph state management
- Async parallel processing (10+ users concurrently)
- Message queue (RabbitMQ/SQS) for actions
- Comprehensive logging & monitoring
- A/B testing for interventions
- ~5 seconds per user
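As a sketch of what the orchestration looks like, here is the three-prompt chain expressed as a LangGraph state graph; call_llm is a placeholder for your real model call:

from typing import TypedDict
from langgraph.graph import StateGraph, END

class HealthState(TypedDict):
    user_id: str
    behavior: str
    score: str
    recommendations: str

def call_llm(prompt: str) -> str:
    """Placeholder for your actual model call."""
    return f"[LLM output for: {prompt[:40]}]"

def analyze(state: HealthState) -> dict:
    return {"behavior": call_llm(f"Summarize behavior for {state['user_id']}")}

def score(state: HealthState) -> dict:
    return {"score": call_llm(f"Score health 0-100: {state['behavior']}")}

def recommend(state: HealthState) -> dict:
    return {"recommendations": call_llm(f"Suggest interventions: {state['score']}")}

graph = StateGraph(HealthState)
graph.add_node("analyze", analyze)
graph.add_node("score", score)
graph.add_node("recommend", recommend)
graph.set_entry_point("analyze")
graph.add_edge("analyze", "score")
graph.add_edge("score", "recommend")
graph.add_edge("recommend", END)
app = graph.compile()

print(app.invoke({"user_id": "u_123"}))  # runs analyze -> score -> recommend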
Multi-Agent System
5,000+ users/month
- Specialized agents (engagement, adoption, churn prediction)
- Real-time event stream processing
- ML models for churn prediction
- Advanced segmentation & personalization
- Auto-scaling infrastructure
- Custom CSM dashboards
- ~1 second per user
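At this scale each agent owns one concern and subscribes only to the events it cares about. A hypothetical sketch of the routing layer (agent bodies and subscriptions here are illustrative stand-ins):

import asyncio

async def engagement_agent(event):   # logins, session depth
    print(f"engagement agent handling {event['type']}")

async def adoption_agent(event):     # breadth of feature usage
    print(f"adoption agent handling {event['type']}")

async def churn_agent(event):        # ML churn score + LLM explanation
    print(f"churn agent handling {event['type']}")

# Which event types each specialized agent subscribes to (illustrative)
SUBSCRIPTIONS = {
    engagement_agent: {'login', 'session_end'},
    adoption_agent: {'feature_used'},
    churn_agent: {'login', 'feature_used', 'plan_downgraded'},
}

async def dispatch(event):
    # Fan the event out, in parallel, to every agent subscribed to its type
    tasks = [agent(event) for agent, types in SUBSCRIPTIONS.items()
             if event['type'] in types]
    await asyncio.gather(*tasks)

asyncio.run(dispatch({'type': 'plan_downgraded', 'user_id': 'u_42'}))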
SaaS/Product-Specific Gotchas
Real challenges you'll hit when automating user health scoring. Here's how to handle them.
Event Data Inconsistencies
Normalize event names before sending them to the LLM. Create a mapping layer.
# Event normalization layer
EVENT_MAPPING = {
    'signup_completed': 'user_signed_up',
    'account_created': 'user_signed_up',
    'user_registered': 'user_signed_up',
    # Add more mappings
}
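A thin helper can then apply the map before events reach the LLM (a sketch; unmapped names pass through unchanged):

def normalize_events(events):
    """Rewrite raw event names to their canonical form."""
    return [{**e, 'name': EVENT_MAPPING.get(e['name'], e['name'])} for e in events]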
Time Zone Hell
Convert all timestamps to UTC before analysis. Store user's timezone for display purposes.
from datetime import datetime
import pytz

def standardize_timestamps(events, user_timezone='UTC'):
    """Convert all event timestamps to UTC (naive ISO-8601 strings assumed)."""
    user_tz = pytz.timezone(user_timezone)
    for event in events:
        local_time = user_tz.localize(datetime.fromisoformat(event['timestamp']))
        event['timestamp_utc'] = local_time.astimezone(pytz.utc).isoformat()
    return events
Feature Usage Counting
Define meaningful usage metrics. Group by sessions, dedupe rapid clicks.
from collections import defaultdict
from datetime import timedelta

def calculate_meaningful_usage(events, session_gap_minutes=30):
    """Group events into sessions and count meaningful usage (datetime timestamps assumed)."""
    gap = timedelta(minutes=session_gap_minutes)
    feature_sessions = defaultdict(list)  # feature -> last event time per session
    # Sort events by timestamp; a new session starts when the gap exceeds the threshold
    for event in sorted(events, key=lambda e: e['timestamp']):
        sessions = feature_sessions[event['feature']]
        if not sessions or event['timestamp'] - sessions[-1] > gap:
            sessions.append(event['timestamp'])
        else:
            sessions[-1] = event['timestamp']  # extend session, deduping rapid clicks
    return {feature: len(s) for feature, s in feature_sessions.items()}
Multi-Tenant Segmentation
Segment by company size, plan tier, industry. Use different scoring weights.
# Segment-specific health score weights
SCORING_WEIGHTS = {
    'startup': {
        'engagement': 0.40,    # Startups need high engagement
        'adoption': 0.30,
        'value_realization': 0.20,
        'collaboration': 0.10, # Less important for small teams
    },
    'enterprise': {            # Illustrative second segment
        'engagement': 0.25,
        'adoption': 0.25,
        'value_realization': 0.25,
        'collaboration': 0.25, # Cross-team usage signals stickiness
    },
}
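A small helper then applies the right weights for each segment (a sketch; it assumes component scores are already computed on a 0-100 scale):

def weighted_health_score(component_scores, segment='startup'):
    """Combine 0-100 component scores using the segment's weights."""
    weights = SCORING_WEIGHTS[segment]
    return round(sum(component_scores[k] * w for k, w in weights.items()))

# e.g. weighted_health_score({'engagement': 80, 'adoption': 60,
#                             'value_realization': 50, 'collaboration': 20})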
Rate Limiting & Costs
Batch processing with queues. Cache results. Use cheaper models for low-priority users.
import asyncio

class RateLimitedProcessor:
    def __init__(self, max_concurrent=10, requests_per_minute=50):
        self.max_concurrent = max_concurrent
        self.requests_per_minute = requests_per_minute
        self.semaphore = asyncio.Semaphore(max_concurrent)

    async def process(self, make_request):
        """Run one call under the concurrency cap, paced to the per-minute budget."""
        async with self.semaphore:
            result = await make_request()
            # Each slot pauses long enough that total throughput across all
            # max_concurrent slots stays at ~requests_per_minute
            await asyncio.sleep(60 * self.max_concurrent / self.requests_per_minute)
            return result