
Automate User Onboarding 🚀

Turn Monday's health-score prompts into production code

September 30, 2025
21 min read
💼 SaaS/Product · 🐍 Python + TypeScript · ⚡ 10 → 10,000 users/month

The Problem

On Monday you tested the 3 prompts in ChatGPT and saw how the chain works: analyze user behavior → score health → generate recommendations. But here's reality: your CS team can't manually review 500 user profiles per day. One CSM spending 3 hours a day running prompts by hand is roughly $90/day in labor, or about $27,000 a year per CSM on manual health scoring alone, before you multiply across the team. Worse, the lag means you miss critical intervention windows: users churn before you even notice they're struggling.

  • 3+ hours per day spent manually scoring users
  • 48-72 hrs of lag before at-risk users are detected
  • Can't scale beyond 50-100 users/month

See It Work

Watch the 3 prompts chain together automatically. This is what you'll build.


The Code

Three levels: start simple, add reliability, then scale to production. Pick where you are.

Basic = Quick start | Production = Full features | Advanced = Custom + Scale

Simple API Calls

Good for: 0-100 users/month | Setup time: 30 minutes

# Simple API Calls (0-100 users/month)
import openai
import json
import os
from datetime import datetime, timedelta
from typing import Dict, List, Optional

# Set your API key
openai.api_key = os.getenv('OPENAI_API_KEY')

def analyze_user_onboarding(user_data: Dict) -> Dict:
    """Chain the 3 prompts: analyze → score → recommend"""
    
    # Step 1: Analyze behavior and extract signals
    analysis_prompt = f"""Analyze this SaaS user's behavior and extract engagement signals.
(Excerpt: first 15 of 111 lines shown.)
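Only the opening of the script is shown above. To make the shape of the whole thing concrete, here is a minimal, self-contained sketch of the analyze → score → recommend chain using the current OpenAI Python SDK. The prompts are paraphrased from Monday's post, and the model name (gpt-4o-mini) and helper names are assumptions, not the article's 111-line implementation.

# Minimal sketch of the 3-prompt chain (illustrative; not the full 111-line script)
import json
import os
from typing import Dict

from openai import OpenAI

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

def _complete(prompt: str) -> str:
    """One chat-completion call; the model choice is an assumption."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,
    )
    return resp.choices[0].message.content

def run_onboarding_chain(user_data: Dict) -> Dict:
    """Chain the 3 prompts: analyze behavior → score health → recommend interventions."""
    # Step 1: extract engagement signals from raw usage data
    analysis = _complete(
        "Analyze this SaaS user's behavior and extract engagement signals.\n"
        f"User data (JSON): {json.dumps(user_data)}"
    )
    # Step 2: turn the signals into a health score
    score = _complete(
        "Given these engagement signals, assign a 0-100 onboarding health score "
        "and label the user healthy, at-risk, or critical:\n" + analysis
    )
    # Step 3: generate concrete next steps for the CS team
    recommendations = _complete(
        "Based on this health assessment, recommend the top 3 interventions "
        "for the CS team this week:\n" + score
    )
    return {"analysis": analysis, "score": score, "recommendations": recommendations}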

When to Level Up

Level 1: Simple API Calls (0-100 users/month)
  • Direct OpenAI/Claude API calls
  • Manual trigger (script or cron job)
  • Results logged to CSV or simple database
  • Email alerts for critical issues
  • ~5 minutes per user

Level 2: Add Reliability Layer (100-1,000 users/month)
  • Exponential backoff retries (see the sketch after this list)
  • Redis caching for user data
  • PostgreSQL for results storage
  • Segment/Mixpanel integration
  • Automated email/in-app triggers
  • ~30 seconds per user

Level 3: Framework Orchestration (1,000-5,000 users/month)
  • LangGraph state management
  • Async parallel processing (10+ users concurrently)
  • Message queue (RabbitMQ/SQS) for actions
  • Comprehensive logging & monitoring
  • A/B testing for interventions
  • ~5 seconds per user

Level 4: Multi-Agent System (5,000+ users/month)
  • Specialized agents (engagement, adoption, churn prediction)
  • Real-time event stream processing
  • ML models for churn prediction
  • Advanced segmentation & personalization
  • Auto-scaling infrastructure
  • Custom CSM dashboards
  • ~1 second per user
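The "exponential backoff retries" item in Level 2 is usually the first reliability piece worth adding. A minimal sketch using only the standard library; the helper name, parameters, and the run_onboarding_chain call in the usage comment are illustrative assumptions, not the article's code.

import random
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def with_backoff(call: Callable[[], T], max_attempts: int = 5, base_delay: float = 1.0) -> T:
    """Retry a flaky API call with exponential backoff plus jitter (illustrative helper)."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except Exception:
            # In production, retry only transient errors (rate limits, timeouts), not all exceptions.
            if attempt == max_attempts:
                raise
            # Wait 1s, 2s, 4s, ... plus up to 1s of jitter to avoid synchronized retries
            time.sleep(base_delay * 2 ** (attempt - 1) + random.random())

# Usage (hypothetical): health = with_backoff(lambda: run_onboarding_chain(user_data))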

SaaS/Product-Specific Gotchas

Real challenges you'll hit when automating user onboarding. Here's how to handle them.

Event Data Inconsistencies

Normalize event names before sending them to the LLM. Create a mapping layer.

Solution
# Event normalization layer
EVENT_MAPPING = {
    'signup_completed': 'user_signed_up',
    'account_created': 'user_signed_up',
    'user_registered': 'user_signed_up',
    # Add more mappings
}

def normalize_events(events):
    """Map raw event names onto one canonical vocabulary before prompting the LLM.
    (Sketch of how the mapping is applied; assumes each event is a dict with an 'event_name' key.)"""
    for event in events:
        event['event_name'] = EVENT_MAPPING.get(event['event_name'], event['event_name'])
    return events

Time Zone Hell

Convert all timestamps to UTC before analysis. Store user's timezone for display purposes.

Solution
from datetime import datetime
import pytz

def standardize_timestamps(events, user_timezone='UTC'):
    """Convert all event timestamps to UTC.
    (Sketch of the conversion; assumes each event carries an ISO-8601 'timestamp' string.)"""
    user_tz = pytz.timezone(user_timezone)

    for event in events:
        ts = datetime.fromisoformat(event['timestamp'])
        if ts.tzinfo is None:
            # Naive timestamps are assumed to be in the user's local timezone
            ts = user_tz.localize(ts)
        event['timestamp'] = ts.astimezone(pytz.utc).isoformat()
    return events

Feature Usage Counting

Define meaningful usage metrics. Group by sessions, dedupe rapid clicks.

Solution
from collections import defaultdict
from datetime import timedelta

def calculate_meaningful_usage(events, session_gap_minutes=30):
    """Group events into sessions and count meaningful usage"""
    feature_sessions = defaultdict(list)
    
    # Sort events by timestamp
(Excerpt: first 8 of 42 lines shown.)
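The excerpt stops before the grouping logic. A compact sketch of the same idea, independent of the article's 42-line version: sort events, start a new session for a feature whenever the gap since its previous event exceeds the threshold, and report sessions per feature instead of raw clicks. The 'feature' and 'timestamp' (datetime) keys are assumptions about the event shape.

from collections import defaultdict
from datetime import timedelta

def count_feature_sessions(events, session_gap_minutes=30):
    """Count distinct usage sessions per feature instead of raw click counts (illustrative sketch)."""
    gap = timedelta(minutes=session_gap_minutes)
    last_seen = {}               # feature -> timestamp of its most recent event
    sessions = defaultdict(int)  # feature -> number of sessions

    for event in sorted(events, key=lambda e: e['timestamp']):
        feature, ts = event['feature'], event['timestamp']
        # A new session starts if the feature is new or the gap since its last event is too large
        if feature not in last_seen or ts - last_seen[feature] > gap:
            sessions[feature] += 1
        last_seen[feature] = ts
    return dict(sessions)

Rapid repeat clicks inside the gap window collapse into a single session, which is usually what "meaningful usage" means for health scoring.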

Multi-Tenant Segmentation

Segment by company size, plan tier, industry. Use different scoring weights.

Solution
# Segment-specific health score weights
SCORING_WEIGHTS = {
    'startup': {
        'engagement': 0.40,  # Startups need high engagement
        'adoption': 0.30,
        'value_realization': 0.20,
        'collaboration': 0.10  # Less important for small teams
    },
(Excerpt: first 8 of 47 lines shown.)
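The remaining segments (mid-market, enterprise, etc.) follow the same shape with different weights, and applying them is just a weighted sum over the SCORING_WEIGHTS defined above. A sketch, assuming the dimension sub-scores are already on a 0-100 scale (the example numbers are hypothetical):

def weighted_health_score(sub_scores, segment):
    """Blend 0-100 dimension scores into one health score using the segment's weights (sketch)."""
    weights = SCORING_WEIGHTS.get(segment, SCORING_WEIGHTS['startup'])  # fall back to a default segment
    return round(sum(sub_scores[dim] * weight for dim, weight in weights.items()), 1)

# Example (hypothetical numbers):
# weighted_health_score({'engagement': 72, 'adoption': 55, 'value_realization': 40, 'collaboration': 80},
#                       segment='startup')  -> 61.3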

Rate Limiting & Costs

Batch processing with queues. Cache results. Use cheaper models for low-priority users.

Solution
import asyncio
from datetime import datetime, timedelta

class RateLimitedProcessor:
    def __init__(self, max_concurrent=10, requests_per_minute=50):
        self.max_concurrent = max_concurrent
        self.requests_per_minute = requests_per_minute
        self.semaphore = asyncio.Semaphore(max_concurrent)
(Excerpt: first 8 of 62 lines shown.)
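The rest of the class isn't shown; the core of the pattern is a semaphore that caps in-flight LLM calls while a whole batch is processed with asyncio.gather. A stripped-down sketch of that pattern (class and method names are illustrative, not the article's implementation):

import asyncio

class ThrottledScorer:
    """Caps concurrent LLM calls for a batch of users (illustrative sketch)."""

    def __init__(self, max_concurrent=10):
        self.semaphore = asyncio.Semaphore(max_concurrent)

    async def score_user(self, user_data):
        async with self.semaphore:  # at most max_concurrent requests in flight
            # In production: check a cache first, and route low-priority users to a cheaper model.
            await asyncio.sleep(0.1)  # placeholder for the real async API call
            return {'user_id': user_data.get('user_id'), 'health_score': None}

    async def score_batch(self, users):
        return await asyncio.gather(*(self.score_user(u) for u in users))

# Usage (hypothetical):
# results = asyncio.run(ThrottledScorer(max_concurrent=10).score_batch(users))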

Adjust Your Numbers

Calculator inputs (defaults): 500 items/day (adjustable 10 to 5,000) · 5 min per item (1 to 60 min) · $50/hr labor rate ($15 to $200/hr)

❌ Manual Process
  • Time per item: 5 min
  • Cost per item: $4.17
  • Daily volume: 500 items
  • Daily cost: $2,083
  • Monthly cost: $45,833
  • Yearly cost: $550,000

✅ AI-Automated
  • Time per item: ~2 sec
  • API cost per item: $0.02
  • Human review (10% of items): $0.42 per item on average
  • Daily cost: $218
  • Monthly cost: $4,803
  • Yearly cost: $57,640

You Save
  • Daily savings: $1,865 (≈90% cost reduction)
  • Monthly savings: $41,030
  • Yearly savings: $492,360
💡 ROI payback: Typically 1-2 months for basic implementation
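The calculator's arithmetic is easy to reproduce. A short sketch that matches the figures above; the 22 working days per month and 12 months per year multipliers are assumptions inferred from the displayed totals:

# Reproduce the ROI figures above (assumes 22 working days/month, 12 months/year)
DAILY_VOLUME = 500          # items per day
MINUTES_PER_ITEM = 5        # manual handling time
HOURLY_RATE = 50            # $/hr for a CSM
API_COST_PER_ITEM = 0.02    # $ per automated run
REVIEW_RATE = 0.10          # fraction of items still reviewed by a human

manual_per_item = MINUTES_PER_ITEM / 60 * HOURLY_RATE                # ~$4.17
manual_daily = DAILY_VOLUME * manual_per_item                        # ~$2,083
auto_per_item = API_COST_PER_ITEM + REVIEW_RATE * manual_per_item    # ~$0.44
auto_daily = DAILY_VOLUME * auto_per_item                            # ~$218

monthly_savings = (manual_daily - auto_daily) * 22                   # ~$41,030
yearly_savings = monthly_savings * 12                                # ~$492,360
print(round(monthly_savings), round(yearly_savings))
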
💼 Want This Running in Your SaaS Platform?

We build custom user onboarding AI systems that integrate with your analytics stack, automate health scoring, and trigger personalized interventions at scale.

© 2026 Randeep Bhatia. All Rights Reserved.

No part of this content may be reproduced, distributed, or transmitted in any form without prior written permission.