
Automate Community Management 🚀

Turn Monday's 3 prompts into 24/7 engagement automation

September 16, 2025
🚀 Growth ⚡ TypeScript + Python 📈 100 → 10,000 members

The Problem

On Monday you tested the 3 prompts in ChatGPT. You saw how the chain works: detect toxic content → moderate with context → suggest engagement. But here's the reality: you can't be online 24/7. You miss important conversations. Trolls post at 3am. Your best members get ignored because you're drowning in notifications.

  • 4-6 hours: daily time spent moderating manually
  • 60%: important conversations missed at night/weekends
  • 500 members: max scale before burnout

See It Work

Watch the 3 prompts chain together automatically. This is what you'll build for your Discord/Slack community.

The Code

Three levels: start with a simple Discord bot, add a database and multi-platform support, then scale to orchestrated workflows. Pick where you are.

Level 1: Simple Discord Bot

Good for: 0-500 members | Setup time: 30 minutes

// Simple Discord Bot (0-500 members)
import { Client, GatewayIntentBits, Message } from 'discord.js';
import Anthropic from '@anthropic-ai/sdk';

const discord = new Client({
  intents: [
    GatewayIntentBits.Guilds,
    GatewayIntentBits.GuildMessages,
    GatewayIntentBits.MessageContent,
  ],
});

const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY!,
});

interface ModerationResult {
  toxicity_score: number;
  action: 'none' | 'warn' | 'timeout' | 'ban';
  reasoning: string;
}

async function moderateMessage(message: Message): Promise<ModerationResult> {
  // Step 1: Detect toxicity
  const detectionPrompt = `Analyze this Discord message for toxicity and violations.

Message: "${message.content}"
Author: ${message.author.username}
Channel: ${message.channel.id}

Return JSON with:
- toxicity_score (0-1)
- categories (array of violations)
- severity (low/medium/high)
- action_required (boolean)`;

  const detection = await anthropic.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 1024,
    messages: [{ role: 'user', content: detectionPrompt }],
  });

  const detectionContent = detection.content[0];
  if (detectionContent.type !== 'text') throw new Error('Invalid response');
  const toxicityData = JSON.parse(detectionContent.text);

  // Step 2: Decide moderation action
  if (toxicityData.toxicity_score < 0.5) {
    return { toxicity_score: toxicityData.toxicity_score, action: 'none', reasoning: 'Low toxicity' };
  }

  const moderationPrompt = `Based on this toxicity analysis, recommend a moderation action.

Data: ${JSON.stringify(toxicityData)}

Return JSON with:
- action (warn/timeout/ban)
- reasoning (why this action)
- dm_message (what to tell the user)`;

  const moderation = await anthropic.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 1024,
    messages: [{ role: 'user', content: moderationPrompt }],
  });

  const moderationContent = moderation.content[0];
  if (moderationContent.type !== 'text') throw new Error('Invalid response');
  const actionData = JSON.parse(moderationContent.text);

  // Execute action
  if (actionData.action === 'timeout') {
    await message.member?.timeout(7 * 24 * 60 * 60 * 1000, actionData.reasoning); // 7 days
    await message.author.send(actionData.dm_message);
    await message.delete();
  } else if (actionData.action === 'ban') {
    await message.member?.ban({ reason: actionData.reasoning });
    await message.delete();
  } else if (actionData.action === 'warn') {
    await message.author.send(actionData.dm_message);
  }

  return {
    toxicity_score: toxicityData.toxicity_score,
    action: actionData.action,
    reasoning: actionData.reasoning,
  };
}

// Step 3: Suggest engagement opportunities
async function suggestEngagement(recentMessages: Message[]) {
  const messageTexts = recentMessages.map(m => `${m.author.username}: ${m.content}`).join('\n');

  const engagementPrompt = `Analyze these community messages and suggest engagement opportunities.

Recent messages:
${messageTexts}

Return JSON with:
- engagement_opportunities (array of {action, priority, reasoning, suggested_message})
- ambassador_candidates (array of usernames who are helpful)`;

  const response = await anthropic.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 2048,
    messages: [{ role: 'user', content: engagementPrompt }],
  });

  const content = response.content[0];
  if (content.type !== 'text') throw new Error('Invalid response');
  return JSON.parse(content.text);
}

// Listen for messages
discord.on('messageCreate', async (message) => {
  if (message.author.bot) return;

  const result = await moderateMessage(message);
  console.log(`Moderated: ${message.author.username} | Action: ${result.action} | Score: ${result.toxicity_score}`);
});

// Run engagement analysis every hour
setInterval(async () => {
  const channel = discord.channels.cache.get('YOUR_CHANNEL_ID');
  if (!channel?.isTextBased()) return;

  const messages = await channel.messages.fetch({ limit: 100 });
  const suggestions = await suggestEngagement(Array.from(messages.values()));
  
  console.log('Engagement suggestions:', suggestions);
  // Send to your mod channel
}, 60 * 60 * 1000);

discord.login(process.env.DISCORD_TOKEN);

Level 2: Multi-Platform with Database

Good for: 500-5,000 members | Setup time: 2 hours

# Multi-Platform with Database (500-5000 members)
import os
import json
import asyncio
from datetime import datetime, timedelta

import discord
import psycopg2
from slack_sdk.web.async_client import AsyncWebClient
from anthropic import AsyncAnthropic

class CommunityModerator:
    def __init__(self):
        self.discord_client = discord.Client(intents=discord.Intents.all())
        self.slack_client = AsyncWebClient(token=os.environ["SLACK_BOT_TOKEN"])
        self.anthropic = AsyncAnthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
        self.db = psycopg2.connect(os.environ["DATABASE_URL"])
        
    async def detect_toxicity(self, message_text: str, platform: str, user_id: str) -> dict:
        """Step 1: Detect and classify toxicity"""
        
        # Get user history from database
        cursor = self.db.cursor()
        cursor.execute("""
            SELECT 
                COUNT(*) as total_messages,
                SUM(CASE WHEN warning = true THEN 1 ELSE 0 END) as warnings,
                AVG(helpfulness_score) as avg_helpfulness
            FROM messages 
            WHERE user_id = %s AND platform = %s
        """, (user_id, platform))
        
        total_messages, warnings, avg_helpfulness = cursor.fetchone()
        # SUM/AVG return NULL when the user has no history yet
        warnings = warnings or 0
        avg_helpfulness = avg_helpfulness or 0.0
        
        prompt = f"""Analyze this community message for toxicity.

Message: "{message_text}"
User history: {total_messages} messages, {warnings} warnings, {avg_helpfulness:.2f} avg helpfulness

Return JSON with:
- toxicity_score (0-1)
- categories (array)
- severity (low/medium/high)
- confidence (0-1)
- action_required (boolean)"""

        response = await self.anthropic.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}]
        )
        
        return json.loads(response.content[0].text)
    
    async def moderate_with_context(self, detection: dict, user_id: str, platform: str) -> dict:
        """Step 2: Recommend action with context"""
        
        if detection['toxicity_score'] < 0.5:
            return {'action': 'none', 'reasoning': 'Low toxicity'}
        
        prompt = f"""Recommend moderation action based on this analysis.

Detection: {json.dumps(detection)}

Return JSON with:
- recommended_action (warn/timeout_1d/timeout_7d/ban)
- reasoning (why)
- dm_message (what to tell user)
- escalate_to_human (boolean)
- auto_execute (boolean)"""

        response = await self.anthropic.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}]
        )
        
        action = json.loads(response.content[0].text)
        
        # Log to database
        cursor = self.db.cursor()
        cursor.execute("""
            INSERT INTO moderation_actions 
            (user_id, platform, action, reasoning, toxicity_score, timestamp)
            VALUES (%s, %s, %s, %s, %s, %s)
        """, (user_id, platform, action['recommended_action'], 
               action['reasoning'], detection['toxicity_score'], datetime.now()))
        self.db.commit()
        
        return action
    
    async def suggest_engagement(self, platform: str, channel_id: str) -> dict:
        """Step 3: Analyze and suggest engagement opportunities"""
        
        # Get recent messages from database
        cursor = self.db.cursor()
        cursor.execute("""
            SELECT user_id, username, message_text, helpfulness_score, timestamp
            FROM messages
            WHERE platform = %s AND channel_id = %s 
            AND timestamp > %s
            ORDER BY timestamp DESC
            LIMIT 100
        """, (platform, channel_id, datetime.now() - timedelta(days=7)))
        
        messages = cursor.fetchall()
        message_summary = "\n".join([f"{m[1]}: {m[2]}" for m in messages])
        
        prompt = f"""Analyze these community messages and suggest engagement opportunities.

Recent messages (last 7 days):
{message_summary}

Return JSON with:
- engagement_opportunities (array of {{action, priority, reasoning, suggested_message}})
- ambassador_candidates (array of {{username, score, reason}})
- trending_topics (array of topics being discussed)
- sentiment_trend (positive/neutral/negative)"""

        response = await self.anthropic.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=2048,
            messages=[{"role": "user", "content": prompt}]
        )
        
        return json.loads(response.content[0].text)
    
    async def execute_action(self, action: dict, user_id: str, platform: str, message_id: str):
        """Execute moderation action across platforms"""
        
        if not action.get('auto_execute', False):
            # Send to human mod queue
            await self.send_to_mod_queue(action, user_id, platform, message_id)
            return
        
        if platform == 'discord':
            guild = self.discord_client.get_guild(int(os.environ['DISCORD_GUILD_ID']))
            member = guild.get_member(int(user_id))
            
            if action['recommended_action'] == 'timeout_7d':
                await member.timeout(timedelta(days=7), reason=action['reasoning'])
            elif action['recommended_action'] == 'ban':
                await member.ban(reason=action['reasoning'])
            
            # DM user
            await member.send(action['dm_message'])
            
        elif platform == 'slack':
            if action['recommended_action'] in ['timeout_7d', 'ban']:
                # Slack doesn't have timeout, so we restrict posting
                await self.slack_client.admin_users_session_invalidate(
                    session_id=user_id
                )
            
            # DM user
            await self.slack_client.chat_postMessage(
                channel=user_id,
                text=action['dm_message']
            )
    
    async def send_to_mod_queue(self, action: dict, user_id: str, platform: str, message_id: str):
        """Send to human moderator for review"""
        cursor = self.db.cursor()
        cursor.execute("""
            INSERT INTO mod_queue
            (user_id, platform, message_id, recommended_action, reasoning, created_at)
            VALUES (%s, %s, %s, %s, %s, %s)
        """, (user_id, platform, message_id, action['recommended_action'],
               action['reasoning'], datetime.now()))
        self.db.commit()

# Usage
moderator = CommunityModerator()

@moderator.discord_client.event
async def on_message(message):
    if message.author.bot:
        return
    
    # Chain the 3 steps
    detection = await moderator.detect_toxicity(
        message.content, 'discord', str(message.author.id)
    )
    
    if detection['action_required']:
        action = await moderator.moderate_with_context(
            detection, str(message.author.id), 'discord'
        )
        await moderator.execute_action(
            action, str(message.author.id), 'discord', str(message.id)
        )

# Run engagement analysis every 6 hours
async def engagement_loop():
    while True:
        suggestions = await moderator.suggest_engagement('discord', 'YOUR_CHANNEL_ID')
        # Post to mod channel
        print('Engagement suggestions:', suggestions)
        await asyncio.sleep(6 * 60 * 60)

@moderator.discord_client.event
async def on_ready():
    # Start the background loop once the client's event loop is running;
    # calling asyncio.create_task() before run() would raise RuntimeError
    asyncio.create_task(engagement_loop())

moderator.discord_client.run(os.environ['DISCORD_TOKEN'])
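
The Level 2 code reads and writes three Postgres tables (messages, moderation_actions, mod_queue) that it never defines. Here's a minimal sketch of the schema those queries assume, with column names taken from the code above; the types and defaults are illustrative, so adjust them for your own setup.

# Minimal schema for the tables the Level 2 moderator queries (a sketch, not a full migration)
import os
import psycopg2

SCHEMA = """
CREATE TABLE IF NOT EXISTS messages (
    id                BIGSERIAL PRIMARY KEY,
    user_id           TEXT NOT NULL,
    username          TEXT,
    platform          TEXT NOT NULL,
    channel_id        TEXT,
    message_text      TEXT,
    warning           BOOLEAN DEFAULT FALSE,
    helpfulness_score REAL,
    timestamp         TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE IF NOT EXISTS moderation_actions (
    id             BIGSERIAL PRIMARY KEY,
    user_id        TEXT NOT NULL,
    platform       TEXT NOT NULL,
    action         TEXT NOT NULL,
    reasoning      TEXT,
    toxicity_score REAL,
    timestamp      TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE IF NOT EXISTS mod_queue (
    id                 BIGSERIAL PRIMARY KEY,
    user_id            TEXT NOT NULL,
    platform           TEXT NOT NULL,
    message_id         TEXT,
    recommended_action TEXT,
    reasoning          TEXT,
    created_at         TIMESTAMPTZ DEFAULT NOW()
);
"""

# Create the tables once before starting the moderator
with psycopg2.connect(os.environ["DATABASE_URL"]) as conn:
    with conn.cursor() as cur:
        cur.execute(SCHEMA)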

Level 3: Production with LangGraph Workflows

Good for: 5,000+ members | Setup time: 1 day

# Production with LangGraph (5000+ members)
import asyncio
import json
import logging
from datetime import datetime
from typing import TypedDict, Literal

import anthropic
import redis
from langgraph.graph import Graph, END

class CommunityState(TypedDict):
    message_id: str
    user_id: str
    platform: str
    message_text: str
    detection_result: dict
    moderation_action: dict
    engagement_suggestions: dict
    escalate: bool
    retry_count: int

class ProductionModerator:
    def __init__(self):
        self.anthropic = anthropic.AsyncAnthropic()
        self.redis = redis.Redis(host='localhost', port=6379, decode_responses=True)
        self.logger = logging.getLogger('community_moderator')
        
    async def detect_node(self, state: CommunityState) -> CommunityState:
        """Node 1: Detect toxicity with caching"""
        
        # Check cache first
        cache_key = f"detection:{state['user_id']}:{hash(state['message_text'])}"
        cached = self.redis.get(cache_key)
        if cached:
            state['detection_result'] = json.loads(cached)
            return state
        
        # Get user context from Redis
        user_key = f"user:{state['platform']}:{state['user_id']}"
        user_data = self.redis.hgetall(user_key)
        
        prompt = f"""Analyze for toxicity with user context.

Message: "{state['message_text']}"
User stats: {json.dumps(user_data)}

Return JSON with toxicity_score, categories, severity, confidence, action_required."""

        response = await self.anthropic.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}]
        )
        
        detection = json.loads(response.content[0].text)
        
        # Cache result for 1 hour
        self.redis.setex(cache_key, 3600, json.dumps(detection))
        
        state['detection_result'] = detection
        return state
    
    async def moderate_node(self, state: CommunityState) -> CommunityState:
        """Node 2: Determine moderation action"""
        
        detection = state['detection_result']
        
        # Low toxicity = skip moderation
        if detection['toxicity_score'] < 0.5:
            state['moderation_action'] = {'action': 'none'}
            return state
        
        # Get user's moderation history
        history_key = f"mod_history:{state['platform']}:{state['user_id']}"
        history = self.redis.lrange(history_key, 0, -1)
        
        prompt = f"""Recommend moderation action.

Detection: {json.dumps(detection)}
Previous actions: {json.dumps(history)}

Return JSON with recommended_action, reasoning, dm_message, escalate_to_human, auto_execute."""

        response = await self.anthropic.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}]
        )
        
        action = json.loads(response.content[0].text)
        
        # Store in history
        self.redis.lpush(history_key, json.dumps({
            'action': action['recommended_action'],
            'timestamp': datetime.now().isoformat(),
            'toxicity_score': detection['toxicity_score']
        }))
        self.redis.ltrim(history_key, 0, 49)  # Keep last 50
        
        state['moderation_action'] = action
        state['escalate'] = action.get('escalate_to_human', False)
        
        return state
    
    async def execute_node(self, state: CommunityState) -> CommunityState:
        """Node 3: Execute moderation action"""
        
        action = state['moderation_action']
        # The LLM path returns 'recommended_action'; the low-toxicity path returns {'action': 'none'}
        action_name = action.get('recommended_action', action.get('action', 'none'))
        
        if action_name == 'none':
            return state
        
        # Execute via platform-specific API
        if state['platform'] == 'discord':
            await self.execute_discord_action(state['user_id'], action)
        elif state['platform'] == 'slack':
            await self.execute_slack_action(state['user_id'], action)
        
        # Log action
        self.logger.info(f"Executed {action_name} for {state['user_id']}")
        
        return state
    
    async def engagement_node(self, state: CommunityState) -> CommunityState:
        """Node 4: Analyze engagement opportunities"""
        
        # Get recent community activity from Redis
        activity_key = f"activity:{state['platform']}"
        recent_messages = self.redis.lrange(activity_key, 0, 99)
        
        prompt = f"""Analyze community activity and suggest engagement.

Recent messages: {json.dumps(recent_messages)}

Return JSON with engagement_opportunities, ambassador_candidates, trending_topics."""

        response = await self.anthropic.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=2048,
            messages=[{"role": "user", "content": prompt}]
        )
        
        state['engagement_suggestions'] = json.loads(response.content[0].text)
        return state
    
    def route_after_detection(self, state: CommunityState) -> Literal["moderate", "skip"]:
        """Route based on toxicity"""
        if state['detection_result']['action_required']:
            return "moderate"
        return "skip"
    
    def route_after_moderation(self, state: CommunityState) -> Literal["execute", "escalate"]:
        """Route based on escalation flag"""
        if state['escalate']:
            return "escalate"
        return "execute"
    
    async def execute_discord_action(self, user_id: str, action: dict):
        # Implementation for Discord API
        pass
    
    async def execute_slack_action(self, user_id: str, action: dict):
        # Implementation for Slack API
        pass

# Build the graph
def build_moderation_graph():
    moderator = ProductionModerator()
    graph = Graph()
    
    # Add nodes
    graph.add_node("detect", moderator.detect_node)
    graph.add_node("moderate", moderator.moderate_node)
    graph.add_node("execute", moderator.execute_node)
    graph.add_node("engagement", moderator.engagement_node)
    graph.add_node("escalate", lambda s: s)  # Human review queue
    
    # Add edges
    graph.set_entry_point("detect")
    
    graph.add_conditional_edges(
        "detect",
        moderator.route_after_detection,
        {
            "moderate": "moderate",
            "skip": "engagement"
        }
    )
    
    graph.add_conditional_edges(
        "moderate",
        moderator.route_after_moderation,
        {
            "execute": "execute",
            "escalate": "escalate"
        }
    )
    
    graph.add_edge("execute", "engagement")
    graph.add_edge("engagement", END)
    graph.add_edge("escalate", END)
    
    return graph.compile()

# Usage
moderation_graph = build_moderation_graph()

initial_state = {
    "message_id": "msg_123",
    "user_id": "user_456",
    "platform": "discord",
    "message_text": "This community sucks!",
    "detection_result": {},
    "moderation_action": {},
    "engagement_suggestions": {},
    "escalate": False,
    "retry_count": 0
}

async def main():
    result = await moderation_graph.ainvoke(initial_state)
    action = result['moderation_action']
    print(f"Moderation complete: {action.get('recommended_action', action.get('action'))}")
    print(f"Engagement suggestions: {len(result['engagement_suggestions']['engagement_opportunities'])}")

asyncio.run(main())

When to Level Up

1. Start: Simple Bot (0-500 members)

  • Discord/Slack bot with basic moderation
  • Sequential API calls (detect → moderate → engage)
  • Manual review for edge cases
  • Simple logging with console.log

2. Scale: Multi-Platform + Database (500-5,000 members)

  • Support Discord + Slack + Discourse
  • PostgreSQL for user history and moderation logs
  • Automatic action execution with human escalation
  • Scheduled engagement analysis (every 6 hours)
  • Ambassador candidate identification

3. Production: LangGraph Orchestration (5,000-20,000 members)

  • LangGraph workflows with conditional routing
  • Redis caching for user context and detection results
  • Real-time moderation with <2s latency
  • Human-in-the-loop for high-severity cases
  • Engagement suggestions posted to mod dashboard

4. Enterprise: Multi-Agent System (20,000+ members)

  • Specialized agents for toxicity, engagement, ambassadors, and sentiment (sketched below)
  • Multi-region deployment with load balancing
  • Real-time dashboard (Grafana + Prometheus)
  • A/B testing moderation strategies
  • ML model fine-tuning on community-specific data
  • Integration with CRM for ambassador programs
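
"Specialized agents" in practice means several narrowly scoped analyzers running in parallel on the same message, each with its own instructions, with results merged downstream. A minimal sketch under that assumption; the Agent class, its prompts, and the dispatch function are illustrative and not part of the code above.

# Hypothetical multi-agent dispatch sketch for Level 4
import asyncio
from anthropic import AsyncAnthropic

anthropic_client = AsyncAnthropic()

class Agent:
    """One narrowly scoped analyzer (toxicity, engagement, ambassadors, sentiment)."""
    def __init__(self, name: str, instructions: str):
        self.name = name
        self.instructions = instructions

    async def run(self, message_text: str) -> dict:
        response = await anthropic_client.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=512,
            system=self.instructions,
            messages=[{"role": "user", "content": message_text}],
        )
        return {"agent": self.name, "analysis": response.content[0].text}

AGENTS = [
    Agent("toxicity", "Score this message for toxicity. Return JSON."),
    Agent("engagement", "Suggest one engagement opportunity for this message. Return JSON."),
    Agent("ambassadors", "Assess whether the author shows ambassador potential. Return JSON."),
    Agent("sentiment", "Classify the sentiment of this message. Return JSON."),
]

async def dispatch(message_text: str) -> list[dict]:
    # Fan the same message out to every agent in parallel and collect their verdicts
    return await asyncio.gather(*(agent.run(message_text) for agent in AGENTS))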

Growth-Specific Gotchas

The code examples work. But community management has unique challenges you need to handle.

Context Collapse Across Platforms

Same user behaves differently on Discord vs Slack vs Twitter. Track identity across platforms to maintain consistent moderation. Use a unified user ID system.

# Unified user identity across platforms
import redis

redis_client = redis.Redis(decode_responses=True)

class UnifiedUser:
    def __init__(self, user_id: str):
        self.user_id = user_id
        self.identities = {}
        
    def add_platform_identity(self, platform: str, platform_user_id: str):
        """Link platform-specific IDs to unified user"""
        self.identities[platform] = platform_user_id
        
        # Store in Redis
        redis_client.hset(f"user:{self.user_id}", platform, platform_user_id)
        redis_client.hset(f"platform:{platform}:{platform_user_id}", "unified_id", self.user_id)
    
    def get_cross_platform_history(self) -> dict:
        """Get moderation history across all platforms"""
        history = {}
        for platform, platform_id in self.identities.items():
            history[platform] = redis_client.lrange(f"mod_history:{platform}:{platform_id}", 0, -1)
        return history

# Usage: Moderate based on cross-platform behavior
user = UnifiedUser("unified_user_123")
user.add_platform_identity("discord", "discord_456")
user.add_platform_identity("slack", "slack_789")

# When moderating, check ALL platforms
cross_platform_history = user.get_cross_platform_history()
total_violations = sum(len(h) for h in cross_platform_history.values())
if total_violations >= 5:
    # User has 5+ violations across platforms = stricter action
    action = "ban"
else:
    action = "warn"

Timezone-Aware Moderation

Toxic messages at 3am your time might be 3pm for users in other timezones. Don't assume malice when it's just cultural differences in online hours. Track user timezone and adjust moderation sensitivity.

// Timezone-aware moderation sensitivity
import { DateTime } from 'luxon';
import Redis from 'ioredis';

const redis = new Redis(); // defaults to localhost:6379

interface UserTimezone {
  userId: string;
  timezone: string;
  localHour: number;
}

async function adjustModerationForTimezone(
  toxicityScore: number,
  userId: string
): Promise<number> {
  // Get user's timezone (inferred from past activity or profile)
  const userTz = await redis.get(`user:${userId}:timezone`) || 'UTC';
  const userLocalTime = DateTime.now().setZone(userTz);
  const localHour = userLocalTime.hour;

  // Late night (11pm-5am local) = higher tolerance
  // People are tired, less filtered
  if (localHour >= 23 || localHour <= 5) {
    toxicityScore *= 0.8; // 20% more lenient
  }

  // Work hours (9am-5pm local) = stricter
  // Professional context expected
  if (localHour >= 9 && localHour <= 17) {
    toxicityScore *= 1.2; // 20% stricter
  }

  return toxicityScore;
}

// Usage in moderation pipeline
let toxicityScore = 0.75;
toxicityScore = await adjustModerationForTimezone(toxicityScore, userId);

if (toxicityScore > 0.8) {
  // Take action
}

False Positives on Technical Jargon

Tech communities use words that look toxic to generic models. 'kill the process', 'master/slave architecture', 'execute the script' trigger false positives. Maintain a whitelist of technical terms.

# Whitelist technical jargon to avoid false positives
import re

TECH_JARGON_WHITELIST = [
    r'kill (the )?process',
    r'master[/-]slave',
    r'execute (the )?script',
    r'parent[/-]child',
    r'blacklist|whitelist',
    r'abort',
    r'terminate',
    r'force (push|quit)',
    r'nuke',
    r'blow away',
]

def is_technical_context(message: str) -> bool:
    """Check if message contains technical jargon"""
    for pattern in TECH_JARGON_WHITELIST:
        if re.search(pattern, message, re.IGNORECASE):
            return True
    return False

def adjust_for_technical_context(toxicity_score: float, message: str) -> float:
    """Lower toxicity score if technical jargon detected"""
    if is_technical_context(message):
        # Check if message also contains code blocks
        if '```' in message or '`' in message:
            return toxicity_score * 0.5  # 50% less sensitive
    return toxicity_score

# Usage
raw_score = 0.82  # "kill the process" flagged as toxic
adjusted_score = adjust_for_technical_context(raw_score, message_text)
print(f"Adjusted: {adjusted_score}")  # 0.41 when the message also contains code formatting, so no longer flagged

Ambassador Burnout Detection

Your best community members can burn out from over-helping. Monitor response rates, sentiment changes, and message frequency. Alert when ambassadors show burnout signs.

// Detect ambassador burnout patterns
interface AmbassadorMetrics {
  userId: string;
  weeklyMessages: number[];
  weeklyHelpfulResponses: number[];
  weeklySentiment: number[];
}

// getAmbassadorMetrics, sendBurnoutAlert, and getAmbassadorList are your own data-layer helpers
const average = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;

async function detectBurnout(userId: string): Promise<boolean> {
  // Get last 8 weeks of activity
  const metrics = await getAmbassadorMetrics(userId, 8);

  // Burnout indicators:
  // 1. Declining message volume (>30% drop)
  const recentAvg = average(metrics.weeklyMessages.slice(-4));
  const earlierAvg = average(metrics.weeklyMessages.slice(0, 4));
  const volumeDecline = (earlierAvg - recentAvg) / earlierAvg;

  // 2. Declining helpfulness (>40% drop in helpful responses)
  const recentHelpful = average(metrics.weeklyHelpfulResponses.slice(-4));
  const earlierHelpful = average(metrics.weeklyHelpfulResponses.slice(0, 4));
  const helpfulnessDecline = (earlierHelpful - recentHelpful) / earlierHelpful;

  // 3. Declining sentiment (more negative)
  const recentSentiment = average(metrics.weeklySentiment.slice(-4));
  const earlierSentiment = average(metrics.weeklySentiment.slice(0, 4));
  const sentimentDecline = earlierSentiment - recentSentiment;

  // Burnout if 2+ indicators triggered
  const indicators = [
    volumeDecline > 0.3,
    helpfulnessDecline > 0.4,
    sentimentDecline > 0.2,
  ];

  const burnoutScore = indicators.filter(Boolean).length;

  if (burnoutScore >= 2) {
    // Alert community manager
    await sendBurnoutAlert(userId, {
      volumeDecline,
      helpfulnessDecline,
      sentimentDecline,
    });
    return true;
  }

  return false;
}

// Run weekly check on all ambassadors
setInterval(async () => {
  const ambassadors = await getAmbassadorList();
  for (const ambassador of ambassadors) {
    await detectBurnout(ambassador.userId);
  }
}, 7 * 24 * 60 * 60 * 1000); // Weekly

Viral Thread Detection & Amplification

Some threads go viral organically. Detect high-engagement threads early and amplify them (pin, cross-post, notify). Don't let great conversations get buried.

# Detect and amplify viral threads
import asyncio
import redis

# get_thread_messages, calculate_sentiment, get_active_threads, pin_discord_message,
# cross_post_to_channel, update_slack_topic and notify_manager are stand-ins for your
# own platform helpers
redis_client = redis.Redis(decode_responses=True)

class ViralThreadDetector:
    def __init__(self):
        self.engagement_threshold = {
            'replies_per_hour': 10,
            'reactions_per_hour': 30,
            'unique_participants': 5,
            'sentiment_score': 0.6  # Positive sentiment
        }
    
    async def analyze_thread(self, thread_id: str, platform: str) -> dict:
        """Analyze thread engagement metrics"""
        
        # Get thread messages from last 2 hours
        messages = await get_thread_messages(thread_id, hours=2)
        
        metrics = {
            'replies_per_hour': len(messages) / 2,
            'reactions_per_hour': sum(m['reaction_count'] for m in messages) / 2,
            'unique_participants': len(set(m['user_id'] for m in messages)),
            'sentiment_score': await calculate_sentiment(messages)
        }
        
        return metrics
    
    async def should_amplify(self, thread_id: str, platform: str) -> bool:
        """Check if thread meets viral criteria"""
        metrics = await self.analyze_thread(thread_id, platform)
        
        # Check all thresholds
        viral = all([
            metrics['replies_per_hour'] >= self.engagement_threshold['replies_per_hour'],
            metrics['reactions_per_hour'] >= self.engagement_threshold['reactions_per_hour'],
            metrics['unique_participants'] >= self.engagement_threshold['unique_participants'],
            metrics['sentiment_score'] >= self.engagement_threshold['sentiment_score']
        ])
        
        return viral
    
    async def amplify_thread(self, thread_id: str, platform: str):
        """Amplify viral thread across channels"""
        
        if platform == 'discord':
            # Pin the thread
            await pin_discord_message(thread_id)
            
            # Cross-post to announcements
            await cross_post_to_channel(thread_id, 'announcements')
            
        elif platform == 'slack':
            # Add to channel topic
            await update_slack_topic(thread_id)
            
            # Send to #highlights channel
            await cross_post_to_channel(thread_id, 'highlights')
        
        # Notify community manager
        await notify_manager(f"Viral thread detected: {thread_id}")
        
        # Log amplification
        redis_client.sadd('amplified_threads', thread_id)

# Run every 30 minutes
detector = ViralThreadDetector()

async def check_viral_threads():
    while True:
        # Get active threads from last 2 hours
        active_threads = await get_active_threads(hours=2)
        
        for thread in active_threads:
            # Skip if already amplified
            if redis_client.sismember('amplified_threads', thread['id']):
                continue
            
            if await detector.should_amplify(thread['id'], thread['platform']):
                await detector.amplify_thread(thread['id'], thread['platform'])
        
        await asyncio.sleep(30 * 60)  # 30 minutes

asyncio.run(check_viral_threads())

Cost Calculator

Manual Community Management

  • Community Manager salary (full-time, handles ~500 active members): $5,000-8,000/month
  • Night/weekend coverage (part-time contractors for off-hours): $2,000-3,000/month
  • Missed toxic content (member churn from poor moderation): $500-2,000/month
  • Missed engagement opportunities (lost conversions from inactive ambassadors): $1,000-3,000/month

Total: $8,500-16,000/month

Limitations:

  • Can only scale to ~500 active members per manager
  • No coverage during nights/weekends without extra cost
  • Human moderators have bias and inconsistency
  • Slow response time (hours, not minutes)
  • Can't analyze engagement patterns at scale

Automated Community Platform

  • API costs (Claude/GPT-4), ~10,000 moderation checks/month at $0.02-0.05 each: $200-500/month
  • Database (PostgreSQL), managed database for user history: $50-100/month
  • Redis caching, for real-time context and rate limiting: $30-50/month
  • Monitoring (Grafana Cloud), alerts and dashboards: $50-100/month
  • Part-time human oversight, review of escalated cases at 10 hours/week: $1,000-2,000/month

Total: $1,330-2,750/month

Benefits:

  • Scale to 10,000+ members with same infrastructure
  • 24/7 coverage with <2 second response time
  • Consistent moderation policy application
  • Proactive engagement suggestions
  • Ambassador burnout detection
  • Cross-platform identity tracking
  • Viral thread amplification

Savings: ~$340/day ($10,210/month, $122,520/year), an 87.5% cost reduction (average of the 84-91% range).
💡 Pays for itself in the first month. Break-even at ~200 members.
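
The daily and annual figures follow directly from the two monthly totals above; here is a quick sketch of the arithmetic using the article's own ranges.

# Savings arithmetic from the two monthly cost ranges above
manual_low, manual_high = 8_500, 16_000
auto_low, auto_high = 1_330, 2_750

manual_mid = (manual_low + manual_high) / 2   # 12,250
auto_mid = (auto_low + auto_high) / 2         # 2,040

monthly_savings = manual_mid - auto_mid       # 10,210
print(f"${monthly_savings:,.0f}/month")       # $10,210/month
print(f"${monthly_savings / 30:,.0f}/day")    # ~$340/day
print(f"${monthly_savings * 12:,.0f}/year")   # $122,520/year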

🚀 Want This Running in Your Community?

We build custom community automation systems that scale from 100 to 100,000 members. Discord, Slack, Discourse - we integrate with your existing platforms and handle the complexity.