
Automate Feedback Analysis 🚀

Turn Monday's 3 prompts into a production feedback pipeline

August 26, 2025
📊 Product Management | 🐍 Python + TypeScript | ⚡ 100 → 10,000 entries/day

The Problem

On Monday you tested the 3 prompts in ChatGPT. You saw how categorization → sentiment → theme extraction works. But here's the reality: your team gets 500+ feedback entries per week across Intercom, Zendesk, email, and surveys. You can't manually copy-paste each one into ChatGPT. By the time you finish analyzing last week's feedback, you're already 2 weeks behind on insights. Critical feature requests get buried. Angry customers don't get addressed. Your roadmap decisions are based on gut feel instead of data.

  • 6+ hours per week spent manually categorizing feedback
  • 2-3 week delay from feedback to insights
  • Manual processing can't scale beyond 100 entries/week

See It Work

The 3 prompts chain together automatically. This is what you'll build: from raw feedback to roadmap priorities.

The Code

Three levels: start simple with API calls, add reliability for scale, then build a production pipeline. Pick where you are.

Level 1: Simple API Integration

Good for: 0-500 entries/week | Setup time: 30 minutes

# Simple Feedback Analysis (0-500 entries/week)
import openai
import json
from datetime import datetime

class SimpleFeedbackAnalyzer:
    def __init__(self, openai_api_key: str):
        self.client = openai.OpenAI(api_key=openai_api_key)
    
    def analyze_feedback(self, feedback_text: str, user_metadata: dict) -> dict:
        """Chain the 3 prompts: categorize β†’ sentiment β†’ themes"""
        
        # Step 1: Categorize and extract metadata
        categorization_prompt = f"""Analyze this customer feedback and extract structured data.

Feedback: {feedback_text}

User context:
- Email: {user_metadata.get('email', 'unknown')}
- Plan: {user_metadata.get('plan', 'unknown')}
- MRR: ${user_metadata.get('mrr', 0)}
- Tenure: {user_metadata.get('tenure_months', 0)} months

Extract as JSON:
{{
  "category": "bug_report|feature_request|question|complaint|praise",
  "subcategory": "specific issue type",
  "feature_area": "which product area",
  "platform": "web|mobile|desktop|api",
  "severity": "critical|high|medium|low",
  "impact": {{
    "frequency": "high|medium|low",
    "business_impact": "churn_risk|revenue_opportunity|user_satisfaction",
    "user_blocked": true|false
  }},
  "churn_signals": ["list of phrases indicating churn risk"],
  "mentioned_competitors": ["competitor names if any"]
}}

Output valid JSON only."""

        response = self.client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": categorization_prompt}],
            temperature=0.3
        )
        
        categorization = json.loads(response.choices[0].message.content)
        
        # Step 2: Sentiment and urgency analysis
        sentiment_prompt = f"""Analyze the sentiment and urgency of this feedback.

Feedback: {feedback_text}

Category: {categorization['category']}
Severity: {categorization['severity']}

Return as JSON:
{{
  "sentiment": {{
    "overall": "very_positive|positive|neutral|negative|very_negative",
    "score": -1.0 to 1.0,
    "confidence": 0.0 to 1.0
  }},
  "emotions": [{{"emotion": "name", "intensity": 0.0 to 1.0}}],
  "urgency_level": "immediate|high|medium|low",
  "churn_risk_score": 0.0 to 1.0,
  "requires_immediate_response": true|false,
  "escalation_recommended": true|false,
  "key_phrases": ["important quotes from feedback"]
}}

Output valid JSON only."""

        response = self.client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": sentiment_prompt}],
            temperature=0.3
        )
        
        sentiment = json.loads(response.choices[0].message.content)
        
        # Step 3: Theme extraction and prioritization
        theme_prompt = f"""Extract themes and suggest roadmap prioritization.

Feedback: {feedback_text}

Category: {categorization['category']}
Sentiment: {sentiment['sentiment']['overall']}
Churn risk: {sentiment['churn_risk_score']}
User value: ${user_metadata.get('mrr', 0)}/mo

Return as JSON:
{{
  "themes": [{{
    "theme": "theme name",
    "confidence": 0.0 to 1.0,
    "related_features": ["feature names"]
  }}],
  "roadmap_priority": {{
    "priority_score": 0.0 to 10.0,
    "recommended_action": "immediate_fix|schedule_sprint|backlog|no_action",
    "estimated_impact": "high|medium|low",
    "affected_user_segment": "segment description",
    "revenue_at_risk": "high_value_customer|standard|low_value"
  }},
  "suggested_jira_labels": ["label1", "label2"],
  "similar_feedback_count": estimate,
  "trend": "increasing|stable|decreasing"
}}

Output valid JSON only."""

        response = self.client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": theme_prompt}],
            temperature=0.5
        )
        
        themes = json.loads(response.choices[0].message.content)
        
        # Combine all results
        return {
            "timestamp": datetime.utcnow().isoformat(),
            "user_metadata": user_metadata,
            "categorization": categorization,
            "sentiment": sentiment,
            "themes": themes,
            "processed_by": "simple_analyzer_v1"
        }

# Usage example
analyzer = SimpleFeedbackAnalyzer(openai_api_key="sk-...")

feedback = """The mobile app keeps crashing when I try to upload photos. 
This has happened 5 times today and I'm losing all my work."""

user_data = {
    "email": "sarah.chen@techstartup.com",
    "plan": "Pro",
    "mrr": 49,
    "tenure_months": 8,
    "source": "intercom_chat"
}

result = analyzer.analyze_feedback(feedback, user_data)

print(f"Category: {result['categorization']['category']}")
print(f"Sentiment: {result['sentiment']['sentiment']['overall']}")
print(f"Priority Score: {result['themes']['roadmap_priority']['priority_score']}")
print(f"Churn Risk: {result['sentiment']['churn_risk_score']:.2f}")

Level 2: With Intercom Integration & Error Handling

Good for: 500-2,000 entries/week | Setup time: 2 hours

// With Intercom Integration & Retries (500-2000 entries/week)
import Anthropic from '@anthropic-ai/sdk';
import axios from 'axios';

interface FeedbackAnalysis {
  categorization: any;
  sentiment: any;
  themes: any;
  user_metadata: any;
  timestamp: string;
}

class ProductFeedbackPipeline {
  private anthropic: Anthropic;
  private intercomToken: string;

  constructor(anthropicKey: string, intercomToken: string) {
    this.anthropic = new Anthropic({ apiKey: anthropicKey });
    this.intercomToken = intercomToken;
  }

  // Fetch feedback from Intercom
  async fetchIntercomConversations(
    startDate: Date,
    endDate: Date
  ): Promise<any[]> {
    try {
      // Intercom's conversation search is a POST endpoint; filter on both dates
      const response = await axios.post(
        'https://api.intercom.io/conversations/search',
        {
          query: {
            operator: 'AND',
            value: [
              {
                field: 'created_at',
                operator: '>',
                value: Math.floor(startDate.getTime() / 1000),
              },
              {
                field: 'created_at',
                operator: '<',
                value: Math.floor(endDate.getTime() / 1000),
              },
            ],
          },
        },
        {
          headers: {
            Authorization: `Bearer ${this.intercomToken}`,
            'Content-Type': 'application/json',
          },
        }
      );

      return response.data.conversations || [];
    } catch (error) {
      console.error('Intercom fetch failed:', error);
      throw new Error('Failed to fetch Intercom conversations');
    }
  }

  // Extract feedback text from conversation
  extractFeedbackText(conversation: any): string {
    const parts = conversation.conversation_parts?.conversation_parts || [];
    const userMessages = parts
      .filter((part: any) => part.author.type === 'user')
      .map((part: any) => part.body)
      .join('\n\n');

    return userMessages || conversation.source?.body || '';
  }

  // Analyze with retries and error handling
  async analyzeWithRetries(
    feedbackText: string,
    userMetadata: any,
    maxRetries: number = 3
  ): Promise<FeedbackAnalysis> {
    let lastError: Error | null = null;

    for (let attempt = 0; attempt < maxRetries; attempt++) {
      try {
        return await this.analyzeFeedback(feedbackText, userMetadata);
      } catch (error) {
        lastError = error as Error;
        console.warn(`Attempt ${attempt + 1} failed:`, error);

        if (attempt < maxRetries - 1) {
          // Exponential backoff
          await new Promise((resolve) =>
            setTimeout(resolve, Math.pow(2, attempt) * 1000)
          );
        }
      }
    }

    throw new Error(`Analysis failed after ${maxRetries} attempts: ${lastError?.message}`);
  }

  private async analyzeFeedback(
    feedbackText: string,
    userMetadata: any
  ): Promise<FeedbackAnalysis> {
    // Step 1: Categorization
    const categorizationPrompt = `Analyze this customer feedback and extract structured data as JSON.

Feedback: ${feedbackText}

User: ${userMetadata.email} | Plan: ${userMetadata.plan} | MRR: $${userMetadata.mrr}

Extract: category, subcategory, feature_area, platform, severity, impact, churn_signals.

Output valid JSON only.`;

    const categorizationResponse = await this.anthropic.messages.create({
      model: 'claude-3-5-sonnet-20241022',
      max_tokens: 1024,
      messages: [{ role: 'user', content: categorizationPrompt }],
    });

    const categorizationContent = categorizationResponse.content[0];
    if (categorizationContent.type !== 'text') {
      throw new Error('Invalid categorization response');
    }
    const categorization = JSON.parse(categorizationContent.text);

    // Step 2: Sentiment analysis
    const sentimentPrompt = `Analyze sentiment and urgency for this feedback.

Feedback: ${feedbackText}
Category: ${categorization.category}

Return JSON with: sentiment (overall, score, confidence), emotions, urgency_level, churn_risk_score, key_phrases.

Output valid JSON only.`;

    const sentimentResponse = await this.anthropic.messages.create({
      model: 'claude-3-5-sonnet-20241022',
      max_tokens: 1024,
      messages: [{ role: 'user', content: sentimentPrompt }],
    });

    const sentimentContent = sentimentResponse.content[0];
    if (sentimentContent.type !== 'text') {
      throw new Error('Invalid sentiment response');
    }
    const sentiment = JSON.parse(sentimentContent.text);

    // Step 3: Theme extraction
    const themePrompt = `Extract themes and prioritize for roadmap.

Feedback: ${feedbackText}
Category: ${categorization.category}
Churn risk: ${sentiment.churn_risk_score}
User value: $${userMetadata.mrr}/mo

Return JSON with: themes, roadmap_priority (priority_score, recommended_action), suggested_jira_labels.

Output valid JSON only.`;

    const themeResponse = await this.anthropic.messages.create({
      model: 'claude-3-5-sonnet-20241022',
      max_tokens: 1024,
      messages: [{ role: 'user', content: themePrompt }],
    });

    const themeContent = themeResponse.content[0];
    if (themeContent.type !== 'text') {
      throw new Error('Invalid theme response');
    }
    const themes = JSON.parse(themeContent.text);

    return {
      timestamp: new Date().toISOString(),
      user_metadata: userMetadata,
      categorization,
      sentiment,
      themes,
    };
  }

  // Process batch of conversations
  async processBatch(conversations: any[]): Promise<FeedbackAnalysis[]> {
    const results: FeedbackAnalysis[] = [];

    for (const conv of conversations) {
      try {
        const feedbackText = this.extractFeedbackText(conv);
        if (!feedbackText) continue;

        const userMetadata = {
          email: conv.user?.email || 'unknown',
          plan: conv.user?.custom_attributes?.plan || 'free',
          mrr: conv.user?.custom_attributes?.mrr || 0,
          tenure_months: conv.user?.custom_attributes?.tenure_months || 0,
          source: 'intercom',
        };

        const analysis = await this.analyzeWithRetries(
          feedbackText,
          userMetadata
        );
        results.push(analysis);

        // Rate limiting: wait 1 second between requests
        await new Promise((resolve) => setTimeout(resolve, 1000));
      } catch (error) {
        console.error(`Failed to process conversation ${conv.id}:`, error);
        // Continue processing other conversations
      }
    }

    return results;
  }
}

// Usage
const pipeline = new ProductFeedbackPipeline(
  process.env.ANTHROPIC_API_KEY!,
  process.env.INTERCOM_TOKEN!
);

const startDate = new Date('2025-08-19');
const endDate = new Date('2025-08-26');

const conversations = await pipeline.fetchIntercomConversations(
  startDate,
  endDate
);
console.log(`Fetched ${conversations.length} conversations`);

const analyses = await pipeline.processBatch(conversations);
console.log(`Analyzed ${analyses.length} feedback entries`);

// Filter high-priority items
const highPriority = analyses.filter(
  (a) => a.themes.roadmap_priority.priority_score >= 8.0
);
console.log(`High priority items: ${highPriority.length}`);

Level 3: Production Pipeline with LangGraph & Jira Integration

Good for: 2,000+ entries/week | Setup time: 1 day

# Production Pipeline with LangGraph & Jira (2000+ entries/week)
from langgraph.graph import StateGraph, END
from typing import TypedDict, List, Optional
import openai
import requests
from datetime import datetime
import json

class FeedbackState(TypedDict):
    feedback_text: str
    user_metadata: dict
    categorization: Optional[dict]
    sentiment: Optional[dict]
    themes: Optional[dict]
    jira_ticket: Optional[dict]
    slack_notification: Optional[dict]
    retry_count: int
    error: Optional[str]

class ProductionFeedbackPipeline:
    def __init__(self, openai_key: str, jira_config: dict, slack_webhook: str):
        self.openai_client = openai.OpenAI(api_key=openai_key)
        self.jira_config = jira_config
        self.slack_webhook = slack_webhook
    
    def categorize_node(self, state: FeedbackState) -> FeedbackState:
        """Step 1: Categorize and extract metadata"""
        try:
            prompt = f"""Analyze customer feedback and extract structured data.

Feedback: {state['feedback_text']}

User: {state['user_metadata'].get('email')} | Plan: {state['user_metadata'].get('plan')} | MRR: ${state['user_metadata'].get('mrr', 0)}

Extract as JSON:
{{
  "category": "bug_report|feature_request|question|complaint|praise",
  "subcategory": "specific type",
  "feature_area": "product area",
  "platform": "web|mobile|desktop|api",
  "severity": "critical|high|medium|low",
  "impact": {{"frequency": "high|medium|low", "business_impact": "churn_risk|revenue_opportunity|user_satisfaction", "user_blocked": true|false}},
  "churn_signals": ["phrases indicating churn"],
  "mentioned_competitors": ["competitor names"]
}}

Output valid JSON only."""

            response = self.openai_client.chat.completions.create(
                model="gpt-4",
                messages=[{"role": "user", "content": prompt}],
                temperature=0.3
            )
            
            state['categorization'] = json.loads(response.choices[0].message.content)
            state['error'] = None
            
        except Exception as e:
            state['error'] = f"Categorization failed: {str(e)}"
            state['retry_count'] += 1
        
        return state
    
    def sentiment_node(self, state: FeedbackState) -> FeedbackState:
        """Step 2: Analyze sentiment and urgency"""
        try:
            prompt = f"""Analyze sentiment and urgency.

Feedback: {state['feedback_text']}
Category: {state['categorization']['category']}
Severity: {state['categorization']['severity']}

Return JSON:
{{
  "sentiment": {{"overall": "very_positive|positive|neutral|negative|very_negative", "score": -1.0 to 1.0, "confidence": 0.0 to 1.0}},
  "emotions": [{{"emotion": "name", "intensity": 0.0 to 1.0}}],
  "urgency_level": "immediate|high|medium|low",
  "churn_risk_score": 0.0 to 1.0,
  "requires_immediate_response": true|false,
  "escalation_recommended": true|false,
  "key_phrases": ["important quotes"]
}}

Output valid JSON only."""

            response = self.openai_client.chat.completions.create(
                model="gpt-4",
                messages=[{"role": "user", "content": prompt}],
                temperature=0.3
            )
            
            state['sentiment'] = json.loads(response.choices[0].message.content)
            state['error'] = None
            
        except Exception as e:
            state['error'] = f"Sentiment analysis failed: {str(e)}"
            state['retry_count'] += 1
        
        return state
    
    def theme_node(self, state: FeedbackState) -> FeedbackState:
        """Step 3: Extract themes and prioritize"""
        try:
            prompt = f"""Extract themes and suggest roadmap prioritization.

Feedback: {state['feedback_text']}
Category: {state['categorization']['category']}
Sentiment: {state['sentiment']['sentiment']['overall']}
Churn risk: {state['sentiment']['churn_risk_score']}
User value: ${state['user_metadata'].get('mrr', 0)}/mo

Return JSON:
{{
  "themes": [{{"theme": "name", "confidence": 0.0 to 1.0, "related_features": ["features"]}}],
  "roadmap_priority": {{"priority_score": 0.0 to 10.0, "recommended_action": "immediate_fix|schedule_sprint|backlog|no_action", "estimated_impact": "high|medium|low", "affected_user_segment": "segment", "revenue_at_risk": "high_value_customer|standard|low_value"}},
  "suggested_jira_labels": ["labels"],
  "similar_feedback_count": estimate,
  "trend": "increasing|stable|decreasing"
}}

Output valid JSON only."""

            response = self.openai_client.chat.completions.create(
                model="gpt-4",
                messages=[{"role": "user", "content": prompt}],
                temperature=0.5
            )
            
            state['themes'] = json.loads(response.choices[0].message.content)
            state['error'] = None
            
        except Exception as e:
            state['error'] = f"Theme extraction failed: {str(e)}"
            state['retry_count'] += 1
        
        return state
    
    def create_jira_ticket_node(self, state: FeedbackState) -> FeedbackState:
        """Step 4: Create Jira ticket for high-priority items"""
        priority = state['themes']['roadmap_priority']
        
        # Only create ticket if priority score >= 7.0
        if priority['priority_score'] < 7.0:
            state['jira_ticket'] = {"status": "skipped", "reason": "priority_too_low"}
            return state
        
        try:
            # Map priority score to Jira priority
            jira_priority = "Highest" if priority['priority_score'] >= 9.0 else "High"
            
            ticket_data = {
                "fields": {
                    "project": {"key": self.jira_config['project_key']},
                    "summary": f"{state['categorization']['category']}: {state['categorization']['feature_area']}",
                    "description": f"""User Feedback Analysis

*Original Feedback:*
{state['feedback_text']}

*User Details:*
- Email: {state['user_metadata'].get('email')}
- Plan: {state['user_metadata'].get('plan')}
- MRR: ${state['user_metadata'].get('mrr', 0)}
- Tenure: {state['user_metadata'].get('tenure_months', 0)} months

*Analysis:*
- Category: {state['categorization']['category']}
- Severity: {state['categorization']['severity']}
- Sentiment: {state['sentiment']['sentiment']['overall']}
- Churn Risk: {state['sentiment']['churn_risk_score']:.2f}
- Priority Score: {priority['priority_score']:.1f}

*Themes:*
{', '.join([t['theme'] for t in state['themes']['themes']])}

*Recommended Action:* {priority['recommended_action']}
*Estimated Impact:* {priority['estimated_impact']}
""",
                    "issuetype": {"name": "Bug" if state['categorization']['category'] == 'bug_report' else "Story"},
                    "priority": {"name": jira_priority},
                    "labels": state['themes']['suggested_jira_labels']
                }
            }
            
            response = requests.post(
                # v2 accepts plain-text/wiki-markup descriptions;
                # v3 requires Atlassian Document Format (ADF) JSON
                f"{self.jira_config['base_url']}/rest/api/2/issue",
                json=ticket_data,
                auth=(self.jira_config['email'], self.jira_config['api_token']),
                headers={"Content-Type": "application/json"},
                timeout=30
            )
            
            if response.status_code == 201:
                ticket = response.json()
                state['jira_ticket'] = {
                    "status": "created",
                    "key": ticket['key'],
                    "url": f"{self.jira_config['base_url']}/browse/{ticket['key']}"
                }
            else:
                state['jira_ticket'] = {
                    "status": "failed",
                    "error": response.text
                }
        
        except Exception as e:
            state['jira_ticket'] = {
                "status": "failed",
                "error": str(e)
            }
        
        return state
    
    def notify_slack_node(self, state: FeedbackState) -> FeedbackState:
        """Step 5: Send Slack notification for urgent items"""
        sentiment = state['sentiment']
        
        # Only notify if requires immediate response or high churn risk
        if not sentiment['requires_immediate_response'] and sentiment['churn_risk_score'] < 0.8:
            state['slack_notification'] = {"status": "skipped", "reason": "not_urgent"}
            return state
        
        try:
            slack_message = {
                "text": "🚨 Urgent Customer Feedback Alert",
                "blocks": [
                    {
                        "type": "header",
                        "text": {"type": "plain_text", "text": "🚨 Urgent Customer Feedback"}
                    },
                    {
                        "type": "section",
                        "fields": [
                            {"type": "mrkdwn", "text": f"*User:*\n{state['user_metadata'].get('email')}"},
                            {"type": "mrkdwn", "text": f"*Plan:*\n{state['user_metadata'].get('plan')} (${state['user_metadata'].get('mrr', 0)}/mo)"},
                            {"type": "mrkdwn", "text": f"*Category:*\n{state['categorization']['category']}"},
                            {"type": "mrkdwn", "text": f"*Churn Risk:*\n{sentiment['churn_risk_score']:.0%}"}
                        ]
                    },
                    {
                        "type": "section",
                        "text": {"type": "mrkdwn", "text": f"*Feedback:*\n>{state['feedback_text'][:200]}..."}
                    },
                    {
                        "type": "section",
                        "text": {"type": "mrkdwn", "text": f"*Priority Score:* {state['themes']['roadmap_priority']['priority_score']:.1f}/10\n*Action:* {state['themes']['roadmap_priority']['recommended_action']}"}
                    }
                ]
            }
            
            # Slack rejects buttons whose url is not a valid link, so only
            # attach the button when a Jira ticket was actually created
            jira_url = (state['jira_ticket'] or {}).get('url')
            if jira_url:
                slack_message["blocks"].append({
                    "type": "actions",
                    "elements": [
                        {"type": "button", "text": {"type": "plain_text", "text": "View Jira Ticket"}, "url": jira_url}
                    ]
                })
            
            response = requests.post(
                self.slack_webhook,
                json=slack_message,
                headers={"Content-Type": "application/json"}
            )
            
            state['slack_notification'] = {
                "status": "sent" if response.status_code == 200 else "failed",
                "timestamp": datetime.utcnow().isoformat()
            }
        
        except Exception as e:
            state['slack_notification'] = {
                "status": "failed",
                "error": str(e)
            }
        
        return state
    
    def should_retry(self, state: FeedbackState) -> str:
        """Decide whether to retry on error"""
        if state['error'] and state['retry_count'] < 3:
            return "retry"
        elif state['error']:
            return "failed"
        else:
            return "continue"
    
    def build_graph(self):
        """Build the LangGraph workflow over the typed FeedbackState"""
        graph = StateGraph(FeedbackState)
        
        # Add nodes
        graph.add_node("categorize", self.categorize_node)
        graph.add_node("sentiment", self.sentiment_node)
        graph.add_node("themes", self.theme_node)
        graph.add_node("create_jira", self.create_jira_ticket_node)
        graph.add_node("notify_slack", self.notify_slack_node)
        
        # Define flow
        graph.set_entry_point("categorize")
        graph.add_conditional_edges(
            "categorize",
            self.should_retry,
            {
                "retry": "categorize",
                "continue": "sentiment",
                "failed": END
            }
        )
        graph.add_conditional_edges(
            "sentiment",
            self.should_retry,
            {
                "retry": "sentiment",
                "continue": "themes",
                "failed": END
            }
        )
        graph.add_conditional_edges(
            "themes",
            self.should_retry,
            {
                "retry": "themes",
                "continue": "create_jira",
                "failed": END
            }
        )
        graph.add_edge("create_jira", "notify_slack")
        graph.add_edge("notify_slack", END)
        
        return graph.compile()

# Usage
pipeline = ProductionFeedbackPipeline(
    openai_key="sk-...",
    jira_config={
        "base_url": "https://yourcompany.atlassian.net",
        "email": "your-email@company.com",
        "api_token": "your-jira-api-token",
        "project_key": "PROD"
    },
    slack_webhook="https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
)

feedback_graph = pipeline.build_graph()

# Process single feedback
initial_state = {
    "feedback_text": "The mobile app crashes constantly...",
    "user_metadata": {
        "email": "user@company.com",
        "plan": "Pro",
        "mrr": 49,
        "tenure_months": 8
    },
    "categorization": None,
    "sentiment": None,
    "themes": None,
    "jira_ticket": None,
    "slack_notification": None,
    "retry_count": 0,
    "error": None
}

result = feedback_graph.invoke(initial_state)

print(f"Analysis complete:")
print(f"- Category: {result['categorization']['category']}")
print(f"- Priority: {result['themes']['roadmap_priority']['priority_score']:.1f}/10")
print(f"- Jira: {result['jira_ticket']['status']}")
print(f"- Slack: {result['slack_notification']['status']}")

When to Level Up

1. Start: Simple API Integration (0-500 entries/week)

  • Sequential API calls to OpenAI/Anthropic
  • Basic error logging with console.log
  • Manual trigger (run script when needed)
  • CSV export of results

2. Scale: Add Reliability & Integrations (500-2,000 entries/week)

  • Automatic retries with exponential backoff
  • Intercom/Zendesk API integration (pull feedback automatically)
  • Rate limiting and queue management
  • Error tracking with Sentry
  • Scheduled runs (daily/hourly cron jobs; see the sketch after this list)

3. Production: Workflow Orchestration (2,000-10,000 entries/week)

  • LangGraph workflow with conditional routing
  • Automatic Jira ticket creation for high-priority items
  • Slack notifications for urgent feedback
  • State management (resume on failures, no duplicate processing)
  • Analytics dashboard (Grafana + PostgreSQL)

4. Enterprise: Multi-Source Real-Time Pipeline (10,000+ entries/week)

  • Real-time processing (Kafka + streaming)
  • Multi-source ingestion (Intercom, Zendesk, email, surveys, app reviews)
  • Duplicate detection and deduplication
  • Trend analysis and anomaly detection
  • Custom ML models for domain-specific categorization
  • Multi-language support with translation
  • A/B testing different prompts for accuracy
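
Levels 2 and 3 both lean on one idea worth spelling out: make scheduled runs idempotent, so a cron re-run (or a crash mid-batch) never processes the same entry twice. A minimal sketch using SQLite as the seen-set; fetch_new_entries and analyze stand in for your own fetch and analysis functions:

# Idempotent scheduled run: skip anything already processed
import sqlite3

def open_seen_db(path: str = "processed_feedback.db") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS processed (id TEXT PRIMARY KEY)")
    return conn

def run_once(conn: sqlite3.Connection, fetch_new_entries, analyze):
    """One cron-triggered run; safe to re-run at any time."""
    for entry in fetch_new_entries():
        seen = conn.execute(
            "SELECT 1 FROM processed WHERE id = ?", (entry["id"],)
        ).fetchone()
        if seen:
            continue  # already analyzed in a previous run
        analyze(entry)
        # Record only after a successful analysis, so failures are retried next run
        conn.execute("INSERT INTO processed (id) VALUES (?)", (entry["id"],))
        conn.commit()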

Product-Specific Gotchas

The code examples above work. But product feedback has unique challenges you need to handle.

Duplicate Feedback Detection

Users often submit the same issue multiple times across different channels (Intercom, email, Twitter, app store reviews). You need to deduplicate to avoid inflating issue counts and creating duplicate Jira tickets. Use semantic similarity (embeddings) to detect duplicates even when wording differs.

from openai import OpenAI
from sklearn.metrics.pairwise import cosine_similarity

client = OpenAI(api_key="sk-...")

def get_embedding(text: str) -> list:
    response = client.embeddings.create(
        model="text-embedding-3-small",
        input=text
    )
    return response.data[0].embedding

def is_duplicate(new_feedback: str, existing_feedbacks: list, threshold: float = 0.85) -> bool:
    """Check if new feedback is duplicate of existing ones"""
    new_embedding = get_embedding(new_feedback)
    
    for existing in existing_feedbacks:
        existing_embedding = get_embedding(existing['text'])
        similarity = cosine_similarity(
            [new_embedding],
            [existing_embedding]
        )[0][0]
        
        if similarity > threshold:
            print(f"Duplicate found! Similarity: {similarity:.2f}")
            print(f"Existing: {existing['text'][:100]}...")
            return True
    
    return False

# Usage
new = "Mobile app crashes when uploading photos"
existing = [
    {"text": "The app keeps crashing during photo upload", "id": "123"},
    {"text": "Can't upload images, app freezes", "id": "124"}
]

if is_duplicate(new, existing):
    print("Skipping duplicate feedback")
else:
    print("Processing new unique feedback")

Handling Multi-Language Feedback

If you have international customers, feedback comes in multiple languages. LLMs handle this reasonably well, but you should detect language and optionally translate to English for consistent categorization. This also helps with deduplication across languages.

import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

async function detectAndTranslate(feedbackText: string): Promise<{
  original_language: string;
  translated_text: string;
  original_text: string;
}> {
  const prompt = `Detect the language of this feedback and translate to English if needed.

Feedback: ${feedbackText}

Return JSON:
{
  "detected_language": "language code (en, es, fr, de, etc)",
  "translated_text": "English translation (or original if already English)",
  "confidence": 0.0 to 1.0
}

Output valid JSON only.`;

  const response = await anthropic.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 512,
    messages: [{ role: 'user', content: prompt }],
  });

  const content = response.content[0];
  if (content.type !== 'text') throw new Error('Invalid response');
  
  const result = JSON.parse(content.text);

  return {
    original_language: result.detected_language,
    translated_text: result.translated_text,
    original_text: feedbackText,
  };
}

// Usage
const spanish = "La aplicación móvil se bloquea constantemente";
const result = await detectAndTranslate(spanish);

console.log(`Language: ${result.original_language}`);
console.log(`Translation: ${result.translated_text}`);
// Output: "The mobile app crashes constantly"

Contextual User Segmentation

Not all feedback is equal. A bug report from a $10k/year enterprise customer deserves different prioritization than the same bug from a free user. Enrich feedback with user context (plan tier, MRR, tenure, usage frequency, NPS score) before analysis to make better prioritization decisions.

def enrich_user_context(user_email: str, analytics_db) -> dict:
    """Pull user context from multiple sources"""
    
    # Get subscription data (find_one returns None if missing, so default to {})
    subscription = analytics_db.subscriptions.find_one({"email": user_email}) or {}
    
    # Get usage data (materialize the cursor so we can index into it below)
    usage = list(analytics_db.usage_events.aggregate([
        {"$match": {"user_email": user_email}},
        {"$group": {
            "_id": "$user_email",
            "total_events": {"$sum": 1},
            "last_active": {"$max": "$timestamp"}
        }}
    ]))
    
    # Get NPS score
    nps = analytics_db.nps_responses.find_one(
        {"email": user_email},
        sort=[("date", -1)]
    )
    
    return {
        "email": user_email,
        "plan": subscription.get("plan_name", "free"),
        "mrr": subscription.get("mrr", 0),
        "tenure_months": subscription.get("tenure_months", 0),
        "lifetime_value": subscription.get("ltv", 0),
        "usage_frequency": usage[0].get("total_events", 0) if usage else 0,
        "last_active": usage[0].get("last_active") if usage else None,
        "nps_score": nps.get("score") if nps else None,
        "segment": classify_segment(subscription, usage)
    }

def classify_segment(subscription, usage) -> str:
    """Classify user into segment for prioritization"""
    mrr = subscription.get("mrr", 0)
    events = usage[0].get("total_events", 0) if usage else 0
    
    if mrr >= 500 and events > 1000:
        return "enterprise_power_user"
    elif mrr >= 100:
        return "high_value_customer"
    elif events > 500:
        return "engaged_free_user"
    else:
        return "casual_user"

# Usage in analysis
user_context = enrich_user_context("sarah@company.com", db)

# Adjust priority based on segment
base_priority = 7.5  # e.g. the priority_score from the theme-extraction step
if user_context['segment'] == 'enterprise_power_user':
    priority_multiplier = 1.5
elif user_context['segment'] == 'high_value_customer':
    priority_multiplier = 1.2
else:
    priority_multiplier = 1.0

adjusted_priority = base_priority * priority_multiplier

Feedback Loop to Product Analytics

Feedback analysis is most powerful when connected to product usage data. If 50 users report "slow loading", check analytics to see if load times actually increased. If users request a feature, check how many users would actually benefit. Integrate with Mixpanel/Amplitude to validate feedback with usage data.

import axios from 'axios';

interface AnalyticsValidation {
  feature_usage: number;
  affected_users: number;
  trend: string;
  validates_feedback: boolean;
}

async function validateWithAnalytics(
  feedbackTheme: string,
  mixpanelToken: string
): Promise<AnalyticsValidation> {
  // Map feedback theme to analytics event
  const eventMapping: Record<string, string> = {
    'mobile_stability_issues': 'app_crash',
    'slow_performance': 'page_load_time',
    'photo_upload_issues': 'photo_upload_failed',
  };

  const event = eventMapping[feedbackTheme];
  if (!event) {
    return {
      feature_usage: 0,
      affected_users: 0,
      trend: 'unknown',
      validates_feedback: false,
    };
  }

  // Query Mixpanel for last 7 days
  const response = await axios.get(
    'https://mixpanel.com/api/2.0/segmentation',
    {
      params: {
        event,
        from_date: '2025-08-19',
        to_date: '2025-08-26',
        type: 'unique',
      },
      headers: {
        Authorization: `Basic ${Buffer.from(mixpanelToken + ':').toString('base64')}`,
      },
    }
  );

  const data = response.data.data;
  // values is keyed by event name, then by date: { "app_crash": { "2025-08-19": 42, ... } }
  const values = Object.values(data.values[event] ?? {}) as number[];
  if (values.length === 0) {
    return {
      feature_usage: 0,
      affected_users: 0,
      trend: 'unknown',
      validates_feedback: false,
    };
  }
  const totalUsers = values.reduce((sum, val) => sum + val, 0);

  // Calculate trend
  const firstHalf = values.slice(0, Math.floor(values.length / 2));
  const secondHalf = values.slice(Math.floor(values.length / 2));
  const firstAvg = firstHalf.reduce((a, b) => a + b, 0) / firstHalf.length;
  const secondAvg = secondHalf.reduce((a, b) => a + b, 0) / secondHalf.length;

  let trend = 'stable';
  if (secondAvg > firstAvg * 1.2) trend = 'increasing';
  if (secondAvg < firstAvg * 0.8) trend = 'decreasing';

  return {
    feature_usage: values[values.length - 1],
    affected_users: totalUsers,
    trend,
    validates_feedback: totalUsers > 10 && trend === 'increasing',
  };
}

// Usage
const validation = await validateWithAnalytics(
  'mobile_stability_issues',
  process.env.MIXPANEL_TOKEN!
);

if (validation.validates_feedback) {
  console.log(`Analytics confirms: ${validation.affected_users} users affected`);
  console.log(`Trend: ${validation.trend}`);
  // Increase priority
} else {
  console.log('Analytics does not show significant impact');
  // May be isolated issue or user error
}

Handling Feature Request vs Bug Ambiguity

Users often frame bugs as feature requests and vice versa. "I wish the app didn't crash" is a bug, not a feature request. "The export feature is missing" might be a bug if it should exist. Use an LLM to reclassify ambiguous feedback, checking it against your actual product capabilities to distinguish bugs from missing features.

import json
import openai

client = openai.OpenAI(api_key="sk-...")

def reclassify_ambiguous_feedback(
    feedback: str,
    initial_category: str,
    product_features: list
) -> dict:
    """Reclassify feedback if initial categorization is ambiguous"""
    
    # Only reclassify if confidence is low or category seems wrong
    if initial_category not in ['bug_report', 'feature_request']:
        return {"category": initial_category, "confidence": 1.0}
    
    prompt = f"""Analyze this feedback and determine if it's truly a bug or feature request.

Feedback: {feedback}

Initial category: {initial_category}

Product features that exist:
{', '.join(product_features)}

Rules:
- If user says feature "doesn't work" or "is broken" and it exists β†’ bug_report
- If user says feature "should exist" or "is missing" and it doesn't exist β†’ feature_request
- If user describes unexpected behavior β†’ bug_report
- If user describes desired new capability β†’ feature_request

Return JSON:
{{
  "correct_category": "bug_report|feature_request",
  "reasoning": "why this category is correct",
  "confidence": 0.0 to 1.0,
  "feature_exists": true|false
}}

Output valid JSON only."""

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2
    )
    
    result = json.loads(response.choices[0].message.content)
    
    # Log if reclassification occurred
    if result['correct_category'] != initial_category:
        print(f"Reclassified: {initial_category} β†’ {result['correct_category']}")
        print(f"Reasoning: {result['reasoning']}")
    
    return result

# Usage
product_features = [
    "photo_upload",
    "photo_filters",
    "photo_export",
    "user_profiles",
    "comments"
]

feedback = "I wish the app didn't crash when uploading photos"
initial = "feature_request"  # LLM might initially misclassify

reclassified = reclassify_ambiguous_feedback(
    feedback,
    initial,
    product_features
)

print(f"Final category: {reclassified['correct_category']}")
# Output: "bug_report" (photo_upload exists, so crash is a bug)

Cost Calculator

Manual Process (Current State)

  • Product Manager time (2 hours/day reading & categorizing): $150/day (~$39,000/year)
  • Delayed insights (2-3 week lag → missed opportunities): ~$50,000/year
  • Missed high-priority bugs (churn from critical issues): ~$30,000/year

Total: ~$119,000/year

Limitations:

  • Can only process 50-100 entries/week manually
  • 2-3 week delay from feedback to insights
  • Critical issues get buried in noise
  • No systematic prioritization
  • Inconsistent categorization across team members

Automated Pipeline

  • OpenAI API costs (gpt-4, ~2,000 entries/week): $800/month
  • Infrastructure (hosting, database, monitoring): $200/month
  • PM time (30 min/day reviewing automated insights): $19/day (~$4,940/year)
  • Initial setup & maintenance (amortized): $500/month

Total: ~$22,900/year

Benefits:

  • ✓ Process 2,000+ entries/week automatically
  • ✓ Real-time insights (< 1 hour from feedback to Jira ticket)
  • ✓ Automatic escalation of critical issues
  • ✓ Consistent categorization and prioritization
  • ✓ Trend detection and anomaly alerts
  • ✓ Integration with roadmap planning tools

$263/day saved
81% cost reduction | $8,008/month | $96,100/year
💡 Payback in 1 month, 4.2x ROI in year 1
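
The summary numbers follow from the line items above, assuming ~260 working days per year (the published figures round to the nearest $100):

# Quick sanity check on the totals
manual = 150 * 260 + 50_000 + 30_000            # $39,000 + $50,000 + $30,000 = $119,000/year
automated = (800 + 200 + 500) * 12 + 19 * 260   # $18,000 + $4,940 = $22,940/year
savings = manual - automated                    # ≈ $96,060/year ≈ $8,005/month ≈ $263/day
print(f"{savings:,.0f}/year saved, {savings / manual:.0%} cost reduction")
# 96,060/year saved, 81% cost reduction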

Want This Running in Your Product Workflow?

We build custom feedback analysis pipelines that integrate with your existing tools (Intercom, Zendesk, Jira, Linear). From proof-of-concept to production in 2-4 weeks.