Week One: From Zero to Working Prototype in Five Days
The first week of building an AI product is where most teams either build unstoppable momentum or get lost in analysis paralysis. This chapter provides a day-by-day blueprint for going from nothing to a working prototype that real users can test.
67% of AI projects that fail never reach a working prototype. The majority of failed AI initiatives die in the planning and architecture phase.
Key Insight
The Five-Day Prototype Is Your Minimum Viable Proof
A five-day prototype isn't about building a complete product—it's about proving three things: the AI can do something useful, users can interact with it, and the core value proposition holds up in practice. Stripe's first AI-powered fraud detection prototype was built in four days and caught its first fraudulent transaction on day five.
Framework
The Daily Deliverable Framework
Day 1: Working API Call
By end of day, you should be able to send a request to your AI provider and receive a meaningful response.
Day 2: Interactive Interface
Users should be able to input something and see the AI's response displayed. It doesn't need to be polished.
Day 3: Complete User Flow
The full journey from user input to valuable output should work end-to-end. This includes any pre-processing of input and post-processing of output.
Day 4: Graceful Failures
When things go wrong—and they will—users should see helpful error messages instead of crashes. Rate limits, timeouts, and malformed responses are the cases to cover first.
Set Up Your Development Environment Before Day 1
The night before you start, ensure your development environment is ready: API keys obtained and tested, development server running, version control initialized, and deployment pipeline configured. Losing half of Day 1 to environment setup is a common trap that throws off the entire week's schedule.
How Notion AI Shipped Their First Internal Prototype
That three-day prototype led to a $10M investment in AI features and the eventual public launch of Notion AI.
Day 1 Success vs. Day 1 Failure
Successful Day 1
API key configured in environment variables by 9 AM
First successful API call completed before lunch
Afternoon spent experimenting with prompts and parameters
Documented learnings about rate limits and response times
Failed Day 1
Morning spent debating which AI provider to use
API key hardcoded directly in source code
Got stuck on authentication errors for three hours
No documentation of what was learned
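The difference between the two Day 1s above often comes down to one habit: keys live in environment variables, and a missing key fails loudly at startup instead of surfacing as a cryptic authentication error hours later. A minimal sketch (`OPENAI_API_KEY` is just an example name; use whatever your provider expects):

```typescript
// Fail fast with a clear message if a required key is missing,
// instead of debugging authentication errors for three hours.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(
      `Missing ${name}. Add it to your shell or .env file before starting; never hardcode it in source.`
    );
  }
  return value;
}

// Usage at startup (example key name):
// const apiKey = requireEnv('OPENAI_API_KEY');
```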
Day 1 Completion Checklist
Key Insight
Choose One AI Provider and Commit for the Week
Analysis paralysis about AI providers kills more prototypes than technical challenges. For your week-one prototype, pick one provider and stick with it.
Anti-Pattern: The Multi-Provider Abstraction Trap
❌ Problem
By the end of Day 1, these teams have a beautiful abstraction with zero working API calls.
✓ Solution
Hardcode your provider choice for the prototype. Use the SDK directly without abstraction layers; you can generalize later if the prototype survives.
Day 1 Hour-by-Hour Schedule
1. 9:00 AM - Environment Setup (1 hour)
2. 10:00 AM - First API Call (1 hour)
3. 11:00 AM - Parameterize and Test (1 hour)
4. 12:00 PM - Lunch Break (1 hour)
5. 1:00 PM - Error Handling Basics (1.5 hours)
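The 11:00 AM "parameterize" hour is mostly about making model parameters easy to vary while you experiment. One way to sketch it (the model name and default values below are placeholders, not recommendations):

```typescript
// Wrap request construction so Day 1 experiments can vary one parameter at a time.
interface ChatParams {
  model: string;
  temperature: number;
  max_tokens: number;
}

// Placeholder defaults -- substitute your provider's model and limits.
const DEFAULTS: ChatParams = { model: 'gpt-4o-mini', temperature: 0.7, max_tokens: 512 };

function buildChatRequest(prompt: string, overrides: Partial<ChatParams> = {}) {
  return {
    ...DEFAULTS,
    ...overrides,
    messages: [{ role: 'user' as const, content: prompt }],
  };
}

// Day 1 experiment loop: same prompt, different temperatures.
// const res = await client.chat.completions.create(buildChatRequest('Summarize this...', { temperature: 0 }));
```

Keeping defaults in one object makes the afternoon's prompt-and-parameter experiments a one-line change each.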
Use the Playground First
Before writing any code, spend 30 minutes in your AI provider's playground (OpenAI Playground, Anthropic Console, or Google AI Studio). Test your prompts interactively, experiment with parameters, and understand the response format.
Many teams obsess over crafting the perfect prompt on Day 1. They spend hours wordsmithing, testing edge cases, and optimizing for every scenario. Resist the urge: a good-enough prompt tested against real inputs teaches you more than a perfect prompt tested in your head.
Practice Exercise
Day 1 Validation Exercise
45 min
Day 1 Essential Resources
OpenAI API Documentation
article
Anthropic Claude API Guide
article
AI SDK by Vercel
tool
LangSmith
tool
Watch Your Spending on Day 1
It's easy to accidentally spend $50-100 on API calls during Day 1 experimentation. Set a hard spending limit in your OpenAI/Anthropic dashboard before you start.
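Alongside the dashboard limit, a rough in-code budget guard can stop a runaway experimentation loop early. A minimal sketch; the per-token prices here are illustrative placeholders, so check your provider's current pricing page:

```typescript
// Rough per-call cost tracker so Day 1 experimentation doesn't silently burn $100.
// Prices are example values in USD per million tokens -- not real pricing.
const PRICE_PER_1M = { input: 2.5, output: 10 };

let spentUSD = 0;

function recordUsage(promptTokens: number, completionTokens: number, budgetUSD = 5): number {
  spentUSD +=
    (promptTokens / 1_000_000) * PRICE_PER_1M.input +
    (completionTokens / 1_000_000) * PRICE_PER_1M.output;
  if (spentUSD > budgetUSD) {
    throw new Error(`Day 1 budget of $${budgetUSD} exceeded -- stop and review your loop.`);
  }
  return spentUSD;
}
```

Call `recordUsage` with the token counts your provider returns on each response; the thrown error is deliberately loud.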
Input Panel (Left)
Large textarea for user input, plus any context selectors (model choice, temperature slider). Make t...
Output Panel (Center)
Streaming response area with clear visual distinction between user and AI messages. Include timestamps on each message.
Debug Panel (Right/Bottom)
Collapsible panel showing: raw API request/response, token counts, latency breakdown, any errors with full details.
Quick Actions Bar
Simple row of buttons for common operations: clear conversation, export chat, toggle debug panel, switch models.
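The debug panel's contents are easiest to populate if every call is logged into one structure from Day 2 onward. A minimal sketch; the field names are suggestions, not a standard:

```typescript
// One record per AI call, feeding the debug panel described above.
interface DebugInfo {
  request: unknown;          // raw API request body
  response: unknown;         // raw API response
  promptTokens: number;
  completionTokens: number;
  latencyMs: number;
  error?: string;            // set when the call failed
}

const callLog: DebugInfo[] = [];

function logCall(info: DebugInfo): void {
  callLog.push(info);
}
```

Capturing this from the start makes Day 4's error-handling work far easier, because every failure already has its raw request and latency attached.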
Jasper AI
How Jasper's First UI Drove Product Direction
The tone selector insight led to Jasper's signature 'generate variations' feature.
Day 2 Hour-by-Hour Build Schedule
1. Hour 1-2: Basic Layout
2. Hour 3: Wire Up API
3. Hour 4: Add Streaming
4. Hour 5: Debug Panel
5. Hour 6: Error States
The 'Grandma Test' for Day 2
At the end of Day 2, you should be able to send your prototype URL to someone non-technical (like your grandma) and have them successfully complete one interaction without any instructions. If they can't figure out where to type and how to submit, your UI is too complicated.
Key Insight
Day 3: Core Workflow Is Where Products Are Made or Broken
Day 3 is the most important day of your sprint. This is where you move from 'AI that responds' to 'AI that does something useful.' The core workflow is the sequence of steps that transforms user intent into valuable output.
Anatomy of an AI Workflow
User Intent → Context Gathering → Prompt Construction → AI Processing
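The stages listed above can be sketched as an explicit pipeline, which keeps each step separately testable. Every function name here is a hypothetical placeholder for your own implementation:

```typescript
// Each workflow stage as its own function, composed into one pipeline.
interface WorkflowContext {
  userId: string;
  documents: string[];
}

// Context Gathering: e.g. fetch the user's relevant documents from your store.
function gatherContext(userId: string): WorkflowContext {
  return { userId, documents: [] };
}

// Prompt Construction: combine gathered context with the user's intent.
function constructPrompt(intent: string, ctx: WorkflowContext): string {
  const docs = ctx.documents.join('\n---\n');
  return `Context:\n${docs}\n\nTask: ${intent}`;
}

// AI Processing: callAI is injected so the pipeline is testable without a network call.
async function runWorkflow(
  userId: string,
  intent: string,
  callAI: (prompt: string) => Promise<string>
): Promise<string> {
  const ctx = gatherContext(userId);
  const prompt = constructPrompt(intent, ctx);
  return callAI(prompt);
}
```

Keeping the stages separate means Day 3 debugging can pinpoint whether a bad output came from missing context, a weak prompt, or the model itself.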
Notion
Notion AI's 'Summarize' Workflow Deep Dive
The summarize workflow became Notion AI's most-used feature at launch, with 73% ...
Core Workflow Implementation Checklist
Anti-Pattern: The 'Feature Creep' Workflow
❌ Problem
With multiple workflows, you can't tell which one is working or failing. User feedback becomes impossible to attribute.
✓ Solution
Pick your single most important workflow and make it exceptional. If you're buil...
Key Insight
Day 4: Error Handling Is Product Design
Day 4 is about making your prototype robust enough for real users to test. This means handling every way things can go wrong—and in AI products, things go wrong constantly.
Comprehensive Error Handling Pattern (TypeScript)
// Rough token estimate: ~4 characters per token
const estimateTokens = (text: string) => Math.ceil(text.length / 4);

// Robust API call with specific error handling
async function callAIWithErrorHandling(input: string, context: string) {
  const startTime = Date.now();
  try {
    // Check input validity before API call
    if (!input.trim()) {
      return { error: 'empty_input', message: 'Please enter something to analyze.' };
    }
    const tokenCount = estimateTokens(input + context);
    if (tokenCount > 14000) {
      return { error: 'input_too_long', message: 'Your input is too long. Please shorten it and try again.' };
    }
    // ... make the provider call and return its result here ...
  } catch (error) {
    return { error: 'api_failure', message: 'Something went wrong. Please try again.', latency_ms: Date.now() - startTime };
  }
}
Framework
The Error Experience Hierarchy
Level 1: Prevent (Handle Before API Call)
Input validation, token counting, format checking. These errors should never reach the API. Show inline validation messages before the user submits.
Level 2: Recover (User Can Fix)
Errors where user modification can fix the issue: content too long, policy violation, ambiguous input. Explain what to change and make retrying easy.
Level 4: Apologize (System Failure)
Errors outside anyone's control: API outages, model unavailability, unexpected bugs. Acknowledge the failure honestly and tell users when to try again.
Anthropic
How Claude Handles Uncertainty
Anthropic's transparent uncertainty handling became a competitive advantage, wit...
The Silent Failure Trap
The worst error handling is invisible error handling. If your AI returns a mediocre response because context was silently truncated, users blame the AI's intelligence rather than understanding the real issue.
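One way out of the silent-truncation trap is to make truncation explicit: return a flag the UI can surface ("Only part of your document was analyzed"). A minimal sketch; the 4-characters-per-token estimate is a rough heuristic, and the 14,000-token limit is an example:

```typescript
// Truncate loudly, not silently: the caller learns whether context was cut.
function fitContext(context: string, maxTokens = 14000): { context: string; truncated: boolean } {
  const maxChars = maxTokens * 4; // rough heuristic: ~4 characters per token
  if (context.length <= maxChars) {
    return { context, truncated: false };
  }
  return { context: context.slice(0, maxChars), truncated: true };
}
```

When `truncated` is true, the UI can warn the user instead of letting them blame the AI's intelligence for a response built on half their document.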
67% of AI product complaints relate to error handling, not AI quality.
Analysis of 50,000+ user feedback items across 30 AI products found that the majority of negative feedback stemmed from poor error experiences—confusing messages, no recovery path, or silent failures—rather than the underlying AI capability.
Practice Exercise
Error Scenario Mapping Exercise
45 min
Error Handling Deep Dive Resources
Stripe's API Error Handling Guide
article
OpenAI Error Codes Reference
article
Anthropic's Constitutional AI Paper
article
Vercel AI SDK Error Handling
article
Practice Exercise
Build Your First API Integration in 60 Minutes
60 min
Production-Ready API Integration with Error Handling (TypeScript)
import OpenAI from 'openai';
import { RateLimiter } from 'limiter';

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const limiter = new RateLimiter({ tokensPerInterval: 50, interval: 'minute' });

interface AIResponse {
  content: string;
  usage: { prompt_tokens: number; completion_tokens: number };
  latency_ms: number;
}

async function callAI(prompt: string): Promise<AIResponse> {
  await limiter.removeTokens(1); // wait for rate-limit capacity
  const start = Date.now();
  const res = await client.chat.completions.create({
    model: 'gpt-4o-mini', // example model name
    messages: [{ role: 'user', content: prompt }],
  });
  return {
    content: res.choices[0].message.content ?? '',
    usage: { prompt_tokens: res.usage?.prompt_tokens ?? 0, completion_tokens: res.usage?.completion_tokens ?? 0 },
    latency_ms: Date.now() - start,
  };
}
Practice Exercise
UI Prototype Sprint: Chat Interface in 2 Hours
120 min
Day 4 Error Handling Completeness Checklist
Comprehensive Error Handling Wrapper (TypeScript)
type ErrorType = 'rate_limit' | 'timeout' | 'auth' | 'content_filter' | 'network' | 'unknown';

interface HandledError {
  type: ErrorType;
  userMessage: string;
  retryable: boolean;
  retryAfterMs?: number;
}

const ERROR_MESSAGES: Record<ErrorType, string> = {
  rate_limit: "We're experiencing high demand. Your request is queued and will process shortly.",
  timeout: "This request is taking longer than usual. Would you like to try again with a shorter prompt?",
  auth: "We couldn't connect to the AI service. The team has been notified.",
  content_filter: "This request couldn't be processed due to content guidelines. Try rephrasing your input.",
  network: "We couldn't reach the AI service. Check your connection and try again.",
  unknown: "Something unexpected went wrong. Please try again.",
};
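The `retryable` and `retryAfterMs` fields in the HandledError shape pair naturally with a small retry loop. A sketch under that assumption; `classifyError` is a hypothetical stand-in for your own mapping from raw errors to HandledError fields:

```typescript
// Retry only retryable errors, honor the provider's suggested wait, give up after a few attempts.
async function withRetries<T>(
  fn: () => Promise<T>,
  classifyError: (e: unknown) => { retryable: boolean; retryAfterMs?: number },
  maxAttempts = 3
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (e) {
      const handled = classifyError(e);
      if (!handled.retryable || attempt >= maxAttempts) throw e;
      // Exponential backoff unless the provider told us how long to wait.
      const delay = handled.retryAfterMs ?? 250 * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Centralizing retries here means the UI only ever sees a final success or a final HandledError, never the intermediate flakiness.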
Anti-Pattern: The 'Ship Without Testing' Trap
❌ Problem
Products built without early user testing require 3-5x more iteration cycles to ...
✓ Solution
Schedule 5 user tests for Day 5 before you start building. Recruit testers during Days 1-3 so sessions are booked before the prototype is finished.
Practice Exercise
Five-User Testing Sprint
180 min
Framework
The RAPID Testing Framework
Recruit Narrowly
Test only with users who match your ideal customer profile. Five tests with the right users beats twenty with the wrong ones.
Ask, Don't Tell
Never explain how to use the product. Ask users to think aloud as they explore. Questions like 'What...
Prioritize by Frequency
Issues appearing in 3+ of 5 sessions are critical bugs. Issues in 2 sessions are important. Issues i...
Identify Emotions
Watch for sighs, hesitation, confused expressions, and moments of delight. Emotional reactions reveal problems users won't verbalize.
Anti-Pattern: The Feature Creep Prototype
❌ Problem
Complex prototypes obscure what's actually being validated. When users struggle, you can't tell which feature caused the confusion.
✓ Solution
Create a 'parking lot' document for feature ideas that come up during the sprint, and revisit it only after Day 5 testing.
Anti-Pattern: The Over-Engineered Prototype
❌ Problem
The team misses the testing window and has to extend the sprint. Worse, they've ...
✓ Solution
Embrace 'good enough' code for prototypes. Use any patterns that work, skip tests, and plan to rewrite whatever survives user testing.
Practice Exercise
Post-Testing Synthesis Session
90 min
73% of prototype issues are discovered in the first 5 user tests.
This foundational UX research finding means you don't need dozens of testers to find major issues.
Document Everything Before You Forget
Create a Week One retrospective document within 48 hours of completing user tests. Include: what you built, what you learned, what surprised you, and what you'd do differently.
Chapter Complete!
Day 1 API integration should be minimal but complete - a working, parameterized API call with basic error handling.
Day 2-3 UI development benefits enormously from component libraries.
Day 4 error handling is non-negotiable for valid testing. Users can't give useful feedback on a prototype that crashes.
Day 5 user testing with just 5 users reveals 73% of usability issues.
Next: With your tested prototype and prioritized issue list, Week Two focuses on rapid iteration.