On this page
- Purpose
- The Generic Chatbot Experience
- A Self-Contained Chat Component with Webhook Integration
- Understanding Real-Time UI Patterns
- Prerequisites & Tooling
- Knowledge Base
- Environment
- Testing Your Webhook
- High-Level Architecture
- Component State Flow
- The Email Client
- The Five-Layer Architecture
- The Implementation
- Defining the Component Interface
- State Management Strategy
- Session ID Generation
- Auto-Open Logic with Session Persistence
- Dual Scroll Management
- Message Submission with Error Handling
- Markdown Rendering with Custom Styles
- Inline Styles vs. CSS Classes
- Under the Hood
- Webhook Communication Pattern
- Memory and Performance Analysis
- sessionStorage Behavior
- Edge Cases & Pitfalls
- Rapid Message Submission
- Long API Response Times
- Network Offline
- Markdown Injection
- Missing Cleanup in Scroll Listener
- Focus Management on Mobile
- Conclusion
- Skills Acquired
- Using This Component
Purpose
The Generic Chatbot Experience
You want to add an AI assistant to your portfolio. You could use Intercom, Drift, or Zendesk Chat, but they:
- Cost $50-200/month for basic features
- Look generic (everyone recognizes the Intercom bubble)
- Send data to third-party servers
- Require complex integrations
- Don’t support custom AI models (GPT-4, Claude, custom RAG systems)
You could build from scratch, but then you face:
- Managing WebSocket connections for real-time updates
- Handling message persistence across page refreshes
- Implementing auto-open behavior without annoying users
- Supporting Markdown rendering for rich responses
- Managing scroll behavior as messages arrive
- Handling loading states and error recovery
- Making it mobile-responsive
The Core Problem: Chat widgets seem simple (just messages in a box), but production-ready implementations require handling dozens of edge cases around state management, user experience, and API integration.
A Self-Contained Chat Component with Webhook Integration
The code we’re analyzing (src/components/aiChat.tsx) implements a production-grade chat widget that:
- Integrates with any webhook-based AI service (n8n, Zapier, custom APIs)
- Renders Markdown responses with syntax highlighting
- Manages session state with unique IDs
- Auto-opens once per session without being intrusive
- Hides when user scrolls near page bottom (UX optimization)
- Handles loading states, errors, and network failures gracefully
- Maintains focus management for keyboard accessibility
This is the same pattern used by:
- Intercom’s Messenger
- Drift’s chat widget
- Custom support chat systems
Understanding Real-Time UI Patterns
This tutorial demonstrates six advanced concepts:
- Webhook Integration: Connecting to external AI services without WebSockets
- Session Management: Generating and maintaining unique session identifiers
- Markdown Rendering: Safely rendering rich text from untrusted sources
- Scroll Behavior: Multiple scroll contexts (chat window + page scroll)
- State Persistence: Using sessionStorage for cross-page state
- Inline Styles: When and why to use inline styles vs. CSS classes
🔵 Deep Dive: This component uses the Controlled Component pattern for the input field, Optimistic UI updates for instant message display, and Error Boundaries (implicit) for graceful degradation.
Prerequisites & Tooling
Knowledge Base
Required:
- React fundamentals (components, props, state, effects)
- TypeScript interfaces and types
- Async/await and Promises
- HTTP requests with fetch API
Helpful:
- Understanding of Markdown syntax
- Experience with chat UIs
- Knowledge of sessionStorage vs. localStorage
- Familiarity with webhook concepts
Environment
From the component’s imports:
import React, { useState, useEffect, useRef } from "react";
import ReactMarkdown from "react-markdown";
import remarkGfm from "remark-gfm";
import { ChevronDown } from "lucide-react";
Dependencies:
npm install react-markdown remark-gfm lucide-react
Key Libraries:
- react-markdown: Renders Markdown to React components
- remark-gfm: GitHub Flavored Markdown support (tables, strikethrough, task lists)
- lucide-react: Icon library (ChevronDown for collapse icon)
Testing Your Webhook
# Test with curl
curl -X POST https://your-webhook-url.com \
-H "Content-Type: application/json" \
-d '{
"action": "sendMessage",
"sessionId": "test-123",
"chatInput": "Hello, AI!"
}'
# Expected response
{
"output": "Hello! How can I help you today?"
}
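The same request/response contract can be captured in two small helpers. This is a sketch: the field names (`action`, `sessionId`, `chatInput`, `output`) come from the curl example above; adapt them if your backend differs.

```typescript
// Sketch of the webhook contract shown above.
interface ChatPayload {
  action: "sendMessage";
  sessionId: string;
  chatInput: string;
}

// Build the request body the component POSTs to the webhook.
function buildPayload(sessionId: string, chatInput: string): ChatPayload {
  return { action: "sendMessage", sessionId, chatInput };
}

// Extract the bot text, falling back when the response is malformed.
function parseOutput(data: unknown): string {
  if (data && typeof data === "object" && typeof (data as any).output === "string") {
    return (data as any).output;
  }
  return "Sorry, I didn't get a valid response.";
}
```

Keeping the contract in pure functions like these makes it easy to unit-test without a network connection.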
High-Level Architecture
Component State Flow
stateDiagram-v2
[*] --> Closed: Initial render
Closed --> AutoOpening: After 3s (first visit)
Closed --> Open: User clicks bubble
AutoOpening --> Open: Timer completes
Open --> Closed: User clicks close
Open --> Typing: User types message
Typing --> Sending: User presses Enter
Sending --> Loading: API request in flight
Loading --> DisplayResponse: API success
Loading --> DisplayError: API failure
DisplayResponse --> Open: Ready for next message
DisplayError --> Open: Ready for retry
note right of AutoOpening
Only happens once per session
Tracked in sessionStorage
end note
note right of Loading
Shows "typing..." indicator
Input disabled
end note
The Email Client
Think of the chat widget as an email client:
| Email Client | Chat Widget |
|---|---|
| Inbox | Message list |
| Compose button | Input field |
| Send button | Send icon button |
| Unread badge | Auto-open notification |
| Minimize to tray | Close to bubble |
| Auto-check for new mail | Auto-scroll to new messages |
| Draft persistence | Session ID persistence |
Both need to:
- Display a list of messages
- Handle user input
- Send data to a server
- Show loading states
- Manage focus and scroll
- Persist state across interactions
The Five-Layer Architecture
Layer 1: Visual State (Open/Closed)
├─ Bubble button (always visible)
├─ Chat window (conditionally visible)
└─ Hide near bottom (scroll-based)
Layer 2: Message State
├─ Message history array
├─ Current input value
└─ Loading indicator
Layer 3: Session State
├─ Unique session ID (UUID)
├─ Auto-open flag (sessionStorage)
└─ Scroll position tracking
Layer 4: Network Layer
├─ Webhook POST requests
├─ Error handling
└─ Response parsing
Layer 5: Rendering Layer
├─ Markdown parsing
├─ Inline styles
└─ Accessibility attributes
The Implementation
Defining the Component Interface
The Props Type:
interface ChatBubbleProps {
webhookUrl: string; // Required: API endpoint
initialBotMessage?: string; // Optional: First message
autoOpenDelay?: number; // Optional: Delay before auto-open
botName?: string; // Optional: Bot display name
userName?: string; // Optional: User display name
bubbleIcon?: React.ReactNode; // Optional: Custom bubble icon
closeIcon?: React.ReactNode; // Optional: Custom close icon
sendIcon?: React.ReactNode; // Optional: Custom send icon
placeholder?: string; // Optional: Input placeholder
headerText?: string; // Optional: Chat header text
openIcon?: React.ReactNode; // Optional: Custom open icon
hideNearBottomOffset?: number; // Optional: Hide threshold
}
Design Decisions:
- Only webhookUrl is required: Everything else has sensible defaults
- React.ReactNode for icons: Allows emoji strings or JSX elements
- Offset as number: Flexible threshold for hiding behavior
🔵 Deep Dive: Using React.ReactNode instead of string for icons allows maximum flexibility. Users can pass "💬", <ChatIcon />, or even <img src="..." />.
State Management Strategy
The State Variables:
const [isOpen, setIsOpen] = useState(false);
const [messages, setMessages] = useState<Message[]>([]);
const [inputValue, setInputValue] = useState("");
const [isLoading, setIsLoading] = useState(false);
const [sessionId] = useState(() => crypto.randomUUID());
const [isNearBottom, setIsNearBottom] = useState(false);
Why These Specific States?
- isOpen: Controls visibility (boolean is sufficient)
- messages: Array of message objects (needs structure for rendering)
- inputValue: Controlled input (React best practice)
- isLoading: Disables input during API calls
- sessionId: Generated once, never changes (lazy initialization)
- isNearBottom: Scroll-based visibility toggle
The Message Type:
interface Message {
id: number; // Unique identifier (for React keys)
sender: "user" | "bot"; // Determines styling
text: string; // Message content
}
🔴 Danger: Using Date.now() for IDs can cause collisions if two messages are created in the same millisecond. Production code should use crypto.randomUUID() or a counter.
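One collision-safe alternative, as the danger note suggests, is to pair the timestamp with a monotonically increasing counter. A minimal sketch (not the repo's code):

```typescript
// Combine a timestamp with a counter so two messages created in the
// same millisecond still get distinct IDs.
let messageCounter = 0;

function nextMessageId(): string {
  messageCounter += 1;
  return `${Date.now()}-${messageCounter}`;
}
```

In browsers that support it, `crypto.randomUUID()` is even simpler; the counter approach works everywhere and keeps IDs roughly ordered.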
Session ID Generation
Naive Approach: No Session Tracking
// WRONG: Every message is a new conversation
const payload = {
action: "sendMessage",
chatInput: trimmedInput,
};
Why This Fails: The AI has no context. Each message is treated as a new conversation, so follow-up questions don’t work.
Refined Solution (From Repo):
const [sessionId] = useState(() => crypto.randomUUID());
const payload = {
action: "sendMessage",
sessionId: sessionId, // Consistent across all messages
chatInput: trimmedInput,
};
How It Works:
- Lazy Initialization: useState(() => ...) runs only once on mount
- crypto.randomUUID(): Generates an RFC 4122 compliant UUID, e.g. "550e8400-e29b-41d4-a716-446655440000"
- Const Destructuring: [sessionId] (no setter) prevents accidental changes
Backend Correlation:
// On the backend (n8n, custom API)
const sessions = new Map();
app.post('/webhook', async (req, res) => {
const { sessionId, chatInput } = req.body;
// Retrieve conversation history
let history = sessions.get(sessionId) || [];
history.push({ role: 'user', content: chatInput });
// Send to AI with full history
const response = await ai.chat(history);
// Store updated history
history.push({ role: 'assistant', content: response });
sessions.set(sessionId, history);
res.json({ output: response });
});
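The session-correlation part of that backend can be isolated into a small store. This is a sketch of the same Map-based idea; a real deployment would add TTL eviction so abandoned sessions don't leak memory.

```typescript
// In-memory conversation store keyed by sessionId, matching the
// backend sketch above.
type ChatTurn = { role: "user" | "assistant"; content: string };

class SessionStore {
  private sessions = new Map<string, ChatTurn[]>();

  // Append a turn and return the full history for this session.
  append(sessionId: string, turn: ChatTurn): ChatTurn[] {
    const history = this.sessions.get(sessionId) ?? [];
    history.push(turn);
    this.sessions.set(sessionId, history);
    return history;
  }

  history(sessionId: string): ChatTurn[] {
    return this.sessions.get(sessionId) ?? [];
  }
}
```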
Auto-Open Logic with Session Persistence
The Challenge: Auto-open the chat once to grab attention, but don’t annoy users on every page load.
const SESSION_STORAGE_KEY = "chatBubbleAutoOpened";
useEffect(() => {
setMessages([{ id: Date.now(), sender: "bot", text: initialBotMessage }]);
const hasAutoOpenedInSession = sessionStorage.getItem(SESSION_STORAGE_KEY);
if (!hasAutoOpenedInSession) {
const timer = setTimeout(() => {
setIsOpen((currentIsOpenState) => {
if (!currentIsOpenState) {
sessionStorage.setItem(SESSION_STORAGE_KEY, "true");
return true;
}
return currentIsOpenState;
});
}, autoOpenDelay);
return () => clearTimeout(timer);
}
}, [initialBotMessage, autoOpenDelay]);
Breaking It Down:
- Check sessionStorage: Has this session already auto-opened?
- Set Timer: Wait autoOpenDelay ms (default 3000 = 3 seconds)
- Mark as Opened: Set flag in sessionStorage
- Cleanup: Cancel timer if component unmounts
Why Functional Update?
setIsOpen((currentIsOpenState) => {
if (!currentIsOpenState) {
// Only open if currently closed
sessionStorage.setItem(SESSION_STORAGE_KEY, "true");
return true;
}
return currentIsOpenState;
});
This prevents a race condition: if the user manually opens the chat before the timer fires, we don’t want to set the flag (they opened it themselves, not auto-opened).
sessionStorage vs. localStorage:
| sessionStorage | localStorage |
|---|---|
| Cleared when tab closes | Persists forever |
| Per-tab isolation | Shared across tabs |
| Used here | Not appropriate |
We use sessionStorage because we want the auto-open to happen once per browsing session, not once ever.
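The once-per-session guard can be extracted into a pure helper. Note this sketch sets the flag eagerly, whereas the component sets it inside the functional update; injecting the storage object makes it testable outside the browser.

```typescript
// Minimal interface so the helper doesn't depend on the DOM Storage type.
interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Returns true (and records the flag) only the first time it is called
// for a given key in a given store.
function shouldAutoOpen(storage: KeyValueStore, key: string): boolean {
  if (storage.getItem(key)) return false;
  storage.setItem(key, "true");
  return true;
}
```

In the browser you would call `shouldAutoOpen(sessionStorage, "chatBubbleAutoOpened")`; in tests, any Map-backed fake works.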
Dual Scroll Management
The Challenge: Two independent scroll contexts:
- Chat window scroll: Auto-scroll to show new messages
- Page scroll: Hide chat when user scrolls near bottom
Chat Window Auto-Scroll:
const messageListRef = useRef<HTMLDivElement>(null);
useEffect(() => {
if (messageListRef.current) {
messageListRef.current.scrollTop = messageListRef.current.scrollHeight;
}
}, [messages]);
How It Works:
- scrollTop: Current scroll position (pixels from top)
- scrollHeight: Total height of scrollable content
- Setting scrollTop = scrollHeight scrolls to the bottom
Page Scroll Detection:
useEffect(() => {
const handleScroll = () => {
if (typeof window !== "undefined" && hideNearBottomOffset > 0) {
const nearBottom =
window.scrollY + window.innerHeight >=
document.documentElement.scrollHeight - hideNearBottomOffset;
setIsNearBottom(nearBottom);
}
};
if (typeof window !== "undefined") {
window.addEventListener("scroll", handleScroll, { passive: true });
handleScroll(); // Check initial state
return () => window.removeEventListener("scroll", handleScroll);
}
}, [hideNearBottomOffset]);
The Math:
window.scrollY = How far user has scrolled down
window.innerHeight = Viewport height
document.documentElement.scrollHeight = Total page height
nearBottom = (scrollY + innerHeight) >= (totalHeight - offset)
Example:
scrollY = 1000px
innerHeight = 800px
totalHeight = 2000px
offset = 600px
(1000 + 800) >= (2000 - 600)
1800 >= 1400 ✓ Near bottom!
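The check above is a pure function of three measurements plus the offset, which makes it easy to verify with the worked example:

```typescript
// The near-bottom check from the scroll handler, extracted as a
// pure function of the three window/document measurements.
function isNearBottom(
  scrollY: number,
  innerHeight: number,
  totalHeight: number,
  offset: number
): boolean {
  return scrollY + innerHeight >= totalHeight - offset;
}
```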
Why Hide Near Bottom?
If the user scrolls to the footer (contact info, social links), the chat bubble covers important content. Hiding it improves UX.
The passive: true Flag:
window.addEventListener("scroll", handleScroll, { passive: true });
This tells the browser: “This listener won’t call preventDefault(), so you can optimize scrolling performance.” Without it, the browser must wait for the handler to complete before scrolling, causing jank.
Message Submission with Error Handling
The Complete Flow:
const handleSendMessage = async (event?: FormEvent) => {
if (event) event.preventDefault();
const trimmedInput = inputValue.trim();
if (!trimmedInput || isLoading) return;
// 1. Add user message optimistically
const userMessage: Message = {
id: Date.now(),
sender: "user",
text: trimmedInput,
};
setMessages((prevMessages) => [...prevMessages, userMessage]);
setInputValue("");
setIsLoading(true);
// 2. Prepare payload
const payload = {
action: "sendMessage",
sessionId: sessionId,
chatInput: trimmedInput,
};
try {
// 3. Send to webhook
const response = await fetch(webhookUrl, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(payload),
});
// 4. Check response status
if (!response.ok) {
let errorData: any = { message: `Request failed with status: ${response.status}` };
try {
errorData = await response.json();
} catch (parseError) {
/* Ignore parse errors */
}
throw new Error(errorData?.message || errorData?.error || `Webhook request failed: ${response.status}`);
}
// 5. Parse response
const data = await response.json();
const botResponseText = data.output || "Sorry, I didn't get a valid response.";
// 6. Add bot message
const botMessage: Message = {
id: Date.now() + 1,
sender: "bot",
text: botResponseText,
};
setMessages((prevMessages) => [...prevMessages, botMessage]);
} catch (error) {
// 7. Handle errors gracefully
console.error("Chat Error:", error);
const errorMessage: Message = {
id: Date.now() + 1,
sender: "bot",
text: `Sorry, an error occurred: ${error instanceof Error ? error.message : "Could not connect."}. Please try again later.`,
};
setMessages((prevMessages) => [...prevMessages, errorMessage]);
} finally {
// 8. Always reset loading state
setIsLoading(false);
if (isOpen && inputRef.current) {
inputRef.current.focus();
}
}
};
Key Patterns:
- Optimistic UI: Add user message immediately (don’t wait for API)
- Guard Clauses: Early return if input is empty or already loading
- Error Parsing: Try to extract error message from response body
- Fallback Messages: Provide defaults if API response is malformed
- Finally Block: Always reset loading state, even on error
- Focus Management: Return focus to input after submission
🔵 Deep Dive: The finally block is crucial. Without it, if the API throws an error, isLoading stays true forever, permanently disabling the input.
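The error-parsing logic in step 4 is also worth isolating: it prefers the body's `message` field, then `error`, then falls back to a generic status string. A standalone version:

```typescript
// Error-message extraction matching step 4 of the handler above.
function extractErrorMessage(status: number, body: unknown): string {
  const data = body as { message?: string; error?: string } | null;
  return data?.message || data?.error || `Webhook request failed: ${status}`;
}
```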
Markdown Rendering with Custom Styles
The Challenge: Bot responses contain Markdown, but default rendering looks ugly.
{msg.sender === "bot" ? (
<ReactMarkdown
children={msg.text}
remarkPlugins={[remarkGfm]}
components={{
p: ({ node, ...props }) => <p style={styles.botMessageMarkdown_p} {...props} />,
ul: ({ node, ...props }) => <ul style={styles.botMessageMarkdown_ul} {...props} />,
ol: ({ node, ...props }) => <ol style={styles.botMessageMarkdown_ol} {...props} />,
li: ({ node, ...props }) => <li style={styles.botMessageMarkdown_li} {...props} />,
a: ({ node, ...props }) => <a style={styles.botMessageMarkdown_a} {...props} target="_blank" rel="noopener noreferrer" />,
strong: ({ node, ...props }) => <strong style={styles.botMessageMarkdown_strong} {...props} />,
em: ({ node, ...props }) => <em style={styles.botMessageMarkdown_em} {...props} />,
code: ({ node, inline, ...props }) => <code style={styles.botMessageMarkdown_code} {...props} />,
pre: ({ node, ...props }) => <pre style={styles.botMessageMarkdown_pre} {...props} />,
}}
/>
) : (
msg.text
)}
Why Custom Components?
ReactMarkdown renders to standard HTML elements, which inherit browser default styles. By providing custom components, we can:
- Control spacing (margins, padding)
- Style links (color, underline)
- Format code blocks (background, font)
- Ensure consistency with chat bubble design
Security Note:
ReactMarkdown is safe by default—it doesn’t render raw HTML. This prevents XSS attacks:
<script>alert('XSS')</script>
Renders as plain text, not executed JavaScript.
The remarkGfm Plugin:
Enables GitHub Flavored Markdown features:
- Tables
- Strikethrough (~~text~~)
- Task lists (- [ ] Todo)
- Autolinks (URLs become clickable)
Inline Styles vs. CSS Classes
Why Inline Styles?
const styles: { [key: string]: React.CSSProperties } = {
chatContainer: {
position: "fixed",
bottom: "20px",
right: "20px",
zIndex: 1000
},
// ... more styles
};
<div style={styles.chatContainer}>
Advantages:
- Self-Contained: Component works anywhere without external CSS
- No Class Name Conflicts: No risk of global CSS overriding styles
- Dynamic Styles: Easy to compute styles based on props/state
- Type Safety: TypeScript validates style properties
Disadvantages:
- No Pseudo-Classes: Can't use :hover, :focus, etc.
- No Media Queries: Can't do responsive styles
- Larger Bundle: Styles duplicated if component used multiple times
- No CSS Optimizations: Can’t benefit from CSS minification
When to Use Inline Styles:
- Reusable components distributed as libraries
- Styles that depend on props/state
- Prototyping and demos
When to Use CSS Classes:
- Application-specific components
- Complex responsive layouts
- Hover/focus states
- Animations and transitions
Under the Hood
Webhook Communication Pattern
The Request:
POST /webhook HTTP/1.1
Host: your-n8n-instance.com
Content-Type: application/json
{
"action": "sendMessage",
"sessionId": "550e8400-e29b-41d4-a716-446655440000",
"chatInput": "What is React?"
}
The Response:
HTTP/1.1 200 OK
Content-Type: application/json
{
"output": "React is a JavaScript library for building user interfaces..."
}
Why Webhooks Instead of WebSockets?
| WebSockets | Webhooks |
|---|---|
| Persistent connection | Request/response |
| Real-time bidirectional | One-way communication |
| Complex server setup | Simple HTTP endpoint |
| Requires connection management | Stateless |
| Not needed for chat | Perfect for chat |
Chat doesn’t need real-time push notifications (the user initiates all messages), so webhooks are simpler and more reliable.
Memory and Performance Analysis
State Memory:
messages = [
{ id: 1234567890, sender: "bot", text: "Hello! How can I help?" },
{ id: 1234567891, sender: "user", text: "What is TypeScript?" },
{ id: 1234567892, sender: "bot", text: "TypeScript is..." },
];
Memory per Message:
- id: 8 bytes (number)
- sender: ~8 bytes (string pointer)
- text: Variable (average ~200 bytes)
- Object overhead: ~50 bytes
- Total: ~266 bytes per message
For 50 messages: ~13KB (negligible)
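As a sanity check, the estimate above can be written out. The constants (8 + 8 + 50 bytes of fixed overhead, ~1 byte per ASCII character) are rough assumptions, not measured values:

```typescript
// Back-of-envelope memory estimate per message, using the rough
// per-field figures above.
function estimateMessageBytes(text: string): number {
  const fixedOverhead = 8 + 8 + 50; // id + sender pointer + object overhead
  return fixedOverhead + text.length; // ~1 byte per ASCII char
}

// 50 messages of ~200 chars each ≈ 13,300 bytes ≈ 13KB
const fiftyMessages = 50 * estimateMessageBytes("a".repeat(200));
```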
Render Performance:
Each message triggers:
- State update: ~0.1ms
- Re-render: ~2ms (depends on Markdown complexity)
- Scroll: ~1ms
- Total: ~3ms per message
At 60fps (16.67ms per frame), this leaves 13.67ms for other work—plenty of headroom.
sessionStorage Behavior
Storage Limits:
- Capacity: 5-10MB per origin (browser-dependent)
- Lifetime: Until tab closes
- Scope: Per-origin, per-tab
What Happens:
// Tab 1
sessionStorage.setItem("chatBubbleAutoOpened", "true");
// Tab 2 (same site)
sessionStorage.getItem("chatBubbleAutoOpened"); // null (different tab!)
// Tab 1 (after refresh)
sessionStorage.getItem("chatBubbleAutoOpened"); // "true" (same tab)
// Tab 1 (after close and reopen)
sessionStorage.getItem("chatBubbleAutoOpened"); // null (new session)
This ensures the auto-open happens once per tab session, not once globally.
Edge Cases & Pitfalls
Rapid Message Submission
Problem: User types “hello”, presses Enter, immediately types “world”, presses Enter.
Current Behavior:
- Message 1 sends, isLoading = true
- Message 2 blocked (input disabled while loading)
- Message 1 completes, isLoading = false
- User can now send Message 2
Alternative: Queue Messages
const [messageQueue, setMessageQueue] = useState<string[]>([]);
const handleSendMessage = async (cmd: string) => {
setMessageQueue(queue => [...queue, cmd]);
};
useEffect(() => {
if (messageQueue.length > 0 && !isLoading) {
const nextMessage = messageQueue[0];
setMessageQueue(queue => queue.slice(1));
sendToAPI(nextMessage);
}
}, [messageQueue, isLoading]);
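The drain step inside that effect can be expressed as a pure function: given the queue and the loading flag, decide what (if anything) to send next. A sketch:

```typescript
// FIFO drain step for the message queue: returns the next message to
// send (or null if busy/empty) plus the remaining queue.
function dequeueNext(
  queue: string[],
  isLoading: boolean
): { next: string | null; rest: string[] } {
  if (isLoading || queue.length === 0) return { next: null, rest: queue };
  return { next: queue[0], rest: queue.slice(1) };
}
```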
Long API Response Times
Problem: Webhook takes 30 seconds to respond (complex AI reasoning).
Current Behavior: User sees “typing…” for 30 seconds, no feedback.
Better UX: Streaming Responses
const handleSendMessage = async () => {
// ... send request
const reader = response.body!.getReader(); // body is non-null on a successful fetch
const decoder = new TextDecoder();
let botText = "";
while (true) {
const { done, value } = await reader.read();
if (done) break;
botText += decoder.decode(value, { stream: true }); // stream flag handles multi-byte chars split across chunks
// Update message in real-time
setMessages(prev => {
const lastMsg = prev[prev.length - 1];
if (lastMsg.sender === "bot") {
return [...prev.slice(0, -1), { ...lastMsg, text: botText }];
}
return [...prev, { id: Date.now(), sender: "bot", text: botText }];
});
}
};
This requires the backend to support streaming responses (Server-Sent Events or chunked transfer encoding).
Network Offline
Problem: User has no internet connection.
Current Behavior: Fetch throws error, shows generic error message.
Better Handling:
const handleSendMessage = async () => {
if (!navigator.onLine) {
const errorMessage: Message = {
id: Date.now(),
sender: "bot",
text: "You appear to be offline. Please check your internet connection.",
};
setMessages(prev => [...prev, errorMessage]);
return;
}
// ... rest of logic
};
Markdown Injection
Problem: User types Markdown in their message.
User: **Hello** _world_
Current Behavior: User messages render as plain text (not Markdown).
Is This Correct? Yes! User input should not be interpreted as Markdown for security and UX reasons:
- Security: Prevents users from injecting malicious Markdown
- UX: Users expect their text to appear exactly as typed
Only bot responses are rendered as Markdown.
Missing Cleanup in Scroll Listener
Problem: Component unmounts, but scroll listener still fires.
Current Solution: Cleanup function removes listener:
return () => {
window.removeEventListener("scroll", handleScroll);
};
Without Cleanup: Memory leak + potential errors if setIsNearBottom is called after unmount.
Focus Management on Mobile
Problem: On mobile, keyboard doesn’t appear when chat opens.
Current Solution: Focus input after animation:
if (isOpen && inputRef.current) {
setTimeout(() => inputRef.current?.focus(), 300);
}
The 300ms delay matches the CSS transition duration, ensuring focus happens after the chat is fully visible.
Conclusion
Skills Acquired
You’ve learned:
- Webhook Integration: Connecting to external APIs without complex WebSocket setup
- Session Management: Generating and tracking unique session identifiers
- Markdown Rendering: Safely rendering rich text with custom styling
- Dual Scroll Management: Handling multiple scroll contexts independently
- State Persistence: Using sessionStorage for cross-page state
- Optimistic UI: Updating UI before API confirmation for better UX
- Error Recovery: Graceful degradation when APIs fail
The Proficiency Marker: Most developers use pre-built chat widgets without understanding their internals. You now understand chat UIs as stateful, network-dependent systems with complex interactions between user input, API communication, and scroll behavior. This mental model transfers to:
- Real-time collaboration tools (Google Docs comments)
- Support ticket systems
- Social media messaging
- Live streaming chat
- Customer service platforms
Using This Component
In your Astro page:
---
// src/pages/index.astro
import ChatBubble from '@/components/aiChat';
---
<html>
<body>
<!-- Your page content -->
<ChatBubble
client:load
webhookUrl="https://your-n8n-instance.com/webhook/chat"
initialBotMessage="Hi! I'm Jason's AI assistant. Ask me anything about his work!"
botName="AI Assistant"
autoOpenDelay={5000}
hideNearBottomOffset={400}
/>
</body>
</html>
Custom Styling:
<ChatBubble
webhookUrl={webhookUrl}
bubbleIcon={<MessageCircle size={24} />}
closeIcon={<X size={18} />}
sendIcon={<Send size={18} />}
headerText="Chat with AI"
placeholder="Ask me anything..."
/>
Next Challenge: Implement message persistence using IndexedDB to save chat history across page refreshes, allowing users to continue conversations where they left off.