
Building an AI-Powered Chat Widget with Markdown Support and Session Management

Learn how to create a production-grade AI chat widget in React/TypeScript that integrates with webhook-based AI services, supports Markdown rendering, manages session state, and handles complex UX patterns.

Published

Tue May 27 2025

Technologies Used

LLM · React · TypeScript · Markdown
Advanced · 28 minutes

Purpose

The Generic Chatbot Experience

You want to add an AI assistant to your portfolio. You could use Intercom, Drift, or Zendesk Chat, but they:

  • Cost $50-200/month for basic features
  • Look generic (everyone recognizes the Intercom bubble)
  • Send data to third-party servers
  • Require complex integrations
  • Don’t support custom AI models (GPT-4, Claude, custom RAG systems)

You could build from scratch, but then you face:

  • Managing WebSocket connections for real-time updates
  • Handling message persistence across page refreshes
  • Implementing auto-open behavior without annoying users
  • Supporting Markdown rendering for rich responses
  • Managing scroll behavior as messages arrive
  • Handling loading states and error recovery
  • Making it mobile-responsive

The Core Problem: Chat widgets seem simple (just messages in a box), but production-ready implementations require handling dozens of edge cases around state management, user experience, and API integration.

A Self-Contained Chat Component with Webhook Integration

The code we’re analyzing (src/components/aiChat.tsx) implements a production-grade chat widget that:

  1. Integrates with any webhook-based AI service (n8n, Zapier, custom APIs)
  2. Renders Markdown responses with syntax highlighting
  3. Manages session state with unique IDs
  4. Auto-opens once per session without being intrusive
  5. Hides when user scrolls near page bottom (UX optimization)
  6. Handles loading states, errors, and network failures gracefully
  7. Maintains focus management for keyboard accessibility

This is the same pattern used by:

  • Intercom’s Messenger
  • Drift’s chat widget
  • Custom support chat systems

Understanding Real-Time UI Patterns

This tutorial demonstrates six advanced concepts:

  • Webhook Integration: Connecting to external AI services without WebSockets
  • Session Management: Generating and maintaining unique session identifiers
  • Markdown Rendering: Safely rendering rich text from untrusted sources
  • Scroll Behavior: Multiple scroll contexts (chat window + page scroll)
  • State Persistence: Using sessionStorage for cross-page state
  • Inline Styles: When and why to use inline styles vs. CSS classes

🔵 Deep Dive: This component uses the Controlled Component pattern for the input field, Optimistic UI updates for instant message display, and Error Boundaries (implicit) for graceful degradation.

Prerequisites & Tooling

Knowledge Base

Required:

  • React fundamentals (components, props, state, effects)
  • TypeScript interfaces and types
  • Async/await and Promises
  • HTTP requests with fetch API

Helpful:

  • Understanding of Markdown syntax
  • Experience with chat UIs
  • Knowledge of sessionStorage vs. localStorage
  • Familiarity with webhook concepts

Environment

From the component’s imports:

import React, { useState, useEffect, useRef } from "react";
import ReactMarkdown from "react-markdown";
import remarkGfm from "remark-gfm";
import { ChevronDown } from "lucide-react";

Dependencies:

npm install react-markdown remark-gfm lucide-react

Key Libraries:

  • react-markdown: Renders Markdown to React components
  • remark-gfm: GitHub Flavored Markdown support (tables, strikethrough, task lists)
  • lucide-react: Icon library (ChevronDown for collapse icon)

Testing Your Webhook

# Test with curl
curl -X POST https://your-webhook-url.com \
  -H "Content-Type: application/json" \
  -d '{
    "action": "sendMessage",
    "sessionId": "test-123",
    "chatInput": "Hello, AI!"
  }'

# Expected response
{
  "output": "Hello! How can I help you today?"
}

High-Level Architecture

Component State Flow

stateDiagram-v2
    [*] --> Closed: Initial render
    Closed --> AutoOpening: After 3s (first visit)
    Closed --> Open: User clicks bubble
    AutoOpening --> Open: Timer completes
    Open --> Closed: User clicks close
    
    Open --> Typing: User types message
    Typing --> Sending: User presses Enter
    Sending --> Loading: API request in flight
    Loading --> DisplayResponse: API success
    Loading --> DisplayError: API failure
    DisplayResponse --> Open: Ready for next message
    DisplayError --> Open: Ready for retry
    
    note right of AutoOpening
      Only happens once per session
      Tracked in sessionStorage
    end note
    
    note right of Loading
      Shows "typing..." indicator
      Input disabled
    end note

The Email Client

Think of the chat widget as an email client:

| Email Client | Chat Widget |
|---|---|
| Inbox | Message list |
| Compose button | Input field |
| Send button | Send icon button |
| Unread badge | Auto-open notification |
| Minimize to tray | Close to bubble |
| Auto-check for new mail | Auto-scroll to new messages |
| Draft persistence | Session ID persistence |

Both need to:

  • Display a list of messages
  • Handle user input
  • Send data to a server
  • Show loading states
  • Manage focus and scroll
  • Persist state across interactions

The Five-Layer Architecture

Layer 1: Visual State (Open/Closed)
  ├─ Bubble button (always visible)
  ├─ Chat window (conditionally visible)
  └─ Hide near bottom (scroll-based)

Layer 2: Message State
  ├─ Message history array
  ├─ Current input value
  └─ Loading indicator

Layer 3: Session State
  ├─ Unique session ID (UUID)
  ├─ Auto-open flag (sessionStorage)
  └─ Scroll position tracking

Layer 4: Network Layer
  ├─ Webhook POST requests
  ├─ Error handling
  └─ Response parsing

Layer 5: Rendering Layer
  ├─ Markdown parsing
  ├─ Inline styles
  └─ Accessibility attributes

The Implementation

Defining the Component Interface

The Props Type:

interface ChatBubbleProps {
  webhookUrl: string;                    // Required: API endpoint
  initialBotMessage?: string;            // Optional: First message
  autoOpenDelay?: number;                // Optional: Delay before auto-open
  botName?: string;                      // Optional: Bot display name
  userName?: string;                     // Optional: User display name
  bubbleIcon?: React.ReactNode;          // Optional: Custom bubble icon
  closeIcon?: React.ReactNode;           // Optional: Custom close icon
  sendIcon?: React.ReactNode;            // Optional: Custom send icon
  placeholder?: string;                  // Optional: Input placeholder
  headerText?: string;                   // Optional: Chat header text
  openIcon?: React.ReactNode;            // Optional: Custom open icon
  hideNearBottomOffset?: number;         // Optional: Hide threshold
}

Design Decisions:

  1. Only webhookUrl is required: Everything else has sensible defaults
  2. React.ReactNode for icons: Allows emoji strings or JSX elements
  3. Offset as number: Flexible threshold for hiding behavior

🔵 Deep Dive: Using React.ReactNode instead of string for icons allows maximum flexibility. Users can pass "💬", <ChatIcon />, or even <img src="..." />.
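As a hedged sketch of how the optional props collapse to defaults (the concrete default values below are illustrative assumptions, except autoOpenDelay's 3000ms, which the tutorial mentions later — they are not necessarily the repo's actual defaults), parameter destructuring handles this cleanly:

```typescript
// Illustrative only: how optional props fall back to defaults.
// Default values here are assumptions, not taken from the repo.
interface ChatBubbleDefaults {
  webhookUrl: string;        // required
  autoOpenDelay?: number;
  botName?: string;
  placeholder?: string;
}

function resolveProps({
  webhookUrl,
  autoOpenDelay = 3000,                 // default: 3 seconds
  botName = "Bot",                      // assumed default name
  placeholder = "Type a message...",    // assumed default placeholder
}: ChatBubbleDefaults) {
  return { webhookUrl, autoOpenDelay, botName, placeholder };
}

// Only the required prop needs to be supplied:
const resolved = resolveProps({ webhookUrl: "https://example.com/webhook" });
```

Because defaults live in the destructuring pattern, callers who omit a prop and callers who pass `undefined` get identical behavior.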

State Management Strategy

The State Variables:

const [isOpen, setIsOpen] = useState(false);
const [messages, setMessages] = useState<Message[]>([]);
const [inputValue, setInputValue] = useState("");
const [isLoading, setIsLoading] = useState(false);
const [sessionId] = useState(() => crypto.randomUUID());
const [isNearBottom, setIsNearBottom] = useState(false);

Why These Specific States?

  • isOpen: Controls visibility (boolean is sufficient)
  • messages: Array of message objects (needs structure for rendering)
  • inputValue: Controlled input (React best practice)
  • isLoading: Disables input during API calls
  • sessionId: Generated once, never changes (lazy initialization)
  • isNearBottom: Scroll-based visibility toggle

The Message Type:

interface Message {
  id: number;              // Unique identifier (for React keys)
  sender: "user" | "bot";  // Determines styling
  text: string;            // Message content
}

🔴 Danger: Using Date.now() for IDs can cause collisions if two messages are created in the same millisecond. Production code should use crypto.randomUUID() or a counter.
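Following that warning, one collision-free alternative is a module-level counter (a sketch, not the repo's code — `crypto.randomUUID()` works too, but would require changing `id` to a string):

```typescript
// Sketch: monotonic message IDs that cannot collide within a session,
// unlike bare Date.now(). Not the repo's implementation.
let counter = 0;

function nextMessageId(): number {
  return ++counter; // strictly increasing, never repeats
}
```

The counter resets on page reload, which is fine here: IDs only need to be unique within one mounted component's message list, where they serve as React keys.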

Session ID Generation

Naive Approach: No Session Tracking

// WRONG: Every message is a new conversation
const payload = {
  action: "sendMessage",
  chatInput: trimmedInput,
};

Why This Fails: The AI has no context. Each message is treated as a new conversation, so follow-up questions don’t work.

Refined Solution (From Repo):

const [sessionId] = useState(() => crypto.randomUUID());

const payload = {
  action: "sendMessage",
  sessionId: sessionId,  // Consistent across all messages
  chatInput: trimmedInput,
};

How It Works:

  1. Lazy Initialization: useState(() => ...) runs only once on mount
  2. crypto.randomUUID(): Generates RFC 4122 compliant UUID
    • Example: "550e8400-e29b-41d4-a716-446655440000"
  3. Const Destructuring: [sessionId] (no setter) prevents accidental changes

Backend Correlation:

// On the backend (n8n, custom API)
const sessions = new Map();

app.post('/webhook', async (req, res) => {
  const { sessionId, chatInput } = req.body;
  
  // Retrieve conversation history
  let history = sessions.get(sessionId) || [];
  history.push({ role: 'user', content: chatInput });
  
  // Send to AI with full history
  const response = await ai.chat(history);
  
  // Store updated history
  history.push({ role: 'assistant', content: response });
  sessions.set(sessionId, history);
  
  res.json({ output: response });
});

Auto-Open Logic with Session Persistence

The Challenge: Auto-open the chat once to grab attention, but don’t annoy users on every page load.

const SESSION_STORAGE_KEY = "chatBubbleAutoOpened";

useEffect(() => {
  setMessages([{ id: Date.now(), sender: "bot", text: initialBotMessage }]);
  
  const hasAutoOpenedInSession = sessionStorage.getItem(SESSION_STORAGE_KEY);

  if (!hasAutoOpenedInSession) {
    const timer = setTimeout(() => {
      setIsOpen((currentIsOpenState) => {
        if (!currentIsOpenState) {
          sessionStorage.setItem(SESSION_STORAGE_KEY, "true");
          return true;
        }
        return currentIsOpenState;
      });
    }, autoOpenDelay);
    
    return () => clearTimeout(timer);
  }
}, [initialBotMessage, autoOpenDelay]);

Breaking It Down:

  1. Check sessionStorage: Has this session already auto-opened?
  2. Set Timer: Wait autoOpenDelay ms (default 3000 = 3 seconds)
  3. Functional Update: Check current state before opening
  4. Mark as Opened: Set flag in sessionStorage
  5. Cleanup: Cancel timer if component unmounts

Why Functional Update?

setIsOpen((currentIsOpenState) => {
  if (!currentIsOpenState) {
    // Only open if currently closed
    sessionStorage.setItem(SESSION_STORAGE_KEY, "true");
    return true;
  }
  return currentIsOpenState;
});

This avoids acting on stale state: the effect's closure captured isOpen from the initial render, so a plain if (!isOpen) check inside the timer would always see false. The functional update reads the current value instead, so if the user manually opens the chat before the timer fires, we don’t set the flag (they opened it themselves, not auto-opened).

sessionStorage vs. localStorage:

| sessionStorage | localStorage |
|---|---|
| Cleared when tab closes | Persists forever |
| Per-tab isolation | Shared across tabs |
| Used here | Not appropriate |

We use sessionStorage because we want the auto-open to happen once per browsing session, not once ever.

Dual Scroll Management

The Challenge: Two independent scroll contexts:

  1. Chat window scroll: Auto-scroll to show new messages
  2. Page scroll: Hide chat when user scrolls near bottom

Chat Window Auto-Scroll:

const messageListRef = useRef<HTMLDivElement>(null);

useEffect(() => {
  if (messageListRef.current) {
    messageListRef.current.scrollTop = messageListRef.current.scrollHeight;
  }
}, [messages]);

How It Works:

  • scrollTop: Current scroll position (pixels from top)
  • scrollHeight: Total height of scrollable content
  • Setting scrollTop = scrollHeight scrolls to bottom

Page Scroll Detection:

useEffect(() => {
  const handleScroll = () => {
    if (typeof window !== "undefined" && hideNearBottomOffset > 0) {
      const nearBottom =
        window.scrollY + window.innerHeight >=
        document.documentElement.scrollHeight - hideNearBottomOffset;
      setIsNearBottom(nearBottom);
    }
  };

  if (typeof window !== "undefined") {
    window.addEventListener("scroll", handleScroll, { passive: true });
    handleScroll();  // Check initial state
    return () => window.removeEventListener("scroll", handleScroll);
  }
}, [hideNearBottomOffset]);

The Math:

window.scrollY = How far user has scrolled down
window.innerHeight = Viewport height
document.documentElement.scrollHeight = Total page height

nearBottom = (scrollY + innerHeight) >= (totalHeight - offset)

Example:
scrollY = 1000px
innerHeight = 800px
totalHeight = 2000px
offset = 600px

(1000 + 800) >= (2000 - 600)
1800 >= 1400  ✓ Near bottom!

Why Hide Near Bottom?

If the user scrolls to the footer (contact info, social links), the chat bubble covers important content. Hiding it improves UX.

The passive: true Flag:

window.addEventListener("scroll", handleScroll, { passive: true });

This tells the browser: “This listener won’t call preventDefault(), so you can optimize scrolling performance.” Without it, the browser must wait for the handler to complete before scrolling, causing jank.

Message Submission with Error Handling

The Complete Flow:

const handleSendMessage = async (event?: FormEvent) => {
  if (event) event.preventDefault();
  const trimmedInput = inputValue.trim();
  if (!trimmedInput || isLoading) return;

  // 1. Add user message optimistically
  const userMessage: Message = {
    id: Date.now(),
    sender: "user",
    text: trimmedInput,
  };
  setMessages((prevMessages) => [...prevMessages, userMessage]);
  setInputValue("");
  setIsLoading(true);

  // 2. Prepare payload
  const payload = {
    action: "sendMessage",
    sessionId: sessionId,
    chatInput: trimmedInput,
  };

  try {
    // 3. Send to webhook
    const response = await fetch(webhookUrl, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(payload),
    });

    // 4. Check response status
    if (!response.ok) {
      let errorData: any = { message: `Request failed with status: ${response.status}` };
      try {
        errorData = await response.json();
      } catch (parseError) {
        /* Ignore parse errors */
      }
      throw new Error(errorData?.message || errorData?.error || `Webhook request failed: ${response.status}`);
    }

    // 5. Parse response
    const data = await response.json();
    const botResponseText = data.output || "Sorry, I didn't get a valid response.";

    // 6. Add bot message
    const botMessage: Message = {
      id: Date.now() + 1,
      sender: "bot",
      text: botResponseText,
    };
    setMessages((prevMessages) => [...prevMessages, botMessage]);
    
  } catch (error) {
    // 7. Handle errors gracefully
    console.error("Chat Error:", error);
    const errorMessage: Message = {
      id: Date.now() + 1,
      sender: "bot",
      text: `Sorry, an error occurred: ${error instanceof Error ? error.message : "Could not connect."}. Please try again later.`,
    };
    setMessages((prevMessages) => [...prevMessages, errorMessage]);
    
  } finally {
    // 8. Always reset loading state
    setIsLoading(false);
    if (isOpen && inputRef.current) {
      inputRef.current.focus();
    }
  }
};

Key Patterns:

  1. Optimistic UI: Add user message immediately (don’t wait for API)
  2. Guard Clauses: Early return if input is empty or already loading
  3. Error Parsing: Try to extract error message from response body
  4. Fallback Messages: Provide defaults if API response is malformed
  5. Finally Block: Always reset loading state, even on error
  6. Focus Management: Return focus to input after submission

🔵 Deep Dive: The finally block is crucial. Without it, if the API throws an error, isLoading stays true forever, permanently disabling the input.
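Because every state change in the handler is an immutable append, the optimistic-update logic can be factored into a pure function and tested without React (a sketch; the component calls setMessages inline rather than through a helper):

```typescript
// Pure state transition behind the optimistic UI pattern:
// append the user's message immediately, append the bot's reply later.
interface Message {
  id: number;
  sender: "user" | "bot";
  text: string;
}

function appendMessage(history: Message[], msg: Message): Message[] {
  return [...history, msg]; // never mutate: React re-renders on new references
}

const afterUser = appendMessage([], { id: 1, sender: "user", text: "Hi" });
const afterBot = appendMessage(afterUser, { id: 2, sender: "bot", text: "Hello!" });
// afterUser is untouched; afterBot holds both messages
```

This is exactly what `setMessages((prev) => [...prev, msg])` does inside the handler; using the functional form guarantees each append sees the latest history even when the user message and bot reply land close together.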

Markdown Rendering with Custom Styles

The Challenge: Bot responses contain Markdown, but default rendering looks ugly.

{msg.sender === "bot" ? (
  <ReactMarkdown
    children={msg.text}
    remarkPlugins={[remarkGfm]}
    components={{
      p: ({ node, ...props }) => <p style={styles.botMessageMarkdown_p} {...props} />,
      ul: ({ node, ...props }) => <ul style={styles.botMessageMarkdown_ul} {...props} />,
      ol: ({ node, ...props }) => <ol style={styles.botMessageMarkdown_ol} {...props} />,
      li: ({ node, ...props }) => <li style={styles.botMessageMarkdown_li} {...props} />,
      a: ({ node, ...props }) => <a style={styles.botMessageMarkdown_a} {...props} target="_blank" rel="noopener noreferrer" />,
      strong: ({ node, ...props }) => <strong style={styles.botMessageMarkdown_strong} {...props} />,
      em: ({ node, ...props }) => <em style={styles.botMessageMarkdown_em} {...props} />,
      code: ({ node, inline, ...props }) => <code style={styles.botMessageMarkdown_code} {...props} />,
      pre: ({ node, ...props }) => <pre style={styles.botMessageMarkdown_pre} {...props} />,
    }}
  />
) : (
  msg.text
)}

Why Custom Components?

ReactMarkdown renders to standard HTML elements, which inherit browser default styles. By providing custom components, we can:

  • Control spacing (margins, padding)
  • Style links (color, underline)
  • Format code blocks (background, font)
  • Ensure consistency with chat bubble design

Security Note:

ReactMarkdown is safe by default—it doesn’t render raw HTML. This prevents XSS attacks:

<script>alert('XSS')</script>

Renders as plain text, not executed JavaScript.

The remarkGfm Plugin:

Enables GitHub Flavored Markdown features:

  • Tables
  • Strikethrough (~~text~~)
  • Task lists (- [ ] Todo)
  • Autolinks (URLs become clickable)

Inline Styles vs. CSS Classes

Why Inline Styles?

const styles: { [key: string]: React.CSSProperties } = {
  chatContainer: { 
    position: "fixed", 
    bottom: "20px", 
    right: "20px", 
    zIndex: 1000 
  },
  // ... more styles
};

<div style={styles.chatContainer}>

Advantages:

  1. Self-Contained: Component works anywhere without external CSS
  2. No Class Name Conflicts: No risk of global CSS overriding styles
  3. Dynamic Styles: Easy to compute styles based on props/state
  4. Type Safety: TypeScript validates style properties

Disadvantages:

  1. No Pseudo-Classes: Can’t use :hover, :focus, etc.
  2. No Media Queries: Can’t do responsive styles
  3. Larger Bundle: Styles duplicated if component used multiple times
  4. No CSS Optimizations: Can’t benefit from CSS minification

When to Use Inline Styles:

  • Reusable components distributed as libraries
  • Styles that depend on props/state
  • Prototyping and demos

When to Use CSS Classes:

  • Application-specific components
  • Complex responsive layouts
  • Hover/focus states
  • Animations and transitions

Under the Hood

Webhook Communication Pattern

The Request:

POST /webhook HTTP/1.1
Host: your-n8n-instance.com
Content-Type: application/json

{
  "action": "sendMessage",
  "sessionId": "550e8400-e29b-41d4-a716-446655440000",
  "chatInput": "What is React?"
}

The Response:

HTTP/1.1 200 OK
Content-Type: application/json

{
  "output": "React is a JavaScript library for building user interfaces..."
}

Why Webhooks Instead of WebSockets?

| WebSockets | Webhooks |
|---|---|
| Persistent connection | Request/response |
| Real-time bidirectional | One-way communication |
| Complex server setup | Simple HTTP endpoint |
| Requires connection management | Stateless |
| Not needed for chat | Perfect for chat |

Chat doesn’t need real-time push notifications (the user initiates all messages), so webhooks are simpler and more reliable.

Memory and Performance Analysis

State Memory:

messages = [
  { id: 1234567890, sender: "bot", text: "Hello! How can I help?" },
  { id: 1234567891, sender: "user", text: "What is TypeScript?" },
  { id: 1234567892, sender: "bot", text: "TypeScript is..." },
];

Memory per Message:

  • id: 8 bytes (number)
  • sender: ~8 bytes (string pointer)
  • text: Variable (average ~200 bytes)
  • Object overhead: ~50 bytes
  • Total: ~266 bytes per message

For 50 messages: ~13KB (negligible)
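Under the rough per-field assumptions above (these byte counts are estimates, not measured values), the arithmetic works out as:

```typescript
// Back-of-envelope memory estimate per message, using the figures above.
// All byte counts are rough assumptions, not measurements.
function estimateMessageBytes(textBytes: number): number {
  const id = 8;        // number
  const sender = 8;    // string pointer
  const overhead = 50; // object overhead
  return id + sender + overhead + textBytes;
}

estimateMessageBytes(200);      // 266 bytes for an average message
50 * estimateMessageBytes(200); // 13,300 bytes ≈ 13KB for 50 messages
```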

Render Performance:

Each message triggers:

  1. State update: ~0.1ms
  2. Re-render: ~2ms (depends on Markdown complexity)
  3. Scroll: ~1ms
  4. Total: ~3ms per message

At 60fps (16.67ms per frame), this leaves 13.67ms for other work—plenty of headroom.

sessionStorage Behavior

Storage Limits:

  • Capacity: 5-10MB per origin (browser-dependent)
  • Lifetime: Until tab closes
  • Scope: Per-origin, per-tab

What Happens:

// Tab 1
sessionStorage.setItem("chatBubbleAutoOpened", "true");

// Tab 2 (same site)
sessionStorage.getItem("chatBubbleAutoOpened");  // null (different tab!)

// Tab 1 (after refresh)
sessionStorage.getItem("chatBubbleAutoOpened");  // "true" (same tab)

// Tab 1 (after close and reopen)
sessionStorage.getItem("chatBubbleAutoOpened");  // null (new session)

This ensures the auto-open happens once per tab session, not once globally.
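The once-per-session guard can be exercised outside the browser with a minimal in-memory stand-in for sessionStorage (a test sketch, not the component's code — the real component also involves the timer and the functional state update):

```typescript
// Minimal sessionStorage-like store plus the auto-open guard logic.
const store = new Map<string, string>();
const fakeSessionStorage = {
  getItem: (k: string): string | null => store.get(k) ?? null,
  setItem: (k: string, v: string): void => { store.set(k, v); },
};

const KEY = "chatBubbleAutoOpened";

// Returns true only the first time it is called in a "session".
function shouldAutoOpen(): boolean {
  if (fakeSessionStorage.getItem(KEY)) return false;
  fakeSessionStorage.setItem(KEY, "true");
  return true;
}
```

Clearing `store` simulates closing the tab: the next call to shouldAutoOpen would return true again, matching sessionStorage's per-session lifetime.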

Edge Cases & Pitfalls

Rapid Message Submission

Problem: User types “hello”, presses Enter, immediately types “world”, presses Enter.

Current Behavior:

  • Message 1 sends, isLoading = true
  • Message 2 blocked (input disabled while loading)
  • Message 1 completes, isLoading = false
  • User can now send Message 2

Alternative: Queue Messages

const [messageQueue, setMessageQueue] = useState<string[]>([]);

const handleSendMessage = (text: string) => {
  // Enqueue instead of sending directly; the effect below drains the queue.
  setMessageQueue(queue => [...queue, text]);
};

useEffect(() => {
  if (messageQueue.length > 0 && !isLoading) {
    const nextMessage = messageQueue[0];
    setMessageQueue(queue => queue.slice(1));
    sendToAPI(nextMessage);
  }
}, [messageQueue, isLoading]);

Long API Response Times

Problem: Webhook takes 30 seconds to respond (complex AI reasoning).

Current Behavior: User sees “typing…” for 30 seconds, no feedback.

Better UX: Streaming Responses

const handleSendMessage = async () => {
  // ... send request
  
  if (!response.body) return;  // guard: body can be null
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let botText = "";
  
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    
    botText += decoder.decode(value, { stream: true });
    
    // Update message in real-time
    setMessages(prev => {
      const lastMsg = prev[prev.length - 1];
      if (lastMsg.sender === "bot") {
        return [...prev.slice(0, -1), { ...lastMsg, text: botText }];
      }
      return [...prev, { id: Date.now(), sender: "bot", text: botText }];
    });
  }
};

This requires the backend to support streaming responses (Server-Sent Events or chunked transfer encoding).

Network Offline

Problem: User has no internet connection.

Current Behavior: Fetch throws error, shows generic error message.

Better Handling:

const handleSendMessage = async () => {
  if (!navigator.onLine) {
    const errorMessage: Message = {
      id: Date.now(),
      sender: "bot",
      text: "You appear to be offline. Please check your internet connection.",
    };
    setMessages(prev => [...prev, errorMessage]);
    return;
  }
  
  // ... rest of logic
};

Markdown Injection

Problem: User types Markdown in their message.

User: **Hello** _world_

Current Behavior: User messages render as plain text (not Markdown).

Is This Correct? Yes! User input should not be interpreted as Markdown for security and UX reasons:

  • Security: Prevents users from injecting malicious Markdown
  • UX: Users expect their text to appear exactly as typed

Only bot responses are rendered as Markdown.

Missing Cleanup in Scroll Listener

Problem: Component unmounts, but scroll listener still fires.

Current Solution: Cleanup function removes listener:

return () => {
  window.removeEventListener("scroll", handleScroll);
};

Without Cleanup: Memory leak + potential errors if setIsNearBottom is called after unmount.

Focus Management on Mobile

Problem: On mobile, keyboard doesn’t appear when chat opens.

Current Solution: Focus input after animation:

if (isOpen && inputRef.current) {
  setTimeout(() => inputRef.current?.focus(), 300);
}

The 300ms delay matches the CSS transition duration, ensuring focus happens after the chat is fully visible.

Conclusion

Skills Acquired

You’ve learned:

  1. Webhook Integration: Connecting to external APIs without complex WebSocket setup
  2. Session Management: Generating and tracking unique session identifiers
  3. Markdown Rendering: Safely rendering rich text with custom styling
  4. Dual Scroll Management: Handling multiple scroll contexts independently
  5. State Persistence: Using sessionStorage for cross-page state
  6. Optimistic UI: Updating UI before API confirmation for better UX
  7. Error Recovery: Graceful degradation when APIs fail

The Proficiency Marker: Most developers use pre-built chat widgets without understanding their internals. You now understand chat UIs as stateful, network-dependent systems with complex interactions between user input, API communication, and scroll behavior. This mental model transfers to:

  • Real-time collaboration tools (Google Docs comments)
  • Support ticket systems
  • Social media messaging
  • Live streaming chat
  • Customer service platforms

Using This Component

In your Astro page:

---
// src/pages/index.astro
import ChatBubble from '@/components/aiChat';
---

<html>
  <body>
    <!-- Your page content -->
    
    <ChatBubble 
      client:load
      webhookUrl="https://your-n8n-instance.com/webhook/chat"
      initialBotMessage="Hi! I'm Jason's AI assistant. Ask me anything about his work!"
      botName="AI Assistant"
      autoOpenDelay={5000}
      hideNearBottomOffset={400}
    />
  </body>
</html>

Custom Styling:

<ChatBubble
  webhookUrl={webhookUrl}
  bubbleIcon={<MessageCircle size={24} />}
  closeIcon={<X size={18} />}
  sendIcon={<Send size={18} />}
  headerText="Chat with AI"
  placeholder="Ask me anything..."
/>

Next Challenge: Implement message persistence using IndexedDB to save chat history across page refreshes, allowing users to continue conversations where they left off.
