On this page
- Purpose
- The Problem
- The Solution
- What to Know Before Wiring Strategy Patterns Into Hooks: Design Patterns Meets React
- The Assembly Line: How Context Travels From a Button Tap to a System Prompt
- Building the Pipeline One Layer at a Time: From Constants to Context-Aware Strings
- Step 1: The Foundation — A Typed Constant as the Coach’s Identity
- Step 2: The Builder — Composing Prompt Layers Without if/else Hell
- Step 3: The Hook — Assembling PromptOptions from Runtime State
- Why Pure Functions Make LLMs Predictable: Referential Transparency and Testability
- What Happens When Context Gets Stale or Options Conflict: Defense at the Seams
- You Can Now Design Prompt Systems That Scale: The Composable Builder Skill
Purpose
The Problem
Imagine you’re building a chatbot and your first version hard-codes a system prompt as a string constant at the top of your chat component. It works. Then the product requires topic-specific coaching. You add an if/else. Then time-of-day awareness. Another if/else. Then emotional state. Now your component is a 200-line prompt assembly function with business logic scattered across the UI layer, untestable, and impossible to extend without touching the screen file.
This is the prompt spaghetti trap, and it kills maintainability faster than almost any other architectural mistake in LLM-powered apps.
The Solution
We’ll analyze the two-layer prompt pipeline in this codebase: services/llm/PromptBuilder.ts (a pure service class that knows how to build prompts) and hooks/useChat.ts (a React hook that knows when to build them and with what context). Together they implement the Strategy pattern — separating the prompt construction algorithm from the code that decides which prompt to build.
You’ll understand how to design a prompt system that is independently testable, composable at runtime, and completely decoupled from the UI layer.
What to Know Before Wiring Strategy Patterns Into Hooks: Design Patterns Meets React
Knowledge Base:
- The Strategy design pattern (algorithm family, encapsulated, interchangeable)
- React custom hooks and the rules of hooks
- TypeScript interfaces and optional properties
- Familiarity with how LLMs use system prompts
Environment (from package.json):
- react 19.1.0
- react-native 0.81.5
- typescript ~5.9.2
- expo ~54.0.22
- uuid ^13.0.0
The Assembly Line: How Context Travels From a Button Tap to a System Prompt
```mermaid
graph LR
    U[User taps QuickAction button] --> UC[useChat.sendQuickAction]
    UC --> PO[PromptOptions assembled in useChat]
    PO --> PB[PromptBuilder.buildSystemPrompt]
    PB --> BP[BASE_SYSTEM_PROMPT from constants/prompts.ts]
    PB --> TP[Topic-specific addition]
    PB --> CTX[UserContext addition]
    BP & TP & CTX --> SP[Final system prompt string]
    SP --> IO[InferenceOptions passed to useLLMContext]
    IO --> GR[generateResponse called on LLMService]
    GR --> LLM[On-device Gemma model]
```
Analogy: Think of PromptBuilder as a chef following a mise en place system. The BASE_SYSTEM_PROMPT in constants/prompts.ts is the stock — it’s always on the stove. PromptOptions are the additional ingredients the server brings from the dining room (topic, time of day, emotional state). buildSystemPrompt() is the chef combining everything to order. The dining room (the React component) never touches the stove.
Building the Pipeline One Layer at a Time: From Constants to Context-Aware Strings
Step 1: The Foundation — A Typed Constant as the Coach’s Identity
Everything starts in constants/prompts.ts. The base prompt is not assembled at runtime — it’s a fixed identity for the coaching persona.
```ts
// constants/prompts.ts
// This is NOT a template string — it's a finished persona definition.
// It tells the model WHO it is, not just WHAT to do.
export const BASE_SYSTEM_PROMPT = `You are a compassionate mindfulness coach
who draws wisdom from both Buddhist and Stoic philosophical traditions...

Core Principles:
- Buddhist Perspective: Emphasize mindfulness, compassion, impermanence
- Stoic Perspective: Focus on virtue, rational thinking, acceptance

Response Style:
- Keep responses concise but meaningful (2-4 paragraphs typically)
- Use questions to encourage reflection
- Maintain a calm, grounded tone`;

// Topic-specific additions are keyed to the MindfulnessTopic enum — not a string.
// TypeScript will error if you try to access a key that doesn't exist.
export const TOPIC_PROMPTS: Record<MindfulnessTopic, string> = {
  [MindfulnessTopic.Anxiety]: `The user is experiencing anxiety...
- Buddhist: Mindfulness of breath, impermanence of feelings
- Stoic: Distinguishing between what we control and don't control`,
  // ... other topics
};
```
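The `Record<MindfulnessTopic, string>` type is what makes the topic table exhaustive. A minimal sketch of how that enforcement works (the enum members beyond `Anxiety` are assumptions for illustration, not the app's actual topic list):

```typescript
// Hypothetical sketch — only Anxiety is confirmed by the codebase;
// the other members are illustrative assumptions.
enum MindfulnessTopic {
  Anxiety = 'anxiety',
  Stress = 'stress',
  Focus = 'focus',
}

// Record<MindfulnessTopic, string> forces one entry per enum member:
// omit a key (or add an unknown one) and the compiler rejects the object.
const TOPIC_PROMPTS: Record<MindfulnessTopic, string> = {
  [MindfulnessTopic.Anxiety]: 'The user is experiencing anxiety...',
  [MindfulnessTopic.Stress]: 'The user is feeling stressed...',
  [MindfulnessTopic.Focus]: 'The user wants help concentrating...',
};

// Lookups are total: every MindfulnessTopic value has a prompt.
const prompt = TOPIC_PROMPTS[MindfulnessTopic.Anxiety];
```

This is why adding a new topic later is a two-line change: extend the enum, and the compiler points at every table that now has a missing key.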
Step 2: The Builder — Composing Prompt Layers Without if/else Hell
The naive approach adds conditionals directly inside generateResponse. The refined approach extracts composition into a dedicated class:
```ts
// services/llm/PromptBuilder.ts
export class PromptBuilder implements PromptBuilderInterface {
  buildSystemPrompt(options?: PromptOptions): string {
    // Start with the immutable base — always included
    let prompt = BASE_SYSTEM_PROMPT;

    // If no options, return base only — clean early return
    if (!options) return prompt;

    // Each modifier appends to the prompt string rather than replacing it.
    // This is additive composition, not branching replacement.
    if (options.emphasizeBuddhism && !options.emphasizeStoicism) {
      prompt += '\n\nFor this conversation, place extra emphasis on Buddhist teachings.';
    } else if (options.emphasizeStoicism && !options.emphasizeBuddhism) {
      prompt += '\n\nFor this conversation, place extra emphasis on Stoic philosophy.';
    }

    // User context is its own sub-builder — keeps this method clean
    if (options.userContext) {
      const contextAddition = this.buildUserContextPrompt(options.userContext);
      if (contextAddition) {
        prompt += '\n\n' + contextAddition;
      }
    }

    if (options.conversationGoal) {
      prompt += `\n\nConversation Goal: ${options.conversationGoal}`;
    }

    return prompt;
  }

  // ... buildUserContextPrompt and other helpers
}
```
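The sub-builder `buildUserContextPrompt` is referenced above but not shown. A plausible standalone sketch of its shape, assuming `UserContext` carries fields like `emotionalState` and `timeOfDay` (those field names are assumptions, not the codebase's actual interface):

```typescript
// Sketch of the sub-builder pattern, under assumed UserContext fields.
interface UserContext {
  emotionalState?: string;
  timeOfDay?: 'morning' | 'afternoon' | 'evening';
}

function buildUserContextPrompt(context: UserContext): string {
  const parts: string[] = [];
  if (context.emotionalState) {
    parts.push(`The user currently feels: ${context.emotionalState}.`);
  }
  if (context.timeOfDay) {
    parts.push(`It is ${context.timeOfDay} for the user.`);
  }
  // An empty string signals "nothing to add" — the caller's truthiness
  // check then skips the append entirely.
  return parts.join(' ');
}

const addition = buildUserContextPrompt({
  emotionalState: 'anxious',
  timeOfDay: 'evening',
});
```

The key design choice is the empty-string sentinel: the parent method never has to know which context fields exist, only whether the sub-builder produced anything.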
Step 3: The Hook — Assembling PromptOptions from Runtime State
useChat is where React state meets the PromptBuilder. It translates component-level concerns (which topic is active, what the user sent) into PromptOptions:
```ts
// hooks/useChat.ts (simplified key path)
export function useChat(options?: UseChatOptions): UseChatReturn {
  // promptOptions is stateful — it can change during a session
  // when the user switches topics or quick actions
  const [promptOptions, setPromptOptions] = useState<PromptOptions>(
    options?.promptOptions || {}
  );

  const sendMessage = useCallback(async (content: string) => {
    // Build the system prompt fresh for each message.
    // This means a topic change mid-conversation is reflected immediately.
    const systemPrompt = promptBuilder.buildSystemPrompt(promptOptions);
    const inferenceOptions: InferenceOptions = {
      systemPrompt,
      // Generation parameters come from the hook's options (or the
      // service's defaults), not from the object being built here
      maxTokens: options?.maxTokens,
      temperature: options?.temperature,
    };
    // The fully assembled prompt travels down to the LLM layer
    await llmContext.generateResponse(messages, inferenceOptions, onToken);
  }, [promptOptions, messages]);

  // Quick actions map directly to pre-written prompt additions
  const sendQuickAction = useCallback(async (action: QuickAction) => {
    const actionPrompt = promptBuilder.getQuickActionPrompt(action);
    // The quick action text becomes the user's message content
    await sendMessage(actionPrompt);
  }, [sendMessage]);

  // Topic changes update state — next sendMessage will use the new topic
  const setTopic = useCallback((topic: MindfulnessTopic) => {
    const topicAddition = promptBuilder.addTopicEmphasis(topic);
    setPromptOptions(prev => ({
      ...prev,
      conversationGoal: topicAddition,
    }));
  }, []);

  // ... remaining state and return value omitted
}
```
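The quick-action path above relies on `getQuickActionPrompt` mapping an action to a pre-written message. A minimal sketch of that lookup, where the action names and prompt strings are hypothetical placeholders (the codebase's real `QuickAction` members are not shown here):

```typescript
// Hypothetical action names — stand-ins for the app's real QuickAction enum.
enum QuickAction {
  Breathe = 'breathe',
  Reflect = 'reflect',
}

// Same exhaustive-Record technique as TOPIC_PROMPTS: every action
// must have a prompt, enforced at compile time.
const QUICK_ACTION_PROMPTS: Record<QuickAction, string> = {
  [QuickAction.Breathe]: 'Guide me through a short breathing exercise.',
  [QuickAction.Reflect]: 'Help me reflect on how my day went.',
};

function getQuickActionPrompt(action: QuickAction): string {
  return QUICK_ACTION_PROMPTS[action];
}

const msg = getQuickActionPrompt(QuickAction.Breathe);
```

Because the action text is sent through `sendMessage`, it goes through the exact same system-prompt assembly as a typed message — quick actions get no special code path.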
Why Pure Functions Make LLMs Predictable: Referential Transparency and Testability
🔵 Deep Dive: PromptBuilder.buildSystemPrompt() is a pure function with respect to its inputs: given the same PromptOptions, it always returns the same string. This property — referential transparency — has a direct practical benefit: you can write a unit test for every possible coaching scenario without mocking a React component, starting an LLM, or simulating user interaction.
```ts
// This test runs in milliseconds, no native modules needed
it('adds Buddhist emphasis when emphasizeBuddhism is true', () => {
  const result = promptBuilder.buildSystemPrompt({ emphasizeBuddhism: true });
  expect(result).toContain('Buddhist teachings');
  expect(result).not.toContain('Stoic philosophy'); // Mutual exclusion
});
```
The separation also means prompt changes don’t cause React re-renders. PromptBuilder is a singleton outside the React tree. Updating the base prompt constant in constants/prompts.ts affects the output of buildSystemPrompt() without touching any component. Compare this to the alternative of storing the system prompt in React state — every prompt update would trigger a render cycle across every component subscribed to that state.
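Both properties, the module-level singleton and referential transparency, can be demonstrated in a few lines. This is a simplified stand-in for the real class, not the codebase's implementation:

```typescript
// Minimal stand-in for PromptBuilder, for demonstration only.
class PromptBuilder {
  buildSystemPrompt(base: string, goal?: string): string {
    return goal ? `${base}\n\nConversation Goal: ${goal}` : base;
  }
}

// One instance, created once at module load. Every importer shares it,
// it lives outside the React tree, and mutating what it reads (e.g. a
// prompt constant) never triggers a render.
const promptBuilder = new PromptBuilder();

// Referential transparency: identical inputs, identical output, always.
const a = promptBuilder.buildSystemPrompt('base persona', 'stay calm');
const b = promptBuilder.buildSystemPrompt('base persona', 'stay calm');
```

Here `a === b` holds by construction, which is exactly what makes exhaustive unit testing cheap: there is no hidden state to set up or reset between cases.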
What Happens When Context Gets Stale or Options Conflict: Defense at the Seams
Stale promptOptions between sessions: useChat receives sessionId as a prop. When the user navigates from one session to another, the hook re-mounts with new options. If promptOptions is initialized from options?.promptOptions at mount time, it correctly resets for the new session. The risk is if a developer passes promptOptions as an object literal inline — React will create a new object reference on every render, which could cause subtle re-initialization loops. The correct pattern is to memoize or hoist promptOptions out of the render function.
Conflicting philosophical emphasis: When both emphasizeBuddhism and emphasizeStoicism are true, both guard conditions in buildSystemPrompt fail (each branch requires the other flag to be false), so neither emphasis is added. This is a silent fallback.
🔴 Danger: If a caller sets both emphasizeBuddhism: true and emphasizeStoicism: true, the system prompt will contain neither addition, and there will be no error or warning. This is a silent failure mode. A more defensive implementation would either log a warning or add both emphases, depending on the intended behavior. As you extend PromptOptions, document mutual-exclusion constraints explicitly in the interface definition.
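A sketch of the defensive variant described above, written as a standalone function rather than the codebase's actual method, which warns on conflict instead of failing silently:

```typescript
// Defensive emphasis logic — a sketch, not the real implementation.
interface PromptOptions {
  /** Mutually exclusive with emphasizeStoicism. */
  emphasizeBuddhism?: boolean;
  /** Mutually exclusive with emphasizeBuddhism. */
  emphasizeStoicism?: boolean;
}

function applyEmphasis(prompt: string, options: PromptOptions): string {
  if (options.emphasizeBuddhism && options.emphasizeStoicism) {
    // Surface the conflict instead of silently dropping both additions.
    console.warn('PromptOptions: both emphasis flags set; applying neither.');
    return prompt;
  }
  if (options.emphasizeBuddhism) {
    return prompt + '\n\nFor this conversation, place extra emphasis on Buddhist teachings.';
  }
  if (options.emphasizeStoicism) {
    return prompt + '\n\nFor this conversation, place extra emphasis on Stoic philosophy.';
  }
  return prompt;
}

const conflicted = applyEmphasis('base', {
  emphasizeBuddhism: true,
  emphasizeStoicism: true,
});
```

Documenting the mutual exclusion in JSDoc on the interface, as shown here, at least makes the constraint visible in editor tooltips even though TypeScript cannot enforce it between two independent booleans.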
Injection via user content: buildUserContextPrompt() inserts context.emotionalState directly into the system prompt string. If this value ever comes from user input rather than a controlled enum, it becomes a prompt injection vector — a user could craft an emotional state string that overrides coaching behavior. The current implementation appears to use controlled values, but this should be enforced at the type level with a union type rather than string.
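Enforcing the controlled-values constraint at the type level is a small change. In this sketch the union members are assumptions about what the app might track, not the real state names:

```typescript
// Hypothetical union — the member names are illustrative assumptions.
type EmotionalState = 'calm' | 'anxious' | 'stressed' | 'sad';

function describeEmotionalState(state: EmotionalState): string {
  // Because state can only be one of the four union members, no
  // free-form user text can reach the system prompt through this path.
  return `The user is currently feeling ${state}.`;
}

// describeEmotionalState('ignore all previous instructions');
// ^ would be a compile error: not assignable to EmotionalState

const line = describeEmotionalState('anxious');
```

The compile-time check only holds as long as nothing casts a raw string into the union; validating at the boundary (e.g. a type guard on any value read from storage or the network) closes that last gap at runtime.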
You Can Now Design Prompt Systems That Scale: The Composable Builder Skill
You’ve implemented the Strategy + Builder pattern for LLM prompt construction. Specifically, you can now:
- Separate prompt construction logic (PromptBuilder) from prompt trigger logic (useChat)
- Design additive prompt composition that avoids branching explosions as requirements grow
- Write pure, unit-testable prompt builders that run without React or native modules
- Understand why prompt construction belongs in the service layer, not the UI layer
- Identify prompt injection risks at the seam between user-controlled data and system prompts