Welcome to a truly exciting chapter! Up to this point, we’ve explored the foundational concepts of integrating AI into our frontend applications: from understanding AI APIs and prompt engineering to managing streaming responses and implementing basic guardrails. Now, it’s time to bring these pieces together and build something tangible and genuinely useful: a Context-Aware Copilot.
This project will guide you step-by-step through creating an interactive AI assistant that doesn’t just respond to your explicit prompts but also understands the current state of your application. Imagine an AI that knows which product you’re viewing, what form you’re filling out, or what content is on your screen, and tailors its responses accordingly. This ability to leverage context is what elevates a simple chatbot to a powerful copilot, making your applications smarter and more intuitive.
By the end of this chapter, you’ll have built a functional client-side copilot in React, gaining deep confidence in orchestrating AI interactions, managing complex state, and applying best practices for a seamless user experience. We’ll reinforce concepts like dynamic prompt construction, handling asynchronous streaming data, and essential client-side safety checks, preparing you for even more advanced agentic workflows.
Prerequisites
To get the most out of this chapter, you should have a solid understanding of:
- React Fundamentals: Components, props, state (`useState`), side effects (`useEffect`), and the Context API. The examples in this chapter use React 18, and they run unchanged on newer releases.
- Asynchronous JavaScript: Promises, `async`/`await`, and basic error handling.
- Basic AI API Interaction: How to send requests to an AI service and process responses (covered in earlier chapters).
- Prompt Engineering Basics: Crafting effective instructions for AI models.
Let’s dive in and build our intelligent assistant!
Core Concepts: Understanding Our Context-Aware Copilot
Before we write any code, let’s solidify what a context-aware copilot is and why it’s so valuable.
What is a Context-Aware Copilot?
At its heart, a context-aware copilot is an AI assistant embedded within your application that intelligently uses information about the user’s current activity or the application’s state to provide more relevant, personalized, and helpful responses. Instead of you having to explicitly tell the AI everything, it observes its surroundings and incorporates that information into its understanding.
Think of it this way:
- Traditional Chatbot: “What’s the weather like?” (User provides all context)
- Context-Aware Copilot: User is viewing a product page. User asks, “Tell me more about this.” The copilot automatically knows “this” refers to the product currently displayed and provides details about that specific product.
This capability significantly reduces cognitive load for the user and makes the AI feel like a true helper, rather than just a command-line interface.
The Power of Contextual Information
Where does this “context” come from? In a frontend application, it can originate from several places:
- UI State: What component is active? What values are in a form? Which tab is open? Is a modal displayed?
- Application Data: Data loaded from an API (e.g., product details, user profile), items in a shopping cart, preferences stored locally.
- User Interaction History: Previous messages in a conversation, recent actions taken by the user within the app.
- Environmental Data: Device type, screen size (less common for AI context, but possible).
Our copilot will primarily focus on capturing UI state and application data to enrich the user’s explicit prompt.
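To make this concrete, a context "snapshot" is usually just a plain object assembled from whatever state matters. The field names below (`selectedProduct`, `cartItems`, `activeTab`) are illustrative assumptions, not a fixed API — capture whatever your app actually tracks:

```javascript
// A minimal sketch of a context snapshot. The input shape and field names
// here are hypothetical; adapt them to your application's real state.
function getContextSnapshot(appState) {
  return {
    selectedProduct: appState.selectedProduct ?? null, // application data
    cartItemCount: (appState.cartItems ?? []).length,  // application data
    activeTab: appState.activeTab ?? 'products',       // UI state
    capturedAt: new Date().toISOString(),              // when the snapshot was taken
  };
}
```

The snapshot is cheap to build on every request, which keeps the context fresh without any subscription machinery.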
Client-Side Agent Orchestration Flow
The magic of a context-aware copilot on the frontend lies in how we orchestrate the gathering of this context and integrate it into the AI’s prompt. Here’s a simplified flow, step by step:
- User Interaction: The user types a question into the copilot’s input field or clicks a button to invoke it.
- Copilot UI: Our React component that manages the input, display, and interaction.
- Current UI Context: This is where the “context-aware” part comes in. The UI component actively gathers relevant information about the application state. This might involve reading from React Context, component props, or even querying the DOM (though less common for AI context).
- User’s Text Prompt: The explicit question or command the user types.
- Client-Side Agent Logic: This is a crucial piece. It’s not a full-blown backend agent, but rather the intelligent frontend code responsible for:
- Combining the UI context with the user’s prompt.
- Formatting this combined information into a robust prompt for the AI model.
- Adding any client-side guardrails or pre-processing.
- Making the actual API call to our (simulated) AI backend.
- Handling the streaming response.
- AI Service API: This represents our backend endpoint that communicates with a large language model (LLM). For this project, we’ll simulate this API call, as our focus is purely on the frontend integration.
- Streams Response: The AI service sends its response back in chunks, which the client-side agent processes.
- Updates Copilot UI: The client-side agent incrementally updates the UI with the streaming response, making the experience feel dynamic and responsive.
This client-side orchestration ensures that all the “intelligence” of gathering and formatting context happens directly in the user’s browser, allowing for highly dynamic and responsive AI interactions.
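The whole flow can be condensed into one hypothetical orchestration function. `gatherContext`, `buildPrompt`, `callAiService`, and `onChunk` are placeholders for the pieces we build over the rest of this chapter; `callAiService` is assumed to return an async iterable of response chunks:

```javascript
// Hypothetical sketch of the orchestration flow described above.
async function runCopilotTurn({ userText, gatherContext, buildPrompt, callAiService, onChunk }) {
  const context = gatherContext();               // 1. capture current UI/app state
  const prompt = buildPrompt(userText, context); // 2. combine context with the user's text
  for await (const chunk of callAiService(prompt)) { // 3. call the AI service, stream back
    onChunk(chunk);                              // 4. incrementally update the UI
  }
}
```

Keeping these stages as separate functions makes each one easy to test and swap out (for example, substituting a simulated AI service during development).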
Step-by-Step Implementation: Building Our Copilot
Let’s get our hands dirty and start building! We’ll begin with a basic React application. If you have a project from a previous chapter, feel free to use it. Otherwise, let’s create a new one.
Setup: A Fresh React Project
First, ensure you have Node.js (v18.x or higher recommended) and npm (v9.x or higher) installed. We’ll use Vite, a modern build tool, to quickly set up our React project.
1. Create a new React project:

   `npm create vite@latest my-copilot-app -- --template react`

   When prompted, choose `react` for the framework and `JavaScript` (or `TypeScript` if you prefer, but we’ll use JS for examples).

2. Navigate into your project directory:

   `cd my-copilot-app`

3. Install dependencies:

   `npm install`

4. Start the development server:

   `npm run dev`

You should see your basic React app running in your browser, typically at `http://localhost:5173`.
Now, let’s clean up src/App.jsx to prepare for our copilot. Replace its content with this minimal setup:
// src/App.jsx
import React, { useState, useEffect, createContext, useContext } from 'react';
import './App.css'; // Assuming some basic CSS for styling
// We'll create a simple context to simulate application state.
// Exporting it lets other components (like our Copilot) import and consume it.
export const AppContext = createContext(null);
function ProductPage() {
const [selectedProduct, setSelectedProduct] = useState(null);
const products = [
{ id: 'p1', name: 'Wireless Headphones', price: 199.99, description: 'Premium noise-cancelling headphones with long battery life.' },
{ id: 'p2', name: 'Smartwatch Pro', price: 249.99, description: 'Advanced smartwatch with health tracking and GPS.' },
{ id: 'p3', name: 'Portable Bluetooth Speaker', price: 79.99, description: 'Compact speaker with powerful sound and 10-hour battery.' },
];
const { setAppContext } = useContext(AppContext);
// Update global context when a product is selected
useEffect(() => {
setAppContext(prev => ({ ...prev, selectedProduct: selectedProduct }));
}, [selectedProduct, setAppContext]);
return (
<div className="product-page">
<h2>Our Products</h2>
<div className="product-list">
{products.map(product => (
<div
key={product.id}
className={`product-card ${selectedProduct?.id === product.id ? 'selected' : ''}`}
onClick={() => setSelectedProduct(product)}
>
<h3>{product.name}</h3>
<p>${product.price.toFixed(2)}</p>
<p className="description">{product.description}</p>
{selectedProduct?.id === product.id && <p className="selection-indicator">Selected!</p>}
</div>
))}
</div>
{selectedProduct && (
<div className="product-details">
<h3>Selected Product Details:</h3>
<p><strong>Name:</strong> {selectedProduct.name}</p>
<p><strong>Price:</strong> ${selectedProduct.price.toFixed(2)}</p>
<p><strong>Description:</strong> {selectedProduct.description}</p>
</div>
)}
</div>
);
}
// Our main App component
function App() {
// This state will hold our application-wide context
const [appContext, setAppContext] = useState({
selectedProduct: null,
// We can add more context here later, e.g., user preferences, cart items
});
return (
<AppContext.Provider value={{ appContext, setAppContext }}>
<div className="app-container">
<h1>Welcome to Our Smart Store!</h1>
<ProductPage />
{/* Our Copilot will go here */}
</div>
</AppContext.Provider>
);
}
export default App;
And add some basic styling to src/App.css:
/* src/App.css */
#root {
max-width: 1280px;
margin: 0 auto;
padding: 2rem;
text-align: center;
}
.app-container {
font-family: 'Arial', sans-serif;
color: #333;
}
h1, h2, h3 {
color: #0056b3;
}
.product-page {
margin-top: 40px;
padding: 20px;
border: 1px solid #eee;
border-radius: 8px;
background-color: #f9f9f9;
}
.product-list {
display: flex;
flex-wrap: wrap;
justify-content: center;
gap: 20px;
margin-top: 20px;
}
.product-card {
border: 1px solid #ddd;
border-radius: 8px;
padding: 15px;
width: 200px;
text-align: left;
cursor: pointer;
transition: all 0.2s ease-in-out;
background-color: #fff;
box-shadow: 0 2px 4px rgba(0,0,0,0.05);
}
.product-card:hover {
border-color: #007bff;
box-shadow: 0 4px 8px rgba(0,0,0,0.1);
transform: translateY(-2px);
}
.product-card.selected {
border-color: #28a745;
box-shadow: 0 0 0 3px rgba(40, 167, 69, 0.5);
}
.product-card h3 {
margin-top: 0;
color: #333;
}
.product-card .description {
font-size: 0.9em;
color: #666;
height: 60px; /* Fixed height for consistency */
overflow: hidden;
}
.product-details {
margin-top: 30px;
padding: 20px;
border: 1px dashed #007bff;
border-radius: 8px;
background-color: #e7f3ff;
text-align: left;
}
.product-details p {
margin: 5px 0;
}
.copilot-container {
margin-top: 50px;
padding: 20px;
border: 2px solid #6c757d;
border-radius: 12px;
background-color: #f0f2f5;
text-align: left;
}
.copilot-input-area {
display: flex;
gap: 10px;
margin-top: 15px;
}
.copilot-input-area input {
flex-grow: 1;
padding: 10px 15px;
border: 1px solid #ccc;
border-radius: 6px;
font-size: 1em;
}
.copilot-input-area button {
padding: 10px 20px;
background-color: #007bff;
color: white;
border: none;
border-radius: 6px;
cursor: pointer;
font-size: 1em;
transition: background-color 0.2s;
}
.copilot-input-area button:hover:not(:disabled) {
background-color: #0056b3;
}
.copilot-input-area button:disabled {
background-color: #cccccc;
cursor: not-allowed;
}
.copilot-response-area {
margin-top: 20px;
padding: 15px;
background-color: #e9ecef;
border-radius: 8px;
min-height: 80px;
white-space: pre-wrap; /* Preserves whitespace and line breaks */
word-wrap: break-word; /* Breaks long words */
border: 1px solid #dee2e6;
}
.copilot-response-area p {
margin: 0;
color: #495057;
}
.loading-indicator {
color: #007bff;
font-style: italic;
margin-top: 10px;
}
.error-message {
color: #dc3545;
font-weight: bold;
margin-top: 10px;
}
.selection-indicator {
font-size: 0.8em;
color: #28a745;
margin-top: 5px;
}
Now you have a simple product page where you can click on products to select them. The selectedProduct is stored in a React Context, which our copilot will later access.
Step 1: Basic Copilot UI Shell
Let’s create the Copilot component. This will house our input field, a button to send queries, and an area to display the AI’s response.
Create a new file src/components/Copilot.jsx:
// src/components/Copilot.jsx
import React, { useState, useContext } from 'react';
import { AppContext } from '../App'; // Import our context
function Copilot() {
const [userPrompt, setUserPrompt] = useState('');
const [aiResponse, setAiResponse] = useState('');
const [isLoading, setIsLoading] = useState(false);
const [error, setError] = useState(null);
// We'll use this later to access application context
const { appContext } = useContext(AppContext);
const handleSendMessage = async () => {
if (!userPrompt.trim()) return; // Don't send empty prompts
setIsLoading(true);
setError(null);
setAiResponse(''); // Clear previous response
console.log("App Context for prompt:", appContext); // For debugging context
console.log("User Prompt:", userPrompt); // For debugging user prompt
// --- Placeholder for actual AI API call ---
// In a real application, you'd send a request to your backend here.
// For now, let's simulate a delay and a response.
try {
await new Promise(resolve => setTimeout(resolve, 1500)); // Simulate API delay
setAiResponse(`(Simulated AI response based on "${userPrompt}" and context: ${appContext.selectedProduct ? appContext.selectedProduct.name : 'No product selected'})`);
} catch (err) {
setError("Failed to get AI response (simulated error).");
} finally {
setIsLoading(false);
setUserPrompt(''); // Clear input after sending
}
};
return (
<div className="copilot-container">
<h2>Your Smart Copilot</h2>
<div className="copilot-input-area">
<input
type="text"
value={userPrompt}
onChange={(e) => setUserPrompt(e.target.value)}
placeholder="Ask your copilot anything..."
onKeyDown={(e) => {
if (e.key === 'Enter' && !isLoading) {
handleSendMessage();
}
}}
disabled={isLoading}
/>
<button onClick={handleSendMessage} disabled={isLoading}>
{isLoading ? 'Thinking...' : 'Send'}
</button>
</div>
<div className="copilot-response-area">
{isLoading && <p className="loading-indicator">Copilot is thinking...</p>}
{error && <p className="error-message">Error: {error}</p>}
{aiResponse && <p>{aiResponse}</p>}
{!isLoading && !error && !aiResponse && <p>No response yet. Ask me something!</p>}
</div>
</div>
);
}
export default Copilot;
Now, import and render the Copilot component in src/App.jsx.
// src/App.jsx (updated part)
// ... (imports and AppContext definition)
import Copilot from './components/Copilot'; // Add this import
// ... (ProductPage component)
function App() {
// This state will hold our application-wide context
const [appContext, setAppContext] = useState({
selectedProduct: null,
// We can add more context here later, e.g., user preferences, cart items
});
return (
<AppContext.Provider value={{ appContext, setAppContext }}>
<div className="app-container">
<h1>Welcome to Our Smart Store!</h1>
<ProductPage />
<Copilot /> {/* Render our Copilot here */}
</div>
</AppContext.Provider>
);
}
export default App;
Run npm run dev again. You should now see the product list and your copilot interface below it. Try typing a message and sending it. You’ll see a simulated response that already incorporates the selected product’s name! This demonstrates the basic contextual awareness.
Step 2: Crafting the Contextual Prompt
Now, let’s make our prompt construction more robust and explicitly define how the application context influences the AI’s understanding.
We’ll introduce a utility function to build our prompt, combining a system instruction, the current application context, and the user’s explicit query.
Create a new file src/utils/promptBuilder.js:
// src/utils/promptBuilder.js
/**
* Builds a comprehensive prompt for the AI, incorporating system instructions,
* application context, and the user's explicit query.
*
* @param {object} options - Configuration for building the prompt.
* @param {string} options.userPrompt - The direct question or command from the user.
* @param {object} options.appContext - The current application state (e.g., selected product).
* @param {Array<object>} options.chatHistory - (Optional) Previous messages in the conversation.
* @returns {string} The fully constructed prompt string.
*/
export function buildCopilotPrompt({ userPrompt, appContext, chatHistory = [] }) {
let promptParts = [];
// 1. System Instruction: Define the AI's role and persona
promptParts.push(
"You are a helpful and friendly AI copilot embedded in a smart e-commerce application. " +
"Your goal is to assist the user by providing information, making suggestions, or answering questions " +
"relevant to their current activity within the store. Be concise and to the point."
);
// 2. Application Context: Inject relevant UI state
if (appContext.selectedProduct) {
promptParts.push(
`The user is currently viewing a product with the following details:
Name: ${appContext.selectedProduct.name}
Price: $${appContext.selectedProduct.price.toFixed(2)}
Description: ${appContext.selectedProduct.description}`
);
} else {
promptParts.push("The user is currently browsing the product list, no specific product is selected.");
}
// 3. (Optional) Chat History: Provide conversational memory
// For this basic example, we'll only include the current user prompt.
// In a more advanced copilot, you would format chatHistory into a conversation
// like: "User: [message]\nAI: [response]\nUser: [new message]"
// This helps the AI remember previous turns.
// 4. User's Explicit Prompt
promptParts.push(`User's question: "${userPrompt}"`);
return promptParts.join('\n\n'); // Join parts with double newlines for clarity
}
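The builder’s comments sketch how chat history could be folded in. Here is a minimal, hypothetical formatter for that step; the `{ role, content }` message shape matches what our Copilot component will store in a later step, while the "User:"/"AI:" labels are an illustrative choice:

```javascript
// Sketch of a chat-history formatter for the optional step 3 above.
// Any message whose role is not 'user' is labelled as the AI's turn.
function formatChatHistory(chatHistory) {
  if (!chatHistory || chatHistory.length === 0) return '';
  return chatHistory
    .map(msg => `${msg.role === 'user' ? 'User' : 'AI'}: ${msg.content}`)
    .join('\n');
}
```

You could export this from `promptBuilder.js` and push its output into `promptParts` just before the user’s explicit prompt.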
Now, let’s update our Copilot.jsx to use this prompt builder.
// src/components/Copilot.jsx (updated part)
import React, { useState, useContext, useEffect } from 'react'; // Added useEffect
import { AppContext } from '../App';
import { buildCopilotPrompt } from '../utils/promptBuilder'; // Import the builder
function Copilot() {
const [userPrompt, setUserPrompt] = useState('');
const [aiResponse, setAiResponse] = useState('');
const [isLoading, setIsLoading] = useState(false);
const [error, setError] = useState(null);
const { appContext } = useContext(AppContext);
const handleSendMessage = async () => {
if (!userPrompt.trim()) return;
setIsLoading(true);
setError(null);
setAiResponse('');
// --- CRITICAL: Build the prompt with context ---
const fullPrompt = buildCopilotPrompt({
userPrompt: userPrompt,
appContext: appContext,
// chatHistory: [] // We'll add this in a later step
});
console.log("--- Full AI Prompt Sent ---");
console.log(fullPrompt);
console.log("---------------------------");
// --- Simulate AI API call ---
try {
// In a real app, you'd send `fullPrompt` to your AI backend.
// e.g., const response = await fetch('/api/copilot', { method: 'POST', body: JSON.stringify({ prompt: fullPrompt }) });
// For now, we simulate a more intelligent response based on the *fullPrompt* logic.
await new Promise(resolve => setTimeout(resolve, 1500));
const simulatedResponse = `Based on your question "${userPrompt}" and the current context (product: ${appContext.selectedProduct?.name || 'none selected'}), I can tell you: ... (simulated detailed AI answer)`;
setAiResponse(simulatedResponse);
} catch (err) {
console.error("AI API call failed:", err);
setError("Failed to get AI response. Please try again.");
} finally {
setIsLoading(false);
setUserPrompt('');
}
};
return (
<div className="copilot-container">
<h2>Your Smart Copilot</h2>
{/* ... (input and button as before) ... */}
<div className="copilot-input-area">
<input
type="text"
value={userPrompt}
onChange={(e) => setUserPrompt(e.target.value)}
placeholder="Ask your copilot anything..."
onKeyDown={(e) => {
if (e.key === 'Enter' && !isLoading) {
handleSendMessage();
}
}}
disabled={isLoading}
/>
<button onClick={handleSendMessage} disabled={isLoading}>
{isLoading ? 'Thinking...' : 'Send'}
</button>
</div>
<div className="copilot-response-area">
{isLoading && <p className="loading-indicator">Copilot is thinking...</p>}
{error && <p className="error-message">Error: {error}</p>}
{aiResponse && <p>{aiResponse}</p>}
{!isLoading && !error && !aiResponse && <p>No response yet. Ask me something!</p>}
</div>
</div>
);
}
export default Copilot;
Now, when you run the app and interact with the copilot, check your browser’s console. You’ll see the --- Full AI Prompt Sent --- log, showing how the system instructions, selected product context, and your query are all combined into a single, comprehensive prompt. This is what would typically be sent to your AI backend.
Step 3: Handling Streaming Responses
Simulating a full streaming response directly in the browser without a backend is a bit complex, but we can mimic the effect of streaming. In a real application, your backend would use Server-Sent Events (SSE) or WebSockets, and your frontend would consume a ReadableStream from the fetch API.
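For reference, here is roughly how a real client would consume such a stream with `fetch`. The `/api/copilot` endpoint and its JSON body are hypothetical stand-ins for your backend:

```javascript
// Sketch: consuming a streaming HTTP response chunk by chunk.
async function streamFromBackend(prompt, onChunk, signal) {
  const response = await fetch('/api/copilot', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt }),
    signal, // lets an AbortController cancel the request mid-stream
  });
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  const reader = response.body.getReader(); // ReadableStream of Uint8Array chunks
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(decoder.decode(value, { stream: true })); // hand each text chunk to the UI
  }
}
```

Note `{ stream: true }` on the decoder: it correctly handles multi-byte characters that happen to be split across chunk boundaries.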
For our simulation, we’ll incrementally build the aiResponse string over time.
Let’s modify src/components/Copilot.jsx to simulate streaming.
// src/components/Copilot.jsx (updated part for streaming simulation)
import React, { useState, useContext, useEffect, useRef } from 'react'; // Added useRef
import { AppContext } from '../App';
import { buildCopilotPrompt } from '../utils/promptBuilder';
function Copilot() {
const [userPrompt, setUserPrompt] = useState('');
const [aiResponse, setAiResponse] = useState('');
const [isLoading, setIsLoading] = useState(false);
const [error, setError] = useState(null);
const { appContext } = useContext(AppContext);
const abortControllerRef = useRef(null); // To handle cancellation
const handleSendMessage = async () => {
if (!userPrompt.trim()) return;
setIsLoading(true);
setError(null);
setAiResponse('');
// Create a new AbortController for this request
abortControllerRef.current = new AbortController();
const signal = abortControllerRef.current.signal;
const fullPrompt = buildCopilotPrompt({
userPrompt: userPrompt,
appContext: appContext,
});
console.log("--- Full AI Prompt Sent (Streaming Simulation) ---");
console.log(fullPrompt);
console.log("-------------------------------------------------");
try {
// Simulate a streaming response
const simulatedFullResponse = `Here's a detailed AI response based on your query "${userPrompt}" and the context of the ${appContext.selectedProduct?.name || 'general product page'}. I can provide more information on its features, pricing, or comparisons with other items. For example, the ${appContext.selectedProduct?.name || 'product'} is known for its high quality and user satisfaction. What else would you like to know?`;
const words = simulatedFullResponse.split(' ');
let currentResponse = '';
for (let i = 0; i < words.length; i++) {
// Check for cancellation signal
if (signal.aborted) {
console.log("Streaming aborted!");
break;
}
currentResponse += words[i] + ' ';
setAiResponse(currentResponse);
await new Promise(resolve => setTimeout(resolve, 50 + Math.random() * 100)); // Simulate chunk delay
}
} catch (err) {
if (err.name === 'AbortError') {
console.log("Fetch aborted by user.");
setAiResponse("Request cancelled.");
} else {
console.error("AI API call failed:", err);
setError("Failed to get AI response. Please try again.");
}
} finally {
setIsLoading(false);
setUserPrompt('');
abortControllerRef.current = null; // Clear the controller
}
};
const handleCancelMessage = () => {
if (abortControllerRef.current) {
abortControllerRef.current.abort(); // Trigger cancellation
setIsLoading(false);
setError("AI response generation cancelled.");
setAiResponse("");
}
};
// Cleanup abort controller on unmount
useEffect(() => {
return () => {
if (abortControllerRef.current) {
abortControllerRef.current.abort();
}
};
}, []);
return (
<div className="copilot-container">
<h2>Your Smart Copilot</h2>
<div className="copilot-input-area">
<input
type="text"
value={userPrompt}
onChange={(e) => setUserPrompt(e.target.value)}
placeholder="Ask your copilot anything..."
onKeyDown={(e) => {
if (e.key === 'Enter' && !isLoading) {
handleSendMessage();
}
}}
disabled={isLoading}
/>
<button onClick={isLoading ? handleCancelMessage : handleSendMessage} disabled={userPrompt.trim() === '' && !isLoading}>
{isLoading ? 'Cancel' : 'Send'}
</button>
</div>
<div className="copilot-response-area">
{isLoading && <p className="loading-indicator">Copilot is thinking...</p>}
{error && <p className="error-message">Error: {error}</p>}
{aiResponse && <p>{aiResponse}</p>}
{!isLoading && !error && !aiResponse && <p>No response yet. Ask me something!</p>}
</div>
</div>
);
}
export default Copilot;
What changed?

- `abortControllerRef`: We introduced `useRef` to hold an `AbortController` instance. This is a standard Web API for cancelling `fetch` requests and other asynchronous operations. We create a new controller for each request.
- Simulated Streaming Loop: Instead of setting the `aiResponse` once, we now split the `simulatedFullResponse` into words and append them one by one with a small delay, mimicking a streaming effect.
- Cancellation Logic: The loop checks `signal.aborted` in each iteration. If `handleCancelMessage` is called (by clicking the “Cancel” button), it triggers `abortControllerRef.current.abort()`, which sets `signal.aborted` to true, stopping the loop.
- Dynamic Button Label: The button now says “Cancel” when `isLoading` is true, allowing the user to stop the generation.
- `useEffect` Cleanup: A `useEffect` hook ensures that if the component unmounts while a request is in progress, the `AbortController` aborts any pending operations.
Now, when you type a query, you’ll see the AI response “type out” word by word. This provides a much better user experience than waiting for the entire response.
Step 4: Adding Basic Client-Side Guardrails
Guardrails are essential for safety and quality. Even if your backend has robust guardrails, adding client-side checks can provide immediate feedback to the user and prevent unnecessary API calls.
Let’s add a simple length check and a basic keyword filter.
Modify src/components/Copilot.jsx:
// src/components/Copilot.jsx (updated for guardrails)
import React, { useState, useContext, useEffect, useRef } from 'react';
import { AppContext } from '../App';
import { buildCopilotPrompt } from '../utils/promptBuilder';
// Define some client-side guardrail parameters
const MAX_PROMPT_LENGTH = 200;
const BLOCKED_KEYWORDS = ['violence', 'hate speech', 'illegal activities']; // Example keywords
function Copilot() {
const [userPrompt, setUserPrompt] = useState('');
const [aiResponse, setAiResponse] = useState('');
const [isLoading, setIsLoading] = useState(false);
const [error, setError] = useState(null);
const [inputError, setInputError] = useState(null); // New state for input-specific errors
const { appContext } = useContext(AppContext);
const abortControllerRef = useRef(null);
// Function to check client-side guardrails
const checkGuardrails = (prompt) => {
if (prompt.length > MAX_PROMPT_LENGTH) {
return `Your prompt is too long! Please keep it under ${MAX_PROMPT_LENGTH} characters.`;
}
for (const keyword of BLOCKED_KEYWORDS) {
if (prompt.toLowerCase().includes(keyword)) {
return `Your prompt contains a blocked keyword: "${keyword}". Please rephrase.`;
}
}
return null; // No guardrail violation
};
const handleSendMessage = async () => {
if (!userPrompt.trim()) {
setInputError("Prompt cannot be empty.");
return;
}
// --- Apply client-side guardrails BEFORE sending ---
const guardrailMessage = checkGuardrails(userPrompt);
if (guardrailMessage) {
setInputError(guardrailMessage);
return;
}
// If guardrails pass, clear any previous input error
setInputError(null);
setIsLoading(true);
setError(null);
setAiResponse('');
abortControllerRef.current = new AbortController();
const signal = abortControllerRef.current.signal;
const fullPrompt = buildCopilotPrompt({
userPrompt: userPrompt,
appContext: appContext,
});
try {
const simulatedFullResponse = `Here's a detailed AI response based on your query "${userPrompt}" and the context of the ${appContext.selectedProduct?.name || 'general product page'}. I can provide more information on its features, pricing, or comparisons with other items. For example, the ${appContext.selectedProduct?.name || 'product'} is known for its high quality and user satisfaction. What else would you like to know?`;
const words = simulatedFullResponse.split(' ');
let currentResponse = '';
for (let i = 0; i < words.length; i++) {
if (signal.aborted) {
console.log("Streaming aborted!");
break;
}
currentResponse += words[i] + ' ';
setAiResponse(currentResponse);
await new Promise(resolve => setTimeout(resolve, 50 + Math.random() * 100));
}
} catch (err) {
if (err.name === 'AbortError') {
console.log("Fetch aborted by user.");
setAiResponse("Request cancelled.");
} else {
console.error("AI API call failed:", err);
setError("Failed to get AI response. Please try again.");
}
} finally {
setIsLoading(false);
setUserPrompt('');
abortControllerRef.current = null;
}
};
const handleCancelMessage = () => {
if (abortControllerRef.current) {
abortControllerRef.current.abort();
setIsLoading(false);
setError("AI response generation cancelled.");
setAiResponse("");
}
};
useEffect(() => {
return () => {
if (abortControllerRef.current) {
abortControllerRef.current.abort();
}
};
}, []);
return (
<div className="copilot-container">
<h2>Your Smart Copilot</h2>
<div className="copilot-input-area">
<input
type="text"
value={userPrompt}
onChange={(e) => {
setUserPrompt(e.target.value);
// Clear input error as user types
if (inputError) setInputError(null);
}}
placeholder="Ask your copilot anything..."
onKeyDown={(e) => {
if (e.key === 'Enter' && !isLoading) {
handleSendMessage();
}
}}
disabled={isLoading}
/>
<button onClick={isLoading ? handleCancelMessage : handleSendMessage} disabled={userPrompt.trim() === '' && !isLoading}>
{isLoading ? 'Cancel' : 'Send'}
</button>
</div>
{inputError && <p className="error-message">{inputError}</p>} {/* Display input errors */}
<div className="copilot-response-area">
{isLoading && <p className="loading-indicator">Copilot is thinking...</p>}
{error && <p className="error-message">Error: {error}</p>}
{aiResponse && <p>{aiResponse}</p>}
{!isLoading && !error && !aiResponse && <p>No response yet. Ask me something!</p>}
</div>
</div>
);
}
export default Copilot;
Key additions for Guardrails:
MAX_PROMPT_LENGTH&BLOCKED_KEYWORDS: Constants to define our guardrail rules.inputErrorstate: A newuseStatevariable to manage errors specific to the user’s input, displayed directly below the input field.checkGuardrailsfunction: Encapsulates the logic for checking prompt length and blocked keywords. It returns an error message if a violation is found, otherwisenull.- Pre-send Check:
handleSendMessagenow callscheckGuardrailsbefore initiating the AI request. If a violation is found, it setsinputErrorand returns, preventing the AI call. - Error Clearing:
inputErroris cleared when the user types or when a successful send clears all errors.
Test this out! Try typing a very long prompt or one containing a blocked keyword. You should immediately see an error message appear below the input field, without sending any request to the (simulated) AI. This is quick, user-friendly feedback.
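Because `checkGuardrails` is a pure function, you can also exercise it outside the component. Here is the same logic standalone, with the constants from above:

```javascript
// The guardrail logic from Copilot.jsx, extracted so it can be tested in isolation.
const MAX_PROMPT_LENGTH = 200;
const BLOCKED_KEYWORDS = ['violence', 'hate speech', 'illegal activities'];

function checkGuardrails(prompt) {
  if (prompt.length > MAX_PROMPT_LENGTH) {
    return `Your prompt is too long! Please keep it under ${MAX_PROMPT_LENGTH} characters.`;
  }
  for (const keyword of BLOCKED_KEYWORDS) {
    if (prompt.toLowerCase().includes(keyword)) {
      return `Your prompt contains a blocked keyword: "${keyword}". Please rephrase.`;
    }
  }
  return null; // no violation
}
```

Keeping guardrails as pure functions like this makes it trivial to grow the rule set (and its unit tests) without touching the component.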
Step 5: Memory and Conversation History (Local State)
A truly helpful copilot remembers what you’ve discussed. Let’s add basic conversational memory by storing previous messages in our component’s state and including them in subsequent prompts.
We’ll add chatHistory to our Copilot component and pass it to buildCopilotPrompt.
Modify src/components/Copilot.jsx and src/utils/promptBuilder.js.
First, update src/components/Copilot.jsx:
// src/components/Copilot.jsx (updated for chat history)
import React, { useState, useContext, useEffect, useRef } from 'react';
import { AppContext } from '../App';
import { buildCopilotPrompt } from '../utils/promptBuilder';
const MAX_PROMPT_LENGTH = 200;
const BLOCKED_KEYWORDS = ['violence', 'hate speech', 'illegal activities'];
function Copilot() {
const [userPrompt, setUserPrompt] = useState('');
const [aiResponse, setAiResponse] = useState('');
const [isLoading, setIsLoading] = useState(false);
const [error, setError] = useState(null);
const [inputError, setInputError] = useState(null);
const [chatHistory, setChatHistory] = useState([]); // NEW: State for conversation history
const { appContext } = useContext(AppContext);
const abortControllerRef = useRef(null);
const chatHistoryRef = useRef(null); // Ref for scrolling chat history
const checkGuardrails = (prompt) => {
if (prompt.length > MAX_PROMPT_LENGTH) {
return `Your prompt is too long! Please keep it under ${MAX_PROMPT_LENGTH} characters.`;
}
for (const keyword of BLOCKED_KEYWORDS) {
if (prompt.toLowerCase().includes(keyword)) {
return `Your prompt contains a blocked keyword: "${keyword}". Please rephrase.`;
}
}
return null;
};
const handleSendMessage = async () => {
if (!userPrompt.trim()) {
setInputError("Prompt cannot be empty.");
return;
}
const guardrailMessage = checkGuardrails(userPrompt);
if (guardrailMessage) {
setInputError(guardrailMessage);
return;
}
setInputError(null);
const currentPrompt = userPrompt; // Capture current prompt before clearing input
setIsLoading(true);
setError(null);
setAiResponse('');
// Add user's message to chat history immediately
setChatHistory(prev => [...prev, { role: 'user', content: currentPrompt }]);
abortControllerRef.current = new AbortController();
const signal = abortControllerRef.current.signal;
// --- CRITICAL: Pass chatHistory to the prompt builder ---
const fullPrompt = buildCopilotPrompt({
userPrompt: currentPrompt,
appContext: appContext,
chatHistory: [...chatHistory, { role: 'user', content: currentPrompt }] // Include current user prompt in history for AI
});
console.log("--- Full AI Prompt Sent (with History) ---");
console.log(fullPrompt);
console.log("------------------------------------------");
try {
const simulatedFullResponse = `Acknowledging your previous messages and current context (${appContext.selectedProduct?.name || 'no product selected'}). Here is a detailed AI response to "${currentPrompt}": I understand you're interested in this. This product is fantastic because of X, Y, and Z. What else can I clarify for you today regarding this item or other products?`;
const words = simulatedFullResponse.split(' ');
let currentResponse = '';
for (let i = 0; i < words.length; i++) {
if (signal.aborted) {
console.log("Streaming aborted!");
break;
}
currentResponse += words[i] + ' ';
setAiResponse(currentResponse);
await new Promise(resolve => setTimeout(resolve, 50 + Math.random() * 100));
}
// Add the AI's final response to chat history (skip if the stream was cancelled)
if (!signal.aborted) {
setChatHistory(prev => [...prev, { role: 'assistant', content: currentResponse.trim() }]);
}
} catch (err) {
if (err.name === 'AbortError') {
console.log("Fetch aborted by user.");
setAiResponse("Request cancelled.");
} else {
console.error("AI API call failed:", err);
setError("Failed to get AI response. Please try again.");
}
} finally {
setIsLoading(false);
setUserPrompt('');
abortControllerRef.current = null;
}
};
const handleCancelMessage = () => {
if (abortControllerRef.current) {
abortControllerRef.current.abort();
setIsLoading(false);
setError("AI response generation cancelled.");
setAiResponse("");
}
};
// Scroll to bottom of chat history when it updates
useEffect(() => {
if (chatHistoryRef.current) {
chatHistoryRef.current.scrollTop = chatHistoryRef.current.scrollHeight;
}
}, [chatHistory]);
useEffect(() => {
return () => {
if (abortControllerRef.current) {
abortControllerRef.current.abort();
}
};
}, []);
return (
<div className="copilot-container">
<h2>Your Smart Copilot</h2>
{/* NEW: Display Chat History */}
<div className="copilot-response-area chat-history" ref={chatHistoryRef}>
{chatHistory.length === 0 && <p>No conversation yet. Ask me something!</p>}
{chatHistory.map((msg, index) => (
<p key={index} className={`message-${msg.role}`}>
<strong>{msg.role === 'user' ? 'You' : 'Copilot'}:</strong> {msg.content}
</p>
))}
{isLoading && <p className="loading-indicator">Copilot is thinking...</p>}
{error && <p className="error-message">Error: {error}</p>}
{aiResponse && <p className="message-assistant-streaming"><strong>Copilot (streaming):</strong> {aiResponse}</p>}
</div>
<div className="copilot-input-area">
<input
type="text"
value={userPrompt}
onChange={(e) => {
setUserPrompt(e.target.value);
if (inputError) setInputError(null);
}}
placeholder="Ask your copilot anything..."
onKeyDown={(e) => {
if (e.key === 'Enter' && !isLoading) {
handleSendMessage();
}
}}
disabled={isLoading}
/>
<button onClick={isLoading ? handleCancelMessage : handleSendMessage} disabled={userPrompt.trim() === '' && !isLoading}>
{isLoading ? 'Cancel' : 'Send'}
</button>
</div>
{inputError && <p className="error-message">{inputError}</p>}
</div>
);
}
export default Copilot;
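Our loop fakes streaming with `setTimeout`. Real AI APIs commonly stream Server-Sent Events, and network chunks can split a `data:` line in half, so the client must buffer partial lines between chunks. Here is a minimal sketch of such a parser (the `createSSEParser` name and the OpenAI-style `[DONE]` sentinel are assumptions; check your provider's actual format):

```javascript
// Returns a feed function. Call it with each raw text chunk from the
// network; completed "data: ..." payloads are passed to onData.
function createSSEParser(onData) {
  let buffer = '';
  return (chunk) => {
    buffer += chunk;
    const lines = buffer.split('\n');
    buffer = lines.pop(); // the last element may be an incomplete line — keep it
    for (const line of lines) {
      if (line.startsWith('data: ')) {
        const payload = line.slice(6);
        if (payload !== '[DONE]') onData(payload);
      }
    }
  };
}
```

In a real integration, you would call the returned feed function from a `response.body.getReader()` read loop and append each payload to `aiResponse`, exactly where the simulated word loop sits today.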
And update src/utils/promptBuilder.js to actually use the chatHistory:
// src/utils/promptBuilder.js (updated for chat history)
export function buildCopilotPrompt({ userPrompt, appContext, chatHistory = [] }) {
let promptParts = [];
promptParts.push(
"You are a helpful and friendly AI copilot embedded in a smart e-commerce application. " +
"Your goal is to assist the user by providing information, making suggestions, or answering questions " +
"relevant to their current activity within the store. Be concise and to the point."
);
if (appContext.selectedProduct) {
promptParts.push(
`The user is currently viewing a product with the following details:
Name: ${appContext.selectedProduct.name}
Price: $${appContext.selectedProduct.price.toFixed(2)}
Description: ${appContext.selectedProduct.description}`
);
} else {
promptParts.push("The user is currently browsing the product list, no specific product is selected.");
}
// NEW: Include chat history in the prompt
if (chatHistory.length > 0) {
promptParts.push("--- Conversation History ---");
// Format history for the AI, truncating to the last few messages to save tokens.
// Production apps often go further and summarize older turns instead.
chatHistory.slice(-5).forEach(msg => { // keep only the last 5 messages
promptParts.push(`${msg.role === 'user' ? 'User' : 'Assistant'}: ${msg.content}`);
});
promptParts.push("--- End History ---");
}
promptParts.push(`User's current question: "${userPrompt}"`);
return promptParts.join('\n\n');
}
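A blunt `slice(-5)` can leave the truncated window starting on an assistant message, which reads oddly to the model. A small sketch of a helper that trims to a window opening with the user (the `truncateHistory` name is ours):

```javascript
// Keep at most `maxMessages` recent history entries, and drop a leading
// assistant message so the visible window always opens with a user turn.
function truncateHistory(chatHistory, maxMessages = 5) {
  let recent = chatHistory.slice(-maxMessages);
  if (recent.length > 0 && recent[0].role === 'assistant') {
    recent = recent.slice(1);
  }
  return recent;
}
```

You could call this inside `buildCopilotPrompt` in place of the inline `slice(-5)`.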
Finally, add some CSS for the chat history in src/App.css:
/* src/App.css (additional styles for chat history) */
.chat-history {
border: 1px solid #dee2e6;
background-color: #f8f9fa;
padding: 15px;
border-radius: 8px;
max-height: 300px; /* Limit height */
overflow-y: auto; /* Enable scrolling */
margin-bottom: 20px;
text-align: left;
}
.chat-history p {
margin-bottom: 8px;
line-height: 1.4;
}
.message-user {
color: #007bff;
font-weight: 500;
}
.message-assistant {
color: #28a745;
font-weight: 500;
}
.message-assistant-streaming {
color: #6c757d;
font-style: italic;
}
What changed for memory?
- `chatHistory` state: An array of objects `{ role: 'user' | 'assistant', content: string }` to store messages.
- Updating `chatHistory`:
  - Before sending, the user's `currentPrompt` is added.
  - After the AI response is complete, the `currentResponse` is added.
- Displaying History: The `copilot-response-area` is now repurposed to show the full `chatHistory`. The current streaming `aiResponse` is shown separately below the history.
- `chatHistoryRef` & `useEffect` for Scrolling: A `useRef` and `useEffect` are used to automatically scroll the chat history to the bottom when new messages are added, ensuring the latest conversation is always visible.
- `buildCopilotPrompt` Update: The `chatHistory` is now passed to `buildCopilotPrompt`, which formats it and includes it in the AI's prompt. The `slice(-5)` in the prompt builder demonstrates how you might truncate history to manage token costs in a real scenario.
Now, your copilot will display a conversation history, and more importantly, the AI’s “brain” (our fullPrompt log in the console) will receive the context of previous turns, allowing for more coherent and continuous conversations.
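If you later connect a real model, a rough token budget helps decide how much history to include. A common heuristic (an approximation, not the model's real tokenizer) is about four characters per token; the helper names below are ours:

```javascript
// Rough token estimate: ~4 characters per token for English text.
// Real tokenizers (e.g., tiktoken) are more accurate; this is a cheap guess.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Drop the oldest messages until the history fits a token budget.
function fitHistoryToBudget(chatHistory, maxTokens = 500) {
  const result = [];
  let total = 0;
  // Walk from newest to oldest, keeping messages while they still fit.
  for (let i = chatHistory.length - 1; i >= 0; i--) {
    const cost = estimateTokens(chatHistory[i].content);
    if (total + cost > maxTokens) break;
    total += cost;
    result.unshift(chatHistory[i]);
  }
  return result;
}
```

A budget-based cutoff like this adapts to message length, unlike a fixed `slice(-5)`, which treats a one-word reply and a thousand-word essay the same.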
Mini-Challenge: Actionable Suggestions
You’ve built a solid context-aware copilot! Now, let’s push its capabilities slightly further.
Challenge: Enhance the copilot to not just provide information, but also suggest actionable next steps based on the current context. For example, if a product is selected and the user asks “What can I do with this?”, the copilot should suggest actions like “Add to Cart”, “View Reviews”, or “Compare with similar products”.
Hint:
- Modify `promptBuilder.js`: Adjust the system instruction to explicitly ask the AI to suggest actions when appropriate. You might add a sentence like, "If relevant, suggest up to 3 actionable next steps the user could take within the application."
- Simulate Actionable Output: In `Copilot.jsx`, when generating the `simulatedFullResponse`, try to include a recognizable pattern for actions (e.g., `[ACTION: Add to Cart]`, `[ACTION: View Reviews]`).
- Parse and Display Actions: In `Copilot.jsx`, after the `aiResponse` is fully received, you could add logic to parse these `[ACTION: ...]` patterns and display them as clickable buttons below the AI's text. This could involve a `useEffect` that triggers parsing once `isLoading` is `false` and `aiResponse` is present.
What to observe/learn: This challenge will highlight how to instruct an AI for specific output formats and how to parse and integrate those structured outputs back into your UI for a more interactive and agentic experience.
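As a starting point for the parsing step, here is one possible sketch (it follows the `[ACTION: ...]` marker format from the hint above; the `parseActions` name is ours):

```javascript
// Extract [ACTION: ...] markers from a finished AI response.
// Returns the cleaned display text plus a list of action labels.
function parseActions(aiText) {
  const actions = [];
  const cleaned = aiText.replace(/\[ACTION:\s*([^\]]+)\]/g, (_, label) => {
    actions.push(label.trim());
    return ''; // strip the marker from the displayed text
  });
  // Collapse the double spaces left behind by removed markers.
  return { text: cleaned.replace(/\s{2,}/g, ' ').trim(), actions };
}
```

In `Copilot.jsx` you could then map `actions` to buttons whose `onClick` handlers dispatch the corresponding application behavior.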
Common Pitfalls & Troubleshooting
Building AI-powered frontend features can introduce new complexities. Here are some common issues and how to approach them:
Contextual Overload / Irrelevant Responses:
- Pitfall: Sending too much irrelevant information in the `appContext` or `chatHistory` can confuse the AI, make responses less accurate, or hit token limits (if using a real LLM API).
- Troubleshooting:
  - Be Selective: Only include truly pertinent context. Does the AI really need to know the user's screen resolution to answer about a product? Probably not.
  - Summarize History: For `chatHistory`, don't send the entire conversation. Implement strategies to summarize older messages or only send the last N turns (as we hinted in `promptBuilder.js`).
  - Review Prompts: Regularly log and review the `fullPrompt` being sent to the AI. Is it clear, concise, and focused?
Streaming Issues (Network & UI Lag):
- Pitfall: Network latency, slow AI responses, or inefficient UI updates can make the streaming experience feel choppy or unresponsive.
- Troubleshooting:
  - Optimistic UI: Display a "Thinking…" or "Generating…" state immediately.
  - Debounce Input: For rapid user input, consider debouncing the `onChange` handler if you're doing expensive client-side validation or preview generation.
  - Efficient State Updates: Ensure your `setAiResponse` updates are efficient. Appending to a string is generally fine, but complex object manipulations in a loop can be slow.
  - Backend Optimization: If using a real backend, ensure your AI service is optimized for streaming (e.g., using `text/event-stream` for SSE).
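The debounce idea can be sketched in a few lines, assuming a plain helper rather than a library like lodash:

```javascript
// Delays calling `fn` until `delayMs` ms have passed without a new call.
// Useful for expensive validation inside an onChange handler.
function debounce(fn, delayMs = 300) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Usage sketch: validate at most once per pause in typing.
// const debouncedValidate = debounce((value) => checkGuardrails(value), 300);
// <input onChange={(e) => { setUserPrompt(e.target.value); debouncedValidate(e.target.value); }} />
```

Note that the input's displayed value should still update on every keystroke; only the expensive work behind it is debounced.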
Security: Exposing API Keys:
- Pitfall: Accidentally hardcoding or exposing AI API keys (e.g., OpenAI, Google Gemini, Anthropic) directly in your client-side code. This is a critical security vulnerability.
- Troubleshooting:
  - ALWAYS Use a Backend Proxy: Your frontend should never directly call third-party AI APIs with sensitive keys. Instead, call your own backend endpoint (e.g., `/api/copilot`), which then securely calls the AI service. This backend acts as a proxy, abstracting the API key away from the client.
  - Environment Variables: Use environment variables (`.env` files) for local development, but ensure they are not bundled into client-side production builds.
State Desynchronization:
- Pitfall: The `appContext` or other UI state used to build the prompt doesn't accurately reflect what the user is seeing or doing.
- Troubleshooting:
  - Clear State Management: Use React Context, Redux, Zustand, or another state management library consistently to ensure a single source of truth for application state.
  - `useEffect` Dependencies: Pay close attention to the dependency arrays of `useEffect` hooks that update context or derive values from it.
  - Debugging: Use React DevTools to inspect the state of your `AppContext` and `Copilot` component to verify that the context being captured is correct before the prompt is sent.
Summary
Congratulations! You’ve successfully built a context-aware copilot within a React application. This project demonstrated several crucial aspects of modern frontend AI integration:
- Context Capture: How to effectively gather application-specific UI state (like `selectedProduct`) using React Context and integrate it into your AI's prompt.
- Dynamic Prompt Engineering: Crafting comprehensive prompts that combine system instructions, dynamic context, and user input using a dedicated `promptBuilder` utility.
- Streaming Responses: Simulating and understanding the mechanics of streaming AI responses for a better user experience, including cancellation logic with `AbortController`.
- Client-Side Guardrails: Implementing immediate input validation and safety checks to provide fast user feedback and prevent unnecessary or harmful AI API calls.
- Conversational Memory: Integrating `chatHistory` to give the AI a sense of continuity and enable more natural, multi-turn conversations.
You’ve moved beyond simple API calls to orchestrating a more intelligent, interactive AI experience directly in the browser. This foundation is invaluable as you continue to explore more complex agentic workflows and AI-driven UI patterns.
What’s Next?
In the upcoming chapters, we’ll build upon this project by delving deeper into:
- Tool Calling from the UI: How your frontend agent can instruct the AI to “call” specific application functions (e.g., “add to cart,” “navigate to settings”).
- Advanced Agent Orchestration: Building more complex client-side agents that can chain multiple AI calls or tool interactions.
- Robust Error Handling and Fallbacks: Strategies for gracefully handling AI failures and providing alternative experiences.
Keep experimenting with your copilot, modifying the context, guardrails, and prompt structure to see how it influences the AI’s behavior!
References
- React Official Documentation: https://react.dev/ - Essential for understanding React hooks and context.
- MDN Web Docs - Fetch API: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API - Details on `fetch` and `ReadableStream` for handling streaming data.
- MDN Web Docs - AbortController: https://developer.mozilla.org/en-US/docs/Web/API/AbortController - For understanding how to cancel asynchronous operations.
- Vite Documentation: https://vitejs.dev/ - For modern React project setup.
- Hugging Face Transformers.js (for in-browser AI context): https://huggingface.co/docs/transformers.js/index - While not directly used in this project, it’s a key resource for running models locally in the browser, offering alternatives to backend API calls for certain tasks.