Welcome to Chapter 10! As you build increasingly complex and interactive React applications, it’s paramount to remember that security isn’t just a backend concern—it’s a full-stack responsibility. The frontend, often the first point of interaction for your users, is a critical battleground for safeguarding data, maintaining user trust, and protecting your application’s integrity.
In this chapter, we’ll dive deep into essential frontend security practices for modern React applications. You’ll learn how to defend against common vulnerabilities like Cross-Site Scripting (XSS), implement robust Content Security Policies (CSP), make informed decisions about secure data storage, and understand the risks and mitigations associated with third-party scripts. By the end, you’ll have a strong foundation for building more resilient and trustworthy React applications.
Before we begin, a basic understanding of React components, state management, and how web applications communicate with servers (from previous chapters) will be beneficial. Let’s make your React apps not just functional, but formidable!
Core Concepts: Building a Secure Frontend Foundation
Frontend security might sound intimidating, but it largely boils down to understanding common attack vectors and applying proven defense mechanisms. Let’s break down the key concepts we’ll tackle.
Understanding Cross-Site Scripting (XSS) Prevention
Imagine a scenario where a malicious actor injects harmful code into your website, and when an unsuspecting user visits, that code executes right in their browser. This is the essence of Cross-Site Scripting (XSS). It’s one of the most prevalent web vulnerabilities and can lead to severe consequences like session hijacking, data theft, or website defacement.
Why does XSS exist? It primarily arises when an application includes untrusted data (often user-supplied) in a web page without proper validation or escaping. If the browser interprets this untrusted data as executable code, an XSS attack occurs.
What real production problem does it solve? Preventing XSS protects your users’ privacy, keeps their accounts secure, and maintains the reputation and integrity of your application. Ignoring XSS is like leaving your front door wide open in a busy city—it’s not a matter of if something bad will happen, but when.
How React helps (and where it needs your help): By default, React is quite good at preventing XSS. When you render content like this:
function App() {
const userInput = "<script>alert('You are hacked!');</script>";
return (
<div>
<h1>Welcome!</h1>
<p>{userInput}</p>
</div>
);
}
React automatically escapes the userInput string. This means < becomes &lt;, > becomes &gt;, and so on, rendering the script harmlessly as text rather than executing it. This protects against most common reflected and stored XSS attacks.
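To make the escaping concrete, here is a simplified sketch of the kind of entity substitution React performs on text children. This is an illustration only, not React's actual implementation, which covers more cases:

```javascript
// Simplified sketch of HTML escaping, similar in spirit to what React
// does for text children. Not React's real implementation.
function escapeHtml(text) {
  return text
    .replace(/&/g, "&amp;") // must run first, or entities get double-escaped
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

console.log(escapeHtml("<script>alert('You are hacked!');</script>"));
// The browser renders the result as plain text instead of executing it.
```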
However, React provides an escape hatch called dangerouslySetInnerHTML. As the name implies, it’s dangerous! When you use it, you’re explicitly telling React, “I know what I’m doing, and I want to inject this raw HTML directly into the DOM.” If the HTML you inject comes from an untrusted source and isn’t sanitized, you’ve opened a massive XSS hole.
function UnsafeApp() {
// NOTE: a bare <script> tag inserted via innerHTML never executes,
// but event-handler attributes like onerror do.
const rawHtml = "Hello, <img src='x' onerror='alert(\"XSS!\")' /><b>User</b>";
return (
<div dangerouslySetInnerHTML={{ __html: rawHtml }} />
);
}
In the example above, the onerror handler fires as soon as the broken image fails to load, executing alert("XSS!") and demonstrating a successful XSS attack if rawHtml came from an attacker. (A bare <script> tag inserted this way would not run, because browsers do not execute script elements added via innerHTML; event-handler attributes are the more realistic vector.)
Implementing a Content Security Policy (CSP)
Even with careful XSS prevention, what if an attacker finds a zero-day vulnerability or manages to bypass your sanitization? This is where Content Security Policy (CSP) comes in as a crucial “defense-in-depth” layer.
What is CSP? CSP is an added layer of security that helps mitigate XSS and other code injection attacks by allowing you to define a whitelist of trusted sources for content (scripts, stylesheets, images, fonts, etc.) that your web application is allowed to load and execute. It’s typically enforced by the browser via an HTTP response header or a <meta> tag.
Why is it important? A strict CSP acts like a bouncer at your application’s club. Even if a malicious script is somehow injected, the CSP can prevent it from loading external resources, sending data to unauthorized domains, or executing inline scripts, significantly limiting the damage.
How it functions: You specify directives like script-src, style-src, img-src, etc., followed by allowed origins. For example, script-src 'self' https://cdn.example.com would only allow scripts from your own domain and cdn.example.com.
Modern best practices (2026):
- Strictness: Aim for the strictest possible CSP, disallowing unsafe-inline and unsafe-eval for scripts.
- Nonces: For any necessary inline scripts (e.g., those generated by build tools or specific libraries), use a cryptographic nonce (a “number used once”) that is generated on each request and included in both the CSP header and the script tag.
- report-uri or report-to: Always include a directive to report CSP violations. This helps you discover potential vulnerabilities and misconfigurations in production.
Failures if ignored: Without a CSP, if an XSS vulnerability exists, the malicious script has far fewer restrictions, making it easier for attackers to achieve their goals.
Secure Storage for Sensitive Data
Your React application might handle sensitive user data, most notably authentication tokens (access tokens, refresh tokens). Storing these securely on the client-side is paramount.
What’s sensitive data?
- Access Tokens: Short-lived tokens used to authenticate API requests.
- Refresh Tokens: Long-lived tokens used to obtain new access tokens when the current one expires.
- User preferences, payment information (though typically handled by payment gateways, not stored directly).
Where NOT to store sensitive data (especially tokens):
- localStorage and sessionStorage: These are easily accessible via JavaScript. If an XSS attack occurs, an attacker can simply read these storage mechanisms and steal your tokens. This is a very common and dangerous anti-pattern.
- IndexedDB: While more robust than localStorage, it’s still accessible via JavaScript, making it vulnerable to XSS.
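To see why this matters, the snippet below simulates the theft. A minimal localStorage stub stands in for the browser API so the example runs anywhere; in a real XSS attack, injected code reads window.localStorage directly:

```javascript
// Minimal stand-in for the browser's localStorage (an assumption so the
// snippet runs outside a browser; the real API behaves the same way here).
const localStorage = {
  store: {},
  setItem(key, value) { this.store[key] = String(value); },
  getItem(key) { return key in this.store ? this.store[key] : null; },
};

// The application stores a token "conveniently"...
localStorage.setItem("accessToken", "secret-token-value");

// ...and ANY JavaScript running on the page, including injected attack
// code, can read it back with one line:
const stolen = localStorage.getItem("accessToken");
console.log(stolen); // the attacker now has the token
```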
Where to store (and the trade-offs):
Memory (React state, Zustand, Redux Toolkit):
- Best for Access Tokens.
- Why: Tokens stored in memory are not persistently saved to the browser’s disk. They are cleared when the tab is closed or the page is refreshed. This means an XSS attack can only steal the token while the user is actively on the page, and the token’s short lifespan limits its utility to the attacker.
- Trade-off: Requires a mechanism to re-obtain an access token on refresh (e.g., using a refresh token via an API call).
HTTP-only, Secure, SameSite Cookies:
- Best for Refresh Tokens.
- Why:
- HTTP-only: The cookie is inaccessible to client-side JavaScript, making it immune to XSS attacks.
- Secure: The cookie is only sent over HTTPS connections, preventing eavesdropping.
- SameSite=Lax/Strict: Provides protection against Cross-Site Request Forgery (CSRF) attacks by preventing the browser from sending the cookie with cross-site requests (unless explicitly allowed, e.g., top-level navigations under Lax, or any cross-site request under None combined with Secure).
- How it works: Your backend sets this cookie. When the frontend needs to refresh an access token, it makes an API call, and the browser automatically attaches the HTTP-only refresh token cookie. The backend uses this to issue a new access token.
- Trade-off: Still vulnerable to CSRF if SameSite isn’t configured correctly or if specific attack patterns are exploited. Requires careful backend implementation.
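Sketched from the frontend’s side, the refresh call might look like this. The endpoint path and response shape are assumptions for illustration, and fetchImpl is injectable so the function can be exercised without a network:

```javascript
// Sketch of refreshing an access token via an HTTP-only refresh cookie.
// The /api/refresh-token endpoint and { accessToken } response shape
// are hypothetical; adapt them to your backend's contract.
async function refreshAccessToken(fetchImpl = fetch) {
  const response = await fetchImpl("/api/refresh-token", {
    method: "POST",
    // Tells the browser to attach cookies, including the HTTP-only
    // refresh token, which JavaScript itself can never read.
    credentials: "include",
  });
  if (!response.ok) {
    throw new Error("Refresh failed; user must log in again.");
  }
  const { accessToken } = await response.json();
  return accessToken; // caller keeps this in memory, never in localStorage
}
```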
Failures if ignored: Storing tokens insecurely is a direct path to session hijacking and unauthorized access to user accounts, leading to severe data breaches and loss of user trust.
Third-Party Script Isolation
Modern React applications often integrate numerous third-party scripts for analytics, advertising, customer support, A/B testing, and more. While these services provide valuable functionality, they also introduce significant security risks.
What are third-party scripts? Any JavaScript code loaded from a domain you don’t fully control (e.g., Google Analytics, Intercom, Stripe.js).
Why are they a risk? These scripts run with the same privileges as your own code. If a third-party script’s server is compromised, or if the script itself contains vulnerabilities, it can be used to:
- Inject malicious code (XSS).
- Exfiltrate sensitive user data from your page.
- Deface your website.
- Redirect users to phishing sites.
This is often referred to as a “supply chain attack” in the frontend context.
Isolation strategies:
- Strict CSP: This is your primary defense. By restricting script-src and other directives, you can limit which third-party domains are allowed to load scripts and what resources they can access.
- Subresource Integrity (SRI): For scripts loaded from CDNs, SRI allows you to provide a cryptographic hash (like sha384-xyz...) in the <script> tag. The browser will only execute the script if its content matches the hash, preventing execution if the script has been tampered with on the CDN.
- Sandboxed Iframes: For highly sensitive third-party widgets, you can embed them within an <iframe> with the sandbox attribute. This attribute severely restricts what the iframe content can do (e.g., disable scripts, popups, form submissions). This is complex to implement correctly for interactive widgets.
- Server-Side Tag Management/Proxying: In some enterprise scenarios, third-party scripts are loaded via a server-side tag manager or proxied through your own backend. This gives you more control over the script’s content and execution environment, but adds complexity.
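As a sketch of the iframe approach (the widget URL is hypothetical):

```html
<!-- Hypothetical third-party widget, isolated with the sandbox attribute.
     Only the listed capabilities are granted; everything else is blocked. -->
<iframe
  src="https://widget.example.com/chat"
  sandbox="allow-scripts allow-forms"
  title="Support chat widget"
></iframe>
```

Deliberately omitting allow-same-origin keeps the frame in an opaque origin, so even its scripts cannot reach your cookies or storage. Avoid combining allow-scripts with allow-same-origin for content served from your own origin, since that combination lets the framed code remove its own sandbox.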
Failures if ignored: A single compromised third-party script can undermine all your other security efforts, leading to widespread data breaches or reputational damage.
Safe HTML Rendering for User-Generated Content
Let’s say your application allows users to create profiles with rich text descriptions, post comments, or write forum posts. If you directly render this user-supplied HTML, you’re inviting XSS attacks.
Why it’s a concern: An attacker could submit a comment like:
"Hello! <img src='x' onerror='alert(\"Got your cookie: \" + document.cookie)' />"
If rendered unsafely, this would try to load a non-existent image, trigger the onerror event, and execute the malicious JavaScript.
React’s dangerouslySetInnerHTML revisited: As discussed, this is the mechanism to render raw HTML in React. It’s dangerous precisely because it bypasses React’s automatic escaping.
The Solution: HTML Sanitization
Before passing any user-generated HTML to dangerouslySetInnerHTML, you must sanitize it. Sanitization is the process of inspecting an HTML string and removing any elements or attributes that could lead to an XSS attack, while preserving the harmless formatting the user intended.
How it works:
- A robust HTML sanitization library (e.g., DOMPurify) parses the HTML string.
- It uses a whitelist approach, allowing only known safe tags (like <b>, <i>, <a>, <p>) and attributes (like href, src for specific tags).
- All potentially dangerous elements (like <script>, <embed>) and attributes (like onload, onerror, style with JavaScript URLs) are stripped out.
- The sanitized, safe HTML string is returned.
Failures if ignored: Stored XSS attacks, where malicious content persists in your database and affects every user who views it, are a direct consequence of unsafe HTML rendering.
Step-by-Step Implementation: Securing Your React App
Let’s put these concepts into practice. We’ll build a small React application to demonstrate XSS prevention with sanitization, a basic CSP, and secure data handling principles.
Step 1: Setting Up Your Project
First, let’s create a new React project using Vite, a fast build tool. We’ll use TypeScript for better developer experience and type safety, which can also indirectly help prevent some common errors.
# Verify npm is installed and up-to-date
npm -v # Should be 8.x or higher
# Create a new React project with Vite
npm create vite@latest my-secure-app -- --template react-ts
# Navigate into your new project directory
cd my-secure-app
# Install dependencies
npm install
Now, let’s install dompurify, our chosen HTML sanitization library. As of early 2026, dompurify version 3.x is the standard.
npm install dompurify@latest
npm install -D @types/dompurify@latest # TypeScript types (recent DOMPurify releases bundle their own types, making this step optional)
Step 2: XSS Prevention with dangerouslySetInnerHTML and DOMPurify
We’ll create a component that attempts to render user-provided HTML, first unsafely, then safely with DOMPurify.
Open src/App.tsx. Replace its content with the following:
// src/App.tsx
import React, { useState } from 'react';
import DOMPurify from 'dompurify';
function App() {
const [userInput, setUserInput] = useState('');
const [safeHtml, setSafeHtml] = useState('');
// 1. Unsafe input that contains a malicious script
const maliciousInput = `
<p>Hello there!</p>
<img src="x" onerror="alert('XSS Attack! Your session ID is: ' + document.cookie);" />
<a href="javascript:alert('Another XSS!')">Click me</a>
<p>This is some benign text.</p>
`;
// 2. A simple, harmless input
const benignInput = `
<p>Welcome to our secure platform!</p>
<p>Feel free to use <b>bold</b> and <i>italic</i> text.</p>
`;
// Function to handle input change and sanitize it
const handleInputChange = (event: React.ChangeEvent<HTMLTextAreaElement>) => {
const rawInput = event.target.value;
setUserInput(rawInput);
// Sanitize the input using DOMPurify
const cleanHtml = DOMPurify.sanitize(rawInput, {
USE_PROFILES: { html: true }, // Ensure basic HTML tags are allowed
FORBID_TAGS: ['script', 'style'], // Explicitly forbid script/style tags if not covered by profile
FORBID_ATTR: ['onerror', 'onload'], // Explicitly forbid event handlers
});
setSafeHtml(cleanHtml);
};
return (
<div className="container" style={{ maxWidth: '800px', margin: '20px auto', fontFamily: 'Arial, sans-serif' }}>
<h1>Frontend Security: XSS Prevention</h1>
<section style={{ marginBottom: '40px', border: '1px solid #ccc', padding: '20px', borderRadius: '8px' }}>
<h2>1. Unsafe Rendering (Demonstration Only!)</h2>
<p>This section shows what *not* to do. Input is rendered directly.</p>
<div style={{ border: '1px dashed red', padding: '10px', minHeight: '80px', backgroundColor: '#ffe6e6' }}>
{/* DANGER: This is extremely unsafe if 'maliciousInput' comes from user */}
<div dangerouslySetInnerHTML={{ __html: maliciousInput }} />
</div>
<p style={{ color: 'red', fontWeight: 'bold' }}>
If you see an alert box when this page loads, the XSS attack was successful!
This demonstrates why direct `dangerouslySetInnerHTML` with untrusted input is a critical vulnerability.
</p>
</section>
<section style={{ marginBottom: '40px', border: '1px solid #007bff', padding: '20px', borderRadius: '8px' }}>
<h2>2. Safe Rendering with DOMPurify</h2>
<p>
This is the recommended approach for rendering user-generated HTML.
We use `DOMPurify` to strip out malicious content before rendering.
</p>
<textarea
style={{ width: '100%', minHeight: '150px', padding: '10px', fontSize: '1em', marginBottom: '15px' }}
placeholder="Type some HTML here, try putting a script tag!"
value={userInput}
onChange={handleInputChange}
/>
<h3>Preview (Sanitized Output):</h3>
<div style={{ border: '1px solid green', padding: '10px', minHeight: '80px', backgroundColor: '#e6ffe6' }}>
{/* SAFE: Input is sanitized by DOMPurify before rendering */}
<div dangerouslySetInnerHTML={{ __html: safeHtml }} />
</div>
<p>
Notice how `DOMPurify` removed the `<script>` tags and `onerror` attributes,
rendering only the safe HTML.
</p>
<button onClick={() => setUserInput(maliciousInput)} style={{ marginRight: '10px', padding: '8px 15px', cursor: 'pointer' }}>
Load Malicious Input
</button>
<button onClick={() => setUserInput(benignInput)} style={{ padding: '8px 15px', cursor: 'pointer' }}>
Load Benign Input
</button>
</section>
</div>
);
}
export default App;
Explanation of the code:
- import DOMPurify from 'dompurify';: We import the DOMPurify library.
- maliciousInput: This string contains an <img> tag with an onerror attribute and a javascript: URL in an <a> tag, both common XSS vectors.
- handleInputChange: When the user types, this function takes the raw input.
- DOMPurify.sanitize(rawInput, { ... }): This is the core of our defense. DOMPurify takes the raw HTML and cleans it.
  - USE_PROFILES: { html: true }: A common profile that allows standard HTML tags.
  - FORBID_TAGS: ['script', 'style']: Explicitly bans these tags, though profiles often handle this.
  - FORBID_ATTR: ['onerror', 'onload']: Explicitly bans common event handler attributes.
- The first dangerouslySetInnerHTML uses maliciousInput directly to demonstrate the vulnerability. You should see an alert pop up when the page loads. In a real application, you would NEVER do this with untrusted input.
- The second dangerouslySetInnerHTML uses safeHtml, which has been processed by DOMPurify. No alerts should pop up from the user input field, even if you paste the maliciousInput into it.
Run your app to see this in action:
npm run dev
You should first see an alert from the “Unsafe Rendering” section, then experiment with the text area in the “Safe Rendering” section.
Step 3: Implementing a Content Security Policy (CSP)
CSP is typically set by your web server or CDN, but for development and client-side rendered apps, you can include it as a <meta> tag in your index.html file (with Vite, this file lives at the project root, not in public/). This is useful for testing and simple deployments, though a server-side header is generally preferred for robustness.
Open index.html at the project root. Find the <head> section and add the following <meta> tag, ideally right after the <title> tag:
<!-- index.html (project root) -->
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<link rel="icon" type="image/svg+xml" href="/vite.svg" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>My Secure React App</title>
<!-- CRITICAL: Content Security Policy -->
<!-- script-src is temporarily relaxed ('unsafe-inline'/'unsafe-eval') for the Vite dev server;
     HTML comments are not valid inside an attribute value, so the note lives out here -->
<meta http-equiv="Content-Security-Policy" content="
default-src 'self';
script-src 'self' 'unsafe-inline' 'unsafe-eval';
style-src 'self' 'unsafe-inline';
img-src 'self' data:;
font-src 'self';
connect-src 'self' http://localhost:* ws://localhost:*;
object-src 'none';
base-uri 'self';
form-action 'self';
frame-ancestors 'none';
report-uri /csp-report-endpoint;
">
</head>
<body>
<div id="root"></div>
<script type="module" src="/src/main.tsx"></script>
</body>
</html>
Explanation of the CSP:
- default-src 'self': This is the fallback. By default, only resources from the same origin as the document are allowed.
- script-src 'self' 'unsafe-inline' 'unsafe-eval': For a production application, you would strive to remove 'unsafe-inline' and 'unsafe-eval'. However, development servers (like Vite) often rely on inline scripts and eval for hot module reloading and bundling. For a strict production CSP, you’d replace 'unsafe-inline' with nonces and 'unsafe-eval' with a hash of your webpack/Vite runtime script if absolutely necessary, or eliminate it.
- style-src 'self' 'unsafe-inline': Similar to script-src, 'unsafe-inline' is often needed for development or if you use inline styles extensively.
- img-src 'self' data:: Allows images from your own origin and data: URIs (base64 encoded images).
- connect-src 'self' http://localhost:* ws://localhost:*: Allows connections (e.g., API calls, WebSockets) to your own origin and the development server.
- object-src 'none': Prevents embedding <object>, <embed>, or <applet> elements.
- base-uri 'self': Restricts the URLs that can be used in a document’s <base> element.
- form-action 'self': Restricts URLs that can be used as the target of HTML form submissions.
- frame-ancestors 'none': Prevents your page from being embedded in iframes on other sites (good for clickjacking prevention). Note that browsers ignore frame-ancestors when the policy is delivered via a <meta> tag; it only takes effect in the HTTP header.
- report-uri /csp-report-endpoint: (Important for production!) Specifies a URL where the browser will send JSON reports of CSP violations. You’d need a backend endpoint to receive and log these. Like frame-ancestors, reporting directives are only honored when the policy arrives as an HTTP header — one more reason to prefer the header over the <meta> tag in production.
Challenge: After adding the CSP, restart your npm run dev server. Open your browser’s developer tools (Console tab). If your CSP is too strict, you might see errors. Try removing 'unsafe-inline' from script-src and style-src directives. What happens? You’ll likely see errors, demonstrating how tightly integrated modern build tools are with inline scripts. This highlights the balance between security and development experience, especially during active development.
Step 4: Secure Token Storage (Conceptual)
Implementing a full authentication flow with an HTTP-only refresh token and in-memory access token requires both frontend and backend logic. Here, we’ll focus on the frontend’s role in managing the access token securely in memory.
Let’s simulate an access token stored in a simple React Context. In a real app, you’d likely use a global state manager like Zustand or Redux Toolkit for this.
Create a new file src/AuthContext.tsx:
// src/AuthContext.tsx
import React, { createContext, useState, useContext, ReactNode, useEffect } from 'react';
// Define the shape of our authentication context
interface AuthContextType {
accessToken: string | null;
setAccessToken: (token: string | null) => void;
isAuthenticated: boolean;
login: (token: string) => void;
logout: () => void;
// In a real app, you'd have a refresh function here too
// refreshAccessToken: () => Promise<void>;
}
// Create the context with a default (null) value
const AuthContext = createContext<AuthContextType | undefined>(undefined);
// AuthProvider component to wrap our application
export const AuthProvider: React.FC<{ children: ReactNode }> = ({ children }) => {
// Store the access token in memory (React state)
const [accessToken, setAccessToken] = useState<string | null>(null);
const isAuthenticated = !!accessToken;
// Simulate a login action
const login = (token: string) => {
console.log('Login: Access token stored in memory.');
setAccessToken(token);
// In a real app, successful login would trigger a backend
// to set an HTTP-only refresh token cookie.
};
// Simulate a logout action
const logout = () => {
console.log('Logout: Access token cleared from memory.');
setAccessToken(null);
// In a real app, logout would also invalidate the refresh token on the backend.
};
// Example of how to "refresh" an access token using a refresh token
// (This would involve an API call, with the browser automatically sending the HTTP-only cookie)
// useEffect(() => {
// if (!isAuthenticated && /* check for refresh token presence */) {
// // Call backend to refresh token
// // If successful, set new access token
// }
// }, [isAuthenticated]);
const value = {
accessToken,
setAccessToken,
isAuthenticated,
login,
logout,
};
return <AuthContext.Provider value={value}>{children}</AuthContext.Provider>;
};
// Custom hook for consuming the auth context
export const useAuth = () => {
const context = useContext(AuthContext);
if (context === undefined) {
throw new Error('useAuth must be used within an AuthProvider');
}
return context;
};
Now, let’s integrate this into src/App.tsx to demonstrate its usage:
// src/App.tsx (Modified to include AuthProvider and useAuth)
import React, { useState } from 'react';
import DOMPurify from 'dompurify';
import { AuthProvider, useAuth } from './AuthContext'; // Import AuthProvider and useAuth
// This component will use the AuthContext
const AuthDisplay: React.FC = () => {
const { accessToken, isAuthenticated, login, logout } = useAuth();
const dummyAccessToken = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvZSBEb2UiLCJpYXQiOjE1MTYyMzkwMjJ9.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c';
return (
<div style={{ marginTop: '20px', borderTop: '1px dashed #ccc', paddingTop: '20px' }}>
<h2>3. Secure Token Storage (Access Token in Memory)</h2>
<p>
**Current Status:** {isAuthenticated ? 'Logged In' : 'Logged Out'}
</p>
{isAuthenticated ? (
<>
<p>
**Access Token (in memory):** <code style={{ wordBreak: 'break-all' }}>{accessToken}</code>
<br />
<small style={{ color: '#888' }}>
This token is stored in React's component state (memory) and would be lost on refresh.
It's ideal for short-lived access tokens.
</small>
</p>
<button onClick={logout} style={{ padding: '8px 15px', cursor: 'pointer', backgroundColor: '#dc3545', color: 'white', border: 'none', borderRadius: '4px' }}>
Logout (Clear Access Token)
</button>
</>
) : (
<button onClick={() => login(dummyAccessToken)} style={{ padding: '8px 15px', cursor: 'pointer', backgroundColor: '#28a745', color: 'white', border: 'none', borderRadius: '4px' }}>
Login (Store Dummy Access Token)
</button>
)}
<p style={{ marginTop: '15px' }}>
<small style={{ color: '#888' }}>
**Refresh Tokens (Conceptual):** In a real application, a long-lived HTTP-only, Secure, SameSite cookie
would be set by the backend. This cookie would not be accessible to JavaScript, protecting it from XSS.
When the access token expires, the frontend would make an API call (e.g., to `/api/refresh-token`),
and the browser would automatically attach the HTTP-only refresh token cookie.
The backend would then issue a new access token, which the frontend stores in memory.
</small>
</p>
</div>
);
};
// Main App component (wrapper for AuthProvider)
function App() {
const [userInput, setUserInput] = useState('');
const [safeHtml, setSafeHtml] = useState('');
const maliciousInput = `
<p>Hello there!</p>
<img src="x" onerror="alert('XSS Attack! Your session ID is: ' + document.cookie);" />
<a href="javascript:alert('Another XSS!')">Click me</a>
<p>This is some benign text.</p>
`;
const benignInput = `
<p>Welcome to our secure platform!</p>
<p>Feel free to use <b>bold</b> and <i>italic</i> text.</p>
`;
const handleInputChange = (event: React.ChangeEvent<HTMLTextAreaElement>) => {
const rawInput = event.target.value;
setUserInput(rawInput);
const cleanHtml = DOMPurify.sanitize(rawInput, {
USE_PROFILES: { html: true },
FORBID_TAGS: ['script', 'style'],
FORBID_ATTR: ['onerror', 'onload'],
});
setSafeHtml(cleanHtml);
};
return (
<AuthProvider> {/* Wrap the entire application with AuthProvider */}
<div className="container" style={{ maxWidth: '800px', margin: '20px auto', fontFamily: 'Arial, sans-serif' }}>
<h1>Frontend Security: XSS Prevention</h1>
<section style={{ marginBottom: '40px', border: '1px solid #ccc', padding: '20px', borderRadius: '8px' }}>
<h2>1. Unsafe Rendering (Demonstration Only!)</h2>
<p>This section shows what *not* to do. Input is rendered directly.</p>
<div style={{ border: '1px dashed red', padding: '10px', minHeight: '80px', backgroundColor: '#ffe6e6' }}>
<div dangerouslySetInnerHTML={{ __html: maliciousInput }} />
</div>
<p style={{ color: 'red', fontWeight: 'bold' }}>
If you see an alert box when this page loads, the XSS attack was successful!
This demonstrates why direct `dangerouslySetInnerHTML` with untrusted input is a critical vulnerability.
</p>
</section>
<section style={{ marginBottom: '40px', border: '1px solid #007bff', padding: '20px', borderRadius: '8px' }}>
<h2>2. Safe Rendering with DOMPurify</h2>
<p>
This is the recommended approach for rendering user-generated HTML.
We use `DOMPurify` to strip out malicious content before rendering.
</p>
<textarea
style={{ width: '100%', minHeight: '150px', padding: '10px', fontSize: '1em', marginBottom: '15px' }}
placeholder="Type some HTML here, try putting a script tag!"
value={userInput}
onChange={handleInputChange}
/>
<h3>Preview (Sanitized Output):</h3>
<div style={{ border: '1px solid green', padding: '10px', minHeight: '80px', backgroundColor: '#e6ffe6' }}>
<div dangerouslySetInnerHTML={{ __html: safeHtml }} />
</div>
<p>
Notice how `DOMPurify` removed the `<script>` tags and `onerror` attributes,
rendering only the safe HTML.
</p>
<button onClick={() => setUserInput(maliciousInput)} style={{ marginRight: '10px', padding: '8px 15px', cursor: 'pointer' }}>
Load Malicious Input
</button>
<button onClick={() => setUserInput(benignInput)} style={{ padding: '8px 15px', cursor: 'pointer' }}>
Load Benign Input
</button>
</section>
<AuthDisplay /> {/* Render the AuthDisplay component */}
</div>
</AuthProvider>
);
}
export default App;
Explanation:
- AuthContext.tsx defines an AuthContext using React’s createContext and useState to hold the accessToken.
- AuthProvider wraps the application, making the accessToken and login/logout functions available to all child components.
- useAuth is a custom hook to easily access these values.
- The AuthDisplay component uses useAuth to show the current authentication status and provides buttons to simulate login/logout. When you click “Login,” a dummy access token is stored in the accessToken state. When you refresh the page, this token is lost, simulating the in-memory storage principle.
Step 5: Third-Party Script Isolation with Subresource Integrity (SRI)
SRI is applied directly in your HTML when you link to external scripts. Let’s imagine we’re using a hypothetical third-party script from a CDN.
Open index.html (at the project root) again. Inside the <body>, before your React app’s script, you might add a third-party script.
<!-- index.html (project root) -->
<!doctype html>
<html lang="en">
<head>
<!-- ... (existing head content) ... -->
</head>
<body>
<div id="root"></div>
<!-- Example of a third-party script with Subresource Integrity (SRI) -->
<script
src="https://cdnjs.cloudflare.com/ajax/libs/some-library/1.0.0/some-library.min.js"
integrity="sha384-YOUR_ACTUAL_HASH_FOR_THIS_SCRIPT"
crossorigin="anonymous"
></script>
<script type="module" src="/src/main.tsx"></script>
</body>
</html>
Explanation:
- src: The URL of the external script.
- integrity: This attribute contains a cryptographic hash (e.g., SHA-384) of the expected content of the script. If the script loaded from the CDN has been altered (even by a single character), its hash will not match, and the browser will refuse to execute it.
- crossorigin="anonymous": This attribute is required for SRI to work on cross-origin scripts. It tells the browser to fetch the resource using CORS (Cross-Origin Resource Sharing) without sending credentials (like cookies), ensuring the integrity check can be performed correctly.
How to get the integrity hash:
You typically generate this hash using tools like srihash.org or command-line utilities (e.g., openssl dgst -sha384 -binary some-library.min.js | openssl base64 -A). Always obtain the hash from a trusted source (e.g., the library’s official documentation or CDN provider) and verify it.
Since we don’t have a real external library for this example, the integrity hash is a placeholder. However, understanding its purpose is key.
Mini-Challenge: Harden Your User Profile Display
You’ve just built a new user profile page where users can enter a “Bio” field using a rich text editor. Your task is to display this bio safely.
Challenge:
- Create a new React component called UserProfileBio.tsx.
- Inside this component, define a state variable userBio initialized with a string that includes both harmless HTML (e.g., <b>Hello</b>) and malicious HTML (e.g., <img src="x" onerror="alert('Bio XSS!')" /> — an event-handler attribute is the clearer demonstration, since a bare <script> tag inserted via innerHTML won’t execute).
- Render this userBio using dangerouslySetInnerHTML. Observe the XSS.
- Modify the component to use DOMPurify to sanitize the userBio before rendering it with dangerouslySetInnerHTML. Verify that the malicious code no longer executes.
Hint:
- Remember to import DOMPurify from 'dompurify';
- The DOMPurify.sanitize() function is your best friend here.
What to observe/learn:
- The critical difference between rendering raw, untrusted HTML and sanitized HTML.
- The effectiveness of DOMPurify in stripping out dangerous content.
Common Pitfalls & Troubleshooting
Even with the best intentions, security can be tricky. Here are some common mistakes and how to troubleshoot them:
Forgetting to Sanitize All Untrusted HTML:
- Pitfall: You might sanitize user input in one place (e.g., comments) but forget another (e.g., profile descriptions, forum posts). Or you might only sanitize on the backend, assuming the frontend is safe.
- Troubleshooting: Conduct a thorough audit of your application. Identify every place where user-generated content (or any content from an untrusted source) is rendered as HTML. Each of these points needs robust sanitization, ideally both on the backend and the frontend (defense-in-depth!). Use browser developer tools to inspect the rendered HTML and look for unexpected tags or attributes.
Overly Permissive CSP Directives:
- Pitfall: Using `'unsafe-inline'` or `'unsafe-eval'` in `script-src` or `style-src` directives for production environments significantly weakens your CSP.
- Troubleshooting:
- Development vs. Production: Understand that your development CSP might need to be more permissive (e.g., for HMR) than your production CSP.
- CSP Violations: Monitor your browser’s developer console for CSP violation messages. These messages are invaluable for debugging what’s being blocked.
- `report-uri` / `report-to`: In production, use `report-uri` or `report-to` to collect violation reports. Analyze these reports to refine your CSP, gradually making it stricter without breaking functionality.
- Nonces/Hashes: For inline scripts or styles that must be there, explore using `nonce` attributes or content hashes in your CSP rather than `'unsafe-inline'`.
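As a sketch of what a nonce-based policy looks like in practice, the helper below builds a strict CSP header value around a per-response nonce instead of `'unsafe-inline'`. The function name and the exact directive set are illustrative, not from any specific framework:

```typescript
import { randomBytes } from "crypto";

// Build a Content-Security-Policy header value that permits inline
// scripts only when they carry the matching nonce. The directive set
// is illustrative; tune it to your application's actual sources.
function buildCspHeader(nonce: string): string {
  return [
    "default-src 'self'",
    `script-src 'self' 'nonce-${nonce}'`,
    "style-src 'self'",
    "object-src 'none'",
    "report-to csp-endpoint", // pair with a Reporting-Endpoints header
  ].join("; ");
}

// A fresh, unguessable nonce must be generated for every response...
const nonce = randomBytes(16).toString("base64");
console.log(buildCspHeader(nonce));
// ...and the same value placed on each sanctioned inline tag:
//   <script nonce="...">/* allowed to run */</script>
```

The key property is that the nonce is unpredictable per response, so injected markup cannot guess it and inline injection is blocked even though your own inline scripts still run.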
Storing Authentication Tokens in `localStorage` or `sessionStorage`:
- Pitfall: This is a very common mistake. While convenient, it makes your users' sessions vulnerable to XSS attacks.
- Troubleshooting:
- Audit Your Code: Search for `localStorage.setItem('token', ...)` or similar.
- Refactor: Migrate access tokens to in-memory storage (React state, Zustand, Redux Toolkit).
- Backend Review: Ensure your backend is correctly setting HTTP-only, Secure, and SameSite cookies for refresh tokens.
- Test: Simulate an XSS attack (e.g., run `alert(localStorage.getItem('token'))` in the browser console, which is what injected script could do) to confirm tokens are no longer accessible.
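A minimal in-memory token store, along the lines the refactor step suggests, can be as small as a module-scoped closure. This is a sketch with illustrative names, not a prescribed API:

```typescript
// Sketch of an in-memory access-token store. Because the token lives in
// module scope rather than localStorage, injected scripts cannot simply
// enumerate it from the storage APIs. It is lost on a full page reload,
// which is exactly when the HTTP-only refresh-token cookie (set by the
// backend) re-establishes the session.
let accessToken: string | null = null;

export function setAccessToken(token: string): void {
  accessToken = token;
}

export function getAccessToken(): string | null {
  return accessToken;
}

export function clearAccessToken(): void {
  accessToken = null;
}
```

Note that code running in the same page could still call `getAccessToken()` if it can reach the module, so in-memory storage narrows the XSS attack surface rather than eliminating it; the other defenses in this chapter still apply.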
Summary
Congratulations! You’ve taken significant steps towards making your React applications more secure. Here’s a quick recap of what we covered:
- XSS Prevention: We learned that React automatically escapes content by default, but `dangerouslySetInnerHTML` is a critical exception. Always sanitize untrusted HTML content using libraries like `DOMPurify` before rendering.
- Content Security Policy (CSP): CSP acts as a powerful second line of defense, whitelisting allowed content sources to mitigate XSS and other injection attacks. Strive for strict CSPs, avoiding `'unsafe-inline'` and `'unsafe-eval'` in production, and use `report-uri` for monitoring.
- Secure Storage: We differentiated between secure and insecure ways to store sensitive data. Access tokens should be kept in memory, while refresh tokens are best stored in HTTP-only, Secure, SameSite cookies managed by the backend.
- Third-Party Script Isolation: Third-party scripts introduce risks. Employ strict CSPs and Subresource Integrity (SRI) to protect against supply chain attacks.
- Safe HTML Rendering: Always sanitize user-generated HTML to prevent stored XSS attacks, ensuring only safe tags and attributes are rendered.
Remember, frontend security is an ongoing process of vigilance and continuous improvement. By understanding these core principles and applying best practices, you’re building applications that are not just feature-rich, but also trustworthy and resilient.
What’s next? In the next chapter, we’ll shift our focus to Performance and Build Optimization, learning how to make your secure React applications lightning-fast and efficient, providing an excellent user experience alongside robust security.
References
- React Documentation on `dangerouslySetInnerHTML`: https://react.dev/reference/react-dom/components/common#dangerouslysetinnerhtml
- MDN Web Docs - Content Security Policy (CSP): https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP
- MDN Web Docs - Subresource Integrity (SRI): https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity
- OWASP Cheat Sheet Series - XSS Prevention Cheat Sheet: https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html
- DOMPurify GitHub Repository: https://github.com/cure53/DOMPurify