A Multi-Phase Security Audit: Hardening a Next.js Application
Security audits on web applications tend to find the same categories of problems regardless of the stack: authorization gaps, missing input validation, inadequate HTTP security headers, and timing vulnerabilities in authentication flows. The specific manifestation differs by framework, but the underlying patterns are consistent. What varies is how systematically the audit is structured – an unstructured review is likely to catch the obvious issues and miss the subtle ones.
We conducted a multi-phase audit on a production Next.js application using Supabase as the backend. The application handled user authentication, stored user-linked records, and exposed several API routes used by both the front end and external webhooks. None of the findings were catastrophic, but several were the kind of issue that becomes a serious incident if discovered by someone with the wrong intentions. Here is what the audit covered, what it found, and what changed as a result.
Phase One: Authentication and Authorization
The first phase focused on the authentication layer and the Row Level Security policies governing data access. These are the most critical controls in any Supabase application: if authentication is broken or RLS is misconfigured, no amount of input validation or rate limiting compensates.
The audit started with a full inventory of Supabase tables and their RLS status. Every table should have RLS enabled. Any table without RLS enabled is world-readable by anyone with the anon key – which is public by design and embedded in the client-side bundle. In this application, one table had RLS disabled. It was a lookup table that was treated as reference data, and at the time it was created, it contained only static values. Over time, some rows had been added that included user-linked metadata that should have been protected. The fix was straightforward – enable RLS and add a policy – but the underlying process failure was notable: new tables should have RLS enabled by default, not as a retroactive step.
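In Supabase's SQL terms, the remediation for a table like this is short. A sketch, with illustrative table and column names (owner_id is a hypothetical column; auth.uid() is Supabase's function returning the current user's ID):

```sql
-- Enable RLS first; until a policy exists, all access is denied by default
ALTER TABLE lookup_values ENABLE ROW LEVEL SECURITY;

-- Static reference rows stay readable; user-linked rows are
-- visible only to their owner
CREATE POLICY "read reference or own rows" ON lookup_values
  FOR SELECT
  USING (owner_id IS NULL OR auth.uid() = owner_id);
```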
The RLS policies on the protected tables were then reviewed for correctness. A common mistake is a policy that correctly checks the user ID but does so on the wrong column. For example, a policy on a documents table that checks auth.uid() = created_by correctly restricts access to the creator, but if the application also supports shared documents, a collaborator row in a document_collaborators join table needs its own policy – the check on the primary table is not transitive to joined data.
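A sketch of the shared-document case, assuming hypothetical documents and document_collaborators tables; in PostgreSQL, multiple permissive policies on the same table are combined with OR, so the two policies grant access independently:

```sql
-- Creator access on the primary table
CREATE POLICY "creator reads own documents" ON documents
  FOR SELECT
  USING (auth.uid() = created_by);

-- Collaborator access needs its own policy; the creator policy
-- does not carry over to rows granted through the join table
CREATE POLICY "collaborator reads shared documents" ON documents
  FOR SELECT
  USING (
    EXISTS (
      SELECT 1 FROM document_collaborators dc
      WHERE dc.document_id = documents.id
        AND dc.user_id = auth.uid()
    )
  );
```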
One policy contained a logic error that had survived code review: it used USING where it needed both USING and WITH CHECK. The USING clause filters which existing rows a command can see; WITH CHECK validates the rows being written by INSERT or UPDATE. For a FOR ALL policy, PostgreSQL reuses the USING expression as the WITH CHECK condition when the latter is omitted – so any user who could read a row under the policy could also write rows satisfying the same condition, which was not the intended behavior. This is a PostgreSQL RLS subtlety that is easy to overlook.
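A corrected FOR ALL policy, sketched on a hypothetical notes table, spells out both clauses explicitly:

```sql
-- USING: which existing rows the user can see (SELECT, UPDATE, DELETE)
-- WITH CHECK: which new or modified rows the user may write
CREATE POLICY "own rows only" ON notes
  FOR ALL
  USING (auth.uid() = user_id)
  WITH CHECK (auth.uid() = user_id);
```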
Phase Two: Input Validation
The second phase reviewed every entry point where user-supplied data enters the system: API route handlers in app/api/, server actions, and form submissions. The goal was to confirm that all inputs are validated before being used, and that validation failures produce informative errors rather than unhandled exceptions.
The application was partially using Zod for validation, but inconsistently. Some routes had full schemas with schema.parse() at the top of the handler. Others were reading req.body fields directly without validation, relying on TypeScript type assertions that provide no runtime safety:
// Unsafe: TypeScript type assertion, no runtime validation
const { userId, documentId } = req.body as { userId: string; documentId: string };

// Safe: Zod parse, throws on invalid input
const schema = z.object({
  userId: z.string().uuid(),
  documentId: z.string().uuid(),
});
const { userId, documentId } = schema.parse(req.body);
The remediation was to add Zod schemas to every unvalidated API route and to establish a shared validation middleware that all routes pass through. The middleware pattern is important: it ensures that the validation layer cannot be bypassed by a code path that was added without awareness of the validation requirement.
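One way to sketch such a wrapper (the names withValidation, Schema, and HandlerResult are illustrative, not the project's actual code): the Schema interface structurally matches Zod's parse() shape, so any z.object(...) schema plugs in unchanged, and the stub schema below stands in for Zod only to keep the example self-contained.

```typescript
// Structural match for Zod's schema interface: parse() returns the
// typed value or throws on invalid input
interface Schema<T> {
  parse(input: unknown): T;
}

type HandlerResult = { status: number; body: unknown };

// Wraps a handler so no code path can reach it with unvalidated input
function withValidation<T>(
  schema: Schema<T>,
  handler: (data: T) => HandlerResult,
) {
  return (rawBody: unknown): HandlerResult => {
    try {
      return handler(schema.parse(rawBody));
    } catch {
      // Validation failure becomes an informative 400, not an unhandled throw
      return { status: 400, body: { error: 'Invalid request body' } };
    }
  };
}

// Stub schema standing in for z.object({ id: z.string() })
const idSchema: Schema<{ id: string }> = {
  parse(input: unknown) {
    const o = input as { id?: unknown };
    if (typeof o?.id !== 'string') throw new Error('id must be a string');
    return { id: o.id };
  },
};

const handler = withValidation(idSchema, (data) => ({ status: 200, body: data }));
```

Because the wrapper owns the try/catch, a route added later cannot accidentally ship without the 400-on-invalid-input behavior.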
One route accepted a redirectUrl parameter that was used in a res.redirect() call. This is an open redirect vulnerability – an attacker can craft a link to the application that redirects users to a phishing site after a legitimate-looking interaction. The fix is to validate the redirect target against an allowlist of acceptable paths:
const ALLOWED_REDIRECT_PATHS = ['/dashboard', '/settings', '/profile'];
const redirectPath = schema.parse(req.query).redirectUrl;

if (!ALLOWED_REDIRECT_PATHS.includes(redirectPath)) {
  return res.redirect('/dashboard');
}
return res.redirect(redirectPath);
Phase Three: HTTP Security Headers and CSP
Content Security Policy headers were absent from the application entirely. Without a CSP, a successful XSS injection – whether through user-generated content or a compromised third-party script – can execute arbitrary JavaScript in the context of the application, with access to authentication tokens and user data.
Adding a CSP to a Next.js application requires care because Next.js inlines some scripts by default. The recommended approach uses nonces – a cryptographically random value generated per request that is added to the CSP header and applied to each inline script:
// middleware.ts
import { NextRequest, NextResponse } from 'next/server';

export function middleware(request: NextRequest) {
  // Web Crypto is available in the Edge runtime; Node's crypto.randomBytes is not
  const nonce = Buffer.from(crypto.randomUUID()).toString('base64');
  const cspHeader = [
    `default-src 'self'`,
    `script-src 'self' 'nonce-${nonce}' 'strict-dynamic'`,
    `style-src 'self' 'unsafe-inline'`,
    `img-src 'self' data: https:`,
    `connect-src 'self' https://api.example.com`,
    `frame-ancestors 'none'`,
  ].join('; ');

  // Forward the nonce to the app so it can be applied to inline scripts
  const requestHeaders = new Headers(request.headers);
  requestHeaders.set('x-nonce', nonce);

  const response = NextResponse.next({ request: { headers: requestHeaders } });
  response.headers.set('Content-Security-Policy', cspHeader);
  response.headers.set('X-Frame-Options', 'DENY');
  response.headers.set('X-Content-Type-Options', 'nosniff');
  response.headers.set('Referrer-Policy', 'strict-origin-when-cross-origin');
  return response;
}
The initial CSP deployment used Content-Security-Policy-Report-Only mode, which reports violations without blocking them. This surfaced several legitimate inline scripts and third-party origins that needed to be in the allowlist before switching to enforcing mode. Running in report-only mode for a week before enforcing is the standard approach – enforcing without this step typically breaks functionality that was not visible in testing.
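The rollout toggle can be sketched as a small helper, assuming a hypothetical /api/csp-report endpoint for violation reports; the same policy string goes out under one of two header names:

```typescript
// Choose the CSP header name based on rollout stage; in report-only
// mode violations are reported via report-uri but not blocked
function cspHeader(policy: string, enforce: boolean): [string, string] {
  const name = enforce
    ? 'Content-Security-Policy'
    : 'Content-Security-Policy-Report-Only';
  return [name, `${policy}; report-uri /api/csp-report`];
}
```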
The audit also found that the Strict-Transport-Security header was set by the load balancer but not by the application. This is acceptable if the load balancer is the single entry point, but worth confirming explicitly. Defense in depth means the application should set its own security headers even when infrastructure provides them.
Phase Four: Rate Limiting
The authentication endpoints – sign in, sign up, password reset – had no rate limiting. Without rate limiting, these endpoints accept unlimited requests, which enables brute-force attacks on passwords and enumeration of valid email addresses through the difference in response behavior.
Rate limiting on Next.js API routes can be implemented with a Redis-backed counter. The implementation below uses a fixed window, the simplest variant; sliding-window algorithms smooth traffic at window boundaries at the cost of extra bookkeeping. The key is typically the IP address or, for authenticated endpoints, the user ID. For the authentication endpoints, IP-based limiting is appropriate because the user is not yet authenticated:
// lib/rateLimit.ts
import { Redis } from '@upstash/redis';

const redis = new Redis({ url: process.env.UPSTASH_URL!, token: process.env.UPSTASH_TOKEN! });

export async function checkRateLimit(identifier: string, limit: number, windowSeconds: number) {
  const key = `rate_limit:${identifier}`;
  const count = await redis.incr(key);
  if (count === 1) {
    // First request in the window: start the expiry clock
    await redis.expire(key, windowSeconds);
  }
  return { allowed: count <= limit, remaining: Math.max(0, limit - count) };
}
The limits applied were conservative: sign-in attempts are limited per IP per fifteen-minute window, password reset requests are limited per email address per hour. These limits are permissive enough that legitimate users will never encounter them under normal use, and restrictive enough that automated attacks are not feasible.
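For local development and unit tests, the same fixed-window logic can be sketched in memory without Redis (single-process only; checkRateLimitLocal is an illustrative name, and the injectable now parameter exists purely to make the window behavior testable):

```typescript
// Single-process stand-in for the Redis counter: one entry per
// identifier, reset when its window elapses
const windows = new Map<string, { count: number; resetAt: number }>();

function checkRateLimitLocal(
  identifier: string,
  limit: number,
  windowSeconds: number,
  now: number = Date.now(),
) {
  const entry = windows.get(identifier);
  if (!entry || now >= entry.resetAt) {
    // First request in a fresh window
    windows.set(identifier, { count: 1, resetAt: now + windowSeconds * 1000 });
    return { allowed: true, remaining: limit - 1 };
  }
  entry.count += 1;
  return { allowed: entry.count <= limit, remaining: Math.max(0, limit - entry.count) };
}
```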
Phase Five: Timing Attack Protection
Timing attacks on authentication endpoints allow an attacker to infer whether an email address is registered by measuring the difference in response time between “email not found” and “email found, password incorrect.” A lookup that finds a user takes slightly longer than one that does not, because the password comparison (which uses bcrypt or Argon2) only runs when the user is found. This difference – typically in the range of 50–200ms – is measurable with enough requests.
The standard mitigation is to run the password hash comparison unconditionally, using a dummy hash when the user does not exist:
import bcrypt from 'bcrypt';

// The dummy must be a real bcrypt hash with the same cost factor as
// production hashes so the comparison takes the same time; a malformed
// string would make bcrypt.compare fail fast and defeat the purpose
const DUMMY_HASH = bcrypt.hashSync(crypto.randomUUID(), 12);

export async function verifyLogin(email: string, password: string) {
  const user = await db.user.findUnique({ where: { email } });

  // Always run the bcrypt comparison to normalize response time
  const hashToCompare = user?.passwordHash ?? DUMMY_HASH;
  const isValid = await bcrypt.compare(password, hashToCompare);

  if (!user || !isValid) {
    return null; // Same code path for both failure cases
  }
  return user;
}
Supabase’s built-in authentication handles this correctly for its own sign-in endpoint. The issue appeared in a custom authentication route the application had added to handle a legacy integration. Any custom authentication logic that does not use Supabase’s auth functions needs to implement timing normalization explicitly.
What Changed and What Did Not
Every finding across the five phases was addressed before the audit was considered complete. The prioritization was clear: the RLS misconfiguration and the open redirect were treated as critical and fixed immediately. The CSP deployment and rate limiting were treated as high priority and deployed within the same sprint. Timing attack normalization on the legacy route was medium priority and shipped in the following sprint.
The audit also produced a set of process changes: new tables now require an RLS policy review before the pull request can be merged, API routes share a Zod validation wrapper that is part of the route template, and the CSP header is now generated from a centralized configuration rather than inline strings.
The most salient outcome was not any individual fix but the inventory. Understanding exactly what data each table contains, what RLS policy governs it, and which API routes touch it is the foundation for making correct security decisions as the application evolves. Without that inventory, security is reactive – you fix what you find. With it, security becomes a property you can reason about deliberately.
Related Reading
- Disaster Recovery for Self-Hosted Services – The backup and recovery layer that pairs with security hardening – what happens when controls fail despite best efforts.
- Building an Exam Prep App with Thousands of Questions – Supabase RLS policies in a production application, including the multi-tenant access pattern.