
Preventing Token Refresh Race Conditions in Single-Page Applications

Users get logged out without explanation. The session token looks valid in localStorage, the last successful request happened seconds ago, and the pattern is infuriatingly inconsistent — except that it tends to cluster around page loads that trigger multiple concurrent API calls. This is one of the more confusing production bugs in single-page applications, and the root cause is almost always the same: a token refresh race condition.

The mechanism is straightforward once you identify it. JWTs expire. When a token expires mid-session, the correct behavior is to exchange the refresh token for a new access token and retry the original request transparently. But in reality, React applications rarely make one request at a time. A dashboard page might simultaneously fetch user data, account summary, notification counts, and recent activity — all on component mount, all in parallel. If the access token expires during that burst, every one of those requests gets a 401. Every one of them independently detects the expired token. Every one of them tries to call the refresh endpoint. The first call succeeds and the old refresh token is consumed. The second call tries the same — now-invalidated — refresh token and fails. The user is logged out.
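A stripped-down simulation makes the failure mode concrete. Nothing here is Supabase API; it is a toy single-use refresh token and three callers that each refresh independently, the way naive per-request retry logic does:

```typescript
// Toy model of refresh token rotation: the token is single-use,
// and a successful exchange invalidates it for everyone else.
let validRefreshToken = "rt-1";

async function exchangeRefreshToken(rt: string): Promise<string> {
  await new Promise((r) => setTimeout(r, 10)); // simulated network latency
  if (rt !== validRefreshToken) throw new Error("refresh token already consumed");
  validRefreshToken = "rt-" + Math.random().toString(36).slice(2); // rotation
  return "new-access-token";
}

// Anti-pattern: every 401 handler refreshes on its own.
function naiveRefresh(): Promise<string> {
  return exchangeRefreshToken(validRefreshToken);
}

export async function demo(): Promise<string[]> {
  const results = await Promise.allSettled([naiveRefresh(), naiveRefresh(), naiveRefresh()]);
  return results.map((r) => r.status);
}
```

Running demo() yields one fulfilled and two rejected exchanges: the first caller rotates the token, the other two present the consumed one and fail — exactly the burst-on-page-load scenario described above.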

Why it rarely shows up in development

The race condition requires a specific coincidence: multiple truly concurrent requests, combined with a token expiry happening at exactly the wrong moment. Development environments typically have generous token lifetimes, localhost latency is near-zero so requests complete quickly and rarely fully overlap, and most developers test one feature at a time rather than loading data-heavy pages. The result is that this bug hides comfortably in development and surfaces in production under real usage patterns — most often affecting users who have been active long enough for their initial token to expire mid-session.

Reproducing it deliberately requires shortening token lifetimes and navigating to pages with parallel data fetching. With Supabase, the jwt_expiry setting in your project configuration controls access token lifetime. Set it to sixty seconds, open a dashboard page with multiple queries, and wait. The race condition will appear within the first few minutes of active use.
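If the project runs on the Supabase CLI, the setting lives in supabase/config.toml; for hosted projects, the equivalent control is in the dashboard's auth settings. A sketch, intended for local reproduction only:

```toml
# supabase/config.toml — shorten for local reproduction only
[auth]
# Access token lifetime in seconds (default 3600)
jwt_expiry = 60
```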

How it manifests with Supabase Auth

Supabase's @supabase/supabase-js v2 client handles token refresh internally and includes serialization logic — it maintains a single in-flight refresh promise and queues concurrent callers against it. For applications that exclusively use the Supabase client for all API calls, this built-in handling is often sufficient. But in reality, most production applications reach a point where they make authenticated requests outside the Supabase client: calls to their own backend API, edge functions accessed via fetch, or third-party integrations that need the access token passed as a header. The moment you extract the access token from the session and use it directly in a separate HTTP layer, you have stepped outside the protection of the client's internal serialization.

A second failure mode appears when the Supabase client is not correctly initialized as a singleton. If module evaluation creates multiple client instances — which can happen in certain SSR contexts or in test environments where modules are re-evaluated — those instances do not share their internal refresh state. Two client instances will each independently attempt a refresh when their respective requests return 401, and the race condition reappears regardless of the built-in serialization.
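A small lazy-singleton helper guards against accidental re-instantiation. The helper itself is generic; the commented-out usage with createClient is a sketch of how it would wrap the Supabase client, with env variable names assumed from a Vite setup:

```typescript
// Memoize a factory so every caller gets the same instance — and with it,
// the same internal refresh state.
export function singleton<T>(factory: () => T): () => T {
  let instance: T | undefined;
  return () => (instance ??= factory());
}

// Hypothetical usage with supabase-js:
// export const getSupabase = singleton(() =>
//   createClient(import.meta.env.VITE_SUPABASE_URL, import.meta.env.VITE_SUPABASE_ANON_KEY)
// );
```

Module evaluation still matters: the helper only protects callers that import the same module instance, so SSR and test setups that re-evaluate modules need the client hoisted accordingly.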

The singleton promise pattern

The most elegant solution captures the in-flight refresh as a promise and returns that same promise to all callers until it resolves. Instead of each caller independently initiating a new refresh, they all await a single shared operation. The implementation is compact:

// supabase is the shared client instance; adjust the import path to your app
import { supabase } from "./supabaseClient";

let refreshPromise: Promise<string> | null = null;

async function getValidAccessToken(): Promise<string> {
  const { data } = await supabase.auth.getSession();
  const token = data.session?.access_token;

  if (token && !isTokenExpired(token)) {
    return token;
  }

  if (!refreshPromise) {
    refreshPromise = supabase.auth
      .refreshSession()
      .then(({ data, error }) => {
        if (error || !data.session) {
          throw error ?? new Error("Token refresh failed");
        }
        return data.session.access_token;
      })
      .finally(() => {
        refreshPromise = null;
      });
  }

  return refreshPromise;
}

function isTokenExpired(token: string): boolean {
  try {
    // JWT segments are base64url-encoded; normalize to base64 before atob
    const base64 = token.split(".")[1].replace(/-/g, "+").replace(/_/g, "/");
    const payload = JSON.parse(atob(base64));
    // 30-second buffer avoids tokens that are valid now but expire before the request completes
    return payload.exp * 1000 < Date.now() + 30_000;
  } catch {
    return true;
  }
}

The finally block is the salient detail: it clears the shared promise reference after resolution or rejection, so the next refresh cycle starts clean. Without it, a rejected refresh promise would permanently block future refresh attempts for the lifetime of the module. The 30-second buffer in the expiry check addresses a subtle edge case — a token that passes the isTokenExpired check but expires in the next few hundred milliseconds before the request reaches the server will return a 401, triggering another refresh cycle unnecessarily.
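The effect of the shared promise is easy to verify in isolation. This toy version replaces the Supabase call with a slow stub and counts how many times the underlying refresh actually runs:

```typescript
let refreshCount = 0;
let inflight: Promise<string> | null = null;

async function slowRefresh(): Promise<string> {
  refreshCount += 1; // how many real refreshes happened
  await new Promise((r) => setTimeout(r, 10));
  return "token-" + refreshCount;
}

function getToken(): Promise<string> {
  // All concurrent callers share the same in-flight promise
  if (!inflight) {
    inflight = slowRefresh().finally(() => { inflight = null; });
  }
  return inflight;
}

export async function demoSerialized(): Promise<{ count: number; tokens: string[] }> {
  const tokens = await Promise.all([getToken(), getToken(), getToken()]);
  return { count: refreshCount, tokens };
}
```

Three concurrent callers, one underlying refresh, and all three receive the same "token-1" — the invariant the pattern exists to enforce.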

The interceptor queue approach

For applications using Axios, the standard pattern wraps the same serialization logic in a response interceptor. This approach handles retry automatically — failed requests are queued, the refresh completes, and the queue is drained with the new token applied to each pending request config before they are retried:

import axios, { type AxiosError, type AxiosRequestConfig } from "axios";
import { supabase } from "./supabaseClient"; // adjust the path to your client module

let isRefreshing = false;
let pendingQueue: Array<{
  resolve: (token: string) => void;
  reject: (err: unknown) => void;
}> = [];

function drainQueue(error: unknown, token: string | null): void {
  pendingQueue.forEach(({ resolve, reject }) => {
    if (error) reject(error);
    else resolve(token!);
  });
  pendingQueue = [];
}

export const api = axios.create({
  baseURL: import.meta.env.VITE_API_URL,
});

api.interceptors.response.use(
  (response) => response,
  async (error: AxiosError) => {
    const original = error.config as AxiosRequestConfig & { _retry?: boolean };

    if (error.response?.status !== 401 || original._retry) {
      return Promise.reject(error);
    }

    if (isRefreshing) {
      // Queue this request — it will be retried once the in-flight refresh completes
      return new Promise<string>((resolve, reject) => {
        pendingQueue.push({ resolve, reject });
      }).then((token) => {
        original._retry = true; // a retried request must not trigger another refresh cycle
        original.headers = { ...original.headers, Authorization: `Bearer ${token}` };
        return api(original);
      });
    }

    original._retry = true;
    isRefreshing = true;

    try {
      const { data, error: refreshError } = await supabase.auth.refreshSession();
      if (refreshError || !data.session) throw refreshError ?? new Error("Refresh failed");

      const newToken = data.session.access_token;
      drainQueue(null, newToken);

      original.headers = { ...original.headers, Authorization: `Bearer ${newToken}` };
      return api(original);
    } catch (err) {
      drainQueue(err, null);
      // Clear the local Supabase session so the app reaches a clean unauthenticated state
      await supabase.auth.signOut();
      return Promise.reject(err);
    } finally {
      isRefreshing = false;
    }
  }
);

The _retry flag on the original request config prevents infinite loops — if the retried request also returns a 401 (for example, because the new token itself was somehow invalid), the interceptor passes the error through rather than triggering another refresh. The explicit signOut() on refresh failure ensures Supabase's local session state is cleared, so the application lands in a clean unauthenticated state rather than a limbo where the UI believes the user is logged in but every request fails.

Keeping React context in sync

Both patterns above manage the token at the HTTP layer, but React applications often need the current session available in context — for conditional rendering, for passing user details to child components, or for non-Axios calls that read the token directly. The challenge is that storing the token in React state and keeping the interceptor in sync creates two sources of truth that can diverge under re-renders.

A cleaner approach keeps the canonical token in a ref that the interceptor can access synchronously, while the React subscription handles UI state separately:

import { useEffect, useRef, useState, type ReactNode } from "react";
import type { Session } from "@supabase/supabase-js";
// supabase, AuthContext, and setupInterceptors come from your own modules

export function AuthProvider({ children }: { children: ReactNode }) {
  const [session, setSession] = useState<Session | null>(null);
  const tokenRef = useRef<string | null>(null);

  useEffect(() => {
    supabase.auth.getSession().then(({ data }) => {
      setSession(data.session);
      tokenRef.current = data.session?.access_token ?? null;
    });

    const { data: { subscription } } = supabase.auth.onAuthStateChange(
      (_event, newSession) => {
        setSession(newSession);
        tokenRef.current = newSession?.access_token ?? null;
      }
    );

    return () => subscription.unsubscribe();
  }, []);

  // Pass tokenRef to your interceptor setup so it reads the current token
  // synchronously rather than going async through state
  useEffect(() => {
    setupInterceptors(tokenRef);
  }, []);

  return <AuthContext.Provider value={{ session }}>{children}</AuthContext.Provider>;
}

Using a ref rather than state means the interceptor reads the current token synchronously, without triggering re-renders and without the stale closure problem that arises when callbacks capture a specific state value at the time they were created. The onAuthStateChange subscription keeps both the ref and the React state consistent as the session evolves.
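What setupInterceptors does with the ref can also be expressed without Axios. The sketch below is a fetch wrapper that reads the ref synchronously on every call; the ref shape is just { current }, and the injectable fetchImpl parameter exists purely to make the wrapper testable:

```typescript
type TokenRef = { current: string | null };

// Returns a fetch wrapper bound to the ref: each call reads the
// *current* token, so a refresh that updates the ref is picked up
// immediately without re-creating the wrapper.
export function makeAuthedFetch(
  tokenRef: TokenRef,
  fetchImpl: typeof fetch = fetch
) {
  return (input: RequestInfo | URL, init: RequestInit = {}) =>
    fetchImpl(input, {
      ...init,
      headers: {
        ...init.headers,
        ...(tokenRef.current ? { Authorization: `Bearer ${tokenRef.current}` } : {}),
      },
    });
}
```

Because the token is read at call time rather than captured at creation time, the wrapper never serves a stale token after onAuthStateChange updates the ref.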

What Supabase handles – and what it doesn't

The protection boundary of the Supabase client's internal serialization is worth being explicit about. It covers calls made through the client's own methods: supabase.from(), supabase.rpc(), storage operations, realtime subscriptions, and the auth methods themselves. The client maintains a single refresh promise internally and queues concurrent callers, so supabase.auth.getSession() called concurrently from multiple places will all correctly receive the refreshed session once the single in-flight refresh completes.

What falls outside this protection: authenticated fetch calls constructed manually with the session's access_token, any HTTP client configured independently of the Supabase client, and — as noted above — multiple Supabase client instances. If your architecture extracts the access token to pass to a separate API layer, the race condition protection boundary ends at that extraction point. This is where the patterns above become necessary.

Cross-tab behavior

The race condition has a variant that appears across browser tabs. If a user has the application open in two tabs and both tabs simultaneously detect token expiry, each tab's session management will independently attempt a refresh. Most auth implementations handle this through localStorage events: one tab writes the new tokens to storage, the other tab detects the storage event and updates its local session state rather than initiating a competing refresh. Supabase's onAuthStateChange listener fires in response to these cross-tab changes as long as the client persists the session to localStorage (persistSession: true, which is the default in browser environments).

In practice, this only works if every active tab has the Supabase client properly initialized and subscribed to auth state changes. Applications that initialize the client lazily, or that cache the session on mount without subscribing to ongoing changes via onAuthStateChange, will miss the cross-tab updates. The second tab will attempt its own refresh with the already-consumed refresh token, and the user will be logged out in that tab. The fix is the same: ensure every tab subscribes to onAuthStateChange on initialization and does not cache the session token in module-level state that persists between route navigations.
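The adopt-don't-refresh principle can be isolated into a small handler. The storage key and the adopt callback are illustrative, not Supabase internals — supabase-js performs an equivalent step itself when it persists the session:

```typescript
// Minimal shape of a StorageEvent, reduced to the fields we care about
type StorageEventLike = { key: string | null; newValue: string | null };

const SESSION_KEY = "app-session"; // illustrative key name

// When another tab wrote a fresh session, adopt it instead of
// starting a competing refresh with the already-consumed token.
export function handleCrossTabSession(
  event: StorageEventLike,
  adopt: (session: unknown) => void
): boolean {
  if (event.key !== SESSION_KEY || !event.newValue) return false;
  adopt(JSON.parse(event.newValue));
  return true;
}

// In the browser:
// window.addEventListener("storage", (e) => handleCrossTabSession(e, applySession));
```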

Detecting it in production

Token refresh race conditions do not announce themselves clearly. They show up in error monitoring as sporadic 401s on endpoints that should be authenticated, as unexpected session terminations in analytics, or as user reports of being randomly logged out. Each individual failure looks like a one-off authentication problem rather than a systemic concurrency issue — which is precisely why teams often spend time investigating token expiry configuration, cookie settings, or network conditions before arriving at the actual cause.

The precondition is specific: parallel requests combined with a token expiry event. Any page that loads data from multiple sources simultaneously is a candidate. Dashboard pages, feed views, or any component tree where multiple children independently call useEffect to fetch their own data will create the parallel request pattern. In React Query or SWR applications, any page where multiple queries are mounted simultaneously produces the same conditions. Once you know what to look for, the fix is straightforward — but it requires approaching the problem as a concurrency issue rather than an authentication configuration problem. The interceptor queue and the singleton promise pattern are both well-established. Either works. The important invariant is that at most one refresh operation can be in flight at any time, and every caller awaits its result rather than initiating a competing refresh.
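One way to pick the signature out of error-monitoring data is to look for bursts of 401s across distinct endpoints inside a narrow window, rather than treating each 401 in isolation. The thresholds below are illustrative starting points, not calibrated values:

```typescript
type RequestEvent = { status: number; url: string; at: number }; // at = epoch ms

// Flags the race signature: at least minUrls distinct endpoints
// returning 401 within windowMs of each other. Isolated or
// same-endpoint 401s do not trigger it.
export function looksLikeRefreshRace(
  events: RequestEvent[],
  windowMs = 2000,
  minUrls = 3
): boolean {
  const unauthorized = events
    .filter((e) => e.status === 401)
    .sort((a, b) => a.at - b.at);

  for (const start of unauthorized) {
    const cluster = unauthorized.filter(
      (e) => e.at >= start.at && e.at < start.at + windowMs
    );
    if (new Set(cluster.map((e) => e.url)).size >= minUrls) return true;
  }
  return false;
}
```

Run over a session's request log, this separates the dashboard-mount burst from the occasional expired-token 401 that any application produces.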

Related posts

  • JWT structure and expiry — Access tokens and refresh tokens serve different purposes. Access tokens are short-lived and stateless; refresh tokens are long-lived, single-use, and typically stored server-side or with a rotation policy. Understanding the distinction informs how aggressively to set expiry windows and what happens when refresh fails.
  • Supabase Row Level Security and the access token — RLS policies evaluate the JWT claims of the requesting user. A race condition that causes a request to proceed with an expired token, or no token, will produce a permission denied error rather than a 401, which can make the underlying cause harder to identify.
  • React Query and parallel fetching — React Query's useQuery hooks called from multiple components on the same page will all execute concurrently by default. Configuring a shared query client with a custom queryFn that calls getValidAccessToken() ensures all queries route through the same refresh serialization layer.
  • Service workers and token caching — Applications that use service workers to cache API responses introduce a third location where auth state must be kept consistent. A service worker that caches requests with stale authorization headers can serve outdated responses after a token refresh, producing subtle data inconsistencies that are difficult to attribute to auth handling.
