April 18, 2020

Separating server cache & application state with React Query

UI state, ephemeral data & React Query

The post has been updated with the latest React Query v3 changes.

The Problem

There was a period when we put everything into our Redux stores. Maybe not all of you, but the teams I worked with used to do it. That's what we knew, and what Medium authors were preaching as good practice.

We had plenty of presentational components that displayed information we got from our server. The info we wanted to present would be duplicated across the application, so it made sense to centralize the data fetching implementation and update all the components accordingly.

There is nothing inherently wrong with the approach as I've presented it. But it ultimately gets overwhelming in ever-growing applications. What was supposed to be an excellent solution for state management was now also responsible for handling loading states and making remote calls.

We introduced redux-thunk; some of you might even have tried redux-saga and its generator-based approach. So much more complexity.

And remote data flow wasn't the only thing we struggled with. The actual UI state would be buried under all the server data, and refactoring would be painful.

What we needed

There is no need to store what the server responds with. If the time-to-live of the response is a couple of seconds, we don't want it anywhere near our stores. The moment we get the response, we transform it accordingly and feed it to the components.

And this is where React Query comes into play.

In short:

  1. If a component wants remote data, React Query fetches it
  2. It exposes the request's status so that the component can give proper feedback
  3. It refetches the data, or serves it from the cache, when needed again

Let's see it in action.

function Matches() {
  const {status, data, error} = useQuery('matches', getMatches);

  if (status === 'loading') {
    return <Skeleton />;
  }

  if (status === 'error') {
    // Depends on your implementation
    return <ErrorState error={error} />;
  }

  // something like this, assuming each match has an id
  return (
    <>
      {data.map((match) => (
        <MatchesRow match={match} key={match.id} />
      ))}
    </>
  );
}

In the first line of the component body, we use the useQuery hook. We tell React Query that we want to call the getMatches function, and associate the response with the key matches.

We don't store the loading states or the response. Somehow, React Query handles all that and feeds the component just the right amount of data it needs. We don't care whether the data are fresh or cached, or what happened behind the scenes. We only care about preparing a view for each state of the remote call.

Any other component asking for the same query/function combination will get the cached response. What if that isn't what we want? It's ok, we can configure it.

What matters is that we have a solution that caches the ephemeral data from the server and keeps us informed of the progress. We no longer keep isXLoading flags in our stores, and we can be happy again.
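The key-based sharing can be sketched in a few lines of plain TypeScript. This is only an illustration of the idea, not React Query's actual implementation:

```typescript
type Fetcher<T> = () => Promise<T>;

// One shared cache, keyed by the query key.
const cache = new Map<string, Promise<unknown>>();

export function query<T>(key: string, fetcher: Fetcher<T>): Promise<T> {
  // The first consumer triggers the fetch; everyone else shares the entry.
  if (!cache.has(key)) {
    cache.set(key, fetcher());
  }
  return cache.get(key) as Promise<T>;
}
```

React Query layers staleness, garbage collection, and refetching on top of this basic idea.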

What about passing params?

They are part of the key of course! Every little variation will get a cached entry of its own.

function Match({matchId}) {
  const {status, data, error} = useQuery(['matches', matchId], getMatch);

  if (status === 'loading') {
    return <Skeleton />;
  }

  if (status === 'error') {
    // Depends on your implementation
    return <ErrorState error={error} />;
  }

  // something like this
  return <MatchDetails match={data} />;
}

And here's how the fetcher accepts them.

export const getMatch = ({
  queryKey,
}: QueryFunctionContext<[string, string]>) => {
  const [_key, matchId] = queryKey;
  return axios
    .get(MATCH_ENDPOINT, {params: {id: matchId}})
    .then((response) => response.data.match);
};

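By analogy, a list fetcher such as getMatches can pull a whole variables object out of its key. Here's one possible sketch; the endpoint, the query-string format, and the response shape are all assumptions:

```typescript
type MatchesVariables = {
  matchStatus: string;
  enabledLeagues: string[];
  orderBy: string;
};

// Hypothetical endpoint.
const MATCHES_ENDPOINT = '/api/matches';

export const getMatches = ({
  queryKey,
}: {
  queryKey: [string, MatchesVariables];
}) => {
  const [_key, variables] = queryKey;
  // Serialize the variables into the query string.
  const params = new URLSearchParams({
    matchStatus: variables.matchStatus,
    orderBy: variables.orderBy,
    enabledLeagues: variables.enabledLeagues.join(','),
  });
  return fetch(`${MATCHES_ENDPOINT}?${params}`)
    .then((res) => res.json())
    .then((data) => data.matches);
};
```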
I don't want to dive deep into the syntax aspects of the library. It has excellent documentation & a vibrant community. Take a look; there are plenty of goodies I'm not covering here.

Extract to a custom hook

In the next example we want the query to re-run when we change our state variables matchStatus, enabledLeagues, and orderBy. There is no need to duplicate the code in every component, so abstracting it into a custom hook is an excellent option.

Here's an example:

import {useQuery} from 'react-query';
import {pick} from 'lodash-es';
import {useMatchContext} from '../state';
import {getMatches} from '../api/queries';

export const useMatchesQuery = ({queryProps = {}} = {}) => {
  const {state} = useMatchContext();
  const queryVariables = pick(state, ['matchStatus', 'enabledLeagues', 'orderBy']);

  // We're using the alternative Object API here.
  // For more expressive calls, I find that it helps with readability
  const results = useQuery({
    queryFn: getMatches,
    queryKey: ['matches', queryVariables],
    staleTime: Infinity,
    onError: (error) => {
      // some reporting if needed
    },
    // override default configuration if needed
    ...queryProps,
  });

  const {status, error} = results;
  const data = results.data ?? [];
  return {status, data, error};
};

So we can re-use the same query, without having to account for the key or the fetcher function.

function Matches() {
  const {status, data, error} = useMatchesQuery();
  // ...
}

Canceling requests

Although we don't need to cancel requests to avoid outdated content (different params result in different queries), it is good practice to do so, so that our server won't have to take the extra load.

Here's an example. By clicking "Next page" 5 times on a paginated table, we'll run 5 queries. Thankfully, we won't have conflicts since all of them have distinct keys (thanks to the page number). Our loyal server will still respond 5 times, though.

So here's the question: do we value more having those 4 previous pages cached for the future, or minimizing the load on our server?

It depends. I prefer to cancel the requests as a general rule.
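In React Query v3, cancellation works by attaching a cancel method to the promise the query function returns; the library calls it when the query is unmounted before the request settles. Here's a sketch using fetch and AbortController, with a hypothetical endpoint:

```typescript
type CancelablePromise<T> = Promise<T> & {cancel?: () => void};

export const getPage = (page: number): CancelablePromise<unknown> => {
  const controller = new AbortController();

  // Aborting the controller cancels the underlying request.
  const promise: CancelablePromise<unknown> = fetch(
    `/api/matches?page=${page}`, // hypothetical endpoint
    {signal: controller.signal},
  ).then((res) => res.json());

  // React Query v3 invokes `cancel` on the returned promise
  // when the query is no longer needed.
  promise.cancel = () => controller.abort();

  return promise;
};
```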

How caching works

Let's take a closer look at the caching decision-making.

  • A new instance of useQuery('matches', getMatches) mounts.
    • Since no other queries have been made with this query + variable combination, this query will show a hard loading state and make a network request to fetch the data.
    • It will then cache the data using 'matches' and getMatches as the unique identifiers for that cache.
    • A stale invalidation is scheduled using the staleTime option as a delay (defaults to 0, or immediately).
  • A second instance of useQuery('matches', getMatches) mounts elsewhere.
    • Because this exact data already exists in the cache from the first instance of this query, that data is immediately returned from the cache.
  • Both instances of the useQuery('matches', getMatches) query are unmounted and no longer in use.
    • Since there are no more active instances of this query, a cache timeout is set using cacheTime to delete and garbage collect the query (defaults to 5 minutes).
  • No more instances of useQuery('matches', getMatches) appear within 5 minutes.
    • This query and its data are deleted and garbage collected.

So stale queries are automatically refetched:

  • Whenever their query keys change (this includes variables used in query key tuples),
  • When they are freshly mounted from not having any instances on the page,
  • Or when they are refetched via the query cache manually.

I like to set the staleTime to a certain amount when I'm confident that the data won't change soon.
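For example, for a list of finished matches that we know won't change, the relevant options might look like this (the numbers are illustrative):

```typescript
// Illustrative cache configuration for data that rarely changes.
export const finishedMatchesOptions = {
  // Consider the data fresh for 5 minutes: no automatic refetches in that window.
  staleTime: 5 * 60 * 1000,
  // Keep unused data around for 30 minutes before it is garbage collected.
  cacheTime: 30 * 60 * 1000,
};
```

These can be spread into any individual useQuery call, or set as defaults on the QueryClient.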


After using React Query in an application that primarily makes GET requests, I can say that it has helped immensely. Orchestrating remote data fetching causes a lot of boilerplate, and it makes sense to outsource it. This way we can focus more on writing features.

But what I never considered at first was how much it helps with refactoring. Migrating over to GraphQL endpoints is such a painless task; React Query doesn't care what protocol or client you're using.

Finally, you get to see your UI stores for what they are. I have heard testimonies of people dropping their dedicated state-management library in favor of simple Context. And it makes sense. Not in every case, but it's a viable path.

I like React Query because it promotes less complexity and more confidence in the code.

Give it a spin