Combining React Server Components with react-query for Easy Data Management
https://frontendmasters.com/blog/combining-react-server-components-with-react-query-for-easy-data-management/
Fri, 24 May 2024 15:27:11 +0000

React Server Components (RSC) are an exciting innovation in web development. In this post we’ll briefly introduce them, show what their purpose and benefits are, as well as their limitations. We’ll wrap up by showing how to pair them with react-query to help solve those limitations. Let’s get started!

Why RSC?

React Server Components, as the name implies, execute on the server—and the server alone. To see why this is significant, let’s take a whirlwind tour of how web development evolved over the last 10 or so years.

Prior to RSC, JavaScript frameworks (React, Svelte, Vue, Solid, etc) provided you with a component model for building your application. These components were capable of running on the server, but only as a synchronous operation for stringifying your components’ HTML to send down to the browser so it could server render your app. Your app would then render in the browser, again, at which point it would become interactive. With this model, the only way to load data was as a side-effect on the client. Waiting until your app reached your user’s browser before beginning to load data was slow and inefficient.

To solve this inefficiency, meta-frameworks like Next, SvelteKit, Remix, Nuxt, SolidStart, etc were created. These meta-frameworks provided various ways to load data, server-side, with that data being injected by the meta-framework into your component tree. This code was non-portable, and usually a little awkward. You’d have to define some sort of loader function that was associated with a given route, load data, and then expect it to show up in your component tree based on the rules of whatever meta-framework you were using.

This worked, but it wasn’t without issue. In addition to being framework-specific, composition also suffered; where typically components are explicitly passed props by whichever component renders them, now there are implicit props passed by the meta-framework, based on what you return from your loader. Nor was this setup the most flexible. A given page needs to know what data it needs up front, and request it all from the loader. With client-rendered SPAs we could just render whatever components we need, and let them fetch whatever data they need. This was awful for performance, but amazing for convenience.

RSC bridges that gap and gives us the best of both worlds. We get to ad hoc request whatever data we need from whichever component we’re rendering, but have that code execute on the server, without needing to wait for a round trip to the browser. Best of all, RSC also supports streaming, or more precisely, out-of-order streaming. If some of our data are slow to load, we can send the rest of the page, and push those data down to the browser, from the server, whenever they happen to be ready.

How do I use them?

At time of writing RSC are mostly only supported in Next.js, although the minimal framework Waku also supports it. Remix and TanStack Router are currently working on implementations, so stay tuned. I’ll show a very brief overview of what they look like in Next; consult those other frameworks when they ship. The ideas will be the same, even if the implementations differ slightly.

In Next, when using the new “app directory” (it’s literally a folder called “app” that you define your various routes in), pages are RSC by default. Any components imported by these pages are also RSC, as well as components imported by those components, and so on. When you’re ready to exit server components and switch to “client components,” you put the "use client" pragma at the top of a component. Now that component, and everything that component imports are client components. Check the Next docs for more info.
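As a minimal sketch of that boundary (this component is hypothetical, not from the demo): once a file starts with "use client", it and everything it imports become client code:

```tsx
"use client";

import { useState } from "react";

// This file, and everything it imports, now ships in the client bundle.
export function LikeButton() {
  // State and event handlers are fine here; they would be errors in an RSC.
  const [liked, setLiked] = useState(false);

  return (
    <button onClick={() => setLiked(!liked)}>
      {liked ? "Liked" : "Like"}
    </button>
  );
}
```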

How do React Server Components work?

React Server Components are just like regular React Components, but with a few differences. For starters, they can be async functions. The fact that you can await asynchronous operations right in the component makes them well suited for requesting data. Note that asynchronous client components are a thing coming soon to React, so this differentiation won’t exist for too long. The other big difference is that these components run only on the server. Client components (i.e. regular components) run on the server, and then re-run on the client in order to “hydrate.” That’s how frameworks like Next and Remix have always worked. But server components run only on the server.

Server components have no hydration, since, again, they only execute on the server. That means you can do things like connect directly to a database, or use server-only APIs. But it also means there are many things you can’t do in RSCs: you cannot use effects or state, you cannot set up event handlers, or use browser-specific APIs like localStorage. If you violate any of those rules you’ll get errors.

For a more thorough introduction to RSC, check the Next docs for the app directory, or depending on when you read this, the Remix or TanStack Router docs. But to keep this post a reasonable length, let’s keep the details in the docs, and see how we use them.

Let’s put together a very basic proof of concept demo app with RSC, see how data mutations work, and some of their limitations. We’ll then take that same app (still using RSC) and see how it looks with react-query.

The demo app

As I’ve done before, let’s put together a very basic, very ugly web page for searching some books, and also updating the titles of them. We’ll also show some other data on this page: the various subjects, and tags we have, which in theory we could apply to our books (if this were a real web app, instead of a demo).

The point is to show how RSC and react-query work, not make anything useful or beautiful, so temper your expectations 🙂 Here’s what it looks like:

The page has a search input which puts our search term into the url to filter the books shown. Each book also has an input attached to it for us to update that book’s title. Note the nav links at the top, for the RSC and RSC + react-query versions. While the pages look and behave identically as far as the user can see, the implementations are different, which we’ll get into.

The data is all static, but the books are put into a SQLite database, so we can update the data. The binary for the SQLite db should be in that repo, but you can always re-create it (and reset any updates you’ve made) by running npm run create-db.

Let’s dive in.

A note on caching

At time of writing, Next is about to release a new version with radically different caching APIs and defaults. We won’t cover any of that for this post. For the demo, I’ve disabled all caching. Each call to a page, or API endpoint will always run fresh from the server. The client cache will still work, so if you click between the two pages, Next will cache and display what you just saw, client-side. But refreshing the page will always recreate everything from the server.
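For reference, one way to opt a route out of caching in the app directory is the route segment config. (How the demo disables caching isn’t shown here, so treat this as an assumption.)

```typescript
// Route segment config: render this page dynamically on every request,
// skipping Next's data cache.
export const dynamic = "force-dynamic";
```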

Loading the data

There are API endpoints inside of the api folder for loading data and for updating the books. I’ve added artificial delays of a few hundred ms for each of these endpoints, since they’re either loading static data, or running simple queries from SQLite. There’s also console logging for these data, so you can see what’s loading when. This will be useful in a bit.

Here’s what the terminal console shows for a typical page load in either the RSC or RSC + react-query version.

Let’s look at the RSC version

RSC Version

import { Suspense } from "react";
// BookSearchForm, Books, Subjects, and Tags are imported from the demo's components.

export default function RSC(props: { searchParams: any }) {
  const search = props.searchParams.search || "";

  return (
    <section className="p-5">
      <h1 className="text-lg leading-none font-bold">Books page in RSC</h1>
      <Suspense fallback={<h1>Loading...</h1>}>
        <div className="flex flex-col gap-2 p-5">
          <BookSearchForm />
          <div className="flex">
            <div className="flex-[2] min-w-0">
              <Books search={search} />
            </div>
            <div className="flex-1 flex flex-col gap-8">
              <Subjects />
              <Tags />
            </div>
          </div>
        </div>
      </Suspense>
    </section>
  );
}

We have a simple page header. Then we see a Suspense boundary. This is how out-of-order streaming works with Next and RSC. Everything above the Suspense boundary will render immediately, and the Loading... message will show until all the data in the components below have finished loading. React knows what’s pending based on what you’ve awaited. The Books, Subjects, and Tags components all have fetches inside of them, which are awaited. We’ll look at one of them momentarily, but first note that, even though three different components are all requesting data, React will run them in parallel. Sibling nodes in the component tree can, and do, load data in parallel.

But if you ever have a parent and child component which both load data, the child component cannot even start until the parent has finished loading. If the child’s data fetch depends on the parent’s loaded data, this is unavoidable (you’d have to modify your backend to fix it). But if the data do not depend on each other, you can solve this waterfall by loading the data higher up in the component tree, and passing the various pieces down.
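Here’s a quick sketch of why hoisting helps; the loaders are stand-ins with artificial 50ms delays. Awaiting two independent requests one after the other costs the sum of their latencies, while starting both together costs only the slowest one:

```typescript
// Stand-in data loaders with artificial latency.
const delay = (ms: number) => new Promise<void>((res) => setTimeout(res, ms));

async function fetchBooks() {
  await delay(50);
  return ["The Name of the Wind"];
}

async function fetchSubjects() {
  await delay(50);
  return ["Fantasy"];
}

// Waterfall: the second request can't start until the first finishes (~100ms).
async function waterfall() {
  const books = await fetchBooks();
  const subjects = await fetchSubjects();
  return { books, subjects };
}

// Hoisted: both requests start immediately and run in parallel (~50ms).
async function hoisted() {
  const [books, subjects] = await Promise.all([fetchBooks(), fetchSubjects()]);
  return { books, subjects };
}
```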

Loading data

Let’s see the Books component:

import { FC } from "react";
import { BooksList } from "../components/BooksList";
import { BookEdit } from "../components/BookEditRSC";

export const Books: FC<{ search: string }> = async ({ search }) => {
  const booksResp = await fetch(`http://localhost:3000/api/books?search=${search}`, {
    next: {
      tags: ["books-query"],
    },
  });
  const { books } = await booksResp.json();

  return (
    <div>
      <BooksList books={books} BookEdit={BookEdit} />
    </div>
  );
};

We fetch and await our data right in the component! This was completely impossible before RSC. We then pass it down into the BooksList component. I separated this out so I could re-use the main BooksList component with both versions. The BookEdit prop I’m passing in is a React component that renders the textbox to update the title, and performs the update. This will differ between the RSC and react-query versions. More on that in a bit.

The next property in the fetch is Next-specific, and will be used to invalidate our data in just a moment. Experienced Next devs might spot a problem here, which we’ll get into very soon.

So you’ve loaded data, now what?

We have a page with three different RSCs which load and render data. Now what? If our page were just static content, we’d be done: we loaded data, and displayed it. If that’s your use case, RSCs are perfect for you, and you won’t need the rest of this post.

But what if you want to let your user interact with, and update your data?

Updating your data with Server Actions

To mutate data with RSC you use something called Server Actions. Check the docs for specifics, but here’s what our server action looks like:

"use server";

import { revalidateTag } from "next/cache";

export const saveBook = async (id: number, title: string) => {
  await fetch("http://localhost:3000/api/books/update", {
    method: "POST",
    body: JSON.stringify({
      id,
      title,
    }),
  });
  revalidateTag("books-query");
};

Note the "use server" pragma at the top. That means the function we export is now a server action. saveBook takes an id, and a title; it posts to an endpoint to update our book in SQLite, and then calls revalidateTag with the same tag we passed to our fetch, before.

In real life, we wouldn’t even need the books/update endpoint. We’d just do the work right in the server action. But we’ll be re-using that endpoint in a bit, when we update data without server actions, and it’s nice to keep these code samples clean. The books/update endpoint opens up SQLite, and executes an UPDATE.
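For illustration, the core of that endpoint is roughly the following, with a Map standing in for the SQLite table (the real endpoint runs an actual UPDATE statement; these names are assumptions):

```typescript
type Book = { id: number; title: string };

// A Map standing in for the books table in SQLite.
const booksTable = new Map<number, Book>([[1, { id: 1, title: "Old title" }]]);

// The moral equivalent of: UPDATE books SET title = ? WHERE id = ?
function updateBookTitle(id: number, title: string): boolean {
  const book = booksTable.get(id);
  if (!book) return false;
  booksTable.set(id, { ...book, title });
  return true;
}
```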

Let’s see the BookEdit component we use with RSC:

"use client";

import { FC, useRef, useTransition } from "react";
import { saveBook } from "../serverActions";
import { BookEditProps } from "../types";

export const BookEdit: FC<BookEditProps> = (props) => {
  const { book } = props;
  const titleRef = useRef<HTMLInputElement>(null);
  const [saving, startSaving] = useTransition();

  function doSave() {
    startSaving(async () => {
      await saveBook(book.id, titleRef.current!.value);
    });
  }

  return (
    <div className="flex gap-2">
      <input className="border rounded border-gray-600 p-1" ref={titleRef} defaultValue={book.title} />
      <button className="rounded border border-gray-600 p-1 bg-blue-300" disabled={saving} onClick={doSave}>
        {saving ? "Saving..." : "Save"}
      </button>
    </div>
  );
};

It’s a client component. We import the server action, and then just call it in a button’s event handler, wrapped in a transition so we can have saving state.

Stop and consider just how radical this is, and what React and Next are doing under the covers. All we did was create a vanilla function. We then imported that function, and called it from a button’s event handler. But under the covers a network request is made to an endpoint that’s synthesized for us. And then the revalidateTag tells Next what’s changed, so our RSC can re-run, re-request data, and send down updated markup.

Not only that, but all this happens in one round trip with the server.

This is an incredible engineering achievement, and it works! If you update one of the titles and click save, you’ll see updated data show up in a moment (the update has an artificial delay, since we’re only updating a local SQLite instance).

What’s the catch?

This seems too good to be true. What’s the catch? Well, let’s see what the terminal shows when we update a book:

Ummmm, why is all of our data re-loading? We only called revalidateTag on our books, not our subjects or tags. The problem is that revalidateTag doesn’t tell Next what to reload, it tells it what to eject from its cache. The fact is, Next needs to reload everything for the current page when we call revalidateTag. This makes sense when you think about what’s really happening. These server components are not stateful; they run on the server, but they don’t live on the server. The request executes on our server, those RSCs render, and send down the markup, and that’s that. The component tree does not live on indefinitely on the server; our servers wouldn’t scale very well if they did!

So how do we solve this? For a use case like this, the solution would be to not turn off caching. We’d lean on Next’s caching mechanisms, whatever they look like when you happen to read this. We’d cache each of these data with different tags, and invalidate the tag related to the data we just updated.

The whole RSC tree will still re-render when we do that, but the requests for cached data would run quickly. Personally, I’m of the view that caching should be a performance tweak you add, as needed; it should not be a sine qua non for avoiding slow updates.

Unfortunately, there’s yet another problem with server actions: they run serially. Only one server action can be in flight at a time; they’ll queue if you try to violate this constraint.

This sounds genuinely unbelievable; but it’s true. If we artificially slow down our update a LOT, and then quickly click 5 different save buttons, we’ll see horrifying things in our network tab. If the extreme slowdown on the update endpoint seems unfair on my part, remember: you should never, ever assume your network will be fast, or even reliable. Occasional, slow network requests are inevitable, and server actions will do the worst possible thing under those circumstances.

This is a known issue, and will presumably be fixed at some point. But the re-loading without caching issue is unavoidable with how Next app directory is designed.

Just to be clear, server actions are still, even with these limitations, outstanding for some use cases. If you have a web page with a form and a submit button, server actions are outstanding; none of these limitations will matter (assuming your form doesn’t depend on a bunch of different data sources). In fact, server actions go especially well with forms. You can even set the action of a form (in Next) directly to a server action. See the docs for more info, as well as for related hooks like useFormStatus.
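As a sketch of that form pattern (the field name and inline action body are assumptions, reusing the saveBook action from earlier):

```tsx
import { saveBook } from "../serverActions";

// A server component whose form submits straight to a server action.
// Next serializes the FormData and invokes the action on the server.
export function BookForm({ book }: { book: { id: number; title: string } }) {
  async function save(formData: FormData) {
    "use server";
    await saveBook(book.id, formData.get("title") as string);
  }

  return (
    <form action={save}>
      <input name="title" defaultValue={book.title} />
      <button type="submit">Save</button>
    </form>
  );
}
```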

But back to our app. We don’t have a page with a single form and no data sources. We have lots of little forms, on a page with lots of data sources. Server actions won’t work well here, so let’s see an alternative.

react-query

React Query is probably the most mature, well-maintained data management library in the React ecosystem. Unsurprisingly, it also works well with RSC.

To use react-query we’ll need to install two packages: npm i @tanstack/react-query @tanstack/react-query-next-experimental. Don’t let the experimental in the name scare you; it’s been out for a while, and works well.

Next we’ll make a Providers component, and render it from our root layout:

"use client";

import { QueryClient, QueryClientProvider } from "@tanstack/react-query";
import { ReactQueryStreamedHydration } from "@tanstack/react-query-next-experimental";
import { FC, PropsWithChildren, useState } from "react";

export const Providers: FC<PropsWithChildren<{}>> = ({ children }) => {
  const [queryClient] = useState(() => new QueryClient());

  return (
    <QueryClientProvider client={queryClient}>
      <ReactQueryStreamedHydration>{children}</ReactQueryStreamedHydration>
    </QueryClientProvider>
  );
};

Now we’re ready to go.

Loading data with react-query

The long and short of it is that we use the useSuspenseQuery hook from inside client components. Let’s see some code. Here’s the Books component from the react-query version of our app.

"use client";

import { FC } from "react";
import { useSuspenseQuery } from "@tanstack/react-query";
import { BooksList } from "../components/BooksList";
import { BookEdit } from "../components/BookEditReactQuery";
import { useSearchParams } from "next/navigation";

export const Books: FC<{}> = () => {
  const params = useSearchParams();
  const search = params.get("search") ?? "";

  const { data } = useSuspenseQuery({
    queryKey: ["books-query", search],
    queryFn: async () => {
      const booksResp = await fetch(`http://localhost:3000/api/books?search=${search}`);
      const { books } = await booksResp.json();

      return { books };
    },
  });

  const { books } = data;

  return (
    <div>
      <BooksList books={books} BookEdit={BookEdit} />
    </div>
  );
};

Don’t let the "use client" pragma fool you. This component still renders on the server, and that fetch also happens on the server during the initial load of the page.

As the url changes, the useSearchParams result changes, and a new query is fired off by our useSuspenseQuery hook, from the browser. This would normally suspend the page, but I wrap the call to router.push in startTransition, so the existing content stays on the screen. Check the repo for details.

Updating data with react-query

We already have the /books/update endpoint for updating a book. How do we tell react-query to re-run whichever queries were attached to that data? The answer is the queryClient.invalidateQueries API. Let’s take a look at the BookEdit component for react-query:

"use client";

import { FC, useRef, useTransition } from "react";
import { BookEditProps } from "../types";
import { useQueryClient } from "@tanstack/react-query";

export const BookEdit: FC<BookEditProps> = (props) => {
  const { book } = props;
  const titleRef = useRef<HTMLInputElement>(null);
  const queryClient = useQueryClient();
  const [saving, startSaving] = useTransition();

  const saveBook = async (id: number, newTitle: string) => {
    startSaving(async () => {
      await fetch("/api/books/update", {
        method: "POST",
        body: JSON.stringify({
          id,
          title: newTitle,
        }),
      });

      await queryClient.invalidateQueries({ queryKey: ["books-query"] });
    });
  };

  return (
    <div className="flex gap-2">
      <input className="border rounded border-gray-600 p-1" ref={titleRef} defaultValue={book.title} />
      <button className="rounded border border-gray-600 p-1 bg-blue-300" disabled={saving} onClick={() => saveBook(book.id, titleRef.current!.value)}>
        {saving ? "Saving..." : "Save"}
      </button>
    </div>
  );
};

The saveBook function calls out to the same book-updating endpoint as before. We then call invalidateQueries with the first part of the query key, books-query. Remember, the actual queryKey we used in our query hook was queryKey: ["books-query", search]. Calling invalidateQueries with the first piece of that key will invalidate everything that starts with that key, and will immediately re-fire any of those queries which are still on the page. So if you started out with an empty search, then searched for X, then Y, then Z, and updated a book, this code will clear the cache of all those entries, and then immediately re-run the Z query and update our UI.
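To make the prefix rule concrete, here’s a tiny model of the matching react-query performs (not its real implementation; just the rule it follows):

```typescript
type QueryKey = readonly unknown[];

// invalidateQueries({ queryKey: prefix }) matches any cached key whose
// leading elements equal the prefix, element by element.
function matchesPrefix(key: QueryKey, prefix: QueryKey): boolean {
  return prefix.every((part, i) => key[i] === part);
}

const cachedKeys: QueryKey[] = [
  ["books-query", ""],
  ["books-query", "X"],
  ["books-query", "Z"],
  ["subjects-query"],
];

// Invalidating ["books-query"] hits every search variant, but leaves
// the subjects query untouched.
const invalidated = cachedKeys.filter((k) => matchesPrefix(k, ["books-query"]));
```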

And it works.

What’s the catch?

The downside here is that we need two roundtrips from the browser to the server. The first roundtrip updates our book, and when that finishes, we then, from the browser, call invalidateQueries, which causes react-query to send a new network request for the updated data.

This is a surprisingly small price to pay. Remember, with server actions, calling revalidateTag will cause your entire component tree to re-render, which by extension will re-request all their various data. If you don’t have everything cached (on the server) properly, it’s very easy for this single round trip to take longer than the two round trips react-query needs. I say this from experience. I recently helped a friend / founder build a financial dashboard app. I had react-query set up just like this, and also implemented a server action to update a piece of data. And I had the same data rendered, and updated twice: once in an RSC, and again adjacently in a client component from a useSuspenseQuery hook. I basically fired off a race to see which would update first, certain the server action would, but was shocked to see react-query win. I initially thought I’d done something wrong until I realized what was happening (and hastened to roll back my server action work).

Playing on hard mode

There’s one obnoxious imperfection hiding. Let’s find it, and fix it.

Fixing routing when using react-query

Remember, when we search our books, I’m calling router.push which adds a querystring to the url, which causes useSearchParams() to update, which causes react-query to query new data. Let’s look at the network tab when this happens.

Before our books endpoint can be called, other requests are happening. This is the navigation we caused by calling router.push. Next is essentially rendering a new page: the page we’re already on, except with a new querystring. Next is right to assume it needs to do this, but in practice react-query is handling our data. We don’t actually need, or want, Next to render this new page; we just want the url to update, so react-query can request new data. If you’re wondering why it navigates to our new, changed page twice, well, so am I. Apparently the RSC identifier is being changed, but I have no idea why. If anyone does, please reach out to me.

Next has no solutions for this.

The closest Next can come is letting you use window.history.pushState. That will trigger a client-side url update, similar to what was called shallow routing in prior versions of Next. This does in fact work; however, it’s not integrated with transitions for some reason. So when it’s called, and our useSuspenseQuery hook updates, our current UI will suspend, and our nearest Suspense boundary will show its fallback. That’s an awful user experience. I’ve reported this bug here; hopefully it gets a fix soon.

Next may not have a solution, but react-query does. If you think about it, we already know what query we need to run; we’re just stuck waiting on Next to finish navigating to an unchanging RSC page. What if we could pre-fetch this new endpoint request, so it’s already running by the time Next finishes rendering our new (unchanged) page? We can, since react-query has an API just for this. Let’s see how.

Let’s look at the react-query search form component. In particular, the part which triggers a new navigation:

startTransition(() => {
  const search = searchParams.get("search") ?? "";
  queryClient.prefetchQuery({
    queryKey: ["books-query", search],
    queryFn: async () => {
      const booksResp = await fetch(`http://localhost:3000/api/books?search=${search}`);
      const { books } = await booksResp.json();

      return { books };
    },
  });

  router.push(currentPath + (queryString ? "?" : "") + queryString);
});

The call to queryClient.prefetchQuery takes the same options as useSuspenseQuery, and runs the query immediately. Later, when Next has finished and react-query attempts to run the same query, it’s smart enough to see that the request is already in flight, so it just latches onto that active promise and uses the result.

Here’s our network chart now:

Now nothing is delaying our endpoint request from firing. And since all data loading is happening in react-query, that navigation to our RSC page (or even two navigations) should be very, very fast.

Removing the duplication

If you’re thinking the duplication between the prefetch and the query itself is gross and fragile, you’re right. In a real app you’d move this boilerplate into a helper function, something like this:

export const makeBooksSearchQuery = (search: string) => {
  return {
    queryKey: ["books-query", search ?? ""],
    queryFn: async () => {
      const booksResp = await fetch(`http://localhost:3000/api/books?search=${search}`);
      const { books } = await booksResp.json();

      return { books };
    },
  };
};

and then use it:

const { data } = useSuspenseQuery(makeBooksSearchQuery(search));

as needed:

queryClient.prefetchQuery(makeBooksSearchQuery(search));

But for this demo I opted for duplication and simplicity.

Before moving on, let’s take a moment to point out that all of this was only necessary because we had data loading tied to the URL. If we just clicked a button to set client-side state and trigger a new data request, none of this would be an issue. Next would not route anywhere, and our client-side state update would trigger a new react-query request.

What about bundle size?

When we did our react-query implementation, we changed our Books component to be a client component by adding the "use client" pragma. If you’re wondering whether that will cause an increase in our bundle size, you’re right. In the RSC version, that component only ever ran on the server. As a client component, it now has to run in both places, which will increase our bundle size a bit.

Honestly, I wouldn’t worry about it, especially for apps like this, with lots of different data sources that are interactive and updating. This demo only had a single mutation, but it was just that: a demo. If we were to build this app for real, there’d be many mutation points, each with potentially multiple queries in need of invalidation.

If you’re curious, it’s technically possible to get the best of both worlds. You could load data in an RSC, and then pass that data to the regular useQuery hook via the initialData option. You can check the docs for more info, but I honestly don’t think it’s worth it. You’d need to define your data loading (the fetch call) in two places, or build an isomorphic fetch helper to share between them. And with actual data loading happening in RSCs, any navigation back to the same page (i.e. for querystrings) would re-fire those queries, even though react-query is already running those query updates client-side. To fix that, you’d have to be certain to only ever use window.history.pushState like we talked about. The useQuery hook doesn’t suspend, so you wouldn’t need transitions for those URL changes. That’s good, since pushState won’t suspend your content, but now you have to manually track all your loading states: if you have three pieces of data you want loaded before revealing a UI (like we did above), you’d have to track and aggregate those three loading states yourself. It would work, but I highly doubt the complexity would be worth it. Just live with the very marginal bundle size increase.

Just use client components and let react-query remove the complexity with useSuspenseQuery.

Wrapping up

This was a long post, but I hope it was valuable. Next’s app directory is an incredible piece of infrastructure that lets us request data on the server, render, and even stream component content from that data, all using the single React component model we’re used to.

There are some things to get right, but depending on the type of app you’re building, react-query can simplify things a great deal.

🆕 Update

Since publishing this post it was brought to my attention that these fetch calls from the server will not include cookie info. This is by design in Next, unfortunately. Track this issue for updates.

Unfortunately those cookies are needed in practice, so your auth info can be passed along with your data requests on the backend.

The best workaround here would be to read your cookies in the root RSC, and then pass them to the Providers component we already have (the one setting up our react-query provider) to be placed onto context. This, by itself, would expose our secure, likely httpOnly cookies in our client bundle, which is bad. Fortunately there’s a library that lets you encrypt them so they only ever show up on the server.

You’d read these cookie values in all your client components that use useSuspenseQuery, and pass them along in your fetch calls on the server. On the client, where those values would be empty, you’d do nothing, and rely on your browser’s fetch to send the cookies along.

Prefetching When Server Loading Won’t Do
https://frontendmasters.com/blog/prefetching-when-server-loading-wont-do/
Wed, 15 May 2024 23:26:46 +0000

This is a post about a boring* topic: loading data.

(* Just kidding it will be amazing and engaging.)

Not how to load data; instead we’ll take a step back and look at where to load data. Not in any particular framework, either: this is going to be more broadly about data loading in different web application architectures, and particularly how that impacts performance.

We’ll start with client-rendered sites and talk about some of the negative performance characteristics they may have. Then we’ll move on to server-rendered apps, and then to the lesser-known out-of-order streaming model. To wrap up, we’ll talk about a surprisingly old, rarely talked about way to effectively load slow data in a server-rendered application. Let’s get started!

Client Rendering

Application metaframeworks like Next and SvelteKit have become incredibly popular. In addition to offering developer conveniences like file system-based routing and scaffolding of API endpoints, they also, more importantly, allow you to server render your application.

Why is server rendering so important? Let’s take a look at how the world looks with the opposite: client-rendered web applications, commonly referred to as “single page applications” or SPAs. Let’s start with a simplified diagram of what a typical request for a page looks like in an SPA.

The browser makes a request to your site. Let’s call it yoursite.io. With an SPA, it usually sends down a single, mostly empty HTML page, which has whatever script and style tags needed to run the site. This shell of a page might display your company logo, your static header, your copyright message in the footer, etc. But mostly it exists to load and run JavaScript, which will build the “real” site.

This is why these sites are called “single page” applications. There’s a single web page for the whole app, which runs code on the client to detect URL changes, and request and render whatever new UI is needed.

Back to our diagram. The initial web page was sent back from the web server as HTML. Now what? The browser will parse that HTML and find script tags. These script tags contain our application code, our JavaScript framework, etc. The browser will send requests back to the web server to load these scripts. Once the browser gets them back, it’ll parse and execute them, and in so doing begin running your application code.

At this point whatever client-side router you’re using (i.e. react-router, TanStack Router, etc.) will render your current page.

But there’s no data yet!

So you’re probably displaying loading spinners or skeleton screens or the like. To get the data, your client-side code will now make yet another request to your server to fetch whatever data are needed, so you can display your real, finished page to your user. This could be via a plain old fetchreact-query, or whatever. Those details won’t concern us here.

SSR To The Rescue

There is a pretty clear solution here. The server already has the URL of the request, so instead of only returning that shell page, it could (should) request the data as well, get the page all ready to go, and send down the complete page.

Somehow.

This is how the web always worked with tools like PHP or ASP.NET. But when your app is written with a client-side JavaScript framework like React or Svelte, it’s surprisingly tricky. These frameworks all have APIs for stringifying a component tree into HTML on the server, so that markup can be sent down to the browser. But if a component in the middle of that component tree needs data, how do you load it on the server, and then somehow inject it where it’s needed? And how do you have the client acknowledge that data, and not re-request it? And of course, once you solve these problems and render your component tree, with data, on the server, you still need to re-render this component tree on the client, so your client-side code, like event handlers and such, starts working.

This act of re-rendering the app client side is called hydration. Once it’s happened, we say that our app is interactive. Getting these things right is one of the main benefits modern application meta-frameworks like Next and SvelteKit provide.

Let’s take a look at what our request looks like in this server-rendered setup:

That’s great. The user sees the full page much, much sooner. Sure, it’s not interactive yet, but if you’re not shipping down obscene amounts of JavaScript, there’s a really good chance hydration will finish before the user can manage to click on any buttons.

We won’t get into all this, but Google themselves tell you this is much better for SEO as well.

So, what’s the catch? Well, what if our data are slow to load. Maybe our database is busy. Maybe it’s a huge request. Maybe there is a network hiccup. Or maybe you just depend on slow services you can’t control. It’s not rare.

This might be worse than the SPA we started with. Even though we needed multiple round trips to the server to get data, at least we were displaying a shell of a page quickly. Here, the initial request to the server will just hang and wait as long as needed for that data to load on the server, before sending down the full page. To the user, their browser (and your page) could appear unresponsive, and they might just give up and go back.

Out of Order Streaming

What if we could have the best of all worlds? What if we could server render, like we saw, but if some data are slow to load, ship the rest of the page with the data that we have, and let the server push down the remaining data when ready? This is called streaming, or more precisely, out-of-order streaming (streaming without the out-of-order part is a separate, much more limited thing which we won’t cover here).

Let’s take a hypothetical example where the data abd, and data xyz are slow to load.

With out-of-order streaming we can load the to-do data on the server, and send the page down to the user, with just that data, immediately. The other two pieces of data haven’t loaded yet, so our UI will display some manner of loading indicator. When the next piece of data is ready, the server pushes it down:

What’s the catch?

So does this solve all of our problems? Yes, but… only if the framework you’re using supports it. To stream with the Next.js app directory you’ll use Suspense components with RSC. With SvelteKit you just return a promise from your loader. Remix supports this too, with an API that’s in the process of changing, so check their docs. SolidStart will also support this, but as of this writing that entire project is still in beta, so check its docs when it comes out.
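For instance, a SvelteKit load function that streams one slow piece of data might look roughly like this (the endpoint names are made up, and exact streaming behavior varies by SvelteKit version, so check the docs):

```typescript
// +page.server.ts (sketch). A value you await blocks the initial render;
// a promise returned as-is can be streamed to the client when it resolves.
export async function load({ fetch }: { fetch: typeof globalThis.fetch }) {
  return {
    // Fast data: awaited, included in the server-rendered page.
    todos: await fetch("/api/todos").then((r) => r.json()),
    // Slow data: not awaited, streamed down when ready.
    stats: fetch("/api/slow-stats").then((r) => r.json()),
  };
}
```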

Some frameworks do not support this, like Astro and Next if you’re using the legacy pages directory.

What if we’re using those projects, and we have some dependencies on data which are slow to load? Are we stuck rendering this data in client code, after hydration?

Prefetching to the rescue

The web platform has a feature called prefetching. This lets us add a <link> tag to the <head> section of our HTML page, with a rel="prefetch" attribute, and an href attribute of the URL we want to prefetch. We can put service endpoint calls here, so long as they use the GET verb. If you need to pre-fetch data from an endpoint that uses POST, you’ll need to proxy it through an endpoint that uses GET. It’s worth noting that you can also prefetch with an HTTP header if that’s more convenient; see this post for more information.

When we do this, our page will start pre-fetching our resources as soon as the browser parses the link tag. Since it’s in the <head>, that means it’ll start pre-fetching at the same time our scripts and stylesheets are requested. So we no longer need to wait until our script tags load, parse, and hydrate our app. Now the data we need will start pre-fetching immediately. When hydration does complete, and our application code requests those same endpoints, the browser will be smart enough to serve that data from the prefetch cache.

Let’s see prefetching in action

To see pre-fetching in action, we’ll use Astro. Astro is a wonderful web framework that doesn’t get nearly enough attention. One of the very few things it can’t do is out-of-order streaming (for now). But let’s see how we can improve life with pre-fetching.

The repo for the code I’ll be showing is here. It’s not deployed anywhere, for fear of this blog post getting popular, and me getting a big bill from Vercel. But the project has no external dependencies, so you can clone, install, and run locally. You could also deploy this to Vercel yourself if you really want to see it in action.

I whipped up a very basic, very ugly web page that hits some endpoints to pull down a hypothetical list of books and some metadata about the library, then renders the books once ready. It looks like this:

The endpoints return static data, which is why there are no external dependencies. I added a manual delay of 700ms to these endpoints (sometimes you have slow services and there’s nothing you can do about it), and I also installed and imported some large JavaScript libraries (d3, framer-motion, and recharts) to make sure hydration would take a moment or two, like with most production applications. And since these endpoints are slow, they’re a poor candidate for server fetching.
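For reference, each endpoint is just a handler returning static JSON after an artificial delay. A sketch along those lines (the file name and book data here are made up; the real code is in the linked repo):

```typescript
// src/pages/api/books.ts — sketch of an Astro API route with a
// deliberate 700ms delay to simulate a slow backing service.
const delay = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

export async function GET() {
  await delay(700); // sometimes your services are just slow

  const books = [
    { id: 1, title: "A Hypothetical Book" },
    { id: 2, title: "Another Hypothetical Book" },
  ];

  return new Response(JSON.stringify(books), {
    headers: { "Content-Type": "application/json" },
  });
}
```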

So let’s request them client-side, see the performance of the page, and then add pre-fetching to see how that improves things.

The client-side fetching looks like this:

useEffect(() => {
  fetch("/api/books")
    .then((resp) => resp.json())
    .then((books) => {
      setBooks(books);
    });

  fetch("/api/books-count")
    .then((resp) => resp.json())
    .then((booksCountResp) => {
      setCount(booksCountResp.count);
    });
}, []);

Nothing fancy. Nothing particularly resilient here. Not even any error handling. But perfect for our purposes.

Network diagram without pre-fetching

Running this project, deployed to Vercel, my network diagram looks like this:

Notice all of the script and style resources, which need to be requested and processed before our client-side fetches start (on the last two lines).

Adding pre-fetching

I’ve added a second page to this project, called with-prefetch, which is the same as the index page. Except now, let’s see how we can add some <link> tags to request these resources sooner.

First, in the root layout, let’s add this in the head section:

<slot name="head"></slot>

This gives us the ability to (but does not require us to) add content to our HTML document’s <head>. This is exactly what we need. Now we can make a PrefetchBooks React component:

import type { FC } from "react";

export const PrefetchBooks: FC<{}> = (props) => {
  return (
    <>
      <link rel="prefetch" href="/api/books" as="fetch" />
      <link rel="prefetch" href="/api/books-count" as="fetch" />
    </>
  );
};

Then render it in our prefetching page, like so:

<PrefetchBooks slot="head" />

Note the slot attribute on the React component, which tells Astro (not React) where to put this content.

With that, if we run that page, we’ll see our link tags in the head:

Now let’s look at our updated network diagram:

Notice our endpoint calls now start immediately, on lines 3 and 4. Then later, in the last two lines, we see the real fetches being executed, at which point they just latch onto the prefetch calls already in flight.

Let’s put some hard numbers on this. When I ran a webpagetest mobile Lighthouse analysis on the version of this page without the pre-fetch, I got the following.

Note the LCP (Largest Contentful Paint) value. That’s essentially telling us when the page looks finished to a user. Remember, the Lighthouse test simulates your site on the slowest mobile device imaginable, which is why it’s 4.6 seconds.

When I re-run the same test on the pre-fetched version, things improved by about a second:

Definitely much better, but still not good; it never will be until you can get your backend fast. But with some intelligent, targeted pre-fetching, you can at least improve things.

Parting thoughts

Hopefully all of your back-end data requirements will be forever fast in your developer journeys. But when they’re not, prefetching resources is a useful tool to keep in your toolbelt.

]]>
https://frontendmasters.com/blog/prefetching-when-server-loading-wont-do/feed/ 0 2200
Sending My Respect to Next.js (and Vercel) https://frontendmasters.com/blog/respect-to-next-js-and-vercel/ https://frontendmasters.com/blog/respect-to-next-js-and-vercel/#comments Tue, 20 Feb 2024 14:20:31 +0000 https://frontendmasters.com/blog/?p=871 Today, I did some maintenance work on a Next.js course website (we have tons of them built on Next.js), and I thought to myself:

“Wow, this framework has been around for a long time and continues to evolve. It is certainly not a one-hit-wonder.”

For context, I’m generally more of a purist, opting to use vanilla JavaScript and building on the web platform in most situations. Even so, I wanted to acknowledge my respect for the framework and those who have worked hard to develop and evolve Next.js (and, more broadly, React). It is certainly giving us new ways to think about building web apps.

Next.js wasn’t always the king.

To new folks in the industry: Next.js wasn’t always on top. For instance, I remember when Gatsby was constantly in the news as one of the first significant meta frameworks built on React. It was the first framework to build static sites with JSX on the front and back end. As folks hit limits in the framework and pushed against its edges, it could not come up with solutions and eventually fell out of favor.

Today, Astro is filling that gap of static sites. But if you want a complete application development ecosystem on this paradigm, Next.js is currently it.

Frontend Masters has been teaching Next.js since 2020.

Next.js has been building for years – our first course on Next.js was released back in 2020. Our Node.js teacher, Scott Moss, loved the framework and convinced us to continue releasing course updates as Next.js evolved. After the framework released the App Router and Server Actions, Scott returned to teach v3 of the Intro to Next.js course.

It takes a lot to remain in the hearts of developers for years.

Remaining in the zeitgeist is always impressive to see. And even more so now, when everyone is focused on what the framework is going to do Next (see what I did there).

Drawing the boundaries thinner between infrastructure, the server, and, ultimately, the client is a daunting task. Even if it’s not the best approach for every problem, it pushes the boundaries of what’s possible through an approach that respects interactivity as a first-class citizen.

Note that when I say Next is not the best approach for everything, that’s more so because of Node.js as a platform. Frontend Masters is built on Go, which we think fits our needs best for our given set of challenges. These ideas of putting interactivity first will eventually make their way into other frameworks and platforms as time passes.

We are excited by the new ideas emerging from the React/Next.js community!

At Frontend Masters, we have built a lot on Next.js: course websites, full-stack projects, and courses. We will continue to release courses on lower levels of the stack and welcome the ideas that Next.js is bringing to our web developer ecosystem!

Whatever happens from here, I wanted to write this little piece to give my respect and reassure folks in the community: we want developers to build the best apps possible and their dream careers. And if that’s increasingly on Next.js, we’ll be here for it, doing our best to teach the framework and everything underneath it (JavaScript, TypeScript, React, Browser APIs, etc).

✌️

]]>
https://frontendmasters.com/blog/respect-to-next-js-and-vercel/feed/ 3 871