Prefetching When Server Loading Won’t Do

This is a post about a boring* topic: loading data.

(* Just kidding it will be amazing and engaging.)

Not how to load data, but instead we’ll take a step back and look at where to load data. Not in any particular framework, either; this is going to be more broadly about data loading in different web application architectures, and particularly how that impacts performance.

We’ll start with client-rendered sites and talk about some of the negative performance characteristics they may have. Then we’ll move on to server-rendered apps, and then to the lesser-known out-of-order streaming model. To wrap up, we’ll talk about a surprisingly old, rarely talked about way to effectively load slow data in a server-rendered application. Let’s get started!

Client Rendering

Application metaframeworks like Next and SvelteKit have become incredibly popular. In addition to offering developer conveniences like file system-based routing and scaffolding of API endpoints, they also, more importantly, allow you to server render your application.

Why is server rendering so important? Let’s take a look at how the world looks with the opposite: client-rendered web applications, commonly referred to as “single page applications” or SPAs. Let’s start with a simplified diagram of what a typical request for a page looks like in an SPA.

The browser makes a request to your site. Let’s call it yoursite.io. With an SPA, it usually sends down a single, mostly empty HTML page, which has whatever script and style tags needed to run the site. This shell of a page might display your company logo, your static header, your copyright message in the footer, etc. But mostly it exists to load and run JavaScript, which will build the “real” site.
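To make that concrete, here’s a rough sketch of what such a shell page might look like (the asset file names are made up):

<!doctype html>
<html>
  <head>
    <link rel="stylesheet" href="/assets/site.css" />
    <script type="module" src="/assets/app.js"></script>
  </head>
  <body>
    <header>Your Logo</header>
    <!-- The JavaScript framework renders the real UI into this empty element -->
    <div id="root"></div>
    <footer>© yoursite.io</footer>
  </body>
</html>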

This is why these sites are called “single page” applications. There’s a single web page for the whole app, which runs code on the client to detect URL changes, and request and render whatever new UI is needed.

Back to our diagram. The initial web page was sent back from the web server as HTML. Now what? The browser will parse that HTML and find script tags. These script tags contain our application code, our JavaScript framework, etc. The browser will send requests back to the web server to load these scripts. Once the browser gets them back, it’ll parse and execute them, and in doing so begin running your application code.

At this point whatever client-side router you’re using (e.g. react-router, TanStack Router, etc.) will render your current page.

But there’s no data yet!

So you’re probably displaying loading spinners or skeleton screens or the like. To get the data, your client-side code will now make yet another request to your server to fetch whatever data are needed, so you can display your real, finished page to your user. This could be via a plain old fetch, react-query, or whatever. Those details won’t concern us here.

SSR To The Rescue

There is a pretty clear solution here. The server already has the URL of the request, so instead of only returning that shell page, it could (should) request the data as well, get the page all ready to go, and send down the complete page.

Somehow.

This is how the web always worked with tools like PHP or ASP.NET. But when your app is written with a client-side JavaScript framework like React or Svelte, it’s surprisingly tricky. These frameworks all have APIs for stringifying a component tree into HTML on the server, so that markup can be sent down to the browser. But if a component in the middle of that component tree needs data, how do you load it on the server, and then somehow inject it where it’s needed? And how do you have the client acknowledge that data, and not re-request it? And of course, once you solve these problems and render your component tree, with data, on the server, you still need to re-render this component tree on the client, so your client-side code, like event handlers and such, starts working.
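As a rough illustration of the server half of that, here’s a minimal sketch using React’s low-level primitives (not how Next or SvelteKit actually wire it up; App and loadDataFor are hypothetical names):

// server.tsx: a minimal sketch, not production code
import { renderToString } from "react-dom/server";
import { App } from "./App"; // assumed application root component

// Hypothetical data loader for the requested URL
async function loadDataFor(url: string) {
  return { todos: [] as string[] };
}

export async function renderPage(url: string) {
  const data = await loadDataFor(url);
  const html = renderToString(<App initialData={data} />);
  // Embed the data in the HTML so the client can reuse it instead of re-requesting it
  return `<div id="root">${html}</div>
<script>window.__DATA__ = ${JSON.stringify(data)}</script>`;
}

On the client, the framework then calls something like hydrateRoot(document.getElementById("root"), <App initialData={window.__DATA__} />) from react-dom/client, which is the re-rendering step described next.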

This act of re-rendering the app client side is called hydration. Once it’s happened, we say that our app is interactive. Getting these things right is one of the main benefits modern application meta-frameworks like Next and SvelteKit provide.

Let’s take a look at what our request looks like in this server-rendered setup:

That’s great. The user sees the full page much, much sooner. Sure, it’s not interactive yet, but if you’re not shipping down obscene amounts of JavaScript, there’s a really good chance hydration will finish before the user can manage to click on any buttons.

We won’t get into all this, but Google themselves tell you this is much better for SEO as well.

So, what’s the catch? Well, what if our data are slow to load? Maybe our database is busy. Maybe it’s a huge request. Maybe there’s a network hiccup. Or maybe you just depend on slow services you can’t control. It’s not rare.

This might be worse than the SPA we started with. Even though we needed multiple round trips to the server to get data, at least we were displaying a shell of a page quickly. Here, the initial request to the server will just hang and wait as long as needed for that data to load on the server, before sending down the full page. To the user, their browser (and your page) could appear unresponsive, and they might just give up and go back.

Out of Order Streaming

What if we could have the best of all worlds? We could server render, like we saw, but if some data are slow to load, we ship the rest of the page with the data that we do have, and let the server push down the remaining data when it’s ready. This is called streaming, or more precisely, out-of-order streaming (streaming, without the out-of-order part, is a separate, much more limited thing which we won’t cover here).

Let’s take a hypothetical example where the data abc and data xyz are slow to load.

With out-of-order streaming we can load the to-do data on the server, and send the page with just that data down to the user, immediately. The other two pieces of data have not loaded yet, so our UI will display some manner of loading indicator. When the next piece of data is ready, the server pushes it down:

What’s the catch?

So does this solve all of our problems? Yes, but… only if the framework you’re using supports it. To stream with the Next.js app directory you’ll use Suspense components with RSC. With SvelteKit, you just return a promise from your loader. Remix supports this too, with an API that’s in the process of changing, so check their docs. SolidStart will also support this, but as of writing that entire project is still in beta, so check its docs when it comes out.
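For instance, with SvelteKit a loader can await the fast data and return the slow data as an un-awaited promise, which SvelteKit streams down when it resolves. A minimal sketch (the endpoint URLs are made up):

// +page.server.ts
export const load = async ({ fetch }) => {
  return {
    // Awaited: rendered into the initial HTML
    todos: await fetch("/api/todos").then((r) => r.json()),
    // Not awaited: streamed down later, rendered with an {#await} block in the page
    stats: fetch("/api/slow-stats").then((r) => r.json()),
  };
};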

Some frameworks do not support this, like Astro and Next if you’re using the legacy pages directory.

What if we’re using those projects, and we have some dependencies on data which are slow to load? Are we stuck rendering this data in client code, after hydration?

Prefetching to the rescue

The web platform has a feature called prefetching. This lets us add a <link> tag to the <head> section of our HTML page, with a rel="prefetch" attribute, and an href attribute of the URL we want to prefetch. We can put service endpoint calls here, so long as they use the GET verb. If you need to prefetch data from an endpoint that uses POST, you’ll need to proxy it through an endpoint that uses GET. It’s worth noting that you can also prefetch with an HTTP header if that’s more convenient; see this post for more information.
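In other words, something along these lines in the document’s head (the endpoint URL here is just an example):

<head>
  <!-- Start fetching this GET endpoint as soon as the browser parses the tag -->
  <link rel="prefetch" href="/api/books" as="fetch" />
</head>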

When we do this, our page will start pre-fetching our resources as soon as the browser parses the link tag. Since it’s in the <head>, that means it’ll start pre-fetching at the same time our scripts and stylesheets are requested. So we no longer need to wait until our script tags load, parse, and hydrate our app. Now the data we need will start pre-fetching immediately. When hydration does complete, and our application code requests those same endpoints, the browser will be smart enough to serve that data from the prefetch cache.

Let’s see prefetching in action

To see pre-fetching in action, we’ll use Astro. Astro is a wonderful web framework that doesn’t get nearly enough attention. One of the very few things it can’t do is out-of-order streaming (for now). But let’s see how we can improve life with pre-fetching.

The repo for the code I’ll be showing is here. It’s not deployed anywhere, for fear of this blog post getting popular, and me getting a big bill from Vercel. But the project has no external dependencies, so you can clone, install, and run locally. You could also deploy this to Vercel yourself if you really want to see it in action.

I whipped up a very basic, very ugly web page that hits some endpoints to pull down a hypothetical list of books, and some metadata about the library, which renders the books once ready. It looks like this:

The endpoints return static data, which is why there are no external dependencies. I added a manual delay of 700ms to these endpoints (sometimes you have slow services and there’s nothing you can do about it), and I also installed and imported some large JavaScript libraries (d3, framer-motion, and recharts) to make sure hydration would take a moment or two, like with most production applications. And since these endpoints are slow, they’re a poor candidate for server fetching.
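If you’re curious what a delayed endpoint like that looks like, here’s a hedged sketch of an Astro API route; the response data is illustrative, not the repo’s exact code:

// src/pages/api/books.ts
import type { APIRoute } from "astro";

const delay = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

export const GET: APIRoute = async () => {
  // Simulate a slow upstream service
  await delay(700);
  return new Response(JSON.stringify([{ id: 1, title: "Some Book" }]), {
    headers: { "Content-Type": "application/json" },
  });
};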

So let’s request them client-side, see the performance of the page, and then add pre-fetching to see how that improves things.

The client-side fetching looks like this:

useEffect(() => {
  // Request the list of books once the component mounts (i.e. after hydration)
  fetch("/api/books")
    .then((resp) => resp.json())
    .then((books) => {
      setBooks(books);
    });

  // Request the library metadata in parallel
  fetch("/api/books-count")
    .then((resp) => resp.json())
    .then((booksCountResp) => {
      setCount(booksCountResp.count);
    });
}, []);

Nothing fancy. Nothing particularly resilient here. Not even any error handling. But perfect for our purposes.

Network diagram without pre-fetching

Running this project, deployed to Vercel, my network diagram looks like this:

Notice all of the script and style resources, which need to be requested and processed before our client-side fetches start (on the last two lines).

Adding pre-fetching

I’ve added a second page to this project, called with-prefetch, which is the same as the index page. Except now, let’s see how we can add some <link> tags to request these resources sooner.

First, in the root layout, let’s add this in the head section

<slot name="head"></slot>

This gives us the ability to (but does not require us to) add content to our HTML document’s <head>. This is exactly what we need. Now we can make a PrefetchBooks React component:

import type { FC } from "react";

export const PrefetchBooks: FC<{}> = (props) => {
  return (
    <>
      <link rel="prefetch" href="/api/books" as="fetch" />
      <link rel="prefetch" href="/api/books-count" as="fetch" />
    </>
  );
};

Then render it in our prefetching page, like so

<PrefetchBooks slot="head" />

Note the slot attribute on the React component, which tells Astro (not React) where to put this content.

With that, if we run that page, we’ll see our link tags in the head

Now let’s look at our updated network diagram:

Notice our endpoint calls now start immediately, on lines 3 and 4. Then later, in the last two lines, we see the real fetches being executed, at which point they just latch onto the prefetch calls already in flight.

Let’s put some hard numbers on this. When I ran a WebPageTest mobile Lighthouse analysis on the version of this page without the pre-fetch, I got the following.

Note the LCP (Largest Contentful Paint) value. That’s essentially telling us when the page looks finished to a user. Remember, the Lighthouse test simulates your site on the slowest mobile device imaginable, which is why it’s 4.6 seconds.

When I re-run the same test on the pre-fetched version, things improved by about a second:

Definitely much better, but still not good; and it never will be until you can make your backend fast. But with some intelligent, targeted pre-fetching, you can at least improve things.

Parting thoughts

Hopefully all of your back-end data requirements will be forever fast in your developer journeys. But when they’re not, prefetching resources is a useful tool to keep in your toolbelt.

Using Auth.js with SvelteKit

SvelteKit is an exciting framework for shipping performant web applications with Svelte. I’ve previously written an introduction on it, as well as a deeper dive on data handling and caching.

In this post, we’ll see how to integrate Auth.js (previously NextAuth) into a SvelteKit app. It might seem surprising to hear that this works with SvelteKit, but the project has gotten popular enough that much of it has been split into a framework-agnostic package, @auth/core. The Auth.js name is actually a somewhat recent rebranding of NextAuth.

In this post we’ll cover the basic config for @auth/core: we’ll add a Google Provider and configure our sessions to persist in DynamoDB.

The code for everything is here in a GitHub repo, but you won’t be able to run it without setting up your own Google Application credentials, as well as a Dynamo table (which we’ll get into).

The initial setup

We’ll build the absolute minimum skeleton app needed to demonstrate authentication. We’ll have our root layout read whether the user is logged in, and show a link to content that’s limited to logged in users, and a log out button if so; or a log in button if not. We’ll also set up an auth check with redirect in the logged in content, in case the user browses directly to the logged in URL while logged out.

Let’s create a SvelteKit project if we don’t have one already, using the instructions here. Choose “Skeleton Project” when prompted.

Now let’s install some packages we’ll be using:

npm i @auth/core @auth/sveltekit

Let’s create a top-level layout that will use our auth data. First, our server loader, in a file named +layout.server.ts. This will hold our logged-in state, which for now is always false.

export const load = async ({ locals }) => {
  return {
    loggedIn: false,
  };
};

Now let’s make the actual layout, in +layout.svelte with some basic markup

<script lang="ts">
  import type { PageData } from './$types';
  import { signIn, signOut } from '@auth/sveltekit/client';

  export let data: PageData;
  $: loggedIn = data.loggedIn;
</script>

<main>
  <h1>Hello there! This is the shared layout.</h1>

  {#if loggedIn}
    <div>Welcome!</div>
    <a href="/logged-in">Go to logged in area</a>
    <br />
    <br />
    <button on:click={() => signOut()}>Log Out</button>
  {:else}
    <button on:click={() => signIn('google')}>Log in</button>
  {/if}
  <section>
    <slot />
  </section>
</main>

There should be a root +page.svelte file that was generated when you scaffolded the project, with something like this in there

<h1>This is the home page</h1>
<p>Visit <a href="https://kit.svelte.dev">kit.svelte.dev</a> to read the SvelteKit docs</p>

Feel free to just leave it.

Next, we’ll create a route called logged-in. Create a folder in routes called logged-in and create a +page.server.ts which for now will always just redirect you out.

import { redirect } from "@sveltejs/kit";

export const load = async ({}) => {
  redirect(302, "/");
};

Now let’s create the page itself, in +page.svelte and add some markup

<h3>This is the logged in page</h3>

And that’s about it. Check out the GitHub repo to see everything, including just a handful of additional styles.

Adding Auth

Let’s get started with the actual authentication.

First, create an environment variable in your .env file called AUTH_SECRET and set it to a random string that’s at least 32 characters. If you’re looking to deploy this to a host like Vercel or Netlify, be sure to add your environment variable in your project’s settings according to how that host does things.
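If you need a quick way to generate one, any source of 32 or more random bytes will do; for example, with Node installed:

node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"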

Next, create a hooks.server.ts (or .js) file directly under src. The docs for this file are here, but it essentially allows you to add application-wide side effects. Authentication falls under this, which is why we configure it here.

Now let’s start integrating auth. We’ll start with a very basic config:

import { SvelteKitAuth } from "@auth/sveltekit";
import { AUTH_SECRET } from "$env/static/private";

const auth = SvelteKitAuth({
  providers: [],
  session: {
    maxAge: 60 * 60 * 24 * 365,
    strategy: "jwt",
  },

  secret: AUTH_SECRET,
});

export const handle = auth.handle;

We tell auth to store our authentication info in a JWT, and configure a max age of 1 year for the session. We provide our secret, and a (currently empty) array of providers.

Adding our provider

Providers are what perform the actual authentication of a user. There’s a very, very long list of options to choose from, which are listed here. We’ll use Google. First, we’ll need to create application credentials. So head on over to the Google Developers console. Click on credentials, and then “Create Credentials”

Click it, then choose “OAuth Client ID.” Choose web application, and give your app a name.

For now, leave the other options empty, and click Create.


Before closing that modal, grab the client ID and client secret values, and paste them into environment variables for your app:

GOOGLE_AUTH_CLIENT_ID=....
GOOGLE_AUTH_SECRET=....

Now let’s go back into our hooks.server.ts file, and import our new environment variables:

import { AUTH_SECRET, GOOGLE_AUTH_CLIENT_ID, GOOGLE_AUTH_SECRET } from "$env/static/private";

and then import and add our provider. The Google provider ships with @auth/core:

import GoogleProvider from "@auth/core/providers/google";

providers: [
  GoogleProvider({
    clientId: GOOGLE_AUTH_CLIENT_ID,
    clientSecret: GOOGLE_AUTH_SECRET
  })
],

and then export our auth handler as our hooks handler.

export const handle = auth.handle;

Note that if you had other handlers you wanted SvelteKit to run, you can use the sequence helper:

import { sequence } from "@sveltejs/kit/hooks";

export const handle = sequence(otherHandleFn, auth.handle);

Unfortunately, if we try to log in now, we’re greeted by an error:

Clicking error details provides some more info:

We need to tell Google that this redirect URL is in fact valid. Go back to the Google Developer Console, open the credentials we just created, and add this URL in the redirect URLs section.

And now, after saving (and possibly waiting a few seconds), we can click log in, see the list of our available Google accounts, and pick the one we want to log in with.

Choosing one of the accounts should log you in, and bring you right back to the same page you were just looking at.

So you’ve successfully logged in, now what?

Being logged in is by itself useless without some way to check logged in state, in order to change content and grant access accordingly. Let’s go back to our layout’s server loader

export const load = async ({ locals }) => {
  return {
    loggedIn: false,
  };
};

We previously pulled in that locals property. Auth.js adds a getSession method to this, which allows us to grab the current authentication, if any. We just logged in, so let’s grab the session and see what’s there

export const load = async ({ locals }) => {
  const session = await locals.getSession();
  console.log({ session });

  return {
    loggedIn: false,
  };
};

For me, this logs the following:
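The exact output isn’t reproduced here, but a logged-in Auth.js session generally has a shape along these lines (the values are placeholders):

{
  user: {
    name: "Some User",
    email: "some.user@example.com",
    image: "https://lh3.googleusercontent.com/..."
  },
  expires: "2025-04-29T15:22:25.000Z"
}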

All we need right now is a simple boolean indicating whether the user is logged in, so let’s send down a boolean on whether the user object exists:

export const load = async ({ locals }) => {
  const session = await locals.getSession();
  const loggedIn = !!session?.user;

  return {
    loggedIn,
  };
};

and just like that, our page updates:

The link to our logged-in page still doesn’t work, since it’s still always redirecting. We could run the same code we did before, and call locals.getSession to see if the user is logged in. But we already did that, and stored the loggedIn property in our layout’s loader. This makes it available to any routes underneath. So let’s grab it, and conditionally redirect based on its value.

import { redirect } from "@sveltejs/kit";

export const load = async ({ parent }) => {
  const parentData = await parent();

  if (!parentData.loggedIn) {
    redirect(302, "/");
  }
};

And now our logged-in page works:

Persisting our authentication

We could end this post here. Our authentication works, and we integrated it into application state. Sure, there’s a myriad of other auth providers (GitHub, Facebook, etc), but those are just variations on the same theme.

But one topic we haven’t discussed is authentication persistence. Right now our entire session is stored in a JWT, on our user’s machine. This is convenient, but it does have some downsides, namely that this data could be stolen. An alternative is to persist our users’ sessions in an external database. This post discusses the various tradeoffs, but most of the downsides of stateful (i.e. stored in a database) solutions are complexity and the burden of having to reach out to external storage to grab session info. Fortunately, Auth.js removes the complexity burden for us. As far as performance is concerned, we can choose a storage mechanism that’s known for being fast and effective: in our case we’ll look at DynamoDB.

Adapters

The mechanism by which Auth.js persists our authentication sessions is database adapters. As before, there are many to choose from. We’ll use DynamoDB. Compared to providers, the setup for database adapters is a bit more involved, and a bit more tedious. In order to keep the focus of this post on Auth.js, we won’t walk through setting up each and every key field, TTL setting, and GSI—to say nothing of AWS credentials if you don’t have them already. If you’ve never used Dynamo and are curious, I wrote an introduction here. If you’re not really interested in Dynamo, this section will show you the basics of setting up database adapters, which you can apply to any of the (many) others you might prefer to use.

That said, if you’re interested in implementing this yourself, the adapter docs provide CDK and CloudFormation templates for the Dynamo table you need, or if you want a low-dev-ops solution, it even lists out the keys, TTL and GSI structure here, which is pretty painless to just set up.
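For a rough idea of what that table looks like, here’s a hedged CDK sketch based on the adapter’s default key structure; double-check the adapter docs, since the key and index names need to match exactly:

import { aws_dynamodb as dynamodb } from "aws-cdk-lib";

// Inside a CDK stack (this refers to the stack):
const table = new dynamodb.Table(this, "AuthTable", {
  partitionKey: { name: "pk", type: dynamodb.AttributeType.STRING },
  sortKey: { name: "sk", type: dynamodb.AttributeType.STRING },
  timeToLiveAttribute: "expires",
  billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
});

table.addGlobalSecondaryIndex({
  indexName: "GSI1",
  partitionKey: { name: "GSI1PK", type: dynamodb.AttributeType.STRING },
  sortKey: { name: "GSI1SK", type: dynamodb.AttributeType.STRING },
});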

We’ll assume you’ve got your DynamoDB instance set up, and look at the code to connect it. First, we’ll install some new libraries

npm i @auth/dynamodb-adapter @aws-sdk/lib-dynamodb @aws-sdk/client-dynamodb

Next, make sure your Dynamo table name, as well as your AWS credentials, are in environment variables.

Now we’ll go back to our hooks.server.ts file, and whip up some boilerplate (which, to be honest, is mostly copied right from the docs).

import { GOOGLE_AUTH_CLIENT_ID, GOOGLE_AUTH_SECRET, AMAZON_ACCESS_KEY, AMAZON_SECRET_KEY, DYNAMO_AUTH_TABLE, AUTH_SECRET } from "$env/static/private";

import { DynamoDB, type DynamoDBClientConfig } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocument } from "@aws-sdk/lib-dynamodb";
import { DynamoDBAdapter } from "@auth/dynamodb-adapter";
import type { Adapter } from "@auth/core/adapters";

const dynamoConfig: DynamoDBClientConfig = {
  credentials: {
    accessKeyId: AMAZON_ACCESS_KEY,
    secretAccessKey: AMAZON_SECRET_KEY,
  },

  region: "us-east-1",
};

const client = DynamoDBDocument.from(new DynamoDB(dynamoConfig), {
  marshallOptions: {
    convertEmptyValues: true,
    removeUndefinedValues: true,
    convertClassInstanceToMap: true,
  },
});

and now we add our adapter to our auth config:

  adapter: DynamoDBAdapter(client, { tableName: DYNAMO_AUTH_TABLE }),

and now, after logging out, and logging back in, we should see some entries in our DynamoDB instance

Authentication hooks

The @auth/core package provides a number of callbacks you can hook into, if you need to do some custom processing.

The signIn callback is invoked, predictably, after a successful login. It’s passed an account object from whatever provider was used, Google in our case. One use case for this callback could be to optionally look up and sync legacy user metadata you might have stored for your users before switching over to OAuth authentication with established providers.

async signIn({ account }) {
  const userSync = await getLegacyUserInfo(account.providerAccountId);
  if (userSync) {
    account.syncdId = userSync.sk;
  }

  return true;
},

The jwt callback gives you the ability to store additional info in the authentication token (you can use this regardless of whether you’re using a database adapter). It’s passed the (possibly mutated) account object from the signIn callback.

async jwt({ token, account }) {
  token.userId ??= account?.syncdId || account?.providerAccountId;
  if (account?.syncdId) {
    token.legacySync = true;
  }
  return token;
}

We’re setting a single userId onto our token that’s either the syncdId we just looked up, or the providerAccountId already attached to the provider account. If you’re curious about the ??= operator, that’s the nullish coalescing assignment operator.
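In short, a ??= b only assigns b to a when a is currently null or undefined:

let userId: string | undefined = undefined;
userId ??= "fallback-id"; // assigned: userId was undefined
userId ??= "other-id";    // no-op: userId already has a value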

Lastly, the session callback gives you an opportunity to shape the session object that’s returned when your application code calls locals.getSession()

async session({ session, user, token }: any) {
  session.userId = token.userId;
  if (token.legacySync) {
    session.legacySync = true;
  }
  return session;
}

Now our code could look for the legacySync property, to discern that a given login has already synced with a legacy account, and therefore know not to ever prompt the user about this.
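For example, any server code with access to locals could read these properties straight off the session. A minimal sketch, assuming the session callback above has attached them (the next section covers teaching TypeScript about these extra properties):

export const load = async ({ locals }) => {
  const session = await locals.getSession();

  return {
    loggedIn: !!session?.user,
    // Only prompt users who haven't already been synced with a legacy account
    showLegacyPrompt: !!session?.user && !session?.legacySync,
  };
};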

Extending the types

Let’s say we do extend the default session type, like we did above. Let’s see how we can tell TypeScript about the things we’re adding. Basically, we need to use a TypeScript feature called interface merging. We essentially re-declare an interface that already exists, add stuff, and then TypeScript does the grunt work of merging (hence the name) the original type with the changes we’ve made.

Let’s see it in action. Go to the app.d.ts file SvelteKit adds to the root src folder, and add this

declare module "@auth/core/types" {
  interface Session {
    userId: string;
    provider: string;
    legacySync: boolean;
  }
}

export {};

We have to put the interface in the right module, and then we add what we need to add.

Note the odd export {}; at the end. There has to be at least one ESM import or export, so TypeScript treats the file correctly. SvelteKit by default adds this, but make sure it’s present in your final product.

Wrapping up

We’ve covered a broad range of topics in this post. We’ve seen how to set up Auth.js in a SvelteKit project using the @auth/core library. We saw how to set up providers, adapters, and then took a look at various callbacks that allow us to customize our authentication flows.

Best of all, the tools we saw will work with SvelteKit or Next, so if you’re already an experienced Next user, a lot of this was probably familiar. If not, much of what you saw will be portable to Next if you ever find yourself using that.
