Frontend Masters Boost (Helping Your Journey to Senior Developer)
https://frontendmasters.com/blog

SVG triangle of compromise
https://frontendmasters.com/blog/svg-triangle-of-compromise/ (Tue, 30 Jul 2024)

I enjoyed Micah R Ledbetter’s SVG triangle of compromise and generally think it’s a fair analysis: any which way you use SVG on a page, you’re giving up some kind of nice ability it could have. For instance, if you use SVG through an <img> tag, it’s cached nicely, but you give up on CSS reaching in there to style things. If you drop it in as inline <svg>, you can style it, but then it’s not cached well for repeated uses.

Then Scott Jehl chimed in with a way to “have it all”. The crux of it is using the SVG <use> element to reference an SVG file (so you get caching and sizing), and setting CSS custom properties that “pierce” the shadow DOM that <use> creates (that’s right, SVG can have a shadow DOM just like web components) to allow for styling.
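
A minimal sketch of what that can look like (the icons.svg file, the #gear symbol, and the --icon-color property name are placeholders of my own, not from Scott’s post):

```html
<!-- icons.svg contains e.g. <symbol id="gear"> whose paths use
     fill="var(--icon-color, currentColor)" -->
<svg width="24" height="24" aria-hidden="true">
  <use href="/icons.svg#gear"></use>
</svg>

<style>
  /* The custom property inherits through the shadow DOM <use> creates */
  .settings-button svg {
    --icon-color: tomato;
  }
</style>
```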

This does solve all three angles. The caveats: 1) you can’t serve the SVG “sprite” (as it’s usually called when you combine icons into a single file) from a different domain and reference it with <use>, and 2) it’s a manual pain to set up SVGs to be entirely styleable in this way. Scott’s tool might help with 2, but browsers need to help with 1.

Web Performance Guide
https://frontendmasters.com/blog/web-performance-guide/ (Thu, 18 Jul 2024)

I like how working on web performance is so well aligned with other worthy goals. A fast site is a site more accessible to more people. A fast site tends to convert better. Using web standards and more native web technologies tends to lead to a faster site.

SpeedCurve has published a pretty beefy and useful guide to Web Performance, and Google’s guide is also good. Of course, our Web Performance Fundamentals course is best for a guided walkthrough of all the most important aspects.

The Pitfalls of In-App Browsers
https://frontendmasters.com/blog/the-pitfalls-of-in-app-browsers/ (Wed, 17 Jul 2024)

Developing websites for modern mobile devices has a pitfall you may not be aware of: in-app browsers. These are web browsers embedded directly within native mobile apps, so when a link is clicked within a native app (e.g. Instagram or TikTok), it opens in the in-app browser instead of switching to a dedicated browser app.

While potentially convenient for mobile developers (“users will never leave our app!”, the businessmen squeal), we’ll discuss the drawbacks for web developers like yourself and for your users.

In-app browsers are also referred to as embedded browsers or WebViews; the terms are interchangeable.

The Drawbacks

The drawbacks of in-app browsers can be broadly categorized:

Limited functionality

In-app browsers are considerably stripped down compared to their fully-featured counterparts and typically lack features like bookmarking, UI controls, settings, extensions, and downloads. For instance, a browser extension that a user depends on to help protect their privacy will not work in an in-app browser.

Privacy & security concerns

Because in-app browsers are embedded within a native mobile app, the app developer has control over and visibility into the user’s in-app browsing activity. This even extends to being able to inject code into the in-app browser, which is a major privacy and security concern. Users are largely unaware this is happening, and even those who are aware aren’t able to opt out.

Inconsistent UI/UX

Because in-app browser implementations are all different, the UI is inconsistent. Further, browsing data like history and bookmarks aren’t shared, so users typically need to sign in to services they may already be securely signed into in their device’s actual browser. This leads to a fragmented and frustrating user experience.

Worse performance

In-app browsers tend to run outdated browser internals, which can cause slower loading times and compatibility issues. Users on slower Internet connections may have the problem exacerbated.

Author Update: Since Apple doesn’t allow apps (even browsers) to use their own rendering engine, only Android has the problem of an app bundling a custom in-app browser, which may be outdated and perform worse, instead of using the system WebView. On iOS, the built-in WebView is WebKit, bundled as part of the OS. On Android, the default built-in WebView is based on Blink and is updated independently of the OS, as part of the Chrome update process via Google Play.

Bad Behavior in In-App Browsers

History

In-app browsers have existed since circa 2016, but it wasn’t until 2019, when Google engineer Thomas Steiner published a blog post diving into Facebook’s iOS and Android apps, that a wider audience became aware of the privacy and security concerns. Thomas discussed the technical details of how the apps implemented their in-app browsers and showed how in-app browsers can perform man-in-the-middle (MITM) attacks by injecting arbitrary JavaScript code and intercepting network traffic.

Three years later, in 2022, Felix Krause published two blog posts (1, 2) and a tool, inappbrowser.com, focused on the privacy concerns of iOS apps, initially covering apps by Meta (Facebook, Messenger, Instagram) and then following up with Android and other social media apps, including TikTok. Felix’s findings supported Thomas’ from three years earlier and surfaced concerning behavior in iOS Instagram: the injection of a pcm.js script which Meta claimed was an “event aggregator” but which also monitored user interactions in the form of taps and selections. Further cause for concern was TikTok injecting JavaScript that monitored all keyboard inputs along with taps, which is effectively keylogger functionality on third-party sites. TikTok acknowledged the existence of this code but claimed it’s only used for debugging, troubleshooting, and performance monitoring.

Felix’s findings led to a lawsuit being filed against Meta in September 2022. The case was dismissed in October 2023.

Nothing Has Changed

Let’s revisit the behavior of Instagram’s in-app browser on iOS and Android at the time of this writing (July 2024). This is done by sharing the two testing links, inappbrowser.com and inappdebugger.com (we’ll discuss this one more shortly), in the app as a direct message or as the URL in your profile bio. Those are spots where you can actually click them, as Instagram prevents clickable URLs in places like the descriptions of posts.

Let’s start with iOS. Below is iOS Instagram opening inappbrowser.com and inappdebugger.com in July 2024:

This shows that iOS Instagram is still injecting arbitrary JavaScript code which listens to user clicks along with JavaScript messages.

(Editor note: when testing this I noted that Instagram also appends URL parameters on outgoing links, which may be used to communicate additional information to this injected JavaScript).

Next, Android.

The story on Android is slightly different: there’s still arbitrary JavaScript being injected but it isn’t necessarily listening to events tightly coupled with user interactions.

Unfortunately, not much has changed since Felix’s findings nearly 3 years ago.

Open Web Advocacy wrote a piece earlier this year following the events of Apple threatening to kill web apps.

Debugging & Detecting In-App Browsers

Leveraging the existing excellent work of Felix Krause and Shalanah Dawson we have strategies for debugging and detecting when our websites are being viewed by in-app browsers.

  • https://inappbrowser.com/
    • Attempts to detect if there’s any injected JavaScript code running.
  • https://inappdebugger.com/
    • Attempts to detect if you’re in an in-app browser and, if so, which app it’s inside of.
    • Additionally provides some debugging tests for whether downloads are possible, plus escape hatches for getting to an actual device browser.
    • Leverages both inapp-spy and bowser.
  • https://github.com/bowser-js/bowser
    • A browser detection library providing metadata and filtering based on browser version.
  • https://github.com/shalanah/inapp-spy
    • A TypeScript library written by Shalanah Dawson that aids in detecting in-app browsers.

Escaping

Now that we have some tools, let’s look at an example in JavaScript of detecting an Android in-app browser and redirecting out of it using an intent: link. You’d do this if you simply do not want your website opened in an in-app browser, offering users a link to open it in their default browser instead.

import InAppSpy from "inapp-spy"
const { isInApp } = InAppSpy()

// Your app's full URL, maybe defined build-time for different environments
const url = `https://example.com`
const intentLink = `intent:${url}#Intent;end`

// 1. Detect in-app
if (isInApp) {

  // 2. Attempt to auto-redirect
  window.location.replace(intentLink)
    
  // 3. Append a native <a> with the same intent link
  const $div = document.createElement("div")
  $div.innerHTML = `
    <p>Tap the button to open in your default browser</p>
    <a href="${intentLink}" target="_blank">Open</a>
  `
  document.body.appendChild($div)
}

It’s not ideal to have to load extra JavaScript for this, but it is reliable. It may be heavy-handed, but for those of you working on particularly sensitive sites, it might be worth doing.

To get an idea of a way this can be implemented, Shalanah’s inappdebugger.com provides this functionality under the “Android In-App Escape Links” section.

Test out the Android escape hatch strategy on inappdebugger.com.

Unfortunately, there’s currently no reliable escape hatch for iOS in-app browsers. As on Android, there’s a handful of device-specific URI schemes (that’s technically what the intent: prefix is called), but none of them reliably open the default browser at a specific URL. A not-so-great workaround is the x-web-search://? scheme; the best case is using the site: search prefix to get close to your actual URL, e.g. x-web-search://?site:example.com.

Author Update: a somewhat reliable iOS workaround has been documented and tested: try to run a Shortcut that doesn’t exist, specify your URL in the error callback, and iOS opens that URL in the user’s default browser. In practice, this looks like:

shortcuts://x-callback-url/run-shortcut?name=${crypto.randomUUID()}&x-error=${encodeURIComponent('https://example.com')}

This comes with some side effect caveats: the Shortcuts app is opened on the user’s device and some query parameters are appended to your URL. Read more on GitHub.
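
Wrapped up as a helper (the function name is mine, not from the documented workaround), that looks like:

```javascript
// Builds the Shortcuts-based escape URL described above. The Shortcut name is
// random, so it's practically guaranteed not to exist, which forces iOS to
// follow the x-error callback: your URL, opened in the default browser.
function iosEscapeUrl(targetUrl, shortcutName = crypto.randomUUID()) {
  return (
    "shortcuts://x-callback-url/run-shortcut" +
    `?name=${shortcutName}` +
    `&x-error=${encodeURIComponent(targetUrl)}`
  );
}

// e.g. window.location.href = iosEscapeUrl("https://example.com");
```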

A last-ditch effort on iOS would be creating a UI element in your web app that gives the user manual instructions for bailing:

  1. Tapping the “…” menu
  2. Tapping on “Open in browser”
[Screenshot: a UI pointing to the upper right of the screen: 1. Tap the overflow menu (•••) 2. Then tap “Open in browser”]

This is considerably more fragile and error-prone, but if you have metrics showing where your user traffic is coming from and which in-app browser is preventing users from converting to your feature-rich PWA, then it could be worth considering.

Hopefully, with time, we’ll see the fall of in-app browsers. The privacy and security concerns alone are unacceptable; couple that with the limited functionality and poor user experience, and it’s probably best they just went away. Thanks to groups like Open Web Advocacy and individuals like Shalanah Dawson and Felix Krause for their work and support for this cause.

View transitions + speculative rules
https://frontendmasters.com/blog/view-transitions-speculative-rules/ (Thu, 11 Jul 2024)

Ryan Seddon makes clear the potential performance problem with cross-page View Transitions:

… on a slow or spotty network, the transition may appear as if the screen is freezing, as the browser waits for the page to load before it can transition smoothly between the two screens—this is not ideal.

But also that our new friend the Speculation Rules API is a potential remedy:

Combining these two helps mitigate the original tradeoff of the “pause” between navigations while the browser loads the next document. With speculative prerender, it can render the page before the user clicks, making the transition near-instant.

Both of these APIs are Chrome’n’friends only, so I guess it’s a you-break-it-you-fix-it deal. They’re also both progressive enhancements, so there’s no grave harm in using them now, unless you consider potentially unused pre-renders too wasteful.
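
A rough sketch of how the two opt-ins sit together on a page (the URL pattern is a placeholder; the @view-transition rule needs to be present on both the current and destination pages):

```html
<!-- Opt into cross-document view transitions -->
<style>
  @view-transition {
    navigation: auto;
  }
</style>

<!-- ...and ask the browser to prerender likely next pages -->
<script type="speculationrules">
  {
    "prerender": [
      { "where": { "href_matches": "/posts/*" }, "eagerness": "moderate" }
    ]
  }
</script>
```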

YouTube Embeds are Bananas Heavy and it’s Fixable
https://frontendmasters.com/blog/youtube-embeds-are-bananas-heavy-and-its-fixable/ (Mon, 01 Jul 2024)

TL;DR: A YouTube Embed is around 1.3 MB in size, with no shared resources between multiple embeds. Using a <lite-youtube> Web Component is more like 100 KB, does share resources, and sacrifices no functionality.

You can put a YouTube video on any website, and they help you do it: under the Share menu right on youtube.com there is an option to <> Embed, and you’ll see a bit of HTML with an <iframe> in it.

<iframe>s are never wonderful for performance, but they make sense for protected third-party content.

This is what I’m getting as I write:

<iframe 
  width="560" 
  height="315" 
  src="https://www.youtube.com/embed/LN1TQm942_U?si=EfW_M4bEHEO-idL3"
  title="YouTube video player"
  frameborder="0"
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
  referrerpolicy="strict-origin-when-cross-origin"
  allowfullscreen>
</iframe>

If I were Team YouTube, I’d get loading="lazy" on there to help with performance right away. No need for videos that aren’t even visible on the page to load right away.

<iframe 
  ...
  loading="lazy"
  >
</iframe>

Plus I’d put some inline styles on there to keep the video fluid and maintain the original aspect ratio. Or you could target these embeds and do that yourself in CSS. Here’s assuming the videos are the standard 16 / 9 aspect ratio:

iframe[src^="https://www.youtube.com/embed/"] {
  inline-size: 100%;
  block-size: auto;
  aspect-ratio: 16 / 9;
}

But… let’s not keep this HTML at all. I’m sure you read this blog post title, but let’s put a point on it:

On a page with literally nothing at all on it other than a YouTube Embed, we’re looking at:

  • 32 requests
  • 1.3 MB of data transfer
  • 2.76s to load the page on my current WiFi connection

Zach Leatherman, equally exasperated by this, noted:

The weight also grows linearly with every embed—resources are not shared: two embeds weigh 2.4 MB; three embeds weigh 3.6 MB (you get the idea).

Wow.

Looks like sizes are up a bit since Zach last looked as well.

The Appearance & Functionality

This is what you get from a YouTube Embed:

  • You see a “poster” image of the video
  • You see the title of the video
  • You see a big play button — click it to play the video

This is very little UI and functionality, which is fine! We can absolutely do all this without this many resources.
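
As a sketch of how little is actually required, here’s the facade pattern in miniature. This is an illustration of the technique, not the actual <lite-youtube> implementation; the function names and markup are mine:

```javascript
// A click-to-load facade: render a static poster + play button (a few KB),
// and only create the heavy <iframe> when the user actually clicks.
function facadeHtml(videoId) {
  return `
    <button class="yt-facade" data-videoid="${videoId}"
      style="background-image: url('https://i.ytimg.com/vi/${videoId}/hqdefault.jpg')">
      ▶ Play
    </button>`;
}

function iframeHtml(videoId) {
  return `<iframe src="https://www.youtube.com/embed/${videoId}?autoplay=1"
    allow="autoplay; encrypted-media" allowfullscreen></iframe>`;
}

// In the page: swap the facade for the real embed on click.
// document.addEventListener("click", (e) => {
//   const btn = e.target.closest(".yt-facade");
//   if (btn) btn.outerHTML = iframeHtml(btn.dataset.videoid);
// });
```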

Why is it this way? 🤷‍♀️

I don’t think we have any good answers here. In fact, I heard from a little birdie who ran it up the pole that they have tested lighter embeds and found them to reduce engagement. 😭

I’m just gonna straight up say I don’t believe it. It’s like when Google told us that taking up half the screen with AI generated answers led to people clicking on third-party results more, but then refused to show data or allow us to track those clicks ourselves.

And hey — sometimes there are unexpected results in testing. That’s why we test instead of guess. But because this is so counterintuitive and offtrack for so many other similar performance testing situations, this bears deeper scrutiny. It would benefit from an opening of the methodology and data.

Like if you tell me that if you hit people with a stick and they smile more, I’m gonna want you to stop until we can look at what’s going on there.

I really wish I could find a good link for this, but there is a famous story from YouTube engineers way-back-when who made a much lighter video page and put it into testing. They found, quite counterintuitively, that average page load times went up. But with a deeper look, they found that the lighter page was able to reach more people, including people on low-power low-internet-speed devices who were able to actually use YouTube for the first time, and them using it much more slowed those averages. That’s awesome! The speed of using the site was up relatively for everyone. The metric of the average page load speed was a red herring and ultimately not meaningful.

How do we know that’s not the same kind of thing happening here?

Remember, the implication of all these resources isn’t just a little inconvenience. YouTube is so enormous that we’re talking incredible amounts of wasted electricity and thus carbon output. Pulling a megabyte of data off every single YouTube Embed would be an incredible win all around. I might even say not improving this is environmentally negligent.

The Solution is to Replicate the Embed Experience Another Way. There are Open Source Web Components That Do It Well.

With a little dab of irony, Google’s own performance champion Paul Irish has had a web component doing just this for years and years and years:

lite-youtube-embed

The pitch is solid:

Provide videos with a supercharged focus on visual performance. This custom element renders just like the real thing but approximately 224× faster.

Two hundred and twenty-four times faster. Which, of course, involves much less data transfer.

And I’d like to be very clear: it also does the exact same thing as the default embed:

  • You see a “poster” image of the video
  • You see the title of the video
  • You see a big play button — click it to play the video

You lose nothing and gain tons of speed, efficiency, and default privacy.

Using Lite YouTube Embed

  1. Link up the JavaScript to instantiate the Web Component
  2. Use it

You could install it from npm or copy and paste a copy into your own project or whatever. Or link it from a CDN:

import "https://esm.sh/lite-youtube-embed";


But the best way to use it is right in the README:

Use this as your HTML, load the script asynchronously, and let the JS progressively enhance it.

<script defer src="https://cdnjs.cloudflare.com/ajax/libs/lite-youtube-embed/0.3.2/lite-yt-embed.js"></script>

<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/lite-youtube-embed/0.3.2/lite-yt-embed.css" integrity="sha512-utq8YFW0J2abvPCECXM0zfICnIVpbEpW4lI5gl01cdJu+Ct3W6GQMszVITXMtBLJunnaTp6bbzk5pheKX2XuXQ==" crossorigin="anonymous" referrerpolicy="no-referrer" />

<lite-youtube videoid="ogfYd705cRs" style="background-image: url('https://i.ytimg.com/vi/ogfYd705cRs/hqdefault.jpg');">
  <a href="https://youtube.com/watch?v=ogfYd705cRs" class="lty-playbtn" title="Play Video">
    <span class="lyt-visually-hidden">Play Video: Keynote (Google I/O '18)</span>
  </a>
</lite-youtube>

With async-loaded JavaScript, note that the background-image is put into the HTML so it can all look right before the JavaScript loads.

“a new iteration in the pre-loading journey”
https://frontendmasters.com/blog/a-new-iteration-in-the-pre-loading-journey/ (Thu, 13 Jun 2024)

Boris Schapira takes a look at the Speculation Rules API that we just had a poke at around here. Boris notes that this idea of prefetching (or prerendering) the next page a user might visit has quite a history. One long-standing player in this game, instant.page, is still a pretty good choice, as it’s more cross-browser compatible than this new API; I’ve used it many times successfully.

Note that the Speculation Rules API is not yet a web standard, but the way it works has so little negative implication for browsers that don’t “support” it that it feels pretty safe to use. We put the WordPress plugin in place around here.

Engineering for Slow Internet
https://frontendmasters.com/blog/engineering-for-slow-internet/ (Tue, 11 Jun 2024)

Not everybody has smokin’ fast internet. Wait, let me try that again: most people don’t have smokin’ fast internet, especially not all the time. It’s part of the job to make sure our sites aren’t so slow that we’re essentially depriving users of access.

To experience your site with slow internet, look under the Network tab of DevTools (it’s in all browsers, though in Safari you have to enable it as an experimental feature) for a dropdown to throttle the speed. Pick one of the slower cellular speeds and give your site a whirl. It might just inspire you to find some places to speed things up.

Or, you could go to the South Pole and test. It’s not just about raw speed, it’s about latency and turbulent connections.

  • Round-trip latency averaging around 750 milliseconds, with jitter between packets sometimes exceeding several seconds.
  • Available speeds, to the end-user device, that range from a couple kbps (yes, you read that right), up to 2 mbps on a really good day.
  • Extreme congestion, queueing, and dropped packets, far in excess of even the worst oversaturated ISP links or bufferbloat-infested routers back home.
  • Limited availability, frequent dropouts, and occasional service preemptions.

Ooof da. I bet very few of us think in terms of these extremes of bad internet.

If you’re an app developer reading this, can you tell me, off the top of your head, how your app behaves on a link with 40 kbps available bandwidth, 1,000 ms latency, occasional jitter of up to 2,000 ms, packet loss of 10%, and a complete 15-second connectivity dropout every few minutes?

It’s probably not great!
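
What does engineering for that look like? One small piece is defensive networking. Here’s a sketch of a fetch wrapper with a per-attempt timeout and retries; the function name and defaults are my own, and the fetchFn parameter exists purely so the strategy can be exercised without a real network:

```javascript
// A fetch wrapper with a timeout and retries: the kind of defensive
// networking a 40 kbps, high-latency, lossy link demands.
async function resilientFetch(url, { retries = 3, timeoutMs = 5000, fetchFn = fetch } = {}) {
  let lastError;
  for (let attempt = 0; attempt < retries; attempt += 1) {
    // Abort the request if it exceeds the per-attempt time budget
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    try {
      return await fetchFn(url, { signal: controller.signal });
    } catch (error) {
      lastError = error; // dropped connection, timeout, etc.: try again
    } finally {
      clearTimeout(timer);
    }
  }
  throw lastError;
}
```

In real use you’d likely add backoff between attempts so retries don’t pile onto an already congested link.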

Playing with the Speculation Rules API in the Console
https://frontendmasters.com/blog/playing-with-the-speculation-rules-api-in-the-console/ (Fri, 07 Jun 2024)

Improving the loading speed of web pages is often about making a series of small incremental improvements: better CSS minification here, a higher compression setting there, reducing JavaScript dependencies, and so on. Rarely do we see an opportunity with the potential of altering “the game” completely. This post is about one such opportunity: not just making pages load 10% or even 50% faster, but making them appear to load immediately.

At the recent Google I/O, announcements around the re-invigoration of page prerendering made quite a splash among web performance-minded developers. There’s a blog post that’s a good start, providing an overview of the Speculation Rules API and some context on how we got here. This post is about trying out the new API in the good ol’ console, just checking it out commitment-free, seeing-is-believing style.

What is the Speculation Rules API?

Imagine if the browser didn’t have to fetch all the resources and render the new page when you click a link. Because… it had already done it! That’s what the Speculation Rules API is all about. It’s a bit of JSON you put inside a <script> tag with a special attribute, containing rules about which pages you want rendered ahead of time. This information can be updated in real time as needed. The point is making new page loads feel instant, which makes it a great tool for web performance and thus happy users.
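
For a concrete picture, a minimal list-style rule set looks like this (the URLs are placeholders):

```html
<script type="speculationrules">
  {
    "prerender": [
      { "urls": ["/next-article/", "/pricing/"] }
    ]
  }
</script>
```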

It’s as if you’ve loaded a new page in a hidden iframe and then swapped with the original page when necessary. Except, there are no iframes involved, it’s all done by the browser and comes with a nice API to boot.

According to its specification, the Speculation Rules API is not yet a standard, nor is it on the W3C standards track. However, it’s already publicly available in various Chromium browsers, totaling over 70% global reach (source: caniuse.com).

Potential Confusion with Prior “Pre” Usage

There’s a potential for confusion here because we have overused the “pre” prefix over the years. We have the <link> element’s rel attribute, for example, with the values preload, prefetch, and preconnect, which all do different things. Those will still exist.
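
For reference, those three rel values look like this in use (the resource URLs are placeholders); none of them render a whole page ahead of time the way speculation rules can:

```html
<!-- Open a connection (DNS, TCP, TLS) early -->
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>

<!-- Fetch a resource the current page needs, at high priority -->
<link rel="preload" href="/fonts/inter.woff2" as="font" type="font/woff2" crossorigin>

<!-- Fetch a resource a future navigation will likely need, at low priority -->
<link rel="prefetch" href="/next-page.html">
```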

One last thing before we begin: I’m using a WordPress blog as a playground because many of us have seen or managed a blog at some point, so it’s familiar. However, if you want to use the new API without a care in the world, you can simply install the existing WP plugin created by WP’s performance team.

Adventure Time

Let’s play with Speculation Rules API, all in the browser console, with no server-side changes necessary.

Step 1) Load a web page, e.g. https://phpied.com

Step 2) Open the browser console and add a new class name to three links of your choosing. In this case the class name is prerenderosa and the links of my choosing are the links to the latest three blog posts.

// NodeList has no .slice(), so spread it into an array first
[...document.querySelectorAll('.entry a[rel=bookmark]')]
  .slice(0, 3)
  .forEach((e) => e.classList.add('prerenderosa'));

Step 3) Set up the speculation rules JavaScript object:

const pre = {
  "prerender": [{
    "source": "document", // optional, since Chrome 121
    "where": {
      "and": [
        // prerender pages where the link has the new class name
        { "selector_matches": ".prerenderosa" }, 
      ]
    },
    "eagerness": "immediate", // be eager!
  }]
};

Step 4) Add the speculation rules to the page, using a script element.

const spec = document.createElement('script');
spec.type = "speculationrules";
spec.append(JSON.stringify(pre)); 
document.body.append(spec);

And that’s it! Pages are prerendered. Clicking on any of the 3 links shows the page loaded immediately.

Debugging

How do we know things worked as expected and what happened exactly? DevTools to the rescue. Open the Application tab and find “Speculative loads” in the menu. This is where you can inspect the results.

Prerendering Prerequisites

There are conditions to be met before prerendering can happen. Most importantly, you need to check whether any extensions have disabled prefetching; this is common with ad-blocking extensions. Note that such an extension disables the setting for all sites, so it’s not enough to just disable the extension on the current page. In uBlock Origin, for example, unselect the option called “Disable pre-fetching”. Research by DebugBear into extensions that harm performance points to two other popular extensions (where “popular” means over 1 million installs): Windscribe and Privacy Badger.

Luckily, DevTools provides a link to the extension management area as well as the preload settings (they were off by default for me).

Speculative loads view in devtools

Below is the view you see when you click the “Preload pages settings” link. Note that the “standard” preload setting is good enough; there’s no need for “extended”.

Preload settings in chrome://settings/performance

You may notice that the toggle is off and disabled so you cannot turn it on. This is a sign that an extension may be preventing it. In this case you should be able to see a little puzzle icon; mousing over that icon should reveal who’s disabling the setting.

Preload settings in chrome://settings/performance

Note that prerendering won’t work if the link is opened in a new browser window/tab (e.g. with target="_blank"), but you still get the benefit of the new page’s downloaded resources (scripts, styles, images). Support for window targets is coming soon.

Next Steps in Debugging

Now let’s see what happens every step of the way. First we load a page and pop open the new “Speculative loads” panel and the console.

Normal page load

Now we paste the generic code from above: the one that adds class names to three selected links and adds the speculationrules script element.

Lo and behold: 3 pages are prerendered and ready.

3 pages are prerendered and ready

You can click to the “Speculations” submenu to see which pages were prerendered.

Details on the 3 pages

Also in the Network panel you can now pick which waterfall to explore: the original page or any of the preloaded ones.

Subpages in network panel

If you click on a link to a prerendered page, the load is immediate, because it’s just a swap.

On the next page you can also see that the “Speculative loading status” is a success. This is the prerendered page. It doesn’t have any speculation of its own but was itself loaded speculatively.

Speculative success

Everything in Moderation

The snippet above had the "eagerness": "immediate" speculation rule, but there are other options. Let’s try moderate. We keep everything the same and change just one setting:

const pre = {
  "prerender": [{
    "source": "document",
    "where": {
      "and": [
        { "selector_matches": ".prerenderosa" }, 
      ]
    },
    "eagerness": "moderate", // easy there tiger
  }]
};

Now nothing is prerendered until you hover for 200ms over a link that’s a prerendering candidate. And as you can see, of the three candidates for prerendering, one is indeed prerendered; the other two are not.

Only one page preloaded after 200ms mouseover

This is much less aggressive and easier on your server.

There’s one more eagerness option: “conservative”. With it, the browser only prerenders on mouse/pointer down. That’s still an earlier start than waiting for the full click, but the effect may not be the same.

Concerns?

If you’re like me, you probably have a lot of questions, for example:

  • Can I exclude prerendered-but-never-seen pages from analytics?
  • What about performance measurements?
  • Can I have an API to know when prerendering happens?

I’m happy to say that these concerns are all addressed (see here) and reportedly already supported by Google Analytics.

Go forth and speculate!

As you can see, you can use the new goodness purely on the client side. There are other options, naturally: you can have the speculation rules server-generated and spit out in the HTML, or you can have an HTTP header that points to a separate JSON file with all the rules.
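
The header-based option looks roughly like this (the path is a placeholder); the referenced rules file should be served with the application/speculationrules+json content type:

```http
Speculation-Rules: "/speculation-rules.json"
```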

But no matter how you do it, the results speak for themselves and the Speculation Rules API is definitely worth exploring.

Understanding Interaction to Next Paint (INP)
https://frontendmasters.com/blog/understanding-inp/ (Tue, 09 Apr 2024)

As of March 12th, 2024, Interaction to Next Paint (INP) replaces First Input Delay (FID) as a Core Web Vital metric.

FID and INP are measuring the same situation in the browser: how clunky does it feel when a user interacts with an element on the page? The good news for the web—and its users—is that INP provides a much better representation of real-world performance by taking every part of the interaction and rendered response into account.

It’s also good news for you: the steps you’ve already taken to ensure a good score for FID will get you part of the way to a solid INP. Of course, no number—no matter how soothingly green or alarmingly red it may be—can be of any particular use without knowing exactly where it’s coming from. In fact, the best way to understand the replacement is to better understand what was replaced. As is the case with so many aspects of front-end performance, the key is knowing how JavaScript makes use of the main thread. As you might imagine, every browser manages and optimizes tasks a little differently, so this article is going to oversimplify a few concepts—but make no mistake, the more deeply you’re able to understand JavaScript’s Event Loop, the better equipped you’ll be for handling all manner of front-end performance work.

The Main Thread

You might have heard JavaScript described as “single-threaded” in the past, and while that’s not strictly true since the advent of Web Workers, it’s still a useful way to describe JavaScript’s synchronous execution model. Within a given “realm”—like an iframe, browser tab, or web worker—only one task can be executed at a time. In the context of a browser tab, this sequential execution is called the main thread, and it’s shared with other browser tasks—like parsing HTML, some CSS animations, and some aspects of rendering and re-rendering parts of the page.

JavaScript manages “execution contexts”—the code currently being executed by the main thread—using a data structure called the “call stack” (or just “the stack”). When a script starts up, the JavaScript interpreter creates a “global context” to execute the main body of the code—any code that exists outside of a JavaScript function. That global context is pushed to the call stack, where it gets executed.

When the interpreter encounters a function call during the execution of the global context, it pauses the global execution context, creates a “function context” (sometimes “local context”) for that function call, pushes it onto the top of the stack, and executes the function. If that function call contains a function call, a new function context is created for that, pushed to the top of the stack, and executed right away. The highest context in the stack is always the current one being executed, and when it concludes, it gets popped off the stack so the next highest execution context can resume—“last in, first out.” Eventually execution ends up back down at the global context, and either another function call is encountered and execution works its way up and back down through that and any functions that call contains, one at a time, or the global context concludes and the call stack sits empty.

Now, if “execute each function in the order they’re encountered, one at a time” were the entire story, a function that performs any kind of asynchronous task—say, fetching data from a server or firing an event handler’s callback function—would be a performance disaster. That function execution context would either end up blocking execution until the asynchronous task completes and that task’s callback function kicks off, or suddenly interrupting whatever function context the call stack happened to be working through when that task completed. So alongside the stack, JavaScript makes use of an event-driven “concurrency model” made up of the “event loop” and “callback queue” (or “message queue”).

When an asynchronous task is completed and its callback function is called, the function context for that callback function is placed in a callback queue instead of at the top of the call stack—it doesn’t take over execution immediately. Sitting between the callback queue and the call stack is the event loop, which is constantly polling both for the presence of function execution contexts in the callback queue and for room in the call stack. If there’s a function execution context waiting in a callback queue and the event loop determines that the call stack is sitting empty, that function execution context is pushed to the call stack and executed as though it were just called synchronously.

So, for example, say we have a script that uses an old-fashioned setTimeout to log something to the console after 500 milliseconds:

setTimeout( function myCallback() {
    console.log( "Done." );
}, 500 );

// Output: Done.

First, a global context is created for the body of the script and executed. The global execution context calls the setTimeout method, so a function context for setTimeout is created at the top of the call stack, and is executed—so the timer starts ticking. The myCallback function isn’t added to the stack, however, since it hasn’t been called yet. Since there’s nothing else for the setTimeout to do, it gets popped off the stack, and the global execution context resumes. There’s nothing else to do in the global context, so it pops off the stack, which is now empty.

Now, at any point during this sequence of events our timer will elapse, calling myCallback. At that point, the callback function is added to a callback queue instead of being added to the stack and interrupting whatever else was being executed. Once the call stack is empty, the event loop pushes the execution context for myCallback to the stack to be executed. In this case, the main thread is done working long before the timer elapses, and our callback function is added to the empty call stack right away:

const rightNow = performance.now();

setTimeout( () => {
    console.log( `The callback function was executed after ${ performance.now() - rightNow } milliseconds.` );
}, 500);

// Output: The callback function was executed after 501.7000000476837 milliseconds.

Without anything else to do on the main thread our callback fires on time, give or take a millisecond or two. But a complex JavaScript application could have tens of thousands of function contexts to power through before reaching the end of the global execution context—and as fast as browsers are, these things take time. So, let’s fake an overcrowded main thread by keeping the global execution context busy with a while loop that counts to a brisk five hundred million—a long task.

const rightNow = performance.now();
let i = 0;

setTimeout( function myCallback() {
  console.log( `The callback function was executed after ${ performance.now() - rightNow } milliseconds.`);
}, 500);

while( i < 500000000 ) {
  i++;
}
// Output: The callback function was executed after 1119.5999999996275 milliseconds.

Once again, a global execution context is created and executed. A few lines in, it calls the setTimeout method, so a function execution context for the setTimeout is created at the top of the call stack, and the timer starts ticking. The execution context for the setTimeout is completed and popped off the stack, the global execution context resumes, and our while loop starts counting.

Meanwhile, our 500ms timer elapses, and myCallback is added to the callback queue—but this time the call stack isn’t empty when it happens, and the event loop has to wait out the rest of the global execution context before it can move myCallback over to the stack. Compared to the complex processing required to handle an entire client-rendered web page, “counting to a pretty high number” isn’t exactly the heaviest lift for a modern browser running on a modern laptop, but we still see a huge difference in the result: in my case, it took more than twice as long as expected for the output to show up.

Now, we’ve been using setTimeout for the sake of predictability, but event handlers work the same way: when the JavaScript interpreter encounters an event handler in either the global or a function context, the event becomes bound, but the callback function associated with that event listener isn’t added to the call stack because that callback function hasn’t been called yet—not until the event fires. Once the event does fire, that callback function is added to the callback queue, just like our timer running out. So what happens if an event callback kicks in, say, while the main thread is bogged down with long tasks buried in the megabytes’ worth of function calls required to get a JavaScript-heavy page up and running? The same thing we saw when our setTimeout elapsed: a big delay.
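To make that concrete, here’s a runnable sketch of an event callback stuck behind a busy main thread. A bare EventTarget stands in for a DOM button so it also runs outside a browser; the numbers are illustrative:

```javascript
// An EventTarget stands in for a DOM button here.
const button = new EventTarget();
const start = performance.now();
let handledAt = null;

button.addEventListener( "click", () => {
    handledAt = performance.now();
    console.log( `Click handled after ${ Math.round( handledAt - start ) }ms` );
});

// Queue the "user interaction" asynchronously, like a real event:
setTimeout( () => button.dispatchEvent( new Event( "click" ) ), 0 );

// ...while the main thread grinds through a long task:
let i = 0;
while ( i < 500000000 ) {
    i++;
}
// Only once the long task ends can the queued click callback run.
```

Even though the “click” is queued almost immediately, its callback can’t reach the call stack until the counting loop relinquishes the main thread.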

If a user clicks a button with an event handler attached right away, the callback function’s execution context is created and added to the callback queue, but it can’t get moved to the stack until there’s room for it in the stack. A few hundred milliseconds may not seem like much on paper, but any delay between a user interaction and the result of that interaction can make a huge difference in perceived performance—ask anyone that played too much Nintendo as a kid. That’s First Input Delay: a measurement of the delay between the first point where a user could trigger an event handler, and the first opportunity where that event handler’s callback function could be called, once the main thread has become idle. A page bogged down by parsing and executing tons of JavaScript just to get rendered and functional won’t have room in the call stack for event handler callbacks right away, meaning a longer delay between a user interaction and the callback function being invoked, and what feels like a slow, laggy page.

That was First Input Delay—an important metric for sure, but it wasn’t telling the whole story in terms of how a user experiences a page.

What is Interaction to Next Paint?

There’s no question that a long delay between an event and the execution of that event handler’s callback function is bad, sure—but in real-world terms, “an opportunity for a callback function’s execution context to be moved to the call stack” isn’t exactly the result a user is looking for when they click on a button. What really matters is the delay between the interaction and the visible result of that interaction.

That’s what Interaction to Next Paint sets out to measure: the delay between a user interaction and the browser’s next paint—the earliest opportunity to present the user with visual feedback on the results of the interaction. Of all the interactions measured during a user’s time on a page, the one with the worst interaction latency is presented as the INP score—after all, when it comes to tracking down and remediating performance issues, we’re better off working with the bad news first.

All told, there are three parts to an interaction, and all of those parts affect a page’s INP: input delay, processing time, and presentation delay.

Chart explaining the three parts of an interaction: Input Delay, Processing Time, and Presentation Delay. 

A long task extends the input delay, then comes processing time (the longest bar) and presentation delay, and then the Next Paint happens.

Input Delay

How long does it take for our event handlers’ callback functions to find their way from the callback queue to the main thread?

You know all about this one, now—it’s the same metric FID once captured. INP goes a lot further than FID did, though: while FID was only based on a user’s first interaction, INP considers all of a user’s interactions for the duration of their time on the page, in an effort to present a more accurate picture of a page’s total responsiveness. INP tracks any clicks, taps, and key presses on hardware or on-screen keyboards—the interactions most likely to prompt a visible change in the page.

Processing Time

How long does it take for the callback function associated with the event to run its course?

Even if an event handler’s callback function kicks off right away, that callback will be calling functions that call more functions, filling up the call stack and competing with any other work taking place on the main thread.

const myButton = document.querySelector( "button" );
const rightNow = performance.now();

myButton.addEventListener( "click", () => {
    let i = 0;
    console.log( `The button was clicked ${ performance.now() - rightNow } milliseconds after the page loaded.` );
    while( i < 500000000 ) {
        i++;
    }
    console.log( `The callback function was completed ${ performance.now() - rightNow } milliseconds after the page loaded.` );
});

// Output: The button was clicked 615.2000000001863 milliseconds after the page loaded.
// Output: The callback function was completed 927.1000000000931 milliseconds after the page loaded.

Assuming there’s nothing else bogging down the main thread and holding up this event handler’s callback function, this click handler would have a great score for FID—but the callback function itself contains a huge, slow task, and could take a long time to run its course and present the user with a result. A slow user experience, inaccurately summed up by a cheerful green result.

Unlike FID, INP factors in these delays as well. User interactions trigger multiple events—for example, a keyboard interaction will trigger keydown, keyup, and keypress events. For any given interaction, INP will capture a result for the event with the longest “interaction latency”—the delay between the user’s interaction and the rendered response.

Presentation Delay

How quickly can rendering and compositing work take place on the main thread?

Remember that the main thread doesn’t just process our JavaScript—it also handles rendering. All the tasks created by the event handler are now competing with any number of other processes for the main thread, including the layout and style calculations needed to paint the results.

Testing Interaction to Next Paint

Now that you have a better sense of what INP is measuring, it’s time to start gathering data out in the field and tinkering in the lab.

For any websites included in the Chrome User Experience Report dataset, PageSpeed Insights is a great place to start getting a sense of your pages’ INP. Your best bet for gathering real-world data from across an unknowable range of connection speeds, device capabilities, and user behaviors is likely to be the Chrome team’s web-vitals JavaScript library (or a performance-focused third-party user monitoring service).

Screenshot of PageSpeed Insights showing a test for frontendmasters.com, showing off all the metrics like LCP, INP, CLS, etc. All Core Web Vitals are "green" / "passed"

Then, once you’ve gained a sense of your pages’ biggest INP offenders from your field testing, the Web Vitals Chrome Extension will allow you to test, tinker, and retest interactions in your browser—not as representative as field data, but vital for getting a handle on any thorny timing issues that turned up in your field testing.

Screenshot of output of the Web Vitals Chrome Extension tester for Boost showing Largest Contentful Paint, Cumulative Layout Shift, etc.

Optimizing Interaction to Next Paint

Now that you have a better sense of how INP works behind the scenes and you’re able to track down your pages’ biggest INP offenders, it’s time to start getting things in order. In theory, INP is a simple enough thing to optimize: get rid of those long tasks and avoid overwhelming the browser with complex layout re-calculations.

Unfortunately, a simple concept doesn’t translate to any quick, easy tricks in practice. Like most front-end performance work, optimizing Interaction to Next Paint is a game of inches—testing, tinkering, re-testing, and gradually nudging your pages toward something smaller, faster, and more respectful of your users’ time and patience.
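One common tactic for the long-task half of that equation is to break big chunks of work up and yield back to the main thread in between, so queued event callbacks get a chance to run. A sketch, assuming a plain counting task like the earlier examples (the chunk size is arbitrary, and scheduler.yield() is a newer alternative to the setTimeout trick where it’s supported):

```javascript
// Break one long task into chunks, yielding to the main thread between
// chunks so queued event callbacks (and paints) can squeeze in.
function yieldToMain() {
  return new Promise( ( resolve ) => setTimeout( resolve, 0 ) );
}

async function countInChunks( total, chunkSize ) {
  let i = 0;
  while ( i < total ) {
    const end = Math.min( i + chunkSize, total );
    while ( i < end ) {
      i++;
    }
    await yieldToMain(); // let pending callbacks run before the next chunk
  }
  return i;
}

countInChunks( 500000000, 50000000 ).then( ( n ) => {
  console.log( `Counted to ${ n } in chunks.` );
} );
```

The total work is the same, but no single trip through the call stack monopolizes the main thread, which is what INP ultimately rewards.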

]]>
https://frontendmasters.com/blog/understanding-inp/feed/ 0 1604
Capo.js: A five minute web performance boost https://frontendmasters.com/blog/capo-js-a-five-minute-web-performance-boost/ https://frontendmasters.com/blog/capo-js-a-five-minute-web-performance-boost/#comments Fri, 01 Mar 2024 16:25:08 +0000 https://frontendmasters.com/blog/?p=1086 You want a quick web performance win at work that’s sure to get you a promotion? Want it to only take five minutes? Then I got you.

Screenshot of the Capo.js console output showing rows of colored rectangles for the Actual order and Sorted order of elements in the head.

Capo.js is a tool to get your <head> in order. It’s based on some research by Harry Roberts that shows how something as seemingly insignificant as the order of elements in your <head> tag can make your page load up to 7 seconds slower! From pragma directives, to async scripts, to stylesheets, to open graph tags, it’s easy to mess up and can have consequences. Capo.js will show you the specific order of elements to make your <head> and your page a little (or a lotta) bit faster.

Usage

  1. Head over to the Capo.js homepage
  2. Install the Capo.js Chrome Extension (you can also use it as a DevTools Snippet or bookmarklet)
  3. Run Capo.js

Capo.js will log two colored bar charts in your JS console; your “Actual” <head> order and a “Sorted” <head> order. You can expand each chart to see more details. If you see a big gray bar in the middle of your “Actual” bar chart, then you’re leaving some quick wins on the table. The “Sorted” dropdown will show you the corrected order and even give you the code. But in the real world you probably need to futz with a layout template or your _header.php to get it reorganized.
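For reference, the sorted order Capo nudges you toward has a rough shape like this sketch (the assets are made up; trust Capo’s own “Sorted” output over my memory of the exact category order):

```html
<head>
  <meta charset="utf-8">                                        <!-- pragma directives first -->
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>My Page</title>                                        <!-- title -->
  <link rel="preconnect" href="https://cdn.example.com">        <!-- preconnects -->
  <script async src="/analytics.js"></script>                   <!-- async scripts -->
  <script src="/app.js"></script>                               <!-- synchronous scripts -->
  <link rel="stylesheet" href="/styles.css">                    <!-- synchronous styles -->
  <link rel="preload" href="/font.woff2" as="font" crossorigin> <!-- preloads -->
  <script defer src="/widgets.js"></script>                     <!-- deferred scripts -->
  <link rel="prefetch" href="/next-page.html">                  <!-- prefetch/prerender -->
  <meta property="og:title" content="My Page">                  <!-- everything else -->
</head>
```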

Installing Capo.js takes about a minute, rearranging your <head> takes another minute. Honestly the longest part is making the Pull Request.

EDITOR INTERVENTION

[Chris busts through the door.]

OK fine Dave, I’ll give it a shot right here on Boost itself.

I installed the Chrome Extension and ran it and got this little popup:

"Before" sort order, scattered rectangles of various colors

At first I was a little confused, like this was some fancy code that Web Perf people immediately understand but I was out of the loop on. But actually it’s just a visualization of the order of things (top: actual, bottom: ideal). As a little UX feedback, it should say “Open your console for more information” because that’s where all the useful stuff is.

I found it most useful to look at the “Sorted” output (which is what you should be doing) and then try to get my source code to match that. I think I generally did OK:

"After" sort order, scattered rectangles of various colors, slightly less scattered than the previous image

I wasn’t able to get it perfect because of WordPress. A decent chunk of what goes into your <head> in WordPress comes from the output of the <?php wp_head(); ?> function. I’m sure it’s technically possible to re-order output in there, but that was more effort than I felt was worth it right at this minute.

Take your wins, that’s what I always say.

]]>
https://frontendmasters.com/blog/capo-js-a-five-minute-web-performance-boost/feed/ 7 1086