Exploring the Possibilities of Native JavaScript Decorators
Fri, 09 Aug 2024

We’ve known it for a while now, but JavaScript is eventually getting native support for decorators. The proposal is in stage 3 — it’s inevitable! I’m just coming around to explore the feature, and I’m kinda kicking myself for waiting so long, because I’m finding it to be tremendously helpful. Let’s spend some time exploring it.

The Pattern vs The Feature

It’s probably worth clarifying what’s meant by a “decorator.” Most of the time, people are talking about one of two things:

The decorator design pattern

This is the higher-level concept of augmenting or extending a function’s behavior by “decorating” it. Logging is a common example. You might want to know when and with what parameters it’s called, so you wrap it with another function:

function add(a, b) {
  return a + b;
}

function log(func) {
  return function (...args) {
    console.log(
      `method: ${func.name} | `,
      `arguments: ${[...args].join(", ")}`
    );
    return func.call(this, ...args);
  };
}

const addWithLogging = log(add);

addWithLogging(1, 2);
// method: add | arguments: 1, 2

There’s no new language-specific feature here. One function simply accepts another as an argument and returns a new, souped-up version. The original function has been decorated.

Decorators as a feature of the language

The decorator feature is a more tangible manifestation of the pattern. It’s possible you’ve seen an older, unofficial version of this before. We’ll keep using the logging example from above, but we’ll first need to refactor a bit, because language-level decorators can only be used on classes and their members (methods, fields, getters, and setters).

// The "old" decorator API:

function log(target, key, descriptor) {
  const originalMethod = descriptor.value;

  descriptor.value = function (...args) {
    console.log(
      `method: ${originalMethod.name} | `,
      `arguments: ${[...args].join(", ")}`
    );

    return originalMethod.apply(this, args);
  };

  return descriptor;
}

class Calculator {
  @log // <-- Decorator applied here.
  add(a, b) {
    return a + b;
  }
}

new Calculator().add(1, 2); // method: add | arguments: 1, 2

Despite being non-standard, there are a number of popular, mature libraries out there that have used this implementation. TypeORM, Angular, and NestJS are just a few of the big ones. And I’m glad they have. It’s made building applications with them feel cleaner, more expressive, and easier to maintain.

But because it’s non-standard, it could become problematic. For example, there’s some nuance between how it’s implemented by Babel and TypeScript, which probably caused frustration for engineers moving between applications with different build tooling. Standardization would serve them well.

The Slightly Different Official API

Fortunately, both TypeScript (as of v5) and Babel (via plugin) now support the TC39 version of the API, which is even simpler:

function log(func, context) {
  return function (...args) {
    console.log(
      `method: ${func.name} | `,
      `arguments: ${[...args].join(", ")}`
    );

    return func.call(this, ...args);
  };
}

class Calculator {
  @log
  add(a, b) {
    return a + b;
  }
}

new Calculator().add(1, 2); // method: add | arguments: 1, 2

As you can see, there’s much less of a learning curve, and it’s fully interchangeable with many functions that have been used as decorators until now. The only difference is that it’s implemented with new syntax.
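Since a standard decorator is just a function with that (value, context) signature, you can even exercise one directly by invoking it yourself with a mock context, the same way a transpiler would for the @ syntax. A quick sketch (the mock context below only carries the fields a real one would include, like kind and name):

```javascript
function log(func, context) {
  return function (...args) {
    console.log(
      `method: ${func.name} | `,
      `arguments: ${[...args].join(", ")}`
    );

    return func.call(this, ...args);
  };
}

function add(a, b) {
  return a + b;
}

// Apply the decorator manually, as a build tool would for `@log`.
// A real context object also includes `static`, `private`, `access`, etc.
const decoratedAdd = log(add, { kind: "method", name: "add" });

decoratedAdd(1, 2); // logs "method: add | arguments: 1, 2", returns 3
```

This is also a handy way to unit test decorators in runtimes that don’t support the syntax yet.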

Exploring the Use Cases

There’s no shortage of scenarios in which this feature will be handy, but let’s try out a couple that come to mind.

Debouncing & Throttling

Limiting the number of times an action occurs in a given amount of time is an age-old need on the web. Typically, that’s meant reaching for a Lodash utility or rolling an implementation yourself.

Think of a live search box. To prevent user experience issues and network load, you want to debounce those searches, only firing a request when the user has stopped typing for a period of time:

function debounce(func) {
  let timeout = null;

  return function (...args) {
    clearTimeout(timeout);

    timeout = setTimeout(() => {
      func.apply(this, args);
    }, 500);
  };
}

const debouncedSearch = debounce(search);

document.addEventListener('keyup', function(e) {
  // Will only fire after typing has stopped for 500ms.
  debouncedSearch(e.target.value);
});  

But decorators can only be used on a class or its members, so let’s flesh out a better example. You’ve got a ViewController class with a method for handling keyup events:

class ViewController {
  async handleSearch(query) {
    const results = await search(query);

    console.log(`Update UI with:`, results);
  }
}

const controller = new ViewController();

input.addEventListener('keyup', function (e) {
  controller.handleSearch(e.target.value);
});

Using the debounce() method we wrote above, implementation would be clunky. Focusing in on the ViewController class itself:

class ViewController {
  handleSearch = debounce(async function (query) {
    const results = await search(query);

    console.log(`Got results!`, results);
  });
}

You not only need to wrap your entire method, but you also need to switch from defining a class method to an instance property set to the debounced version of that method. It’s a little invasive.

Updating to a Native Decorator

Turning that debounce() function into an official decorator won’t take much. In fact, the way it’s already written fits the API perfectly: it accepts the original function and spits out the augmented version. So, all we need to do is apply it with the @ syntax:

class ViewController {
  @debounce
  async handleSearch(query) {
    const results = await search(query);

    console.log(`Got results!`, results);
  }
}

That’s all it takes — a single line — for the exact same result.

We can also make the debouncing delay configurable by making debounce() accept a delay value and return a decorator itself:

// Accept a delay:
function debounce(delay) {
  let timeout = null;

  // Return the configurable decorator:
  return function (value) {
    return function (...args) {
      clearTimeout(timeout);

      timeout = setTimeout(() => {
        value.call(this, ...args);
      }, delay);
    };
  };
}

Using it just means calling our decorator wrapper as a function and passing the value:

class ViewController {
  @debounce(500)
  async handleSearch(query) {
    const results = await search(query);

    console.log(`Got results!`, results);
  }
}

That’s a lot of value for minimal code wrangling, especially with support provided by TypeScript and Babel — tools already well-integrated in our build processes.

Memoization

Whenever I think of great memoization that’s syntactically beautiful, Ruby first comes to mind. I’ve written about how elegant it is in the past; the ||= operator is all you really need:

def results
  @results ||= calculate_results
end

But with decorators, JavaScript’s making solid strides. Here’s a simple implementation that caches the result of a method, and uses that value for any future invocations:

function memoize(func) {
  let cachedValue;
  // Track with a flag so falsy results (0, "", false) are cached too.
  let hasCachedValue = false;

  return function (...args) {
    // If it's been run before, return from cache.
    if (hasCachedValue) {
      return cachedValue;
    }

    cachedValue = func.call(this, ...args);
    hasCachedValue = true;

    return cachedValue;
  };
}

The nice thing about this is that each invocation of a decorator declares its own scope, meaning you can reuse it without risk of the cachedValue being overwritten with an unexpected value.

class Student {
  @memoize
  calculateGPA() {
    // Expensive computation...
    return 3.9;
  }

  @memoize
  calculateACT() {
    // Expensive computation...
    return 34;
  }
}

const bart = new Student();

bart.calculateGPA();
console.log(bart.calculateGPA()); // from cache: 3.9

bart.calculateACT();
console.log(bart.calculateACT()); // from cache: 34

Going further, we could also memoize based on the parameters passed to a method:

function memoize(func) {
  // A place for each distinct set of parameters.
  let cache = new Map();

  return function (...args) {
    const key = JSON.stringify(args);

    // This set of parameters has a cached value.
    if (cache.has(key)) {
      return cache.get(key);
    }

    const value = func.call(this, ...args);

    cache.set(key, value);

    return value;
  };
}

Now, regardless of parameter usage, memoization can become even more flexible:

class Student {
  @memoize
  calculateRank(otherGPAs) {
    const sorted = [...otherGPAs].sort((a, b) => b - a);

    for (let i = 0; i < sorted.length; i++) {
      if (this.calculateGPA() > sorted[i]) {
        return i + 1;
      }
    }

    return 1;
  }

  @memoize
  calculateGPA() {
    // Expensive computation...
    return 3.4;
  }
}

const bart = new Student();

bart.calculateRank([3.5, 3.7, 3.1]); // fresh
bart.calculateRank([3.5, 3.7, 3.1]); // cached
bart.calculateRank([3.5]); // fresh

That’s cool, but it’s also worth noting that you could run into issues if you’re dealing with parameters that can’t be serialized (undefined, objects with circular references, etc.). So, use it with some caution.
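To make that caveat concrete: JSON.stringify() turns undefined array elements into null, so distinct argument lists can collide on the same cache key, and circular structures throw outright. For example:

```javascript
const keyA = JSON.stringify([undefined]);
const keyB = JSON.stringify([null]);

// Both serialize to "[null]", so the cache can't tell these calls apart.
console.log(keyA, keyB);

// Circular references are worse: JSON.stringify() throws a TypeError.
const circular = {};
circular.self = circular;

try {
  JSON.stringify([circular]);
} catch (e) {
  console.log(`Unserializable arguments: ${e.name}`);
}
```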

Memoizing Getters

Since decorators can be used on more than just methods, a slight adjustment means we can memoize getters too. We just need to use context.name (the name of the getter) as the cache key:

function memoize(func, context) {
  let cache = new Map();

  return function () {
    if (cache.has(context.name)) {
      return cache.get(context.name);
    }

    const value = func.call(this);

    cache.set(context.name, value);

    return value;
  };
}

Implementation would look the same:

class Student {
  @memoize
  get gpa() {
    // Expensive computation...
    return 4.0;
  }
}

const milton = new Student();

milton.gpa // fresh
milton.gpa // from the cache

That context object contains some useful bits of information, by the way. One of those is the “kind” of field being decorated. That means we could even take this a step further by memoizing both getters and methods with the same decorator:

function memoize(func, context) {
  const cache = new Map();

  return function (...args) {
    const { kind, name } = context;

    // Use different cache key based on "kind."
    const cacheKey = kind === 'getter' ? name : JSON.stringify(args);

    if (cache.has(cacheKey)) {
      return cache.get(cacheKey);
    }

    const value = func.call(this, ...args);

    cache.set(cacheKey, value);

    return value;
  };
}
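Because the decorator is still just a plain function, we can sanity-check both branches without decorator syntax by applying it manually with mock context objects (the kind and name values below mirror what the engine would pass in):

```javascript
function memoize(func, context) {
  const cache = new Map();

  return function (...args) {
    const { kind, name } = context;

    // Use a different cache key based on "kind."
    const cacheKey = kind === "getter" ? name : JSON.stringify(args);

    if (cache.has(cacheKey)) {
      return cache.get(cacheKey);
    }

    const value = func.call(this, ...args);

    cache.set(cacheKey, value);

    return value;
  };
}

let calls = 0;

// Apply the decorator manually, as a transpiler would for `@memoize`.
const gpa = memoize(() => { calls++; return 4.0; }, { kind: "getter", name: "gpa" });
const rank = memoize((others) => { calls++; return others.length; }, { kind: "method", name: "rank" });

gpa();
gpa();            // cached: the underlying function only ran once
rank([3.5]);
rank([3.5]);      // cached
rank([3.5, 3.7]); // fresh: different arguments produce a different key

console.log(calls); // 3
```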

You could take this much further, but we’ll draw the line there for now, and instead shift to something a little more complex.

Dependency Injection

If you’ve worked with a framework like Laravel or Spring Boot, you’re familiar with dependency injection and the “inversion of control (IoC) container” for an application. It’s a useful feature, enabling you to write components more loosely coupled and easily testable. With native decorators, it’s possible to bring that core concept to vanilla JavaScript as well. No framework needed.

Let’s say we’re building an application needing to send messages to various third-parties. Triggering an email, sending an analytics event, firing a push notification, etc. Each of these are abstracted into their own service classes:

class EmailService {
  constructor() {
    this.emailKey = process.env.EMAIL_KEY;
  }
}

class AnalyticsService {
  constructor(analyticsKey) {
    this.analyticsKey = analyticsKey;
  }
}

class PushNotificationService {
  constructor() {
    this.pushNotificationKey = process.env.PUSH_NOTIFICATION_KEY;
  }
}

Without decorators, it’s not difficult to instantiate those yourself. It might look something like this:

class MyApp {
  constructor(
    emailService = new EmailService(),
    analyticsService = new AnalyticsService(),
    pushNotificationService = new PushNotificationService()
  ) {
    this.emailService = emailService;
    this.analyticsService = analyticsService;
    this.pushNotificationService = pushNotificationService;

    // Do stuff...
  }
}

const app = new MyApp();

But now you’ve cluttered your constructor with parameters that’ll never otherwise be used during runtime, and you’re taking on full responsibility for instantiating those classes. There are workable solutions out there (like relying on separate modules to create singletons), but it’s not ergonomically great. And as complexity grows, this approach will become more cumbersome, especially as you attempt to maintain testability and stick to good inversion of control.

Dependency Injection with Decorators

Now, let’s create a basic dependency injection mechanism with decorators. It’ll be in charge of registering dependencies, instantiating them when necessary, and storing references to them in a centralized container.

In a separate file (container.js), we’ll build a simple decorator used to register any classes we want to make available to the container.

const registry = new Map();

export function register(args = []) {
  return function (clazz) {
    registry.set(clazz, args);
  };
}

There’s not much to it. We’re accepting the class itself and optional constructor arguments needed to spin it up. Next up, we’ll create a container to hold the instances we create, as well as an inject() decorator.

export const container = new Map();

export function inject(clazz) {
  return function (_value, context) {
    context.addInitializer(function () {
      let instance = container.get(clazz);

      if (!instance) {
        instance = Reflect.construct(clazz, registry.get(clazz));
        container.set(clazz, instance);
      }

      this[context.name] = instance;
    });
  };
}

You’ll notice we’re using something else from the decorator specification. The addInitializer() method registers a callback that runs when an instance of the decorated class is constructed, after the decorated field has been defined. That means we’ll be able to lazily instantiate our injected dependencies, rather than booting up every registered class all at once. It’s a slight performance benefit. If a class declares an injected EmailService, for example, but that class is never actually instantiated, we won’t unnecessarily boot up an instance of EmailService either.

That said, here’s what’s going on when the decorator is invoked:

  • We check for any active instance of the class in our container.
  • If we don’t have one, we create one using the arguments stored in the registry, and store it in the container.
  • That instance is assigned to the name of the field we’ve decorated.
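Stripped of the decorator syntax, that same registry-and-container flow can be sketched with plain function calls, which makes the lazy singleton behavior easy to see. This is a simplified illustration, not the decorator version above:

```javascript
const registry = new Map();
const container = new Map();

// Record a class and the constructor arguments it should receive.
function register(clazz, args = []) {
  registry.set(clazz, args);
}

// Lazily build (or reuse) the singleton instance for a class.
function resolve(clazz) {
  let instance = container.get(clazz);

  if (!instance) {
    // Equivalent to `new clazz(...registry.get(clazz))`.
    instance = Reflect.construct(clazz, registry.get(clazz));
    container.set(clazz, instance);
  }

  return instance;
}

class AnalyticsService {
  constructor(analyticsKey) {
    this.analyticsKey = analyticsKey;
  }
}

register(AnalyticsService, ["key-123"]);

const a = resolve(AnalyticsService);
const b = resolve(AnalyticsService);

console.log(a === b);        // true: one shared instance
console.log(a.analyticsKey); // "key-123"
```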

Our application can now handle dependencies a little more elegantly.

import { register, inject } from "./container";

@register()
class EmailService {
  constructor() {
    this.emailKey = process.env.EMAIL_KEY;
  }
}
@register()
class AnalyticsService {
  constructor(analyticsKey) {
    this.analyticsKey = analyticsKey;
  }
}
@register()
class PushNotificationService {
  constructor() {
    this.pushNotificationKey = process.env.PUSH_NOTIFICATION_KEY;
  }
}

class MyApp {
  @inject(EmailService)
  emailService;

  @inject(AnalyticsService)
  analyticsService;

  @inject(PushNotificationService)
  pushNotificationService;

  constructor() {
    // Do stuff.
  }
}

const app = new MyApp();

And as an added benefit, it’s straightforward to substitute those classes for mock versions of them as well. Rather than overriding class properties, we can less invasively inject our own mock classes into the container before the class we’re testing is instantiated:

import { vi, it } from 'vitest';
import { container } from './container';
import { MyApp, EmailService } from './main';

it('does something', () => {
  const mockInstance = vi.fn();
  container.set(EmailService, mockInstance);

  const instance = new MyApp();
  
  // Test stuff.
});

That makes for less responsibility on us, tidy inversion of control, and straightforward testability. All made easy by a native feature.

Just Scratching the Surface

If you read through the proposal, you’ll see that the decorator specification is far deeper than what’s been explored here, and will certainly open up some novel use cases in the future, especially once more runtimes support it. But you don’t need to master the depths of the feature in order to benefit. At its foundation, the decorator feature is still firmly seated on the decorator pattern. If you keep that in mind, you’ll be in a strong position to greatly benefit from it in your own code.

Control JavaScript Promises from Anywhere Using Promise.withResolvers()
Wed, 05 Jun 2024

Promises in JavaScript have always had a firm grip on their own destiny. The point at which one resolves or rejects (or, more colloquially, “settles”) is up to the executor function provided when the promise is constructed. A simple example:

const promise = new Promise((resolve, reject) => {
  setTimeout(() => {
    if(Math.random() < 0.5) {
      resolve("Resolved!")      
    } else {
      reject("Rejected!");
    }
  }, 1000);
});

promise
  .then((resolvedValue) => {
    console.log(resolvedValue);
  })
  .catch((rejectedValue) => {
    console.error(rejectedValue);
  });

The design of this API impacts how we structure asynchronous code. If you’re using a promise, you need to be OK with it owning the execution of that code.

Most of the time, that model is fine. But occasionally, there are cases when it would be nice to control a promise remotely, resolving or rejecting it from outside the constructor. I was going to use “remote detonation” as a metaphor here, but hopefully your code is doing something less… destructive. So let’s go with this instead: you hired an accountant to do your taxes. They could follow you around, crunching numbers as you go about your day, and they let you know when they are finished. Or, they could do it all from their office across town and ping you with the results. The latter is what I’m getting at here.

Typically, this sort of thing has been accomplished by reassigning variables from an outer scope and then using them when needed. Building on that example from earlier, this is what that outer scope method is like:

let outerResolve;
let outerReject;

const promise = new Promise((resolve, reject) => {
  outerResolve = resolve;
  outerReject = reject;
});

// Settled from _outside_ the promise!
setTimeout(() => {
  if (Math.random() < 0.5) {
    outerResolve("Resolved!")      
  } else {
    outerReject("Rejected!");
  }
}, 1000);

promise
  .then((resolvedValue) => {
    console.log(resolvedValue);
  })
  .catch((rejectedValue) => {
    console.error(rejectedValue);
  });

It gets the job done, but it feels a little ergonomically off, particularly since we need to declare variables in a broader scope, only for them to be reassigned later on.

A More Flexible Way to Settle Promises

The new Promise.withResolvers() method makes remote promise settlement much more concise. The method returns an object with three properties: a function for resolving, a function for rejecting, and a fresh promise. Those properties can be easily destructured and made ready for action:

const { promise, resolve, reject } = Promise.withResolvers();

setTimeout(() => {
  if (Math.random() < 0.5) {
    resolve('Resolved!');
  } else {
    reject('Rejected!');
  }
}, 1000);

promise
  .then((resolvedValue) => {
    console.log(resolvedValue);
  })
  .catch((rejectedValue) => {
    console.error(rejectedValue);
  });

Since they come from the same object, the resolve() and reject() functions are bound to that particular promise, meaning they can be called wherever you like. You’re no longer tied to a constructor, and there’s no need to reassign variables from a different scope.

Exploring Some Examples

It’s a simple feature, but one that can breathe fresh air into how you design some of your asynchronous code. Let’s look at a few examples.

Slimming Down Promise Construction

Let’s say we’re triggering a job managed by a web worker for some resource-heavy processing. When a job begins, we want to represent it with a promise, and then handle the outcome based on its success. To determine that outcome, we’re listening for three events: message, error, and messageerror. Using a traditional promise, that’d mean wiring up something like this:

const worker = new Worker("/path/to/worker.js");

function triggerJob() {
  return new Promise((resolve, reject) => {
      worker.postMessage("begin job");
  
      worker.addEventListener('message', function (e) {
        resolve(e.data);
      });
  
      worker.addEventListener('error', function (e) {
         reject(e.data);
      });
  
      worker.addEventListener('messageerror', function(e) {
         reject(e.data);
      });
  });
}

triggerJob()
  .then((result) => {
    console.log("Success!");
  })
  .catch((reason) => {
    console.error("Failed!");
  });

That’ll work, but we’re stuffing a lot into the promise itself. The code becomes more laborious to read, and you’re bloating the responsibility of the triggerJob() function (there’s more than just “triggering” going on here).

But with Promise.withResolvers() we have more options for tidying this up:

const worker = new Worker("/path/to/worker.js");

function triggerJob() {
  worker.postMessage("begin job");
  
  return Promise.withResolvers();
}

function listenForCompletion({ resolve, reject, promise }) {
  worker.addEventListener('message', function (e) {
    resolve(e.data);
  });

  worker.addEventListener('error', function (e) {
     reject(e.data);
  });

  worker.addEventListener('messageerror', function(e) {
     reject(e.data);
  });
  
  return promise;
}

const job = triggerJob();

listenForCompletion(job)
  .then((result) => {
    console.log("Success!");
  })
  .catch((reason) => {
    console.error("Failed!");
  })

This time, triggerJob() really is just triggering the job, and there’s no constructor stuffing going on. Unit testing is likely easier too, since each function is narrower in purpose and has fewer side effects.

Waiting for User Action

This feature can also make handling user input more interesting. Let’s say we have a <dialog> prompting a user to review a new blog comment. When the user opens the dialog, “approve” and “reject” buttons appear. Without using any promises, handling those button clicks might look like this:

reviewButton.addEventListener('click', () => dialog.show());

rejectButton.addEventListener('click', () => {
  // handle rejection
  dialog.close();
});

approveButton.addEventListener('click', () => {
  // handle approval 
  dialog.close();
});

Again, it works. But we can centralize some of that event handling using a promise, while keeping our code relatively flat:

const { promise, resolve, reject } = Promise.withResolvers();

reviewButton.addEventListener('click', () => dialog.show());
rejectButton.addEventListener('click', reject);
approveButton.addEventListener('click', resolve);

promise
  .then(() => {
    // handle approval
  })
  .catch(() => {
    // handle rejection
  })
  .finally(() => {
    dialog.close();
  });


With this change, the handlers for the user’s actions don’t need to be sprinkled across multiple event listeners. They can be colocated more easily, and save a bit of duplicate code too, since we can place anything that needs to run for every action in a single .finally().

Reducing Function Nesting

Here’s one more example highlighting the subtle ergonomic benefit of this method. When debouncing an expensive function, it’s common to see everything self-contained to that single function. There’s usually no value being returned.

Think of a live search form. Both the request and UI updates are likely handled in the same invocation.

function debounce(func) {
  let timer;
  
  return function (...args) {
    clearTimeout(timer);
    
    timer = setTimeout(() => {
      func.apply(this, args);
    }, 1000);
  };
}

const debouncedHandleSearch = debounce(async function (query) {
  // Fetch data.
  const results = await search(query);
  
  // Update UI.
  updateResultsList(results);
});

input.addEventListener('keyup', function (e) {
  debouncedHandleSearch(e.target.value);
});

But you might have good reason to debounce only the asynchronous request, rather than lumping the UI updates in with it.

This means augmenting debounce() to return a promise that’d sometimes resolve to the result (when the request is permitted to go through). It’s not very different from the simpler timeout-based approach. We just need to make sure we properly resolve or reject a promise as well.

Prior to Promise.withResolvers() being available, the code would’ve looked very… layered:

function asyncDebounce(callback) {
  let timeout = null;
  let reject = null;

  return function (...args) {
    reject?.('rejected_pending');
    clearTimeout(timeout);
    
    return new Promise((res, rej) => {
      reject = rej;   
      
      timeout = setTimeout(() => {
        res(callback.apply(this, args));
      }, 500);
    });
  };
}

That’s a dizzying amount of function nesting. We have a function that returns a function, which constructs a promise accepting a function containing a timer, which takes another function. And only in that function can we call the resolver, finally invoking the function provided like 47 functions ago.

But now, we could streamline things at least a little bit:

function asyncDebounce(callback) {
  let timeout = null;
  let resolve, reject, promise;

  return function (...args) {
    reject?.('rejected_pending');
    clearTimeout(timeout);

    ({ promise, resolve, reject } = Promise.withResolvers());
    
    timeout = setTimeout(() => {
      resolve(callback.apply(this, args));
    }, 500);

    return promise;
  };
}

Updating the UI while discarding the rejected invocations could then look something like this:

input.addEventListener('keyup', async function (e) {
  try {
    const results = await debouncedSearch(e.target.value);

    appendResults(results);
  } catch (e) {
    // Discard exceptions from intentionally rejected
    // promises, but let everything else throw.
    if(e !== 'rejected_pending') {
      throw e;
    }
  }
});

And we’d get the same desired experience, without bundling everything up into a single void function.

It’s not a dramatic change, but one that smooths over some of the rough edges in accomplishing such a task.

A Tool for Keeping More Options Open

As you can see, there’s nothing conceptually groundbreaking introduced with this feature. Instead, it’s one of those “quality of life” improvements. Something to ease the occasional annoyance in architecting asynchronous code. Even so, I’m surprised by how frequently I’m beginning to see more use cases for this tool in my day-to-day, along with many of the other Promise properties introduced in the past few years.

If anything, I think it all verifies how foundational and valuable Promise-based, asynchronous development has become, whether it’s run in the browser or on a server. I’m eager to see how much we can continue to level-up the concept and its surrounding APIs in the future.
