Frontend Masters Boost RSS Feed https://frontendmasters.com/blog Helping Your Journey to Senior Developer Fri, 09 Jan 2026 13:24:48 +0000 en-US hourly 1 https://wordpress.org/?v=6.9 225069128

Beyond the Mouse: Animating with Mobile Accelerometers https://frontendmasters.com/blog/beyond-the-mouse-animating-with-mobile-accelerometers/ https://frontendmasters.com/blog/beyond-the-mouse-animating-with-mobile-accelerometers/#respond Fri, 09 Jan 2026 13:24:47 +0000 https://frontendmasters.com/blog/?p=8178

Adding user interactions is a powerful way to elevate a design, bringing an interface to life with subtle movements that follow the mouse and creating an effect that seemingly dances with the cursor.

I’ve done dozens of demos and written several articles exploring these exact types of effects, but one thing has always bothered me: the moment a user switches to a mobile device, that magic vanishes, leaving behind a static and uninspiring experience.

See: The Deep Card Conundrum

In a mobile-first world, we shouldn’t have to settle for these ‘frozen’ fallbacks. By leveraging the built-in accelerometers and motion sensors already in our users’ pockets, we can bridge this gap, breathing new life into our animations and creating a dynamic, tactile experience that moves with the user, literally.

A quick note before we jump in: while I usually recommend viewing my examples on a large desktop screen, the effects we are discussing today are purpose-built for mobile. So to see the magic in action, you’ll need to open these examples on a mobile device. A link to a full-page preview is provided below each demo.

Identifying the Environment

Before we dive into the code, let’s take the simple 3D example above, where the objects tilt and turn based on the cursor’s position. It creates a satisfying effect with a nice sense of depth on desktop, but on mobile it’s just a flat, lifeless image.

To bridge this gap, our code first needs to be smart enough to detect the environment, determine which interaction model to use, and switch between the mouse and the accelerometer in a reliable way.

While we could just check if the DeviceOrientationEvent exists, many modern laptops actually include accelerometers, which might lead our code to expect motion on a desktop. A more robust approach is to check for a combination of motion support and touch capabilities. This ensures that we only activate the motion logic on devices where it actually makes sense:

const supportsMotion = typeof window.DeviceMotionEvent !== 'undefined';
const isTouchDevice = 'ontouchstart' in window || navigator.maxTouchPoints > 0;

if (supportsMotion && isTouchDevice) {
  // Initialize mobile motion sensors
  initMotionExperience();
} else {
  // Fallback to desktop mouse tracking
  initMouseFollow();
}

By making this distinction, we can tailor the experience to the hardware. If we detect a mobile device, we move to our first real hurdle: getting the browser’s permission to actually access those sensors.

You might be tempted to use the User Agent to detect mobile devices, but that is a slippery slope. Modern browsers, especially on tablets, often masquerade as desktop versions. By checking for specific features like touch support and sensor availability instead, we ensure our code works on any device that supports the interaction, regardless of its model or brand.

The Gatekeeper: Handling Permissions

Now that we’ve identified we are on a mobile device, you might expect the sensors to start streaming data immediately. However, to protect user privacy, mobile browsers (led by iOS) now require explicit user consent before granting access to sensor data.

This creates a split in our implementation:

  • The “Strict” Environment (iOS): Access must be requested via a specific method, and this request must be triggered by a “user gesture” (like clicking a button).
  • The “Open” Environment (Android & Others): The data is often available immediately, but for consistency and future-proofing, we should treat the permission flow as a standard part of our logic.

The best way to handle this is to create a “Start” or “Enable Motion” interaction. This ensures that the user isn’t startled by sudden movements and satisfies the browser’s requirement for a gesture. Here is a clean way to handle the permission flow for both scenarios:

// call on user gesture
async function enableMotion() {
  // Check if the browser requires explicit permission (iOS 13+).
  // Since we listen for "devicemotion", we request permission on
  // DeviceMotionEvent (DeviceOrientationEvent exposes a matching method).
  if (typeof DeviceMotionEvent.requestPermission === "function") {
    try {
      const permissionState = await DeviceMotionEvent.requestPermission();

      if (permissionState === "granted") {
        window.addEventListener("devicemotion", handleMotion);
      } else {
        console.warn("Permission denied by user");
      }
    } catch (error) {
      console.error("DeviceMotion prompt failed", error);
    }
  } else {
    // Non-iOS devices or older browsers
    window.addEventListener("devicemotion", handleMotion);
  }
}

By wrapping the logic this way, your app stays robust. On Android, the event listener attaches immediately. On iOS, the browser pauses and presents the user with a system prompt. Once they click “Allow,” the magic begins.
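If you prefer to keep the branching condition out of the permission function, the check can be extracted into a small helper. Note that needsMotionPermission is a hypothetical name, not part of any browser API; the typeof guard also makes it safe to call in environments where the motion event constructors don’t exist at all:

```javascript
// Hypothetical helper: does this environment require an explicit
// permission prompt before delivering motion events (iOS 13+)?
function needsMotionPermission() {
  return (
    typeof DeviceMotionEvent !== "undefined" &&
    typeof DeviceMotionEvent.requestPermission === "function"
  );
}
```

On Android and on desktop browsers this simply returns false, so the calling code can attach its listener right away.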

Understanding Mobile Motion Sensors

Now that we know we’re on mobile and have the necessary permissions, we start receiving data. This information comes from a set of motion sensors found in almost every smartphone.

In the browser, these sensors are exposed through two main APIs: DeviceOrientation, which provides the absolute physical orientation of the device (its position in space), and DeviceMotion, which provides real-time data about the device’s acceleration and rotation.

For the first step, we want to focus on the movement itself, so we will start with the DeviceMotion API. This API provides us with two distinct types of data:

  • Linear Motion (Acceleration): This measures forces along the three primary axes: X, Y, and Z. It’s what detects when you are shaking the phone, dropping it, or walking. Within this property we can access three values (x, y, and z) that describe the acceleration along each specific axis.
  • Rotational Motion (Rotation Rate): This measures how fast the device is being tilted, flipped, or turned. This is where the magic happens for most UI effects, as it captures the “intent” of the user’s movement. The rotationRate property provides three values:
    • alpha: Rotation around the X axis, from front to back (tilting the top of the phone away from you).
    • beta: Rotation around the Y axis, from left to right (tilting the phone from side to side).
    • gamma: Rotation around the Z axis, perpendicular to the screen (spinning the phone on a table).

By listening to these rates of change, we can mirror the physical movement of the phone directly onto our digital interface, creating a responsive and tactile experience.

Mapping Motion to CSS Variables

Now that we are receiving a steady stream of data via our handleMotion function, it’s time to put it to work. The goal is to take the movement of the phone and map it to the same visual properties we used for the desktop version.

Inside the function, our first step is to capture the rotation data:

function handleMotion(event) {
  const rotation = event.rotationRate;
}

Now we can map the Alpha, Beta, and Gamma values to CSS variables that will rotate our rings.

In the desktop implementation, the rings respond to the mouse using two CSS variables: --rotateX and --rotateY. To support mobile, we can simply “piggyback” on these existing variables and add --rotateZ to handle the third dimension of movement.

Here is how the logic splits between the two worlds:

// Desktop: Mapping mouse position to rotation
window.addEventListener('mousemove', (event) => {
  rings.style.setProperty('--rotateX', `${event.clientY / window.innerHeight * -60 + 30}deg`);
  rings.style.setProperty('--rotateY', `${event.clientX / window.innerWidth * 60 - 30}deg`);
});

// Mobile: Mapping rotation rate to CSS variables
function handleMotion(event) {
  const rotation = event.rotationRate;
    
  // We multiply by 0.2 to dampen the effect for a smoother feel. 
  // A higher number will make the rotation more intense.
  // Notice that the Y-axis is multiplied by a negative number to align with physical movement.
  rings.style.setProperty('--rotateX', `${rotation.alpha * 0.2}deg`);
  rings.style.setProperty('--rotateY', `${rotation.beta * -0.2}deg`);
  rings.style.setProperty('--rotateZ', `${rotation.gamma * 0.2}deg`);
}

By multiplying the values by 0.2, we “calm down” the sensor’s sensitivity, creating a more professional and controlled animation. Feel free to experiment with this multiplier to find the intensity that fits your design.
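If you want this mapping to be testable outside the DOM, the math can be isolated into a pure function. The names here (rotationToVars, the 0.2 sensitivity, the 45deg clamp) are illustrative choices of mine, not part of the demo or any API:

```javascript
// Pure mapping from a rotationRate-like object to CSS variable values.
// `sensitivity` dampens the effect; `limit` clamps out extreme spikes.
function rotationToVars({ alpha = 0, beta = 0, gamma = 0 }, sensitivity = 0.2, limit = 45) {
  const clamp = (v) => Math.min(limit, Math.max(-limit, v));
  return {
    "--rotateX": `${clamp(alpha * sensitivity)}deg`,
    "--rotateY": `${clamp(beta * -sensitivity)}deg`,
    "--rotateZ": `${clamp(gamma * sensitivity)}deg`,
  };
}
```

Inside handleMotion you would then loop over the returned entries and call rings.style.setProperty for each, which keeps the sensor math and the DOM writes nicely separated.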

The final step is updating the CSS. Since --rotateX and --rotateY are already in use, we just need to add the Z-axis rotation:

.rings {
  position: relative;
  transform: 
    rotateX(var(--rotateX, 0deg)) 
    rotateY(var(--rotateY, 0deg)) 
    rotateZ(var(--rotateZ, 0deg));
  
  /* The transition is key for smoothing out the sensor data */
  transition: transform 0.4s ease-out;
}

Now that all the pieces are in place, we have a unified experience: elegant mouse-tracking on desktop and dynamic, motion-powered interaction on mobile.

(Demo above in a full page preview, for mobile.)

Adding Physical Depth with Acceleration

To take the effect even further, we can go beyond simple rotation. By using the acceleration property from the DeviceMotion event, we can make the object physically move across the screen as we move our hands.

Inside our handleMotion function, we’ll capture the acceleration data along the X, Y, and Z axes:

function handleMotion(event) {
  const rotation = event.rotationRate;
  const acceleration = event.acceleration;

  // Rotation logic (as before)
  rings.style.setProperty('--rotateX', `${rotation.alpha * 0.2}deg`);
  rings.style.setProperty('--rotateY', `${rotation.beta * -0.2}deg`);
  rings.style.setProperty('--rotateZ', `${rotation.gamma * 0.2}deg`);

  // Translation logic: moving the object in space
  rings.style.setProperty('--translateX', `${acceleration.x * -25}px`);
  rings.style.setProperty('--translateY', `${acceleration.y * 25}px`);
  rings.style.setProperty('--translateZ', `${acceleration.z * -25}px`);
}

By multiplying the acceleration by 25, we amplify the small movements of your hand into visible shifts on the screen.
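One caveat: raw acceleration data is noisy, and even a phone resting on a table reports small non-zero values. A common trick is a “dead zone” that treats tiny readings as zero before amplifying them. The helper below is a sketch; the name and the thresholds are mine, not from the demo:

```javascript
// Ignore readings below `threshold` (in m/s²), then amplify the rest.
function accelerationToPixels(value, scale = 25, threshold = 0.05) {
  if (Math.abs(value) < threshold) return 0;
  return value * scale;
}
```

With this in place, `--translateX` would be set from something like accelerationToPixels(acceleration.x * -1), and a stationary phone stays perfectly still instead of drifting.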

Finally, we update our CSS to include the translate property. Notice that we use a slightly longer transition for the translation (0.7s) than for the rotation (0.4s). This slight mismatch creates a “lag” effect that feels more organic and less mechanical:

.rings {
  position: relative;
  
  /* Applying both motion and rotation */
  translate: 
    var(--translateX, 0px) 
    var(--translateY, 0px) 
    var(--translateZ, 0px);
    
  transform: 
    rotateX(var(--rotateX, 0deg)) 
    rotateY(var(--rotateY, 0deg)) 
    rotateZ(var(--rotateZ, 0deg));
  
  /* Different speeds for different movements create a more fluid feel */
  transition: 
    translate 0.7s ease-out, 
    transform 0.4s ease-out;
}

With these additions, our rings now not only tilt and spin with the phone’s movement but also shift position in 3D space, creating a rich, immersive experience that feels alive and responsive.

(Demo above in a full page preview, for mobile.)

The “Wobble” Factor: Tilt vs. Movement

One key distinction to keep in mind is how the experience differs conceptually between devices. On desktop, we are tracking position. If you move your mouse to the corner and stop, the rings stay tilted. The effect is absolute.

On mobile, by using the DeviceMotion, we are tracking movement. If you tilt your phone and hold it still, the rings will float back to the center, because the speed of rotation is now zero. The rings only react while the device is in motion.

This difference stems naturally from the different ways we interact with a desktop versus a mobile device. In my experience, for most visual interactions, like card angles or parallax control, this “reactionary” behavior actually looks better. Despite the inconsistency with the desktop version, it simply feels more natural in the hand.

However, if your design strictly requires a static behavior where the element locks to the device’s angle (similar to the mouse position), that is not a problem. This is exactly what DeviceOrientation is for.

Using Device Orientation for Absolute Angles

Remember earlier when we mentioned DeviceOrientation provides the absolute physical orientation? This is the place to use it. First, in our setup and permission checks, we would switch from listening to devicemotion to deviceorientation.

window.addEventListener('deviceorientation', handleOrientation);

Then, inside our handler, the mapping changes:

function handleOrientation(event) {
  rings.style.setProperty('--rotateZ', `${event.alpha}deg`);
  rings.style.setProperty('--rotateX', `${event.beta}deg`);
  rings.style.setProperty('--rotateY', `${event.gamma * -1}deg`);
}

Pay close attention here: the mapping of Alpha, Beta, and Gamma to the X, Y, and Z axes is different in DeviceOrientation compared to what we used with DeviceMotion. (For what it’s worth, the specification defines the orientation angles exactly this way: alpha around Z, beta around X, and gamma around Y. It’s the rotation-rate behavior that tends to vary between browsers.)

  • Alpha maps to the Z-axis rotation.
  • Beta maps to the X-axis rotation.
  • Gamma maps to the Y-axis rotation (which we again multiply by -1 to align the movement with the physical world).

Here is a demo using DeviceOrientation where we track the absolute angle of the device, creating a behavior that more closely mimics the desktop mouse experience.

(Demo of the above in a full page preview, for mobile.)

If you want the object to start aligned with the screen regardless of how the user is holding their phone, you can capture a baseOrientation on the first event. This allows you to calculate the rotation relative to that initial position rather than the absolute world coordinates.

let baseOrientation = null;

function handleOrientation(event) {

  if (!baseOrientation) {
    baseOrientation = {
      alpha: event.alpha,
      beta: event.beta,
      gamma: event.gamma,
    };    
  }

  rings.style.setProperty('--rotateZ', `${event.alpha - baseOrientation.alpha}deg`);
  rings.style.setProperty('--rotateX', `${event.beta - baseOrientation.beta}deg`);
  rings.style.setProperty('--rotateY', `${(event.gamma - baseOrientation.gamma) * -1}deg`);
}
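One caveat with the subtraction above: alpha is reported in the 0–360° range, so if the baseline is captured near 0 and the device drifts just past 360, the raw difference suddenly jumps by a full turn. Normalizing each delta into the −180…180 range avoids that snap. Here, angleDelta is a hypothetical helper of mine, not part of the API:

```javascript
// Normalize the difference between two angles (in degrees) to -180..180,
// so crossing the 0/360 boundary never produces a full-turn jump.
function angleDelta(current, base) {
  return ((current - base + 540) % 360) - 180;
}
```

The --rotateZ line would then read `${angleDelta(event.alpha, baseOrientation.alpha)}deg`, and similarly for the other two axes.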

If you want to let the user re-center the view, you can easily reset the baseOrientation with a simple interaction:

rings.addEventListener('click', () => { baseOrientation = null; });

With this approach, you can create a mobile experience that feels both intuitive and consistent with your desktop design, all while leveraging the powerful capabilities of modern smartphones.

Demo above in a full page preview, for mobile. Please note that using absolute values can sometimes feel a bit jittery, so use it with caution.
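One way to tame that jitter is a simple low-pass (exponential smoothing) filter: instead of applying each reading directly, blend it with the previous smoothed value. A minimal sketch, with a made-up createSmoother helper:

```javascript
// factor is between 0 and 1: lower values smooth more
// but make the element respond more slowly.
function createSmoother(factor = 0.15) {
  let previous = null;
  return function smooth(next) {
    previous = previous === null ? next : previous + (next - previous) * factor;
    return previous;
  };
}
```

You would keep one smoother per axis and run each angle through it before writing the CSS variable, which pairs nicely with the CSS transition we already have.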

Going Further: The Cube Portal

Here is an example borrowed from my last article. In this case, the phone’s angles are not only used to rotate the outer cube, but also to determine the inner perspective and its origin:

.card {
  transform:
    rotateX(var(--rotateX, 0deg))
    rotateY(var(--rotateY, 0deg))
    rotateZ(var(--rotateZ, 0deg));
}

.card-content {
  perspective: calc(cos(var(--rotateX, 0)) * cos(var(--rotateY, 0)) * var(--perspective));
  perspective-origin:
    calc(50% - cos(var(--rotateX, 0)) * sin(var(--rotateY, 0)) * var(--perspective))
    calc(50% + sin(var(--rotateX, 0)) * var(--perspective));
}

(Demo above in a full page preview, for mobile.)

Final Thoughts: Breaking the Fourth Wall

It is easy to treat mobile screens as static canvases, skipping the rich interactivity we build for desktop simply because the mouse is missing. But the devices in our pockets are powerful, sophisticated tools aware of their physical place in the world. And we can use it.

By tapping into these sensors, we do more than just “fix” a missing hover state. We break the fourth wall. We turn a passive viewing experience into a tactile, physical interaction where the user doesn’t just watch the interface, but influences it.

The technology is there. The math is accessible. The only limit is our willingness to experiment. So next time you build a 3D effect or a parallax animation, don’t just disable it for mobile. Ask yourself: “How can I make this move?”

Go ahead, pick up your phone, and start tilting.


Bonus: Bringing the Mobile Feel to Desktop

We spent a lot of time discussing how to adapt mobile behavior to desktop standards, but there are cases where we might want the opposite: to have the desktop experience mimic the dynamic, movement-based nature of the mobile version.

To achieve this, instead of looking at the mouse’s position, we look at its movement.

If you want to implement this, it is quite straightforward. We simply define a lastMousePosition variable and use it to calculate the CSS variables based on the difference between frames:

let lastMousePosition = null;

function initMouseFollow() {

  window.addEventListener('mousemove', (e) => {
    // On the very first event there is no previous position yet,
    // so just record it and wait for the next move.
    if (!lastMousePosition) {
      lastMousePosition = { x: e.clientX, y: e.clientY };
      return;
    }

    rings.style.setProperty('--rotateX', `${(e.clientY - lastMousePosition.y) / window.innerHeight * -720}deg`);
    rings.style.setProperty('--rotateY', `${(e.clientX - lastMousePosition.x) / window.innerWidth * 720}deg`);
    
    lastMousePosition.x = e.clientX;
    lastMousePosition.y = e.clientY;
  });
}

This creates an effect on desktop that responds to the speed and direction of the mouse, rather than its specific location.

(Demo above in a full page preview, for mobile.)

RSCs https://frontendmasters.com/blog/rscs/ https://frontendmasters.com/blog/rscs/#respond Tue, 06 Jan 2026 22:28:58 +0000 https://frontendmasters.com/blog/?p=8185

Despite some not-great recent news about security vulnerabilities, React Server Components (RSCs) are likely in pretty high-volume use around the internet thanks to default usage within Next.js, perhaps without users even really knowing it. I enjoyed Nadia Makarevich’s performance-focused look at them in Bundle Size Investigation: A Step-by-Step Guide to Shrinking Your JavaScript. The how/when/why to take advantage of RSCs is not exactly crystal clear. Myself, I feel like I “basically get it”, but sometimes the more I read the more confused I get 🙃. Dan Abramov’s writing can be helpful.

Preserve State While Moving Elements in the DOM https://frontendmasters.com/blog/preserve-state-while-moving-elements-in-the-dom/ https://frontendmasters.com/blog/preserve-state-while-moving-elements-in-the-dom/#respond Wed, 31 Dec 2025 23:03:28 +0000 https://frontendmasters.com/blog/?p=8131

Bramus wrote this almost a year ago, but I’d still call it a relatively new feature of JavaScript and one very worth knowing about.

With Node.prototype.moveBefore you can move elements around a DOM tree, without resetting the element’s state.

You don’t need it to maintain event listeners, but, as Bramus notes, it’ll keep an iframe loaded, animations running, dialogs open, etc.

How I Write Custom Elements with lit-html https://frontendmasters.com/blog/custom-elements-with-lit-html/ https://frontendmasters.com/blog/custom-elements-with-lit-html/#comments Mon, 29 Dec 2025 14:11:35 +0000 https://frontendmasters.com/blog/?p=8102

When I started learning more about web development, or more specifically about front-end frameworks, I thought writing components was so much better and more maintainable than setting .innerHTML whenever you need to perform DOM operations. JSX felt like a great way to mix HTML, CSS, and JS in a single file, but I wanted a more vanilla JavaScript solution instead of having to install a JSX framework like React or Solid.

So I’ve decided to go with lit-html for writing my own components.

Why not use the entire lit package instead of just lit-html?

Honestly, I believe something like lit-html should be a part of vanilla JavaScript (maybe someday?). So by using lit-html, I basically pretend like it is already. It’s my go-to solution when I want to write HTML in JavaScript. For more solid reasons, you can refer to the following list:

  • Size difference. (This often does not really matter for most projects anyway.)
    • lit-html – 7.3 kb min, 3.1 kb min + gzip
    • lit – 15.8 kb min, 5.9 kb min + gzip
  • LitElement creates a shadow DOM by default. I don’t want to use the shadow DOM when creating my own components. I prefer to allow styling solutions like Tailwind to work instead of having to rely on solutions like CSS shadow parts to style my components. The light DOM can be nice.
  • import { html, render } from "lit-html" is all you need to get started to write lit-html templates whereas Lit requires you to learn about decorators to use most of its features. Sometimes you may want to use Lit directives if you need performant renders but it’s not necessary to make lit-html work on your project.

I will be showing two examples with what I consider to be two distinct methods to create a lit-html custom element. The first example will use what I call a “stateless render” because there won’t be any state parameters passed into the lit-html template. Usually this kind of component will only call the render method once during its lifecycle since there is no state to update. The second example will use a “stateful render” which calls the render function every time a state parameter changes.

Stateless Render

For my first example, the custom-element is a <textarea> wrapper that also has a status bar similar to Notepad++ that shows the length and lines of the content inside the <textarea>. The status bar will also display the position of the cursor and span of the selection if any characters are selected. Here is a picture of what it looks like for those readers that haven’t used Notepad++ before.

A screenshot of a text editor displaying an excerpt about Lorem Ipsum, highlighting the text in yellow and showing line and character counts.

I used a library called TLN (“Textarea with Line Numbers”) to make the aesthetic of the textarea feel more like Notepad++, similar to the library’s official demo. Since the base template has no state parameters, I’m using plain old JavaScript events to manually modify the DOM in response to changes within the textarea. I also used the render function again to display the updated status bar contents instead of using .innerHTML, to keep it consistent with the surrounding code.

Using lit-html to render stateless components like these is useful, but perhaps not taking full advantage of the power of lit-html. According to the official documentation:

When you call render, lit-html only updates the parts of the template that have changed since the last render. This makes lit-html updates very fast.

You may ask: “Why should you use lit-html in examples like this, where it won’t make that much of a difference performance-wise? The root render function is really only called once (or once every connectedCallback()) in the custom element’s lifecycle.”

My answer is that, yes, it’s not necessary if you just want rendering to the DOM to be fast. The main reason I use lit-html is that the syntax is so much nicer to me compared to setting HTML as raw strings. With vanilla JavaScript, you have to perform .createElement(), .append(), and .addEventListener() to create deeply nested HTML structures. Setting .innerHTML = `<large html structure>` is much better, but you still need to perform .querySelector() to look up the newly created HTML and add event listeners to it.

The @event syntax makes it much more clear where the event listener is located compared to the rest of the template. For example…

class MyElement extends LitElement {
  ...
  render() {
    return html`
      <p><button @click="${this._doSomething}">Click Me!</button></p>
    `;
  }
  _doSomething(e) {
    console.log("something");
  }
}

It also makes it much more apparent to me at first glance that event.currentTarget can only be the HTMLElement where you attached the listener, while event.target may be that same element or any of its children. The template also calls .removeEventListener() on its own when it is removed from the DOM, so that’s one less thing to worry about.

The Status Bar Area

Before I continue explaining the change events that make the status bar work, I would like to highlight one of the drawbacks of the “stateless render”: there isn’t really a neat way to render the initial state of HTML elements. I could add placeholder content for when the input is empty and no selection was made yet, but the render() function only appends the template to the given root. It doesn’t delete siblings within the root so the status bar text would end up being doubled. This could be fixed if I call an initial render somewhere in the custom element, similar to the render calls within the event listeners, but I’ve opted to omit that to keep the example simple.

The input change event is one of the more common change events. It’s straightforward to see that this will be the change event used to calculate and display the updated input length and the number of newlines that the input has.

I thought I would have a much harder time displaying the live status of selected text, but the selectionchange event provides everything I need to calculate the selection status within the textarea. This change event is relatively new, too, having only become part of Baseline in September 2024.

Since I’ve already highlighted the two main events driving the status bar, I’ll proceed to the next example.

Stateful Render

My second example is a <pokemon-card> custom-element. The pokemon card component will generate a random Pokémon from a specific pokemon TCG set. The specifications of the web component are as follows:

  • The placeholder will be this generic pokemon card back.
  • A Generate button that adds a new Pokémon card from the TCG set.
  • Left and right arrow buttons for navigation.
  • Text that shows the name and page of the currently displayed Pokémon.

In this example, only two other external libraries were used for the web component that weren’t related to lit and lit-html. I used shuffle from es-toolkit to make sure the array of cards is in a random order each time the component is instantiated. Though the shuffle function itself is likely small enough that you could just write your own implementation in the same file if you want to minimize dependencies.

I also wanted to mention es-toolkit in this article for readers that haven’t heard about it yet. I think it has a lot of useful utility functions so I included it in my example. According to their introduction, “es-toolkit is a modern JavaScript utility library that offers a collection of powerful functions for everyday use.” It’s a modern alternative to lodash, which used to be a staple utility library in every JavaScript project especially during the times before ES6 was released.

There are many ways to implement a random number generator or how to randomly choose an item from a list. I decided to just create a list of all possible choices, shuffle it, then use the pop method so that it’s guaranteed no card will get generated twice. The es-toolkit shuffle type documentation states that it “randomizes the order of elements in an array using the Fisher-Yates algorithm”.
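If you’d rather not pull in a dependency for this, a Fisher–Yates shuffle plus pop is only a few lines. This sketch (createDeck is my own name, not from the component) mirrors the approach described above:

```javascript
// Shuffle a copy of the cards (Fisher–Yates), then draw with pop()
// so no card can ever be produced twice.
function createDeck(cards) {
  const deck = [...cards];
  for (let i = deck.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [deck[i], deck[j]] = [deck[j], deck[i]];
  }
  return {
    draw: () => deck.pop(), // undefined once the deck is empty
    remaining: () => deck.length,
  };
}
```

Drawing from the shuffled copy leaves the original card list untouched, and the pop-based draw guarantees uniqueness without any bookkeeping.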

Handling State using Signals

Vanilla JavaScript doesn’t come with a state management solution. While LitElement’s property and state decorators do count as solutions, I want to utilize a solution that I consider should be a part of Vanilla JavaScript just as with lit-html. The state management solution for the component will be JavaScript Signals. Unlike lit-html, signals are already a Stage 1 Proposal so there is a slightly better chance it will become a standard part of the JavaScript specification within the next few years.

As you can see from the Stage 1 Proposal, explaining JavaScript Signals from scratch could take so long that it might as well be its own multi-part article series, so I will just give a rundown of how I used them in the <pokemon-card> custom element. If you’re interested in a quick explanation of what signals are, the creator of SolidJS, a popular framework that uses signals, explains their thoughts here.

Signals need an effect implementation to work, which is not part of the proposed signal API, since according to the proposal, it ties into “framework-specific state or strategies which JS does not have access to”. I will be copying and pasting the watcher code in the example despite the comments recommending otherwise; my components are too basic for any performance-related issues to show up anyway. I also used @lit-labs/signals to keep the component “lit themed”, but you can just use the recommended signal-polyfill directly too.

Signal Syntax

The syntax I used to create a signal state in my custom HTMLElement is as follows:

#visibleIndex = new Signal.State(0)

get visibleIndex() {
  return this.#visibleIndex.get()
}

set visibleIndex(value: number) {
  this.#visibleIndex.set(value)
}

There is a much more concise way to define the above example which involves auto accessors and decorators. Unfortunately, CodePen only supports TypeScript 4.1.3 as of writing, so I’ve opted to just use long-hand syntax in the example. An example of the accessor syntax involving signals is also shown in the signal-polyfill proposal.
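To demystify what the State wrapper above delegates to, here is a toy stand-in in plain JavaScript. This is emphatically not the TC39 Signal API (it has none of its dependency tracking); it only illustrates the get/set-with-notification pattern the accessors wrap:

```javascript
// Minimal stand-in for a state container: holds a value, notifies watchers.
class ToyState {
  #value;
  #watchers = new Set();
  constructor(initial) { this.#value = initial; }
  get() { return this.#value; }
  set(value) {
    if (value === this.#value) return; // no change, no notification
    this.#value = value;
    this.#watchers.forEach((fn) => fn(value));
  }
  watch(fn) { this.#watchers.add(fn); }
}
```

A real signal graph additionally tracks which computations read which states, which is exactly the part the proposal leaves to framework-level “effect” implementations.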

Card Component Extras

The Intersection Observer API was used to allow the user to navigate the card component via horizontal scroll bar while also properly updating the state of the current page being displayed.

There is also a keydown event handler that lets the user navigate between the cards via keyboard presses. Depending on the key being pressed, it calls either the handlePrev() or handleNext() method to perform the navigation.

Finally, while entirely optional, I also added a feature to the component that will preload the next card in JavaScript to improve loading times between generating new cards.

Default parameters: your code just got smarter https://frontendmasters.com/blog/default-parameters-your-code-just-got-smarter/ https://frontendmasters.com/blog/default-parameters-your-code-just-got-smarter/#respond Fri, 12 Dec 2025 15:22:11 +0000 https://frontendmasters.com/blog/?p=8036

Matt Smith with wonderfully straightforward writing on why default parameters for functions are a good idea. I like the tip where you can still do it with an object-style param.

function createUser({ name = 'Anonymous', age = 24 } = {}) {
  console.log(`${name} is ${age} years old.`);
}

createUser(); // Anonymous is 24 years old.

Stop Using CustomEvent https://frontendmasters.com/blog/stop-using-customevent/ https://frontendmasters.com/blog/stop-using-customevent/#respond Thu, 20 Nov 2025 00:03:05 +0000 https://frontendmasters.com/blog/?p=7809

A satisfying little rant from Justin Fagnani: Stop Using CustomEvent.

One point is that you’re forcing the consumer of the event to know that it’s custom and you have to get data out of the details property. Instead, you can subclass Event with new properties and the consumer of that event can pull that data right off the event itself.
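For illustration, a subclassed event might look like this. The names are hypothetical, but the pattern is the one the post advocates, and it runs anywhere Event and EventTarget exist (including Node 15+):

```javascript
// Subclass Event so consumers read data directly off the event object,
// instead of digging through CustomEvent's `detail` property.
class ItemSelectedEvent extends Event {
  constructor(item) {
    super("item-selected", { bubbles: true, composed: true });
    this.item = item; // typed, first-class property on the event itself
  }
}

// Dispatching looks the same as any other event:
// element.dispatchEvent(new ItemSelectedEvent({ id: 42 }));
```

Consumers then write `event.item` instead of `event.detail.item`, and an instanceof check gives them a real type guard.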

There are a lot of ways to break up long tasks in JavaScript. https://frontendmasters.com/blog/there-are-a-lot-of-ways-to-break-up-long-tasks-in-javascript/ https://frontendmasters.com/blog/there-are-a-lot-of-ways-to-break-up-long-tasks-in-javascript/#respond Mon, 17 Nov 2025 19:26:11 +0000 https://frontendmasters.com/blog/?p=7779

Alex MacArthur shows us there are a lot of ways to break up long tasks in JavaScript. Seven ways, in this post.

That’s a senior developer thing: knowing there are lots of different ways to do things, all with different trade-offs. Depending on what you need to do, you can home in on a solution.
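To give a flavor of what one of those approaches looks like, here's the classic chunk-and-yield pattern: process the work in slices and await a resolved timeout between them, so the main thread can handle input and rendering. This is a sketch of just one technique; see the post for the rest and their trade-offs.

```typescript
// Break a long task into chunks and yield to the event loop between them,
// so user input and rendering aren't blocked for the whole duration.
const yieldToMain = () => new Promise<void>(resolve => setTimeout(resolve, 0));

async function processInChunks<T>(
  items: T[],
  work: (item: T) => void,
  chunkSize = 100,
): Promise<void> {
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      work(item);
    }
    await yieldToMain(); // other tasks get a turn here
  }
}
```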

]]>
https://frontendmasters.com/blog/there-are-a-lot-of-ways-to-break-up-long-tasks-in-javascript/feed/ 0 7779
Introducing TanStack Start Middleware https://frontendmasters.com/blog/introducing-tanstack-start-middleware/ https://frontendmasters.com/blog/introducing-tanstack-start-middleware/#comments Fri, 24 Oct 2025 18:59:02 +0000 https://frontendmasters.com/blog/?p=7452 TanStack Start is one of the most exciting full-stack web development frameworks I’ve seen. I’ve written about it before.

In essence, TanStack Start takes TanStack Router, a superb, strongly-typed client-side JavaScript framework, and adds server-side support. This serves two purposes: it gives you a place to execute server-side code, like database access; and it enables server-side rendering, or SSR.

This post is all about one particular, especially powerful feature of TanStack Start: Middleware.

The elevator pitch for Middleware is that it lets you run code in conjunction with your server-side operations: on both the client and the server, both before and after your underlying server-side action. It can even pass data between the client and server.

This post will be a gentle introduction to Middleware. We’ll build some very rudimentary observability for a toy app. Then, in a future post, we’ll really see what Middleware can do when we use it to achieve single-flight mutations.

Why SSR?

SSR will usually improve LCP (Largest Contentful Paint) render performance compared to a client-rendered SPA. With SPAs, the server usually sends down an empty shell of a page. The browser then parses the script files, and fetches your application components. Those components then render and, usually, request some data. Only then can you render actual content for your user.

These round trips are neither free nor fast; SSR allows you to send the initial content down directly, with the initial request, so the user can see it immediately, without needing those extra round trips. See the post above for some deeper details; this post is all about Middleware.

Prelude: Server Functions

Any full-stack web application will need a place to execute code on the server. It could be for a database query, to update data, or to validate a user against your authentication solution. Server functions are the main mechanism TanStack Start provides for this purpose, and are documented here. The quick introduction is that you can write code like this:

import { createServerFn } from "@tanstack/react-start";

export const getServerTime = createServerFn().handler(async () => {
  await new Promise(resolve => setTimeout(resolve, 1000));
  return new Date().toISOString();
});

Then you can call that function from anywhere (client or server), to get a value computed on the server. If you call it from the server, it will just execute the code. If you call that function from the browser, TanStack will handle making a network request to an internal URL containing that server function.

Getting Started

All of my prior posts on TanStack Start and Router used the same contrived Jira clone, and this one will be no different. The repo is here, but the underlying code is the same. If you want to follow along, you can npm i, then npm run dev, and open the relevant portion of the app at http://localhost:3000/app/epics?page=1.

The epics section of this app uses server functions for all data and updates. We have an overview showing:

  • A count of all tasks associated with each individual epic (for those that contain tasks).
  • A total count of all epics in the system.
  • A pageable list of individual epics which the user can view and edit.
A web application displaying an epics overview with a list of projects, their completion status, and navigation buttons.
This is a contrived example. It’s just to give us a few different data sources along with mutations.

Our Middleware Use Case

We’ll explore middleware by building a rudimentary observability system for our Jira-like app.

What is observability? If you think of basic logging as a caterpillar, observability would be the beautiful butterfly it matures into. Observability is about setting up systems that allow you to holistically observe how your application is behaving. High-level actions are assigned a globally unique trace id, and the pieces of work that action performs are logged against that same trace id. Then your observability system will allow you to intelligently introspect that data, and discover where your problems or weaknesses are.

I’m no observability expert, so if you’d like to learn more, Charity Majors co-authored a superb book on this very topic. She’s the co-founder of Honeycomb IO, a mature observability platform.

We won’t be building a mature observability platform here; we’ll be putting together some rudimentary logging with trace ids. What we’ll build is not suitable for use in a production software system, but it will be a great way to explore TanStack Start’s Middleware.
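Concretely, each entry in our log will have roughly this shape. This is a TypeScript sketch of the columns; the field names match the addLog calls later in the post.

```typescript
// The shape of one row in our rudimentary action log.
type ActionLogRow = {
  id: string;             // unique id for this log entry
  traceId: string;        // shared by every entry belonging to one high-level action
  actionName: string;     // e.g. "update epic"
  clientStart: string;    // ISO timestamp recorded on the client, before the round trip
  clientEnd: string;      // filled in by the client after everything completes
  actionDuration: number; // milliseconds spent in the server-side work
};
```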

Our First Server Function

This is a post about Middleware, which is applied to server functions. Let’s take a very quick look at one:

export const getEpicsList = createServerFn({ method: "GET" })
  .inputValidator((page: number) => page)
  .handler(async ({ data }) => {
    const epics = await db
      .select()
      .from(epicsTable)
      .offset((data - 1) * 4)
      .limit(4);
    return epics;
  });

This is a simple server function to query our epics. We configure it to use the GET HTTP verb. We specify, and potentially validate, our input, and then the handler function runs our actual code, which is just a basic query against our SQLite database. This particular code uses Drizzle for data access, but you can of course use whatever you want.

Server functions by definition always run on the server, so you can do things like connect to a database, access secrets, etc.

Our First Middleware

Let’s add some empty middleware so we can see what it looks like.

import { createMiddleware } from "@tanstack/react-start";

export const middlewareDemo = createMiddleware({ type: "function" })
  .client(async ({ next, context }) => {
    console.log("client before");

    const result = await next({
      sendContext: {
        hello: "world",
      },
    });

    console.log("client after", result.context);

    return result;
  })
  .server(async ({ next, context }) => {
    console.log("server before", context);

    await new Promise(resolve => setTimeout(resolve, 1000));

    const result = await next({
      sendContext: {
        value: 12,
      },
    });

    console.log("server after", context);

    return result;
  });

Let’s step through it.

export const middlewareDemo = createMiddleware({ type: "function" });

This declares the middleware. type: "function" means that this middleware is intended to run against server “functions” – there’s also “request” middleware, which can run against either server functions, or server routes (server routes are what other frameworks sometimes call “API routes”). But “function” middleware has some additional powers, which is why we’re using them here.

.client(async ({ next, context }) => {

This allows us to run code on the client. Note the arguments: next is how we tell TanStack to proceed with the rest of the middlewares in our chain, as well as the underlying server function this middleware is attached to. And context holds the mutable “context” of the middleware chain.

console.log("client before");

const result = await next({
  sendContext: {
    hello: "world",
  },
});

console.log("client after", result.context);

We do some logging, then tell TanStack to run the underlying server function (as well as any other middlewares we have in the chain), and then, after everything has run, we log again.

Note the sendContext we pass into the call to next:

sendContext: {
  hello: "world",
},

This allows us to pass data from the client, up to the server. Now this hello property will be in the context object on the server.

And of course don’t forget to return the actual result.

return result;

You can return next(), but separating the call to next with the return statement allows you to do additional work after the call chain is finished: modify context, perform logging, etc.

And now we essentially restart the same process on the server.

  .server(async ({ next, context }) => {
    console.log("server before", context);

    await new Promise(resolve => setTimeout(resolve, 1000));

    const result = await next({
      sendContext: {
        value: 12
      }
    });

    console.log("server after", context);

    return result;
  });

We do some logging and inject an artificial delay of one second to simulate work. Then, as before, we call next() which triggers the underlying server function (as well as any other Middleware in the chain), and then return the result.

Note again the sendContext.

const result = await next({
  sendContext: {
    value: 12,
  },
});

This allows us to send data from the server back down to the client.

Let’s Run It

We’ll add this middleware to the server function we just saw.

export const getEpicsList = createServerFn({ method: "GET" })
  .inputValidator((page: number) => page)
  .middleware([middlewareDemo])
  .handler(async ({ data }) => {
    const epics = await db
      .select()
      .from(epicsTable)
      .offset((data - 1) * 4)
      .limit(4);
    return epics;
  });

When we run it, this is what the browser’s console shows:

client before
client after {value: 12}

There’s a one second delay before the final client log; that’s the time execution spent on the server, thanks to the artificial delay we added.

Nothing too shocking. The client logs, then sends execution to the server, and then logs again with whatever context came back. Note that we read result.context to get what the server sent back, rather than the context argument that was passed to the client callback. This makes sense: that context was created before the server was ever invoked with the next() call, so there’s no way for it to mutably update based on whatever the server later returns.

The Server

Now let’s see what the server console shows.

server before { hello: 'world' }
server after { hello: 'world' }

Nothing too interesting here, either. As we can see, the server’s context argument does in fact contain what was sent to it from the client.

When Client Middleware Runs on the Server

Don’t forget, TanStack Start will server render your initial path by default. So what happens when a server function executes as part of that process, with Middleware? How can the client middleware possibly run when there’s no client in existence yet, only a request currently being executed on the server?

During SSR, client Middleware will run on the server. This makes sense: whatever functionality you’re building will still work, but the client portion of it will run on the server. So be sure not to use any browser-only APIs like localStorage.

Let’s see this in action, but during the SSR run. The prior logs I showed were the result of browsing to a page via navigation. Now I’ll just refresh that page, and show the server logs.

client before
server before { hello: 'world' }
server after { hello: 'world' }
client after { value: 12 }

This is the same as before, but now the server and client logs appear together, since this code all runs during the server render phase. The server function is called from the server while it generates the HTML to send down for the initial render. And as before, there’s a one second delay while the server is working.

Building Real Middleware

Let’s build some actual logging Middleware with an observability flair. If you want to look at real observability tooling, please check out the book I mentioned above, or a mature platform like Honeycomb. But our focus here is TanStack Middleware, not robust observability.

The Client

Let’s start our Middleware with our client section. It will record the local time that this Middleware began. This will allow us to measure the total end-to-end time that our action took, including server latency.

export const loggingMiddleware = (name: string) =>
  createMiddleware({ type: "function" })
    .client(async ({ next, context }) => {
      console.log("middleware for", name, "client", context);

      const clientStart = new Date().toISOString();

Now let’s call the rest of our Middleware chain and our server function.

const result = await next({
  sendContext: {
    clientStart,
  },
});

Once the await next completes, we know that everything has finished on the server, and we’re back on the client. Let’s grab the date and time that everything finished, as well as a logging id that was sent back from the server. With that in hand, we’ll call setClientEnd, which is just a simple server function to update the relevant row in our log table with the clientEnd time.

const clientEnd = new Date().toISOString();
const loggingId = result.context.loggingId;

await setClientEnd({ data: { id: loggingId, clientEnd } });

return result;

For completeness, that server function looks like this:

export const setClientEnd = createServerFn({ method: "POST" })
  .inputValidator((payload: { id: string; clientEnd: string }) => payload)
  .handler(async ({ data }) => {
    await db.update(actionLog).set({ clientEnd: data.clientEnd }).where(eq(actionLog.id, data.id));
  });

The Server

Let’s look at our server handler.

    .server(async ({ next, context }) => {
      const traceId = crypto.randomUUID();

      const start = +new Date();

      const result = await next({
        sendContext: {
          loggingId: "" as string
        }
      });

We start by creating a traceId. This is the single identifier that represents the entirety of the action the user is performing; it’s not a log id. In fact, for real observability systems, there will be many, many log entries against a single traceId, representing all the sub-steps involved in that action.

For now, there’ll just be a single log entry, but in a bit we’ll have some fun and go a little further.

Once we have the traceId, we note the start time, and then we call await next to finish our work on the server. We add a loggingId to the context we’ll be sending back down to the client. It’ll use this to update the log entry with the clientEnd time, so we can see the total end-to-end network time.

const end = +new Date();

const id = await addLog({
  data: { actionName: name, clientStart: context.clientStart, traceId: traceId, duration: end - start },
});
result.sendContext.loggingId = id;

return result;

Next we get the end time after the work has completed. We add a log entry, and then we update the context we’re sending back down to the client (the sendContext object) with the correct loggingId. Recall that the client callback used this to add the clientEnd time.

And then we return the result, which then finishes the processing on the server, and allows control to return to the client.

The addLog function is pretty boring; it just inserts a row in our log table with Drizzle.

export const addLog = createServerFn({ method: "POST" })
  .inputValidator((payload: AddLogPayload) => payload)
  .handler(async ({ data }) => {
    const { actionName, clientStart, traceId, duration } = data;

    const id = crypto.randomUUID();
    await db.insert(actionLog).values({
      id,
      traceId,
      clientStart,
      clientEnd: "",
      actionName,
      actionDuration: duration,
    });

    return id as string;
  });

The value of clientEnd is empty, initially, since the client callback will fill that in.

Let’s run our Middleware. We’ll add it to a serverFn that updates an epic.

export const updateEpic = createServerFn({ method: "POST" })
  .middleware([loggingMiddleware("update epic")])
  .inputValidator((obj: { id: number; name: string }) => obj)
  .handler(async ({ data }) => {
    await new Promise(resolve => setTimeout(resolve, 1000 * Math.random()));

    await db.update(epicsTable)
      .set({ name: data.name })
      .where(eq(epicsTable.id, data.id));
  });

And when this executes, we can see our logs!

A database logging table displaying columns for id, trace_id, client_start, client_end, action_name, and action_duration, with several entries showing recorded data.

The Problem

There’s one small problem: we have a TypeScript error.

Here’s the entire middleware, with the TypeScript error pasted as a comment above the offending line:

import { createMiddleware } from "@tanstack/react-start";
import { addLog, setClientEnd } from "./logging";

export const loggingMiddleware = (name: string) =>
  createMiddleware({ type: "function" })
    .client(async ({ next, context }) => {
      console.log("middleware for", name, "client", context);

      const clientStart = new Date().toISOString();

      const result = await next({
        sendContext: {
          clientStart,
        },
      });

      const clientEnd = new Date().toISOString();
      // ERROR: 'result.context' is possibly 'undefined'
      const loggingId = result.context.loggingId;

      await setClientEnd({ data: { id: loggingId, clientEnd } });

      return result;
    })
    .server(async ({ next, context }) => {
      const traceId = crypto.randomUUID();

      const start = +new Date();

      const result = await next({
        sendContext: {
          loggingId: "" as string,
        },
      });

      const end = +new Date();

      const id = await addLog({
        data: { actionName: name, clientStart: context.clientStart, traceId: traceId, duration: end - start },
      });
      result.sendContext.loggingId = id;

      return result;
    });

Why does TypeScript dislike this line?

We read that property on the client, after we call await next. Our server does in fact add a loggingId to its sendContext object. And at runtime it’s there: the value is logged.

The problem is a technical one. Our server callback can see the things the client callback added to sendContext. But the client callback is not able to “look ahead” and see what the server callback added to its sendContext object. The solution is to split the Middleware up.

Here’s a version 2 of the same Middleware. I’ve added it to a new loggingMiddlewareV2.ts module.

I’ll post the entirety of it below, but it’s the same code as before, except all the stuff in the .client handler after the call to await next has been moved to a second Middleware. This new, second Middleware, which only contains the second half of the .client callback, then takes the other Middleware as its own Middleware input.

Here’s the code:

import { createMiddleware } from "@tanstack/react-start";
import { addLog, setClientEnd } from "./logging";

const loggingMiddlewarePre = (name: string) =>
  createMiddleware({ type: "function" })
    .client(async ({ next, context }) => {
      console.log("middleware for", name, "client", context);

      const clientStart = new Date().toISOString();

      const result = await next({
        sendContext: {
          clientStart,
        },
      });

      return result;
    })
    .server(async ({ next, context }) => {
      const traceId = crypto.randomUUID();

      const start = +new Date();

      const result = await next({
        sendContext: {
          loggingId: "" as string,
        },
      });

      const end = +new Date();

      const id = await addLog({
        data: { actionName: name, clientStart: context.clientStart, traceId: traceId, duration: end - start },
      });
      result.sendContext.loggingId = id;

      return result;
    });

export const loggingMiddleware = (name: string) =>
  createMiddleware({ type: "function" })
    .middleware([loggingMiddlewarePre(name)])
    .client(async ({ next }) => {
      const result = await next();

      const clientEnd = new Date().toISOString();
      const loggingId = result.context.loggingId;

      await setClientEnd({ data: { id: loggingId, clientEnd } });

      return result;
    });

We export that second Middleware. It takes the first one as its own middleware, which runs everything as before. But now, when the .client callback calls await next, it knows what’s in the resulting context object, because the first Middleware is an input to this one, so its typings are visible.

Going Deeper

We could end the post here. I don’t have anything new to show with respect to TanStack Start. But let’s make our observability system just a little bit more realistic, and in the process see a cool Node feature that’s not talked about enough, and also has the distinction of being the worst named API in software engineering history: AsyncLocalStorage.

You’d be forgiven for thinking AsyncLocalStorage was some kind of async version of your browser’s localStorage. But no: it’s a way to set and maintain context for the entirety of an async operation in Node.

When Server Functions Call Server Functions

Let’s imagine our updateEpic server function also wants to read the epic it just updated. It does this by calling the getEpic serverFn. So far so good, but if our getEpic serverFn also has logging Middleware configured, we really would want it to use the traceId we already created, rather than create its own.

Think about React context: it allows you to put arbitrary state onto an object that can be read by any component in the tree. Well, Node’s asyncLocalStorage allows this same kind of thing, except instead of being read anywhere inside of a component tree, the state we set can be read anywhere within the current async operation. This is exactly what we need.
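Here's a standalone sketch of that mechanism, outside of any middleware: the inner function reads a traceId from the ambient store without it being passed as an argument.

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// State set via run() is visible anywhere inside that async operation,
// with no explicit argument-passing; this is what our middleware relies on.
const als = new AsyncLocalStorage<{ traceId: string }>();

async function innerOperation(): Promise<string | undefined> {
  await new Promise(resolve => setTimeout(resolve, 10)); // simulate async work
  return als.getStore()?.traceId; // reads the ambient context
}

function outerOperation(): Promise<string | undefined> {
  // Everything called inside run() sees { traceId } via getStore()
  return als.run({ traceId: "trace-abc" }, () => innerOperation());
}
```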

Note that TanStack Start did have a getContext / setContext pair of APIs in an earlier beta version, which maintained state for the entire current request, but they were removed. If they wind up being re-added at some point (possibly with a different name), you can just use them.

Let’s start by importing AsyncLocalStorage, and creating an instance.

import { AsyncLocalStorage } from "node:async_hooks";

const asyncLocalStorage = new AsyncLocalStorage();

Now let’s create a function for reading the traceId that some middleware higher up in our call stack might have added:

function getExistingTraceId() {
  const store = asyncLocalStorage.getStore() as { traceId?: string } | undefined;
  return store?.traceId;
}

All that’s left is to read the traceId that was possibly set already, and if none was set, create one. And then, crucially, use asyncLocalStorage to set our traceId for any other Middleware that will be called during our operation.

    .server(async ({ next, context }) => {
      const priorTraceId = getExistingTraceId();
      const traceId = priorTraceId ?? crypto.randomUUID();

      const start = +new Date();

      const result = await asyncLocalStorage.run({ traceId }, async () => {
        return await next({
          sendContext: {
            loggingId: "" as string
          }
        });
      });

The magic line is this:

const result = await asyncLocalStorage.run({ traceId }, async () => {
  return await next({
    sendContext: {
      loggingId: "" as string,
    },
  });
});

Our call to next is wrapped in asyncLocalStorage.run, which means virtually anything that gets called in there can see the traceId we set. There are a few exceptions at the margins, for things like WorkerThreads. But any normal async operations which happen inside of the run callback will see the traceId we set.

The rest of the Middleware is the same, and I’ve saved it in a loggingMiddlewareV3 module. Let’s take it for a spin. First, we’ll add it to our getEpic serverFn.

export const getEpic = createServerFn({ method: "GET" })
  .middleware([loggingMiddlewareV3("get epic")])
  .inputValidator((id: string | number) => Number(id))
  .handler(async ({ data }) => {
    const epic = await db.select().from(epicsTable).where(eq(epicsTable.id, data));
    return epic[0];
  });

Now let’s add it to updateEpic, and update it to also call our getEpic server function.

export const updateEpic = createServerFn({ method: "POST" })
  .middleware([loggingMiddlewareV3("update epic")])
  .inputValidator((obj: { id: number; name: string }) => obj)
  .handler(async ({ data }) => {
    await new Promise(resolve => setTimeout(resolve, 1000 * Math.random()));
    await db.update(epicsTable).set({ name: data.name }).where(eq(epicsTable.id, data.id));

    const updatedEpic = await getEpic({ data: data.id });
    return updatedEpic;
  });

Our server function now updates our epic, and then calls the other serverFn to read the newly updated epic.

Let’s clear our logging table, then give it a run. I’ll edit and save an individual epic. Opening the log table now shows this:

A screenshot of a database table displaying log entries with columns for id, trace_id, client_start, client_end, action_name, and action_duration.

Note there are three log entries. In order to edit the epic, the UI first reads it; that’s the first entry. Then the update happens, and then the second read, from the updateEpic serverFn. Crucially, notice how the last two rows, the update and the last read, share the same traceId!

Our “observability” system is pretty basic right now. The clientStart and clientEnd probably don’t make much sense for these secondary actions that are all fired off from the server, since there’s not really any end-to-end latency. A real observability system would likely have separate, isolated rows just for client-to-server latency measures. But combining everything together made it easier to put something simple together, and showing off TanStack Start Middleware was the goal, not creating a real observability system.

Besides, we’ve now seen all the pieces you’d need if you wanted to actually build this into something more realistic: TanStack’s Middleware gives you everything you need to do anything you can imagine.

Parting Thoughts

We’ve barely scratched the surface of Middleware. Stay tuned for a future post where we’ll push middleware to its limit and achieve single-flight mutations.

]]>
https://frontendmasters.com/blog/introducing-tanstack-start-middleware/feed/ 1 7452
For Your Convenience, This CSS Will Self-Destruct https://frontendmasters.com/blog/for-your-convenience-this-css-will-self-destruct/ https://frontendmasters.com/blog/for-your-convenience-this-css-will-self-destruct/#respond Wed, 22 Oct 2025 22:53:33 +0000 https://frontendmasters.com/blog/?p=7492 In A Progressive Enhancement Challenge, I laid out a situation where the hardest thing to do is show a button you never want to show at all if the JavaScript loads and executes properly. I wrote of this state:

It seems like the ideal behavior would be “hide the interactive element for a brief period, then if the relevant JavaScript isn’t ready, show the element.” But how?! We can’t count on JavaScript for this behavior, which is the only technology I’m aware of that could do it. Rock and a hard place!

Scott Jehl blogged For Your Convenience, This CSS Will Self-Destruct, including an idea that fits the bill. It’s a @keyframes animation that hides the element by default, then fades it in after 2s. With this in place, your JavaScript includes a bit that ensures the button stays hidden, via a new class. That’s a win!
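The core of the technique looks something like this (selector names and timing here are illustrative, not copied from Scott's post):

```css
/* Hidden by default; a delayed keyframe animation reveals the button
   after 2s, unless the JavaScript got there first. */
.fallback-button {
  opacity: 0;
  visibility: hidden;
  animation: self-destruct-reveal 0.3s ease-out 2s forwards;
}

@keyframes self-destruct-reveal {
  to {
    opacity: 1;
    visibility: visible;
  }
}

/* Added by the JavaScript once the fancy behavior is ready,
   keeping the fallback hidden permanently. */
.fallback-button.js-ready {
  animation: none;
}
```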

… a site’s JavaScript files can often take many seconds to load and execute, so it’s great to have something like this ready to bail out of anything fancy in favor of giving them something usable as soon as possible.

]]>
https://frontendmasters.com/blog/for-your-convenience-this-css-will-self-destruct/feed/ 0 7492
Browser Speech Input & Output Buttons https://frontendmasters.com/blog/browser-speech-input-output-buttons/ https://frontendmasters.com/blog/browser-speech-input-output-buttons/#respond Tue, 21 Oct 2025 03:15:38 +0000 https://frontendmasters.com/blog/?p=7467 All sorts of inputs have little microphone buttons within them that you can press to talk instead of type. Honestly, I worry my daughter will never learn to type because of them. But I get it from a UX perspective, it’s convenient. We can put those in our web apps, too. Pamela Fox has an article about all this.

There are two approaches we can use to add speech capabilities to our apps:

  1. Use the built-in browser APIs: the SpeechRecognition API and SpeechSynthesis API.
  2. Use a cloud-based service, like the Azure Speech API.

Which one to use? The great thing about the browser APIs is that they’re free and available in most modern browsers and operating systems. The drawback of the APIs is that they’re often not as powerful and flexible as cloud-based services, and the speech output often sounds much more robotic.

I like that she whipped it up into a Web Component.

]]>
https://frontendmasters.com/blog/browser-speech-input-output-buttons/feed/ 0 7467