Only time will tell

Why is there a short wait if you press a button on your headphone remote or your AirPods to pause the music? Because the interface has to let a bit of time pass to figure out if you’re going to press the button again, making it a double press (advance to next track) instead of a single press.

This kind of disambiguation delay is everywhere for simple gestures.

Why is there a short wait if you press a button twice in that situation? Processing the double press also has to be delayed, because there is a chance it might become a triple press (go to previous track).

Why is there a short wait if you press a button to go to the next track on your car’s steering wheel? It’s a delay of a different kind, but the same principle: the function cannot kick in on press down, because press-down-and-hold means “fast forward.” So, software has to wait for the button-up event to go to the next track (which feels a bit slower than button down), or for enough time to pass that we’re certain it’s a hold rather than a slow press. Here, both interactions experience a penalty for coexisting.
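If you were to sketch this in code, it’s a tiny state machine in which nothing commits until the ambiguity window closes. Here’s a minimal TypeScript sketch – the timing values and the mapping of presses to actions are my assumptions, not any actual firmware:

```ts
// Hypothetical remote-button handler: nothing can commit on press-down,
// because a press might still become a double press, a triple press, or a hold.
const MULTI_PRESS_WINDOW_MS = 300; // how long to wait for another press
const HOLD_THRESHOLD_MS = 500;     // held longer than this means "hold"

let pressCount = 0;
let decideTimer: ReturnType<typeof setTimeout> | undefined;
let holdTimer: ReturnType<typeof setTimeout> | undefined;
let isHolding = false;

function onButtonDown() {
  isHolding = false;
  // If the button stays down long enough, it's a hold (fast forward).
  holdTimer = setTimeout(() => {
    isHolding = true;
    pressCount = 0;
    console.log("fast forward");
  }, HOLD_THRESHOLD_MS);
}

function onButtonUp() {
  clearTimeout(holdTimer);
  if (isHolding) return; // the hold already claimed this press

  pressCount += 1;
  clearTimeout(decideTimer);
  // The disambiguation delay: commit only when no further press arrives in time.
  decideTimer = setTimeout(() => {
    if (pressCount === 1) console.log("play/pause");
    else if (pressCount === 2) console.log("next track");
    else console.log("previous track");
    pressCount = 0;
  }, MULTI_PRESS_WINDOW_MS);
}

// Simulating a double press: it only resolves ~300ms after the second release.
onButtonDown(); onButtonUp();
setTimeout(() => { onButtonDown(); onButtonUp(); }, 100); // logs "next track"
```

Notice that the cost is structural: the single press can never be faster than the window you give the double press to show up.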

The most infamous of those disambiguation delays exists in mobile browsers. Ever since that famous 2007 iPhone presentation, every double tap can zoom into the page – which means every single tap on a link or elsewhere has to be delayed by about 300ms. This has been a source of contention, since it does make the web feel a bit slower, and today browsers suspend double tapping on sites designed for mobile, trading zooming affordances for higher interaction speed – after all, you can still zoom in by pinching. But if you ever wondered why older websites tend to be a bit sluggish to interact with, now you know.

Different tradeoffs are possible. In the Finder, clicking on icons isn’t slowed down even though double clicking exists, because selecting an icon is compatible with opening it! So in effect it’s not a choice between a faster A and a slower B – it’s A or A+B.
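In DOM terms – and certainly not how the Finder itself is implemented – the compatible case needs no timer at all:

```ts
// Hypothetical icon element and stand-in actions.
const icon = document.querySelector<HTMLElement>(".icon")!;
const selectIcon = (el: HTMLElement) => el.classList.add("selected");
const openIcon = (el: HTMLElement) => console.log("opening", el);

// Because selecting is compatible with opening, neither handler waits:
// a double click arrives as click, click, dblclick - A, then A again, then B.
icon.addEventListener("click", () => selectIcon(icon));  // immediate, no delay
icon.addEventListener("dblclick", () => openIcon(icon)); // fires on top of the selection
```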

Even in the iPhone presentation above, you can see the interface highlights the link on the very first tap, to at least make it feel snappier – at the expense of the highlight being “wrong” and potentially distracting, or even confusing, when you end up double tapping. (You can imagine smartphones pausing on the first remote/headset button press, too. It feels like it would be compatible with advancing to the next track, but I think it might also feel too “choppy,” too chaotic, in practice.)

Lastly, why is there a short wait if you press a button on your hotel TV to increase the volume? Oh, I think that one is just sluggish for no good reason.

“If you use your computer to do important work, you deserve fast software.”

Two great posts about interaction latency, on the hardware and software side. The first is from Ink & Switch:

There is a deep stack of technology that makes a modern computer interface respond to a user’s requests. Even something as simple as pressing a key on a keyboard and having the corresponding character appear in a text input box traverses a lengthy, complex gauntlet of steps, from the scan rate of the keyboard, through the OS and framework processing layers, through the graphics card rendering and display refresh rate.

There is reason for this complexity, and yet we feel sad that computer users trying to be productive with these devices are so often left waiting, watching spinners, or even just with the slight but still perceptible sense that their devices simply can’t keep up with them.

We believe fast software empowers users and makes them more productive. We know today’s software often lets users down by being slow, and we want to do better. We hope this material is helpful for you as you work on your own software.

I loved the slow-motion videos comparing what is normally impossible to notice.

Dan Luu has a complementary post digging a bit more into computer hardware latency from the 1970s to now:

I’ve had this nagging feeling that the computers I use today feel slower than the computers I used as a kid. As a rule, I don’t trust this kind of feeling because human perception has been shown to be unreliable in empirical studies, so I carried around a high-speed camera and measured the response latency of devices I’ve run into in the past few months.

I feel both of these essays are fantastic, and important for developing a sense of the specific numeric thresholds that separate fast from slow – also in the context of being able to have an informed conversation with a front-end engineer. (Luu subsequently links to even more articles in the “Other posts on latency measurement” section, if you are curious.)

Otherwise, from my observation, the two most quoted laws of user-facing latency are still Jakob Nielsen’s response time limits and the Doherty Threshold. But the Jakob Nielsen 100/1000/10000ms rule is from 1993 and, as far as I understand, is concerned primarily with UX flows: reactions to clicking a button, responses to typing a command, and so on. And the Doherty Threshold is even older. Both are simply not enough, especially not for things related to typing, multitouch, or mousing, where for a great experience you have to go way below 100ms, occasionally even down to single-digit milliseconds.

(My internal yardstick is “10 for touch, 30 for mousing, 50 for typing.” Milliseconds, of course.)
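If you’re curious where your own UI lands against numbers like these, a crude probe is easy to write in a browser. Note that it only sees the browser’s share of the pipeline – keyboard scan rates and display lag, the things Luu measures with a camera, are invisible to it:

```ts
// Rough input-to-frame latency probe. event.timeStamp marks when the browser
// received the input; requestAnimationFrame runs just before the next paint,
// so the difference approximates the software side of input latency.
window.addEventListener("keydown", (event) => {
  requestAnimationFrame(() => {
    const latencyMs = performance.now() - event.timeStamp;
    console.log(`~${latencyMs.toFixed(1)} ms from key event to next frame`);
  });
});
```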

At the end of his essay, Luu writes:

It’s not clear what force could cause a significant improvement in the default experience most users see.

Perhaps one challenge is that these posts are dense and informative, but only appeal to people who care? Maybe latency eradication needs a PR strategy, with a few memorable rules and – perhaps arbitrary, but well-informed – numbers that come with some great names attached? I know in the context of web loading some of the metric names like FCP (First Contentful Paint) broke through at least to some extent, but those still feel more on the nerdy side. Even Nielsen’s otherwise fun 2019 video about response time limits didn’t stick the landing – why focus on slowing down an arbitrary label appearing above the glass when the ping sound was right there for the taking?!

I can’t help but dream of interaction speed’s “enshittification” moment.

“Distinct absence of anything that takes away screen real-estate”

Neil Panchal writing in 2020 about a cool little page called diskprices.com:

The performance of this website is stellar. It loads almost instantly. And the list (although it’s not sortable) gets the job done, it is sorted by price already which is the most important attribute.

Diskprices.com deserves the UI/UX award of the decade. We’ve lost our ability to design user interfaces laser-focused on the user. Instead, we have purple gradients, scroll jacking, responsive bullshit, emojis, animations, and many other things designers do today. The utilitarian approach of Diskprices.com is refreshing, although the contemporary designers cast it off as ‘brutalist design’, thereby marking it as a statement of fashion.

But both the creator of the page and Panchal might be getting this wrong:

Do you need a graphic designer?
No. This site is designed to maximize information density, accessibility, and performance. More whitespace, colors, and icons won’t help.

I think this is incorrect. The creator of the page is a graphic designer – one who just happens to be the perfect graphic designer for the job.

“Everything possible to make this website as fast as they can”

This 13-minute video from Wes Bos analyzes the today-almost-mythical McMaster-Carr website and figures out why it’s so fast.

It’s perhaps more technical than what I usually link to, but it shows what can happen if someone really cares about performance. What’s interesting to me is that the author posits that it’s actually not an old website that is fast because it’s old… it’s kind of a melange of various techniques from throughout the decades, from vintage solutions like spriting images, to more modern ones like JavaScript’s page history API, or pre-caching DNS lookups.
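To give a flavor of the history API technique – this is a generic sketch of the pattern, not McMaster-Carr’s actual code, and the data-instant attribute and the <main> element are my inventions – navigation can swap content in place instead of reloading the whole page:

```ts
// pushState-based navigation: fetch the next page, swap only the content
// area, and update the URL - no full reload, no flash, no reflow of chrome.
document.addEventListener("click", async (event) => {
  const link = (event.target as Element).closest<HTMLAnchorElement>("a[data-instant]");
  if (!link) return;
  event.preventDefault();

  const response = await fetch(link.href);
  const doc = new DOMParser().parseFromString(await response.text(), "text/html");

  document.querySelector("main")!.replaceWith(doc.querySelector("main")!);
  history.pushState({}, "", link.href); // the address bar updates without a reload
});
```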

Just visiting the website and clicking around can be inspiring, because it reminds one that we’ve gained a lot of computing power and network speed over the last decades, but most websites squander it. Not this one.

And it’s sad that this kind of approach – a website appearing and then not changing (no reflow, no pop-ups, no endless spinners, no infinite scrolls) – feels so rare.

However, two caveats:

At around 7:35, Wes says “nothing else moves”… Oh yeah, it does. It’s perhaps my curse that I notice these things.

Also, the homepage now has an animated, delayed green banner, which you can see in the photo above. I hope they’re not losing their way.

Jan 12, 2026

“Every aspect of the machine operates as quickly as the user can move.”

Evergreen and inspiring from Craig Mod, a 2019 plea for fast software:

Google Maps is dying a tragic, public death by a thousand cuts of slowness. Google has added animations all over Google Maps. They are nice individually, but in aggregate they are very slow. Google Maps used to be a fast, focused tool. It’s now quite bovine. If you push the wrong button, it moos. Clunky, you could say. Overly complex. Unnecessarily layered. Perhaps it’s trying to do too much? To back out of certain modes — directions, for example — a user may have to tap four or five different areas and endure as many slow animations.

Funnily enough, I feel that way about Apple Maps. I abandoned it since small things felt heavy, mired in superfluous swipey animations that felt like driving a 1960s car. Luckily, this was around the time Google Maps redesigned its tiles to match Apple’s, so I got what I wanted to begin with, although in a slightly shady way.

I miss Sublime Text and might take it for a spin again (VS Code and Atom felt slow; Nova is delightful, but also struggles with performance, even on simple things).

I miss Notes feeling lightning fast.