“If you use your computer to do important work, you deserve fast software.”

Two great posts about interaction latency on the hardware and software side. First is from Ink & Switch:

There is a deep stack of technology that makes a modern computer interface respond to a user’s requests. Even something as simple as pressing a key on a keyboard and having the corresponding character appear in a text input box traverses a lengthy, complex gauntlet of steps, from the scan rate of the keyboard, through the OS and framework processing layers, through the graphics card rendering and display refresh rate.

There is reason for this complexity, and yet we feel sad that computer users trying to be productive with these devices are so often left waiting, watching spinners, or even just with the slight but still perceptible sense that their devices simply can’t keep up with them.

We believe fast software empowers users and makes them more productive. We know today’s software often lets users down by being slow, and we want to do better. We hope this material is helpful for you as you work on your own software.

I loved the slow-motion videos comparing what is normally impossible to notice:

Dan Luu has a complementary post digging a bit more into computer hardware latency from the 1970s to now:

I’ve had this nagging feeling that the computers I use today feel slower than the computers I used as a kid. As a rule, I don’t trust this kind of feeling because human perception has been shown to be unreliable in empirical studies, so I carried around a high-speed camera and measured the response latency of devices I’ve run into in the past few months.

Both of these essays are fantastic, and important for developing a sense of the specific numeric thresholds separating fast and slow – also in the context of being able to have an informed conversation with a front-end engineer. (Luu subsequently links to even more articles in the “Other posts on latency measurement” section, if you are curious.)

Otherwise, from my observation, the two most quoted laws of user-facing latency are still Jakob Nielsen’s response time limits, and the Doherty Threshold. But the Jakob Nielsen 100/1000/10000ms rule is from 1993 and as far as I understand is concerned primarily with UX flows: reactions to clicking a button, responses to typing a command, and so on. And the Doherty Threshold is even older. Both are simply not enough, especially not for things related to typing, multitouch, or mousing, where for a great experience you have to go way below 100ms, occasionally even down to single-digit milliseconds.

(My internal yardstick is “10 for touch, 30 for mousing, 50 for typing.” Milliseconds, of course.)

At the end of his essay, Luu writes:

It’s not clear what force could cause a significant improvement in the default experience most users see.

Perhaps one challenge is that these posts are dense and informative, but only appeal to people who care? Maybe latency eradication needs a PR strategy, with a few memorable rules and – perhaps arbitrary, but well-informed – numbers that come with some great names attached? I know in the context of web loading some of the metric names like FCP (First Contentful Paint) broke through at least to some extent, but those still feel more on the nerdy side. Even Nielsen’s otherwise fun 2019 video about response time limits didn’t stick the landing – why focus on slowing down an arbitrary label appearing above the glass when the ping sound was right there for the taking?!

I can’t help but dream of interaction speed’s “enshittification” moment.

“I’m hoping that the listeners out there, when they hear it, they’ll feel seen.”

This 25-minute segment on MKBHD’s Waveform podcast (video or audio, segment starts at 40:21) is from November 2024, and is a nice counterpart to the post about favourite well-made apps and sites.

The original theme is “what is an app that you use all the time, and like to use, but is actually a bad app?” but it quickly moves to a more general conversation about good and bad mobile apps.

It’s always interesting to me to see what themes emerge and what other people think is important. Here’s the list, where I linked to the relevant apps wherever I could find them:

Bad apps:

  • Google Messages – dinged for unreliable spam detection and lack of organization/​filtering
  • Notion (on mobile) – hard to orient yourself and some direct manipulation is wonky
  • many smart home accessory apps – bad and redundant with Google Home, but have to keep for emergencies
  • Netgear Orbi (network router) – specific functionality and bad password recovery
  • Hatch (white noise machine for babies) – simple things are hard to discover
  • Nest app/Nest Yale Smart Lock – bad integration
  • Goodreads – stale

Good apps:

Two nice moments from MoMA in New York

To be fair, I am traveling and haven’t looked for solid evidence or a citation that this works for people, but I personally like this approach: in lieu of a separate language selector button, each option here is itself both a language selector and a commit button.

The labels themselves are not the name of the language, but a call to action; I imagine recognizing the one label that means something to you should be easy if the other nine look like gibberish.

And, a thoughtful moment by one exhibit: Not only showing you where you are in the sequence of three videos, but even within the currently-playing video.

(I’m less of a fan of stretched type, though.)

From dawn (or dusk) till dusk (or dawn)

This iPhone UI for dark/​light theme is doing something clever:

Ostensibly, there are two modes here:

  • automatic, for when you want the theme to match the time of day
  • manual, for when you want to keep one of the themes forever

But check out what happens when I am in automatic mode, but toggle the theme by hand anyway:

More rigid or less thoughtful interfaces would either disable manual changes when you’re in automatic mode, or understand a manual theme switch to mean “I want to turn off automatic.”

But here, iOS is quietly putting me in a temporary hybrid mode: a manual theme override until the theme catches up with what automatic mode would do, at which point it snaps back (I’m resisting very hard calling this rubber banding) to automatic mode.

What I think is clever is that this isn’t presented as a third mode – which could be more confusing than helpful – but the design simply reuses the existing Options field to set the expectations.
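If I were to guess at the underlying logic – and this is purely a hypothetical sketch, not Apple’s actual implementation – it might look something like this: a manual toggle only becomes an override when it disagrees with the schedule, and the override quietly dissolves the moment the schedule catches up with it.

```javascript
// Hypothetical sketch of the "temporary override" behaviour described
// above. `scheduled` is a function returning the theme the automatic
// schedule currently wants ("light" or "dark").
function makeThemeState(scheduled) {
  let override = null; // null = just follow the schedule

  return {
    // The effective theme: the override wins while it exists.
    current() {
      return override ?? scheduled();
    },
    // A manual toggle flips the effective theme. It only becomes an
    // override if the result disagrees with what the schedule wants.
    toggle() {
      const flipped = this.current() === "dark" ? "light" : "dark";
      override = flipped === scheduled() ? null : flipped;
    },
    // Called as time passes: once the schedule reaches the overridden
    // theme, the override is redundant, so we snap back to automatic.
    tick() {
      if (override !== null && scheduled() === override) override = null;
    },
  };
}
```

Under this sketch, toggling to dark in the afternoon gives you dark immediately, and when evening arrives the override silently expires – from then on you are back on the schedule, so the next sunrise flips you to light as if nothing happened.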

One has to be careful designing in shades of gray; once you enter the space you really have to commit to it and see it through. My go-to analogy is symmetry vs. asymmetry. Symmetry in visual design is usually easier and safer. If you venture into asymmetry you have to make an effort to make it work. The highs of asymmetry will be higher than anything symmetry can provide, but getting to those highs can be arduous and sometimes might even be impossible.

I thought this particular example was really nicely done and the team found a great balance. (I think Apple’s previous shade of gray – “Disconnecting Nearby Wi-Fi Until Tomorrow” – ended up slightly less successful.)

“Which is definitely not good to do to it.”

The year is 1981. Your IBM PC is equipped with a tragic speaker that sounds awful for anything except occasional beeps. (Those beeps sound awful, too.)

You can’t afford a sound card and besides, sound cards for your PC have not been invented yet. You can’t even afford a floppy drive, so you’re one of the rare people who actually uses an audio cassette player as a storage device – a technique usually reserved for more primitive machines that have half the bits your new PC does.

But there’s a silver lining. Your cassette player has a little relay that controls its motor. You can engage and disengage the relay at will.

So, someone figured out that toggling the relay kind of sounds like a metronome. Like percussion. It’s a hack, but in the sonic landscape inhabited solely by your sorry speaker, it’s a breath of fresh air (scroll to 7:26 if you don’t land there automatically):

The year is 2026. Your computer itself is the size of an audio cassette, fits in your pocket, has better storage, graphics, sound, pretty much everything compared to a 1981 PC. It even has a special haptic motor. Except, that motor can only be controlled by native apps, and there is no official API to do it from a browser.

But there’s a silver lining. Tapping any checkbox on a site generates a haptic pulse. And that apparently works even if the checkbox is hidden and the computer is doing the tapping.

So, someone figured out a way to use that to build a library that gives websites powers to provide haptic feedback. It’s a hack, but damn if it’s not one someone took to its logical conclusion.
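The core of the trick can be sketched roughly like this – my reconstruction of the idea, not the actual library’s code. The premise (an assumption on my part, matching the description above): on iOS Safari, toggling a switch-styled checkbox produces a haptic tick, even when the element is hidden and the toggle is programmatic.

```javascript
// Sketch of the hidden-checkbox haptic hack (a guess at the mechanism,
// not the library's actual implementation).
function createHapticTrigger(doc) {
  const label = doc.createElement("label");
  const input = doc.createElement("input");
  input.type = "checkbox";
  // The "switch" attribute (Safari 17.4+) renders the checkbox as a
  // toggle switch – the control that carries the haptic feedback.
  input.setAttribute("switch", "");
  label.appendChild(input);
  // Keep it invisible and out of the accessibility tree.
  label.style.display = "none";
  label.setAttribute("aria-hidden", "true");
  doc.body.appendChild(label);
  // Every call flips the hidden switch, producing one haptic pulse.
  return () => input.click();
}

// Hypothetical usage:
// const haptic = createHapticTrigger(document);
// myButton.addEventListener("click", haptic);
```

Everything here rides on a side effect of a UI control, which is exactly what makes it such a satisfying hack – and such a fragile one.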

I love these kinds of hacks, and I wonder what’s going to happen to this one. Will it fly under the radar, or will some websites start abusing it? If so, will Safari clamp down on it, or will it actually give people a proper API for haptics?

Trust your fingers

For a few months now, when re-running search queries in Bluesky’s iOS app, I’ve occasionally ended up on the wrong search – and it happened often enough that I started suspecting something was afoot. (Ahand?)

So I opened the app on my Mac via iPhone Mirroring, and started testing clicks carefully. This is what I saw:

Turns out something was indeed wrong there – the touch targets are so vertically lopsided that you’ll often end up tapping the item below by accident.

I reported the bug to Bluesky, and a few days later I saw Norbert Heger doing a similar thing vis-a-vis the macOS Tahoe rounded corner bug (previously):

Heger’s method is automated and a lot smarter than mine, but I enjoyed seeing these parallel efforts.

What’s the lesson here? I think it’s this: Trust your fingers, and occasionally speak for them, as they can’t speak for themselves.

Make yourself at home

This is a nice way iOS Safari behaves the moment you tap one of the font size buttons – it immediately ejects all the other chrome:

After Liquid Glass specifically, we seem to be going through an interesting re-evaluation of whether “the content is king; it should feel expansive and UI should get out of the way at all costs” – so seductive as a principle – is ultimately the right approach. Liquid Glass-sporting operating systems have so many contrast, blending, and distraction issues that I wonder if they alone are radicalizing people, making them appreciate traditional rigid toolbars with solid backgrounds and fortified borders.

But here? Here, letting the content shine and putting the UI atop feels like absolutely the right thing to do, since you are redesigning your reading experience.

Contrast this with Books:

It’s not even that the crossfaded transitions feel awkward. It’s mostly that the interface takes up so much room that the content preview slice becomes almost claustrophobic. And it’s even weirder when you tap the Customize button, and whatever was visible gets inexplicably replaced by a pop-up with… largely the same content anyway.

How will the entire page feel? For that you have to use your imagination – or keep tapping back and forth.

Out of sight

If you choose to remove the app names from the springboard, a small thing Apple could do would be to show the app name in the long-press menu here. Otherwise, I’ve found it’s really easy to forget a name over time! (It would be a small riff on this disambiguation detail.)

Slow, fast, third thing

Let’s say you are in Reeder (an RSS reader for iOS), looking at the list of posts, and already from the title you know you don’t care, and you want to mark it as read.

You can tap to see it and then swipe back the moment it shows. This is the slow path.

There is a faster path. Reeder enables you to slide right or left on the item. You get nice haptic feedback, and many apps support this kind of interaction.

But there is an even faster path.

You can tap to see it and immediately swipe back. Your thumb is already there on the left anyway, and the distance is a lot shorter now.

Like every advanced gesture this takes a bit of practice, but I noticed I started doing it instinctively, without even thinking.

This required two small design details: the original slide transition has to be interruptible at any moment, and the app has to support swatting/​dragging the incoming item away even if my finger is nowhere near it. Both are clever, and both feel very welcome, because they enabled this emergent (to me) behaviour that made going through the list snappy without me even realizing.

This might be a good modus operandi: Think of the slow interaction. Think of its fast version. Then, think some more.

Nicely done, Reeder team. (Or, if this is a default iOS behaviour, nicely done, Apple!)

A new (old) kind of keyboard

The first iPhone famously introduced the soft keyboard, which could change its shape depending on the need. Sometimes that meant becoming a keypad (for numeric entries), and sometimes something subtler, like introducing a “.com” key to the bottom row, or adding a new column of keys and making them a bit narrower for the few languages that need that.

Bear (the note-taking app) does something interesting: after a button press, it replaces the onscreen QWERTY keyboard with a “funpad,” or a function keypad (think Stream Deck or Figma Creator Micro). This achieves a similar result to a scrolling toolbar above the keyboard (see: Apple Notes), but in a different way. I haven’t seen anything like this before; I think it’s really clever, and it has worked well for me in practice.

(It also cleverly closes itself upon some actions like introducing a divider, but stays put for bolding, indentation, etc.)

A nice moment in screenshotting on iOS

In iOS, I like how cropping quietly snaps to things that look like borders, with gentle haptics, without announcing anything: