The beauty and the terror of oddly-specific commands

Right next to the generic function to delete photos by going through them one by one, my camera has a specific version – Delete All With This Date:

Below the actions to close the tab, and close all other tabs, Chrome has a specific version called Close Tabs To The Right:

In After Effects, next to typical save options, there is this – Increment And Save – which saves a file and changes the number at the end to be one notch higher (Project 2 → Project 3, and so on):

I’m mildly fascinated by these strangely specific accelerators.

The one in the camera is genuinely useful. Photo projects are often day-long affairs where you download the photos at the end of the workday, but might still keep them on the card just in case. Being able to quickly delete a day’s worth of photos makes a lot of sense, saving you from having to go through them one by one in an interface not suited for that kind of operation.

Chrome’s “Close Tabs to the Right” takes a bit of figuring out, but I believe it’s meant to make it easy to clean up after a fruitful research session where you kept ⌘-clicking and opening tabs to learn more, and those tabs have now fulfilled their purpose. (Curiously, Firefox also has “Close Tabs to Left,” which I don’t understand.)

After Effects’s “Increment and Save” is… I don’t know. Maybe it’s cheap? Maybe it’s honest? A proper version history would be nicer, but that’s a tall order. This is simple and, most importantly, reliable. I still often do the “poor man’s version control” elsewhere…

…so this works for me.

It’s always interesting to me to wonder whether these kinds of oddly-specific commands are nice gestures toward the user, or a way of treating symptoms instead of fixing actual problems. Either way, I don’t think an interface can survive too many of these, as their obscurity and weirdness add up and can contaminate the entire UI.

I’d love it if you sent me more of these kinds of commands from the apps you use!

“We can have the best of all worlds.”

A fun 24-minute video from Technology Connections about designed sounds in real life: elevator dings, airplane chimes, railway crossing dings, and so on.

While I am sympathetic to the notion that sound pollution is a thing we need to be concerned with, the choice between silence and sound pollution is a false choice. There’s a lot of those happening these days, probably because we’re so stuck in binary thinking. But as airplanes show us, we can design sounds which aren’t obtrusive, but which are helpful. And when you get yourself out of binary thinking, you can do things like make your most obnoxious apps be silent while your important ones make themselves known, and in ways which are meaningful to you and pleasant to everyone else.

It is an interesting parallel to the post about syntax highlighting from a while back, and one of the posts about cartography design I shared recently; they all explore how you can create a richer space capable of conveying more information without overwhelming people, by being intentional about the design.

In search of a more precise cursor

One of the casualties of Apple’s otherwise brilliantly executed transition to retina pixels has been the mouse pointer, which remains aligned to what “traditional pixels” used to be, rather than the retina/​physical/smaller pixels.

Turn on the zoom gesture from a few weeks ago, and you can see the challenge. The gridlines are ½ logical pixel and 1 physical pixel wide:

This limitation is inherited by most tools: Photoshop, Affinity, xScope, even the built-in Digital Color Meter. It’s not the end of the world, of course, but it can be maddening if you are trying to sample a color from a “half pixel” and the cursor stubbornly skips it no matter how delicately you move. Here it is in Figma:

Of the few tools I tested, only Pixelmator allows you to sample at the correct, precise level:

I was curious how a truly precise cursor would feel in general – would there be any disadvantages? – so I built a little simulator that allows a regular arrow cursor to be aligned to “half pixels” or “retina pixels.”

In the process, I discovered that both Chrome and Firefox already receive sub-traditional-pixel measurements for mousing events, so this was even easier to build than I expected. Now, precise targeting in Chrome and Firefox becomes possible:
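If you want to poke at this yourself, here’s a minimal sketch – assuming a browser that reports fractional coordinates, as Chrome and Firefox appear to on retina displays – that just logs pointer positions so you can watch for non-integer values:

```ts
// Minimal sketch: log pointer coordinates and watch for fractional values.
// The UI Events spec defines these coordinates as doubles, so on a retina
// display a precise pointing device can land on “half pixels” like 123.5.
window.addEventListener("pointermove", (event: PointerEvent) => {
  console.log(`x: ${event.clientX}, y: ${event.clientY}`);
});
```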

I don’t personally see any big difference in terms of either upsides or downsides, and I’m curious if you do. iPadOS and its Safari already seem to support the precise mouse pointer, too. That makes me curious: why isn’t it available in macOS? I imagine you could even turn it on by default for apps – or, if you want to be more conservative, make it opt-in.

Pixelmator also shows that apps can do it without waiting for macOS, as the data is already there; they would just need to render the cursor on their own with more precision.

“Deere charges six figures for a tractor. But the farmers were still the product.”

Cory Doctorow, in 2022, wrote an essay about how John Deere – a farm tractor manufacturer – restricts repairs by owners or third parties:

Deere is one of many companies that practice “VIN-locking,” a practice that comes from the automotive industry (“VIN” stands for “vehicle identification number,” the unique serial number that every automotive manufacturer stamps onto the engine block and, these days, encodes in the car’s onboard computers).

VIN locks began in car-engines. Auto manufacturers started to put cheap microcontrollers into engine components and subcomponents. A mechanic could swap in a new part, but the engine wouldn’t recognize it — and the car wouldn’t drive — until an authorized technician entered an unlock code into a special tool connected to the car’s internal network.

Big Car sold this as a safety measure, to prevent unscrupulous mechanics from installing inferior refurbished or third-party parts in unsuspecting drivers’ cars. But the real goal was eliminating the independent car repair sector, and the third-party parts industry, allowing car manufacturers to monopolize the repair and parts revenues, charging whatever the traffic would bear (literally).

The same tactic was used by John Deere, forcing farmers to hack the tractors they purchased just so they could repair them.

In a decision that bolsters the right-to-repair movement, John Deere and farmers reached a settlement that has the company pay $99 million to compensate for prior inflated repair costs, and requires it to share the software needed for maintenance and repair with farmers.

Just because I was curious and you might be also, here’s an example of a modern tractor interface:

The story reminded me of an ongoing battle in Poland, where the train manufacturer Newag coupled VIN locking with hardcoded GPS coordinates in an even more brazen attempt to prevent third-party repair: if a train spent too much time at the location of a competing repair company, it would simply stop running – not because of some hardware fault, but because of a simple if condition in code.

“This is quite a peculiar part of the story—when SPS was unable to start the trains and almost gave up on their servicing, someone from the workshop typed “polscy hakerzy” (“Polish hackers”) into Google,” the team from Dragon Sector, made up of Jakub Stępniewicz, Sergiusz Bazański, and Michał Kowalczyk, told me in an email. “Dragon Sector popped up and soon after we received an email asking for help.”

The (white-hat) hackers helped unbrick the train, but since European law is stricter on DRM, the case gets murkier. The article above is from 2023, and contains this quote:

Newag said that they will sue us, but we doubt they will - their defense line is really poor and they would have no chance defending it, they probably just want to sound scary in the media.

However, in 2025, the manufacturer proceeded to sue the hacker group and the train repair company. As far as I can tell, the case is still making its way through the courts.

The three hackers explained their work in this 45-minute conference talk. It’s honestly not the most polished presentation, but it goes into a lot of engrossing detail, and if the intersection of hacking and train hardware interests you, check it out! I had fun double-checking the presented code by punching the lat/long coordinates into Google Maps and verifying that they’re exactly the locations of competing repair shops:

Is this the latest?

Found in an archive of font design (for Olivetti typewriters) and smiled:

Handoff problems were there before us and will remain after we’re gone.

This, too:

“So I wrote a script that takes monthly screenshots of Google and Apple Maps.”

From 2010 to 2021, Justin O’Beirne wrote about online cartography, specifically in Google Maps and Apple Maps.

While both of these services have changed a lot since the essays were written, they are still worth reading. They might be the closest thing to modern reviews of software I can think of, and the way the essays are put together also teaches storytelling lessons – from nice visualizations and comparisons to rich footnotes. There is also a great balance of high-level overview and jumping into specifics that reinforce it.

Here’s one example of cool tooling O’Beirne used to make his points more sticky:

I wrote a script that takes monthly screenshots of Google and Apple Maps. And thirteen months later, we now have a year’s worth of images:

The result is informative and mesmerizing:

Among the essays, I’d particularly recommend these:

  • The back-and-forth of Google Maps’s Moat and New Apple Maps: Reverse engineering areas of interest, thinking of how the slow changes in visuals lead up to strategy, good visual comparison of competition, and small fascinating anecdotes of places like Parkfield, California. (And a great example of the old adage: don’t get into the business of predicting the future as this will age your writing the most.)

There are also book recommendations and a memorable user story.

Only time will tell

Why is there a short wait if you press a button on your headphone remote or your AirPods to pause the music? Because the interface has to let a bit of time pass to figure out if you’re going to press the button again, making it a double press (advance to next track) instead of a single press.

This kind of disambiguation delay is everywhere for simple gestures.

Why is there a short wait if you press a button twice in that situation? The double press processing also has to be delayed, because there is a chance it might become a triple press (go to previous track).

Why is there a short wait if you press a button to go to the next track on your car’s steering wheel? It’s a delay of a different kind, but the same principle: the function cannot kick in on press down, because press down and hold means “fast forward.” So, the software has to wait for the button-up event to go to the next track (which feels a bit slower than acting on button down), or for enough time to pass that we’re certain it’s a hold rather than a slow press. Here, both interactions pay a penalty for coexisting.
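To make the mechanics concrete, here’s a minimal sketch of the single/double/triple press disambiguation described above – the delay value and action names are made up for illustration:

```ts
// Minimal sketch of press disambiguation: every press restarts the wait,
// and we only commit to an action once no further press can be coming.
const DELAY_MS = 300; // made-up value; real devices tune this carefully

type Action = "playPause" | "nextTrack" | "previousTrack";

function createPressHandler(dispatch: (action: Action) => void) {
  let presses = 0;
  let timer: ReturnType<typeof setTimeout> | undefined;

  return function onPress() {
    presses += 1;
    if (timer !== undefined) clearTimeout(timer);

    timer = setTimeout(() => {
      if (presses === 1) dispatch("playPause");
      else if (presses === 2) dispatch("nextTrack");
      else dispatch("previousTrack");
      presses = 0;
    }, DELAY_MS);
  };
}

// Usage: const onPress = createPressHandler(console.log); onPress(); onPress();
```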

The most infamous of those disambiguation delays exists in mobile browsers. Ever since that famous 2007 iPhone presentation, a double tap can zoom into the page, so every single tap on a link or elsewhere has to be delayed by about 300ms. This has been a source of contention since it does make the web feel a bit slower, and today browsers disable double-tap zooming on sites designed for mobile, trading zooming affordances for higher interaction speed – after all, you can still zoom in by pinching. But if you’ve always wondered why older websites tend to be a bit sluggish to interact with, now you know.

Different tradeoffs are possible. In the Finder, clicking on icons isn’t slowed down even though double clicking exists, because selecting an icon is compatible with opening it! So in effect it’s not a choice between a faster A and a slower B – it’s A or A+B.

Even in the iPhone presentation above, you can see the interface highlights the link on the first tap, to at least make it feel snappier – at the expense of the highlight being “wrong” and potentially distracting, or even confusing, when you end up double tapping. (You can imagine smartphones pausing the music on the first remote/​headset button press, too. It feels like it would be compatible with advancing to the next track, but I think it might also feel too “choppy,” too chaotic, in practice.)

Lastly, why is there a short wait if you press a button on your hotel TV to increase the volume? Oh, I think that one is just sluggish for no good reason.

“Approximately 21 times the estimated age of the universe”

A few years ago, some sort of a bug at my work caused all of the timestamps to appear as “54 years ago,” a seemingly arbitrary value. It took me a bit to realize: “Wait, you know what year was 54 years ago? 1970!” “Why is 1970 important?” asked another designer. I explained that by convention, Unix time counts up from Jan 1, 1970 – and so if the time “value” is zero or unavailable, as it was because of the bug, it would be rendered not as an error, but as that specific day long ago.

Computing is filled with all sorts of arbitrary numbers like these. The most famous one was Y2K (99 + 1 = 00 if you only allocate two digits); Pac-Man’s kill screen was level 256; people still bring up the infamous and likely non-existent “640 kilobytes should be enough for everybody” quote; and the Deep Impact space probe died a lonely and undignified death after its timers overflowed the two pairs of bytes given to them.

Here’s a new magic number to remember: macOS Tahoe has, for a while at least, a kill screen of its own – after 49 days, 17 hours, 2 minutes, and 47 seconds (or, 4,294,967,295 milliseconds), one of its time counters overflows and no new network connections can be made, rendering the machine rather useless. The only solution is a reboot. Talk about a deadline!

(Well, new-ish. In perhaps a bit of karmic payback, Windows 95 and 98 once had a similar problem with the exact same threshold of 49.7 days.)

Wikipedia has a nice list of other time storage bugs. The next big one? The problem of the year 2038. The technical fix, as always, is to give the numbers a bit more room to breathe. This is, in a way, kicking the can down the road, but that might be okay since the road is rather long:

Modern systems and software updates address this problem by using signed 64-bit integers, which will take 292 billion years to overflow—approximately 21 times the estimated age of the universe.
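The numbers check out, too – here’s a quick back-of-the-envelope verification of both thresholds (a rough sketch with rounded constants):

```ts
// Back-of-the-envelope check of the two thresholds mentioned above.

// A 32-bit millisecond counter wraps after 2^32 ms:
const ms = 2 ** 32;
console.log(ms / (1000 * 60 * 60 * 24)); // ≈ 49.71 days

// A signed 64-bit second counter (Unix time) wraps after 2^63 s:
const s = 2 ** 63;
const secondsPerYear = 365.25 * 24 * 60 * 60;
console.log(s / secondsPerYear / 1e9);    // ≈ 292 billion years
console.log(s / secondsPerYear / 13.8e9); // ≈ 21 × the age of the universe
```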

However, as always, the technical side won’t be the hard part.

Apr 10, 2026

“We’re trying to copy this old machine, weirdness and all.”

I’ve loved Chris Staecker’s videos about calculating devices and machinery for years now, and I finally have a reason to link to one here. This is a fascinating 12-minute review of The Kensington Adding Machine from 1993:

It’s a fun (as always) watch, but as a UX designer, I also find it interesting to try to figure out the underpinnings of the things Staecker lists as strange from today’s perspective.

I believe that “CE/T” (clearing and totaling) coexisting on one key is a nod to professional accounting use of adding machines where you wouldn’t want to accidentally enter something into the record twice – so totaling also automatically resets the value and prevents you from making a mistake.

I also believe the strange [+=] rule exists only because the keypad has to look forward at the same time it is looking back: it needs to serve as a universal computer keypad where [+] and [=] are separate keys, but it also needs to pretend to be an adding machine where one key served both purposes.

(You can spot that the back of the box just allows you to swap the [+] key to be something else.)

Overall, the video is a fascinating tale of an “in-betweener” product that was stuck not just in the middle of a transition from physical devices into apps, but also at the intersection of calculators and adding machines (once two very different lines of products), themselves trying to learn from each other. It also serves as a great reminder that skeuomorphism is not just about visuals and sounds, but also behaviours: tearing off the tape, details of specific keys, nuances of rounding.

It’s not a thing of the past, either. In my post about determinism I linked to Apple’s recent travails with the deterministic Clear button (part one, two, and three). A few years ago, Apple also changed the built-in iPhone calculator from its “desktop calculator” roots to a more modern model where you get to input the entire equation before you see the result. But that change had bigger consequences; for example the [=] key could no longer repeat an addition. People complained, and Apple added it back – but the change feels incompatible with the new system and potentially confusing:

Elsewhere, the entire iPhone is an in-betweener, as the keypad coming from calculators is incompatible with the keypad coming from phones.

At this point it seems the calculator keypad will win, but the transition has been over a century in the making. Staecker’s video is a good reminder of how important, but also how hard, it is to try to make these transitions happen faster.

“Software is a unique art because it is so reactive.”

Paul Ford in 2014:

As far as I can tell, no truly huge world-shifting software product has ever existed in only one version (even Flappy Bird had updates). Just about every global software product of longevity grows, changes, adapts, and reacts to other software over time.

So I set myself the task of picking five great works of software. The criteria were simple: How long had it been around? Did people directly interact with it every day? Did people use it to do something meaningful?

I came up with:

  • the office suite Microsoft Office,
  • the image editor Photoshop,
  • the videogame Pac-Man,
  • the operating system Unix,
  • and the text editor Emacs.

Ford’s criteria felt more interesting than those of the other similar lists:

I propose a different kind of software canon: Not about specific moments in time, or about a specific product, but rather about works of technology that transcend the upgrade cycle, adapting to changing rhythms and new ideas, often over decades.

This – about Unix – also caught my attention:

There’s a sad tendency in most manuals and programming guides to congratulate people simply for thinking. Not here; you’re expected to think. That can be very exciting when you’re used to being patronized, and it’s one of the best things about Unix.

Blink comparators in photo editing apps

One of the readers (thank you, Peter!) reminded me that there is a version of a blink comparator that all of us are exposed to perhaps every day: many photo editing apps – Apple Photos, Darkroom, Aphera, I imagine others – allow you to quickly compare the photo as shot and with your edits. Sometimes it’s a tap, sometimes an onscreen button, and in the case of Lightroom it’s the backslash key. Here’s that feature on a color-graded photo with some dust removed:

But these blink comparators are smart. If you, for example, rotate the photo, the comparison will show the original rotated as well, so the pixels still map to each other 1:1 – even if you rotated the photo as the last step in your editing process:

I think this is a brilliant example of understanding the spirit of a feature rather than its letter. A naïve blink comparator would show the unrotated photo, but then it would cease being a useful blink comparator.

“Prototyping turned into an excuse for not thinking”

The 2016 launch of No Man’s Sky and the 2020 launch of Cyberpunk 2077 were catastrophes. No Man’s Sky fell so incredibly short of the promises the founder shared over the years – from smaller ones like rivers on the surface of planets, to huge ones like seeing other players – that some people felt it must have been a scam all along.

The other game was a simpler case study: Cyberpunk was buggy as hell. Not just the abysmal performance, but also the overall quality. People called it “the Hindenburg of videogames” and made YouTube compilations and listicles of its often hilarious bugs: cars exploding for no reason with perfect comedic timing, intimate body parts protruding through the clothes, and the infamous T poses.

In an unprecedented move, Microsoft slapped a big warning atop Cyberpunk’s app store listing, and Sony pulled it from their store altogether.

But it is 2026 now, and both games have redeemed themselves. For years after the launch, the No Man’s Sky team worked hard on adding the promised features:

Over a decade on from its initial reveal, No Man’s Sky both manages to remain the same game it was at launch while also bringing almost every single missing feature (and dozens of new surprise ones) into the title – implementing them intelligently and with great consideration for how it will affect the core of the game. They achieved their redemption years ago, yet continue diligently with massive update after massive update.

No other title has done what Hello Games have managed to achieve. And the best part? Every single update, patch and addition to the game was and is 100% free, with no falsified hype or build-up to each update.

Cyberpunk 2077 had a redemptive arc of its own, too, highlighted and contextualized in this 17-minute video from gameranx. Today, both games are rated “very positive” on Steam, and are actually still gaining daily players.

So, wonderful comeback stories, right? Depends on how you look at it. It’s great that both these games ended up being good products, but perhaps not as great that it was all happening in the open.

The videogame industry tried to get creative about it and established the idea of “early access”: being able to purchase an incomplete game earlier, and watch it get better while the publisher receives funding to keep going. But for every Minecraft there is Godus, and for every Kerbal Space Program there is The Day Before. Plus, neither No Man’s Sky nor Cyberpunk launched in early access with its attendant caveats and discounts. (By the way, Wikipedia’s entry for early access is worth checking out – it’s so eloquent I’m surprised not to see any warning boxes.)

There seems to be ongoing and perhaps rising frustration with companies releasing software products too early and fixing them in flight, if at all. Already in 1996, Geoff Duncan wrote about his annoyances with that:

What Beta Means Now: […] In many cases – particularly with Mac Internet software – “beta” doesn’t mean anything close to what it used to. We’ve seen programs in public beta that not only contain innumerable known bugs the developers are aware of and plan to fix, but also accumulate major new features through subsequent releases. Similarly, we’ve seen products that change fundamental system and technology requirements during beta – details which should have been etched in stone long before. Beta often means what “alpha” or even “development build” used to mean.

Subsequently, Google and other web-first companies diluted the meaning of beta labels even more.

The trend of premature launches extended to devices, too. About two years ago, AI assistant gizmos from Humane and Rabbit were pilloried by audiences for launching in an effectively unfinished form. Both devices failed in the market; MKBHD’s video reviews of Humane AI Pin and Rabbit R1 remain both entertaining and informative watches.

AI complicates this even further in many ways. I enjoyed Pavel Samsonov’s recent post on his blog Product Picnic analyzing another disastrous launch: Grammarly’s writing advice feature that replicated well-known authors who never agreed to have their likenesses used this way:

Reading between the lines, Mehotra’s interview paints a picture that I think many tech workers will find familiar: features are conceived, coded, and shipped as quickly as possible. He is happy to admit that the feature was a mistake… in retrospect. But in the moment it actually mattered, critical thinking was swept away by the false urgency of pushing things out.

It is worth reading in full and following the links, too; I watched the mentioned (tense) interview, and was similarly frustrated with the CEO’s lack of accountability or even a hint of an explanation of why the feature was launched to begin with. Key line from Samsonov’s post:

If you don’t know what you are trying to learn when you ship a prototype, do not ship a prototype.

This becomes even more important as the difference between a prototype and a final product is now thinner than a retina pixel. Both No Man’s Sky and Cyberpunk had, at least, well-thought-through foundations.

I understand that for some people, gen AI software-building tools are a discovery – perhaps for the first time – of the genuine joy of creation. But there’s also the other, newish side: a sort of “cult of velocity” where people show screens filled with agents coding things as if the world needed every possible app right this second.

Velocity and urgency can be important, but it’s hard to be careful and thoughtful when you’re going really fast; unsurprisingly, some don’t know what to do with that newfound AI-powered speed or realize the importance of thinking about crucial aspects other than time to market. (When digital cameras came around, the barrier to entry for photography was drastically lowered – it was possible to take a lot of photos without worrying about cost or quality. Tons of people took tons of objectively subpar photos; some were the end goal, some were a stepping stone toward more photographic mastery. However, I am not sure I remember people on either side ever bragging “I took over 1,200 photos today!”)

All this could be contrasted with the movement of slow software (the name is part of a bigger slow movement, although it has unfortunate connotations in tech – it’s slow as in “speech,” not slow as in “beer”). Jared White in 2023 defined it as:

  • Sustainable software. Architecting and writing code in ways which are easily understandable and maintainable over time, requiring few dependencies and a rate of change that is healthy for the underlying ecosystem.
  • Thoughtful software. Working through feature development and making decisions based on what will benefit the userbase over the long term, placing mental and social health as priority over immediate gains or selfish interests.
  • Careful software. Seeking to understand the ways software might be used for harm, or itself be harmful by taking attention away from more important concerns in the broader culture.
  • Humanist software. Recognizing that most software—at least in application development—is primarily written for humans to understand and reason about with ease across a wide array of skill levels, and that relying on complex code generators or “generative AI” tooling to resolve complexity instead of simply building simpler human-scale tools is an industry dead-end.
  • Open software. Looking to established collaborative software movements like open source and the standards bodies responsible for open protocols to inspire how we build and maintain software (regardless of licensing).

I don’t really have a conclusion for this meandering post, as I am not sure a snappy conclusion is possible. Perhaps some of the links above can provide inspiration or food for thought about urgency, reputation, and doing things in the open.

Some patterns I’m noticing are:

  • Velocity is never an end goal.
  • Velocity is only one of many ingredients of software building.
  • It is necessary to think of people who will experience your work-in-progress as it is, not as it might one day be.

“Every step they take, in every single direction, is right on top of a rake.”

Just like the video I shared last week, this 20-minute video by Mariana Colín at The Morbid Zoo is sharper than most, and also extremely entertaining:

Colín is not “in tech,” and the video is of the “emperor has no clothes” variety, which is very, very refreshing.

Among many good observations, this caught my attention as relating to this blog’s topic:

It’s a little weird to have this almost adversarial relationship with your customer base. They’re not trying to solve a problem customers have. They’re trying to convince people that the product on offer is something more than it clearly is.

What VR is, is a fun parlor trick. What they want VR to be is literal reality.

It does indeed feel like Meta’s version of VR/the metaverse has always been cargo-culting the real world in a particularly awkward fashion, which Colín analyzes in more depth.

Too many quotable laugh-out-loud moments, so maybe just this one more:

Down here in the real world, there are really only two things a media technology can be. It can be a solution to a specific discrete informational problem, or it can be an artistic medium. These two things are not mutually exclusive. There is crossover here – like, radio was a military tool before radio plays were ever a thing.

But by the former, I mean you’re literally just making information go faster. You’re reducing the amount of noise between a message and its receiver. Any kind of metaverse is going to be really, really bad at this because you don’t need to look at a weird Pixar version of your coworker in order for them to convey what a deadline is.

“Subtle line between animations that help and animations that hurt”

In late 2023, designer Anthony Hobday published a small list of 20 interface quality of life improvements, and recently Hobday and Katie Langerman chatted about it on an episode of their podcast Complementary.

It’s a fun listen (perhaps skip the bit-of-a-bummer 9-minute beginning), covering four of the listed things in more detail:

  • generous mouse paths (especially in menus)
  • coyote time for modifier keys
  • optical alignments
  • tooltip timing details

There were a few interesting things that caught my attention:

  • Figma does have “coyote time” in the very interaction the hosts are talking about, perhaps showcasing that the details of the details can make or break them.
  • “Should modifier keys be reversible” and “should modifier keys be consistent with one another” are interesting challenges; some more recent graphic tools have changed the long-standing behaviour here, making modifier keys more “sticky.”
  • Wholeheartedly agree with how frustrating it feels that the menu interactions are not yet baked into browsers as primitives. “The fact that the companies keep having to implement it themselves manually is maddening.” It is.
  • Good observation that some people associate animations with “feeling premium” (see also: the quote I put in the title).

Why do Macs ask you to press random keys when connecting a new keyboard?

You might have seen this, one of the strangest and most primitive experiences in macOS, where you’re asked to press the keys next to the left and right Shift, whatever they might be.

Perhaps I can explain.

There are three main international keyboard layout variants in common use: American (ANSI, with a horizontal Enter), European (ISO, with a vertical Enter), and Japanese (JIS, with a square-ish Enter).

The shape of Enter and the shuffling of the surrounding keys are not the only differences. It’s also that the European layout has historically always had one more key – shoved in between Shift and Z – and the Japanese layout a few more.

But the main challenge is that a keyboard doesn’t have a way to tell the host computer what its exact keys are and where they’re located.

So, pressing the thing next to the left Shift can help Apple understand whether the keyboard is American or Japanese (always Z) or European (something else, but never Z). And pressing the thing next to the right Shift differentiates JIS (where it’s the _ key) from the other layouts (where it’s always /).

What I called “primitive” just above is actually clever in its approach. The legend of the key next to the left Shift varies per locale (you can compare here), so the system can’t just tell you to press the < > key – and besides, asking the user to find a key that might not exist is a lot more stressful. And identifying the keyboard by choosing a layout visually wouldn’t work either, since there are a million layout variations – imagine having a split or a compact keyboard!
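Here’s a rough sketch of that inference, using the web’s KeyboardEvent.code names as stand-ins for the raw scan codes macOS sees. (This is my reading of the logic as described above, not Apple’s actual implementation.)

```ts
// Rough sketch: guess the physical layout from the two keys the dialog
// asks you to press, using KeyboardEvent.code names as stand-in scan codes.
type PhysicalLayout = "ANSI" | "ISO" | "JIS" | "unknown";

function guessLayout(nextToLeftShift: string, nextToRightShift: string): PhysicalLayout {
  // ISO keyboards have an extra key squeezed between left Shift and Z,
  // which reports its own code; on ANSI and JIS that neighbor is just Z.
  if (nextToLeftShift === "IntlBackslash") return "ISO";
  if (nextToLeftShift === "KeyZ") {
    // JIS keyboards put the Ro (_) key next to the right Shift;
    // on ANSI that neighbor is the / key.
    if (nextToRightShift === "IntlRo") return "JIS";
    if (nextToRightShift === "Slash") return "ANSI";
  }
  return "unknown";
}

// Usage: guessLayout("KeyZ", "IntlRo") === "JIS"
```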

But it still is primitive, because it will still open up even if the keyboard you connect isn’t really a typing keyboard…

…or even if it doesn’t have any keys at all. (Some peripherals like credit card readers and two-factor dongles identify as keyboards as they transfer information by sending keystrokes.)

But: Why does it matter? What happens if you select the wrong layout or ignore the dialog?

If you mix up America and Europe, the difference should be largely cosmetic. After all, you still have to choose the keyboard language. People in, say, Germany will likely choose the appropriate locale, and the keys will do the right thing. However, also selecting the correct physical layout will properly display it in a few places, which can be helpful:

Japanese keyboards are more interesting, because they still have an English “mode,” and the legends on a lot of the keys in that mode are different from those on American and European keyboards – yet the keys, when pressed, appear exactly the same (have the same “scan codes”) to the connected computer:

So knowing whether the keyboard is “US in the US” or “US in Japan” is important not just to place keys in the right position visually in a few places in macOS, but also for those keys to output what they actually show:

By the way, Apple’s own keyboards do not pop up this dialog. This is because while a keyboard cannot do much when connected, it can at least send vendor and model identification numbers, and Apple knows which of its keyboards sport what physical layout.

Why doesn’t macOS do that for third-party keyboards? It might, for some well-behaved ones; I don’t actually know. Unfortunately, vendor/model identification is a wild west, and a lot of the keyboards I have identify simply as “unknown,” so building up an all-encompassing keyboard layout database is not really possible.

Either way, I mostly wanted to share why the dialog exists. Mind you, I don’t love it: its language could be better, and at one point it breaks a cardinal rule by reordering options, which makes it hard to remember “oh yeah, it was the first scary setting that worked before.”

But overall, I think it is a clever solution to a surprisingly hard problem. Sometimes primitive is better than nothing.

“And if I were to end this story here, this would be a great story.”

A 21-minute video from Karl Jobst about a 2025 videogame cheating scandal:

In short: One of the professional teams in the FPS game Squad built a sophisticated set of scripts that made it easier to use the game for esports tournaments by adding additional UI, useful stats, a floating camera, an extra over-the-shoulder view, and so on. The community embraced the scripts as they genuinely made the spectating much better.

Months later, it turned out that the creators had not only hardcoded easier rules for their own team, but had even added a pretty comprehensive set of cheating keyboard shortcuts.

The useful esports spectating scripts were, in effect, a trojan horse. A fascinating story, plus an interesting case study in the psychology of cheating.

Apr 5, 2026

“If you use your computer to do important work, you deserve fast software.”

Two great posts about interaction latency, on the hardware and software sides. The first is from Ink & Switch:

There is a deep stack of technology that makes a modern computer interface respond to a user’s requests. Even something as simple as pressing a key on a keyboard and having the corresponding character appear in a text input box traverses a lengthy, complex gauntlet of steps, from the scan rate of the keyboard, through the OS and framework processing layers, through the graphics card rendering and display refresh rate.

There is reason for this complexity, and yet we feel sad that computer users trying to be productive with these devices are so often left waiting, watching spinners, or even just with the slight but still perceptible sense that their devices simply can’t keep up with them.

We believe fast software empowers users and makes them more productive. We know today’s software often lets users down by being slow, and we want to do better. We hope this material is helpful for you as you work on your own software.

I loved the slow-motion videos comparing what is normally impossible to notice:

Dan Luu has a complementary post digging a bit more into computer hardware latency from the 1970s to now:

I’ve had this nagging feeling that the computers I use today feel slower than the computers I used as a kid. As a rule, I don’t trust this kind of feeling because human perception has been shown to be unreliable in empirical studies, so I carried around a high-speed camera and measured the response latency of devices I’ve run into in the past few months.

I feel both of these essays are fantastic, and important for developing some sense of the specific numeric thresholds separating fast and slow – also in the context of being able to have an informed conversation with a front-end engineer. (Luu subsequently links to even more articles in the “Other posts on latency measurement” section, if you are curious.)

Otherwise, from my observation, the two most quoted laws of user-facing latency are still Jakob Nielsen’s response time limits and the Doherty Threshold. But Nielsen’s 100/1000/10000ms rule is from 1993 and, as far as I understand, is concerned primarily with UX flows: reactions to clicking a button, responses to typing a command, and so on. And the Doherty Threshold is even older. Both are simply not enough, especially not for things related to typing, multitouch, or mousing, where for a great experience you have to go way below 100ms, occasionally even down to single-digit milliseconds.

(My internal yardstick is “10 for touch, 30 for mousing, 50 for typing.” Milliseconds, of course.)

At the end of his essay, Luu writes:

It’s not clear what force could cause a significant improvement in the default experience most users see.

Perhaps one challenge is that these posts are dense and informative, but only appeal to people who care? Maybe latency eradication needs a PR strategy, with a few memorable rules and – perhaps arbitrary, but well-informed – numbers that come with some great names attached? I know in the context of web loading some of the metric names like FCP (First Contentful Paint) broke through at least to some extent, but those still feel more on the nerdy side. Even Nielsen’s otherwise fun 2019 video about response time limits didn’t stick the landing – why focus on slowing down an arbitrary label appearing above the glass when the ping sound was right there for the taking?!

I can’t help but dream of interaction speed’s “enshittification” moment.

“It moved too slowly to be an asteroid.”

In the previous post, I wrote:

I understand that the best way to compare two things visually is to switch between them promptly in situ; our visual system is really good at spotting even small changes when aided this way.

I thought it would be fun to talk about it briefly, because it gives me a chance to show you a really fun device:

This is a blink comparator, an apparatus built for astronomers to easily flip between two images of the night sky, taken at the exact same position some time apart.

It makes it easy to spot a moving asteroid, like in this set of two photos:

A blink comparator was used in 1930 to spot Pluto:

(Pluto is the blinking dot a bit up and to the right of the center – that dot moves to the left in the other frame. The fact that it moved at all made it an object of interest, but it didn’t traverse the sky like an asteroid or space debris would.)

This is why the “spot 10 differences” puzzles are always shown side by side…

…otherwise everything would be much, much easier to spot:

Today, this kind of stuff doesn’t require complex devices, but it’s useful to know the principle.

If you’re comparing a reference design with its implementation, instead of measuring things on both sides it can help to align them in two windows, and then switch between them using ⌘Tab.

If you’re working on an interface for users to see differences between two images – don’t (just) show them side by side, but also allow your users to flip between them this way. And resist the very natural urge to add any transitions that would seem nicer and friendlier; it is sharply switching between the images that is the most effective.

Linear’s clever internal redesign UI

I was impressed with this clever internal interface at Linear, shown inside this larger blog post:

The dev toolbar exists directly inside the app and allows us to easily toggle feature flags on and off. When something didn’t look right in the refreshed UI, it took us just one click to compare it with the previous version. That made it easier to determine whether the refresh had broken something or whether it had behaved that way before. Having the updates live behind feature flags also meant that instead of developing the redesign in isolation and shipping all the changes at once, we could integrate incremental changes to the platform.

I also cut it out here so it’s easier to see:

Here’s what I like about it:

  • It’s a separate UI surface: Rather than being awkwardly integrated alongside production UI and adding jank to it, it is a clearly delineated toolbar you know users won’t ever see, allowing the rest of the interface to always feel like production.
  • The feature flag toggling is easy: You don’t have to go anywhere else and possibly log in to toggle a flag, and you don’t have to wait for it to take effect. This will mean more people than just the core team members will be using it.
  • Toggling this particular feature flag is as easy as clicking on a tile: I don’t know if anyone can promote other flags they care about to easily toggleable tiles, but I can imagine that being really beneficial, too.
  • The feature flag toggling is instantaneous without any visual jank: I understand that the best way to compare two things visually is to switch between them promptly in situ; our visual system is really good at spotting even small changes when aided this way.

Each of the above bullet points individually removes only a small point of friction and would be easy to renege on, especially when it comes to internal-only interfaces. However, the combination of all of them results in great compound interest, and I bet it makes this interface effective – in addition to just being fun to use.
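For flavor, here’s a toy sketch of the ingredient that makes the last two points work: an in-app flag store whose subscribers react instantly, so flipping a flag re-renders in place with no reload. (All names are hypothetical; Linear hasn’t published their implementation.)

```ts
// Toy sketch: in-app feature flags that apply instantly, no reload needed.
type Listener = () => void;

class FeatureFlags {
  private flags = new Map<string, boolean>();
  private listeners = new Set<Listener>();

  isEnabled(name: string): boolean {
    return this.flags.get(name) ?? false;
  }

  // A dev-toolbar tile would call this on click; every subscriber
  // (i.e. the UI) reacts immediately.
  toggle(name: string): void {
    this.flags.set(name, !this.isEnabled(name));
    this.listeners.forEach((listener) => listener());
  }

  subscribe(listener: Listener): void {
    this.listeners.add(listener);
  }
}

// Usage: flip a hypothetical "refreshed-ui" flag back and forth to
// blink-compare the old and new designs in place.
const flags = new FeatureFlags();
flags.subscribe(() => {
  console.log(flags.isEnabled("refreshed-ui") ? "refreshed UI" : "previous UI");
});
flags.toggle("refreshed-ui");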

I appreciate Linear sharing this internal detail; if you are using an interesting internal tool or UI that you are allowed to share, please consider doing so and let me know!

“I’m hoping that the listeners out there, when they hear it, they’ll feel seen.”

This 25-minute segment on MKBHD’s Waveform podcast (video or audio, segment starts at 40:21) is from November 2024, and is a nice counterpart to the post about favourite well-made apps and sites.

The original theme is “what is an app that you use all the time, and like to use, but is actually a bad app?” but it quickly moves to a more general conversation about good and bad mobile apps.

It’s always interesting to me to see what themes emerge and what other people think is important. Here’s the list, where I linked to the relevant apps wherever I could find them:

Bad apps:

  • Google Messages – dinged for unreliable spam detection and a lack of organization/filtering
  • Notion (on mobile) – hard to orient yourself and some direct manipulation is wonky
  • many smart home accessory apps – bad and redundant with Google Home, but have to keep for emergencies
  • Netgear Orbi (network router) – specific functionality and bad password recovery
  • Hatch (white noise machine for babies) – simple things are hard to discover
  • Nest app/Nest Yale Smart Lock – bad integration
  • Goodreads – stale

Good apps:

For your consideration: Tab to fix spelling

A few years ago, I suggested adding a new interaction to Figma. If your text cursor was on a misspelled word (anywhere inside it, or at its edges), you could press Tab to quickly accept the suggested correction, without even seeing it:

Independently, Google Docs approached it from a slightly different angle, but landed on a similar interaction – in their version there’s a small visual callout, although you can still press Tab (and then Enter) to accept the suggestion:

I know the Tab key has a lot of jobs – from indenting bullet points to jumping through GUI elements – but in this context this new addition doesn’t seem to be in conflict.
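As a rough sketch of the logic (not Figma’s actual code), the check boils down to “is the caret within, or touching, a misspelled range” – if so, Tab applies the suggestion; if not, it falls through to its usual jobs:

```ts
// Minimal sketch: accept the top spelling suggestion on Tab if the caret
// is anywhere on a misspelled word, including its edges.
interface Misspelling {
  start: number;      // offsets into the text
  end: number;
  suggestion: string;
}

function acceptCorrectionOnTab(
  text: string,
  caret: number,
  misspellings: Misspelling[],
): { text: string; caret: number } | null {
  const hit = misspellings.find((m) => caret >= m.start && caret <= m.end);
  if (!hit) return null; // let Tab do its normal thing (indent, move focus…)

  const fixed = text.slice(0, hit.start) + hit.suggestion + text.slice(hit.end);
  // Park the caret at the end of the corrected word, ready to keep going.
  return { text: fixed, caret: hit.start + hit.suggestion.length };
}

// Usage: acceptCorrectionOnTab("teh cat", 2, [{ start: 0, end: 3, suggestion: "the" }])
```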

(Should I write a long photoessay about the Tab key, similar to the ones I wrote for Return/​Enter and Fn keys?)

Since we added it, I’ve really loved how it feels. Thanks to various typeaheads and autocompletes elsewhere, Tab has a strong “forward movement” energy, so it makes conceptual sense – and it’s just really fun to go around and quickly fix your writing this way.

I think a lot about how to make keyboard interactions feel superpower-y: a good keyboard shortcut on a large key, a tight interaction, a blink-of-an-eye velocity – something that’s eminently designed to lodge itself in your motor memory as quickly as possible, as it builds on top of prior motor memory. I’m biased, of course, but I like the “no scope” Figma version more, and it has that feeling to me.

Anachronisms

Testing tip: Enable the zoom peek gesture

Go to Settings > Accessibility > Zoom, and then turn on “Use scroll gesture with modifier keys to zoom.”

Then, at any moment, you can hold Control and swipe with two fingers (or use a scroll wheel) up or down to zoom the entire screen.

I’d also recommend turning off “Smooth images” under “Advanced…” so you see individual pixels better:

Over the years, I’ve found this feature very useful for inspecting various misalignments, checking visual details, and occasionally simply reading text that’s too small.

Compared to other ways of zooming, this one has three benefits:

  • it’s extremely motor-memory friendly and so my fingers do it without me even thinking
  • it’s a system-wide thing, so it will work everywhere
  • it’s safe, because it’s something that I call a peek gesture

Peek gestures are fast, but the main benefit is that they’re safe. In some apps, pressing ⌘+ a few times and then ⌘– the matching number of times doesn’t guarantee you will end up back in the same situation. The window size might change, the scroll position might move, the cursor might end up in a different place. In contrast, the Control gesture is 100% deterministic and reversible; it will always work the same and never mess anything up.

I treasure peek gestures in general. Here are a few other useful (and/or inspiring?) ones:

  • previewing things in Finder by pressing (or, for power users, holding) the spacebar
  • using ⌘⇧4 with the intention not to take a screenshot, but just to (roughly) measure a distance between two objects, and then pressing Esc to abort
  • in tools like Figma and Sketch, using Ctrl+C just to quickly verify the color, and pressing Esc to cancel (rather than clicking to put the color into the clipboard or apply it elsewhere)

Book review: Maintenance: Of Everything (Part One)

★★★☆☆

The new book by Stewart Brand tackles a subject that’s important to me. The introduction struck a chord:

The apparent paradox is profound: Maintenance is absolutely necessary and maintenance is optional. It is easy to put off, yet it has to be done. Defer now, regret later. Neglect kills.

What to do? Here’s a suggestion: Soften the paradox, and the misbehavior it encourages, by expanding the term “maintenance” beyond referring only to preventive maintenance to stave off the trauma of repair—brushing the damn teeth, etc. Let “maintenance” mean the whole grand process of keeping a thing going.

Ultimately, alas, the book doesn’t really expand on this suggestion. While the volume feels rich and dense in some ways – illustrations, extra commentary, highlights – its coverage ends up being rather shallow. Ironically, given the subject matter, it feels like Brand fell prey to a bunch of “sexy” stories, some of them only tangentially related to maintenance.

I will just say it: I wish the author was more woke. The book is very male-coded. The main chosen areas of investigation are: motorcycles! tanks! guns! wars! There are moments towards the end where Elon Musk and Bill Gates are talked about as if it were still 15 years ago and we hadn’t actually learned anything since. (No word of the Cybertruck, either.)

We know maintenance tends to be unrewarded and forgotten come promotion time. We know that tedious tasks are often assigned to women and people of color while white men go around doing “genius things.” It’s hard to imagine women not being present in a book about maintenance, and yet – and I wish I was joking – the only woman of any significance in the entire book is… The Statue Of Liberty.

That aside, before opening the book I hoped it would provide me with some vocabulary and evolved thinking about maintenance that I could put to use, and there are some moments where it almost approaches what I wanted from it. Here’s a passage:

Powell credits the Israeli military with a mindset that naturally viewed damaged tanks as soon-to-be-repaired tanks, rather than the irredeemable flotsam of battle. The fact that [Israeli] commanders thought in these terms gave purpose and direction to the maintenance-related technical and tactical skill their crews possessed.

This is fascinating. Tell me how? Tell me what was needed to make it happen? But, unfortunately, outside of some basic tenets of “give the rank and file more freedom to do things” and “embrace improvisation,” the book doesn’t seem to offer more.

Elsewhere, there is this quote:

In almost every plant I worked at, QA was seen as a hindrance to hitting productivity metrics. We never got credit for a well-maintained manufacturing capability, but QA almost always got blamed when things went wrong.

…which, again, felt like a fascinating thread to pull on. But instead of digging deeper, the book leaves it hanging without investigation.

The book doesn’t really have a proper ending with a synthesis of what came before, and it generally meanders a lot – to the point that the table of contents has more “digressions” than actual subjects. It also feels occasionally rambling and occasionally show-offy (name-dropping people like Kevin Kelly and Freeman Dyson, or including quotes from “beta-tester” readers that mostly serve to paint Brand in a positive light), which takes away from otherwise brisk writing and at times truly excellent storytelling. (The first chapter in particular is fantastic.)

If you want an easy-to-read, breezy, well-typeset book filled with historical anecdotes, and the above caveats do not bother you, this might be a fun read! But I expected more from it.

The one place where the book shines is pointing people toward other books – there are pages that feel more like a literature review (done really well!), and the end matter has a bibliography and recommended reading with notes. So in that way, while disappointing in and of itself, it could also become an interesting starting point for more research.

“Naïve, simple, not good enough.”

This is a thoughtful post from Florian Schulz about designing a typeahead experience.

I liked the details both within the implementation – for example, making sure the kerning is preserved! – and in the presentation. I particularly enjoyed that Schulz made the demos out of the component itself, rather than using prerecorded videos. (I was delighted to discover that even the first large “picture” of the component is actually interactive!)

A small comment on this bit:

Unfortunately, not all browsers expose the selection or accent color of an operating system. For example, if a user would set the accent color in macOS to pink, the special CSS keyword color “Highlight” will still result in a light blue color in Safari. In other browsers like Chrome, the color will match the user preference. But since this is an attack vector for user tracking / fingerprinting, Apple made the right choice to hide the user preference from developers.

From my understanding, this is not necessarily correct. For example, in theory, the purple visited-link color can be used for fingerprinting – by quietly building a profile, in the background, of which of hundreds of popular websites I have visited.

The way browsers solve this is to never expose the color programmatically back to JavaScript – if your code asks for a link color, it will be blue regardless of whether the link was visited or not. It seems to me that the Highlight color could be used the same way here. Given that CSS now supports things like color-mix(in srgb, Highlight 20%, white), it would even allow a designer to riff on the color without ever knowing what it is.
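Here’s a tiny sketch of the mitigation I’m describing – the computed style deliberately lies to scripts, and the same approach could plausibly extend to Highlight:

```ts
// Scripts can't read back the real :visited color; browsers report the
// unvisited style even for links the user has visited, so link styling
// can't be used to reconstruct browsing history.
for (const link of Array.from(document.querySelectorAll("a"))) {
  console.log(getComputedStyle(link).color);
}
```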

“There is no quality or historical significance standard.”

Multibowl is one of my favourite emulation projects because it’s a rare example of using emulators creatively, rather than for nostalgia or research.

It’s a 2016 game by Bennett Foddy and AP Thompson that reimagines older existing games as smaller pieces of a new, Super Mario Party-like experience. Two players randomly join one of 300 games – sometimes in medias res – with a small explicit goal that can be accomplished in about 30 seconds, after which a point is awarded, another game is loaded, and so on.

All of this is done through actual emulation and fast switching of the games’ original code:

Regarding the game choices, at the outset, I wanted to curate a list of moments of gameplay that would be meaningful if played for just a short period of time. Sometimes it’s obvious – you can take a moment from a fighting game where both players are low on health, or play a sports game from the start until the first point is scored. So that’s where I started. Over time, I figured out that you could make exciting moments in games that are not otherwise interesting for a competitive duel. For example, in Dodonpachi (a bullet hell game) we take away the player’s guns and challenge them to stay alive in a huge hail of bullets.

For games that were designed as cooperative experiences, I eventually gravitated toward the structure ‘score more points but do not die’, which forces the players to calibrate how much risk they take relative to the other player.

This excerpt is from a 2017 interview of Foddy by Seb Chan from ACMI. There are many interesting moments in that interview, such as the issue of curation:

Multibowl is not a very precise historical curation like you might make for a museum exhibition, where you can only show a couple of dozen things at most. It’s a huge driftnet of games. There is no quality or historical significance standard, and no attempt to balance out the games in terms of nationality or gender. The only curatorial instinct that it follows is to find the most diverse set of game ideas. With each piece distilled down to a randomly-selected 30-second slice, there’s room for an infinite number of them.

In fact, contrary to a museum curation, the point of Multibowl is to have too many games for a single player to see. It’s best when it feels too big to grasp. I think, now that there are 300 games in there, it’s starting to feel that way.

Unfortunately, it is not possible to actually play Multibowl outside of special events, given copyright issues. In addition to general emulation copyright murkiness, Foddy adds, “I don’t think the actual bits of actual games have ever been used as the fabric of a larger game before.”

However, a really fun introduction to Multibowl is another art project from the now-defunct comedy duo Auralnauts, who actually played Multibowl pretending to be Kylo Ren and Bane, with hilarious results:

World-class female singers

The story about the original Macintosh’s built-in font set being named after “world-class cities” is well known and documented by Susan Kare on the Folklore site:

The first Macintosh font was designed to be a bold system font with no jagged diagonals, and was originally called “Elefont”. There were going to be lots of fonts, so we were looking for a set of attractive, related names. Andy Hertzfeld and I had met in high school in suburban Philadelphia, so we started naming the other fonts after stops on the Paoli Local commuter train: Overbrook, Merion, Ardmore, and Rosemont. (Ransom was the only one that broke that convention; it was a font of mismatched letters intended to evoke messages from kidnapers made from cut-out letters).

One day Steve Jobs stopped by the software group, as he often did at the end of the day. He frowned as he looked at the font names on a menu. “What are those names?”, he asked, and we explained about the Paoli Local.

“Well”, he said, “cities are OK, but not little cities that nobody’s ever heard of. They ought to be WORLD CLASS cities!”

So that is how Chicago (Elefont), New York, Geneva, London, San Francisco (Ransom), Toronto, and Venice […] got their names.

If you check out the actual Philly stops and witness all their provinciality, you can understand what Jobs was after:

Go to the first Macintosh via Infinite Mac, open Infinite HD and MacWrite within, and you can examine the nine eventual fonts in their pixellated, cosmopolitan glory:

The list goes in this order: New York, Geneva, Toronto, Monaco, Chicago, Venice, London, Athens, San Francisco.

But: How about some hard evidence for the original anecdote? Turns out, the March 1984 issue of Popular Computing used pre-release Mac software and printed a screenshot of the names rejected by Jobs:

Since on the facing page we see the output in the same order, coming up with the correct mapping is not hard:

  • Cursive → Venice
  • Old English → London
  • City → Athens
  • Ransom → San Francisco
  • Overbrook → Toronto
  • System → Chicago
  • Rosemont → New York
  • Ardmore → Geneva
  • Merion → Monaco

One has to admire the final order of the Mac fonts, going from dependable and utilitarian at the top to progressively weirder; this earlier list is all over the place.

In later releases of Mac OS, three other world-city fonts – Boston, Los Angeles, and Cairo – joined the party, so let’s show them here for completeness’s sake:

(Cairo is the classic icon font and in a way a predecessor of modern emoji, with inside jokes like Clarus The Dogcow.)

But that’s not the end of the story of the original Mac fonts. Let’s get back to 1983. On yet another page of the magazine, we see this list from MacPaint:

You can tell this screenshot is even older than the previous one, because it is itself set in an earlier version of Chicago, with a single-storey lowercase “a,” and many letterforms being works in progress. (I talked about the history of Chicago in my 2024 talk about pixel fonts.)

And it is old enough that these aren’t just interim names for surviving fonts – there are actually quite a few old fonts here that didn’t make it to release day.

Unfortunately, this particular version of Macintosh software remains unknown, but one similar pre-release version of the first Mac software leaked, and so we can take a look at some of these fonts, too:

(You can download a lot of these fonts thanks to the hard work of John Duncan. They are still bitmap fonts and might not work in all the places in modern macOS, but they seem to work in TextEdit at least.)

Here’s what I learned from looking at this list:

  • You can definitely see how unpolished some of these fonts are in terms of spacing, letterforms, and available sizes – kudos to the team for holding a high quality bar even though there was little precedent for proportional fonts on home computers at that time.
  • Even the fonts that shipped – London (née Old English), Venice (née Cursive), and Chicago (née System) – have had their letterforms tweaked and improved.
  • Chicago is not named Elefont, but simply System. Had the System name persisted, this Medium snafu from 2015 would have been even more hilarious.
  • Cream came all the way from Xerox’s Smalltalk and was the original system font for Macintosh-in-progress, before Susan Kare created Elefont/​Chicago.
  • PaintFont was a symbol/​icon font, but distinct from Cairo and emoji in that it seems it was meant to be used only by the app to draw its interface. (Today, SF Symbols serve a similar purpose.)
  • Apple originally planned to use Times Roman and Helvetica, but that didn’t happen, presumably because of licensing issues. The proper Times and Helvetica fonts only arrived years later. Here’s a comparison:

But the most interesting thing I hadn’t noticed before: two fonts called “Marie Osmond” and “Patti.”

I am reaching outside of my well of knowledge here, but from context clues I’ll assume the latter means Patti LaBelle. And so, pulling on that thread, it’s kind of cool to imagine an alternate universe where the original Mac fonts are neither suburban Philly stations nor well-known cities, but something like this:

“That’s because the metro cab is his right hand. Videogames!”

In the Fallout 3: Broken Steel add-on, the team wanted to introduce a moving subway train under Washington, D.C.:

However, the engine did not have any moving vehicles. Instead of adding a new kind of primitive into the game engine, the creators… made the player character itself become the subway car when in motion:

This was done by removing freedom of movement from the player, forcing the character to slide on the floor, and equipping him with… a “metro hat.”

The visuals of people hacking this to use it outside of the subway area are really funny:

Technically, it was not a hat, but a piece of right-arm armor, as you can see from the right hand missing in the above picture.

The FPS genre is filled with all sorts of hacks for hand-held weapons, compensating for the fact that depicting them accurately often doesn’t feel as good…

…but I have never heard of someone “wearing a train.”

(The title comes from this post.)

Mar 29, 2026

“Decentralization does not always equal delight.”

A thoughtful 26-minute talk by Imani Joy, the solitary full-time designer on Mastodon, reflecting on her nine months there:

It’s an interesting peek behind the curtain at designing for this particular space, and the many unenviable constraints: lack of data, care for privacy, tension between Mastodon’s power-user early adopters (“they are values-driven, they want control, they’ll tolerate a lot of the clunkiness of the Fediverse”) and “mainstream audience [that] expects polish.”

At some point, design needs to be authoritative, but how do you combine that with wanting the process to be as inclusive as possible? The product itself is a federation of various servers that can exert their own control – so how do you bring it all together under one neat umbrella for the user? (Also a challenge for Android in comparison with iOS.) Mainstream design has certain fashion-y tendencies – how do you make sure you don’t lose yourself while chasing them, but also don’t stay ossified out of fear of making changes? (Wikipedia, the Internet Archive, and other similar places look and behave a certain way, after all, and it’s not usually for lack of talent to “modernize” them.)

The most interesting thing to me was this:

It’s easy to talk in terms of who to optimize for. Things get harder when you start to articulate who you won’t optimize for, what trade-offs you must make in pursuit of your goal, and who you’re going to risk letting down along the way. What the team needed from me more than anything was not the probabilities, not the usability findings, not the story of who we’re making happy. They needed to hear who we’ll choose to disappoint and why. And I told them that building the best experience on Mastodon means that we’ll solve for the extremes, but we won’t center them. And sure, we do risk frustrating some power users who want absolute control over their profiles, but that risk is necessary to optimize the experience also for browsing users.

When we were working at Figma in 2019, shipping an update to text line height algorithms (moving them from the way print does things to the way the web does things), I started an internal document called “The new line height and its discontents,” where the team and I deliberately wrote out who would be most annoyed about the changes, and why. We listed our arguments, workarounds, even “deal sweeteners” (“but look at this other thing that will get better as a result!”), but we also tried very hard to be candid with ourselves. Some people were not going to be happy no matter what we did or said. Do we know precisely who these people are, and are we okay with that? I’d recommend that approach for any change-management project, rather than keeping your fingers crossed or leaning on toxic positivity.

So far, Joy has worked on quote posts and new profiles, and I appreciated her ending the talk on a note of recognition for these kinds of projects in these kinds of settings:

I know that we’re building something that will continue to be imperfect, but it doesn’t have to be perfect to make a positive difference in the world.

Come at the king, you best not miss

Column view cut its teeth on NeXT computers…

…and blossomed on early versions of Mac OS X…

…but where I thought it really shone was on the first iPods:

This was perhaps the most fun you could ever have navigating a hierarchy of things; left/​right/up/down made sense in this universe, to the point where you could easily build a mental model of what goes where, even if your viewport was smaller than ever.

It was also a close-to-ideal union of software and hardware, admirable in its simplicity and attention to detail. This is where Apple practiced momentum curves, haptics (via a tiny speaker making haptic-like clicks), and handling touch programmatically (only the first iPod had a physically rotating wheel, later replaced by stationary touch-sensitive surfaces) – all necessary to make the iPhone’s eventual multi-touch so successful. And the iPhone embraced column views wholesale, for everything from the Music app (obvi), through Notes, to Settings.

Well, sometimes you don’t appreciate something until it’s taken away. Here are the settings in the iOS version of Google Maps:

I am not sure why the designers chose to deviate from the standard, replacing a clear Y/X relationship with a more confusing Y/Z-that-looks-very-much-like-Y. They kept the chevrons hinting at the original orientation – and they probably had to, as vertical chevrons have a different connotation – but perhaps that was the warning sign right there not to change things.

I think the principle is, in general: if you’re reinventing something well-established, both your reasoning and your execution have to be really, really solid. I don’t think that happened here. (Other Google apps seem to use the standard column view model.)

“Less of a pitch, more of a prediction”

An excellent 17-minute video from The Art Of Storytelling that analyzes the now-infamous 2021 Mark Zuckerberg Metaverse introduction video:

What I liked about it is that the author goes beyond cheap shots and deeper into both storytelling aspects (drawing from his experience)…

Now, as you can tell, the big problem with the design and execution of this video is that the producers failed to recognize the importance of point of view in telling this story. Now, perspective is already very important in any film, but it’s doubly important in a film for which one’s point of view in reality is also the subject. But this failure is present even in some of the more mundane parts of the film like the interviews that Mark does with various meta staff members. Now, as it’s plain to see, these are not real interviews. They’re fully scripted and staged – again, a classic mistake in corporate film. You can even tell that they’re not looking at each other. They’re clearly reading from a teleprompter. Yikes.

Of course, the entire premise of an interview is that two people are speaking candidly. So watching an obviously fake interview can be deeply unsettling as the speakers try to act out natural conversation and inevitably fail. This is why so many people in this video, including Mark, seem to not know what to do with their hands while speaking. It’s because they’ve been told to act naturally in a social situation that does not normally exist.

…and the meaning of these kinds of propaganda-esque announcements:

They are joined by some friends who are calling from Soho to tell them about some cool augmented reality street art that they’ve just discovered. […] And with a wave of his hand, Mark teleports the artwork into his spaceship so that he can appreciate it for himself, thus extracting this street art from any sense of place and context, which is the point of street art. I know this might sound like a nitpick, but I think it’s just worth lingering on the fact that, you know, in this high concept tech demo about how this technology will empower people to appreciate art in new ways. Nobody paused to ask what the social and cultural function of street art actually is.

The entire introduction video comes across as thoughtless and careless – “It’s not a product launch or even a demo. It’s just a cartoon about the world Mark Zuckerberg is telling you that you will one day live in.” – and some of the observations here will be relevant to other things, even in other media: UI redesign minisites, font announcement articles, rebrand unveils, and so on.

I would love similar analyses of Apple’s stuff – not just the most obvious parallel which would be the 1987 Knowledge Navigator vision video, but some of the more recent scripted virtual keynotes, too.

Got your back, pt. 4

Connecting to public wi-fi networks with their captive portals is always a bit of a wonky proposition, and nothing makes public wi-fi wonkier than using it on a plane.

I believe that the rise of https made things harder – the portal can only announce itself by intercepting plain-HTTP traffic, and if it doesn’t kick in, no secure traffic can happen – and over time I just started remembering that “captive.apple.com” is a reliable HTTP-only destination to visit.
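If you’re curious what that habit boils down to, here’s a minimal sketch – an illustration, not what any OS actually runs – assuming captive.apple.com keeps answering plain HTTP with its tiny “Success” page:

```typescript
// A minimal sketch of a captive-portal check, in the spirit of what OSes do.
// Assumption: captive.apple.com answers a plain-HTTP request with a tiny page
// containing the word "Success"; a captive portal will intercept the request
// and return its own login page (or a redirect to one) instead.
async function behindCaptivePortal(): Promise<boolean> {
  const response = await fetch("http://captive.apple.com/", {
    redirect: "follow", // portals often answer with a redirect to their login page
  });
  const body = await response.text();
  // Anything other than the expected "Success" page means something on the
  // network is intercepting plain HTTP traffic.
  return !(response.ok && body.includes("Success"));
}

// Usage: if this resolves to true, go visit the portal page first.
behindCaptivePortal().then((captive) =>
  console.log(captive ? "Captive portal in the way" : "Clear to go"),
);
```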

But I noticed this week that United’s onboard wi-fi network is called “Unitedwifi.com” as a reminder of where to go once you are connected, to avoid that problem. I thought this was a nice touch.

On tools and toolmaking

Not long ago, a blog I otherwise like a lot included this passage:

Designers have been saying this for years. Cameras don’t take pictures, photographers do. Tools don’t make you a better designer. Now the PM world is arriving at the same conclusion.

I am not linking to the post, because I hear this argument from time to time and I want to comment on the general notion rather than that specific post.

I think I understand the sentiment behind it: You’re not a designer because you know all the Figma shortcuts. You’re not a perfect typewriter away from The Next Great American Novel. Mastery of a tool is not mastery of the subject matter. And there is definitely a certain amount of performative pretense in an insta photo of a meticulously arranged desk with a bougie keyboard, in going on at length about the only correct set of presets and plugins, or in the idea that “if only you adopt this one creative habit, a firehose of creativity will follow.”

But I also disagree. Good tools do make you a better designer.

A good tool can make you go faster and, as a result, let you spend more time doing revs and trying new things. A good tool can make you go slower when needed, practicing a connection with the material underneath.

A good tool will prevent you from shooting yourself in the foot, will teach you new things about what you’re doing – and perhaps even about yourself.

A good tool will value your growth, make you reflect on your growing body of work, and push you to try harder.

A good tool can inspire you. A great tool can make you fall in love. A bad tool can make you walk away, and a horrible tool will make you never want to come back.

A good tool will make you seek out more good tools.

Sure, people wrote books on a BlackBerry. Would you want to? Sure, the best camera is the one you have on you. But wouldn’t you prefer that camera to also be the best camera for whatever it is that makes you tick – a great sensor or glass, amazing build quality, a friendly user interface, a logo that makes you want to step up, or some particular quirk or sentiment that you can’t even explain, but that matters a whole lot to you?

I’m told I should be annoyed if someone’s first reaction to seeing a nice photo I made is “what kind of camera do you use?”, as it diminishes my accomplishments as a photographer. But: I chose the camera, and bolted on the appropriate lens, and realized over the years that aperture priority mode and a very precise focus area are what makes my brain happy. I went through other cameras before, and learned I didn’t like them and I liked this one. At some point in my life I even ventured out into the frightening underworld of the settings menu, opened a new browser window, and decided “I will now try to understand all of these terms.” It took years, but I did.

The reason I enjoy scanning and processing old documents is that I invested in my tools. I have a little keypad, a bunch of hard-earned Photoshop actions, and some bespoke Keyboard Maestro combos that boss Photoshop around. This little tool universe doesn’t just make me more efficient – it also makes the work fun.

I’d go even further. The mastery of the subject matter and the mastery of the tool are both important – but they also have to be joined by fluency with tool choices, and deep understanding of the relationships you have with your tools.

No single writing advice book will give you a perfect recipe, but read ten of them and scan twenty more, and you might compile the right mixtape of practical tidbits for your brain, and inspiration for your soul. Likewise, you have to try out a bunch of tools – some bad ones, a few great ones – to understand what you need. Not just for efficiency, but also for enjoyment, and ambition, and flexibility or maybe rigidity, and this sort of unmeasurable feeling of a tool getting you, or a tool made by someone like you.

Maybe it’s the 1960s typewriter you need, or a newfangled e-ink-based writing implement, or maybe you just have to open TextEdit and close everything else. I’m not going to tell you the novel will come out then. But the novel might never come out if you don’t figure out what tool can help get it out of you.

You also have to recognize the telltale signs when you outgrow the tool, or when the tool starts disappointing you. Over the years, I learned that I hate InDesign, but that I hate LaTeX even more. I switched from Apple Notes to SimpleNote in 2012, went back to Notes in 2017, and just this year moved over to Bear. I once cargo-culted Scrivener for writing and ran away screaming, but I also once cargo-culted DevonThink and still use it today, in awe of its clunkiness and old-fashionedness that match my own.

AI tools are still tools. And generative AI will allow you to build more tools for the solitary audience of just you – but, like elsewhere, it will require some understanding of what makes for a good tool, and what makes for a good tool for you.

Craig Mod wrote recently about using AI to build his own custom tools:

My situation is pretty unique. I’m dealing with multiple bank accounts in multiple countries. Constantly juggling currencies. Money moves between accounts locally and internationally. I freelance as a writer for clients around the world. I do media work — TV and radio. I make money from book sales paid by Random House via my New York agent, and I make money from book sales sold directly from my Shopify store. […] Simply put: It’s a big mess, and no off-the-shelf accounting software does what I need. So after years of pain, I finally sat down last week and started to build my own.

But I bet Mod knew what tool he needed to build based on his experience with tools that didn’t work for him – and with software and design in general.

Elsewhere, here’s Sam Henri Gold, in a widely-shared essay that is worth a read, on MacBook Neo and the beginning of the tool journey:

He is going to go through System Settings, panel by panel, and adjust everything he can adjust just to see how he likes it. He is going to make a folder called “Projects” with nothing in it. He is going to download Blender because someone on Reddit said it was free, and then stare at the interface for forty-five minutes. He is going to open GarageBand and make something that is not a song. He is going to take screenshots of fonts he likes and put them in a folder called “cool fonts” and not know why. Then he is going to have Blender and GarageBand and Safari and Xcode all open at once, not because he’s working in all of them but because he doesn’t know you’re not supposed to do that, and the machine is going to get hot and slow and he is going to learn what the spinning beachball cursor means. None of this will look, from the outside, like the beginning of anything. But one of those things is going to stick longer than the others. He won’t know which one until later. He’ll just know he keeps opening it.

I am bothered by black-and-white, LinkedIn-ready statements. “Tools don’t make you a better designer” feels like another version of the abused and misunderstood “less is more.”

My camera taught me to be a better photographer. DevonThink told me how to better organize my thoughts. Norton Utilities showed me how to have fun when doing serious things, and Autodesk Animator how to be serious about having fun.

I’m a toolmaker, so perhaps I arrive at this biased. I endured some crappy tools, wrote some okay ones, benefitted from some great ones. I don’t think I would have become a designer without them.

To streamline or not to streamline

Software engineering has long had a concept of “premature optimization” – overbuilding things too early, in anticipation of a future that might or might not come.

I feel design has a version of that, too. Here’s the viewer menu hierarchy in Google Drive:

One should always feel very uneasy about a menu with just one item, like Insert here. Even within the View menu, one could imagine streamlining all the commands to be in one main menu, rather than two tiny submenus (coupled with pretty excessive width that makes for an interaction that feels like walking a tightrope).

These are the menus for a PNG image. It’s entirely possible other file types offer more options and this menu structure earns its keep then, paying off in consistency over the long run – but I tried a few file formats, and the menus all looked similarly sparse.

As a counterpoint, here’s an example I just spotted in the context/​right-click menu in Apple’s Notes:

When you have one device, the three options get appended to the ground floor of the menu. But if you have more than one, they all get ejected into a submenu.

I like this soft consistency of introducing hierarchy only when it’s needed – or in reverse, flattening/​streamlining it as necessary.

I have mixed feelings about this one particular use, however. This menu is already very long (and seemingly abandoned – look at the table, checklist, and link options), so in this case perhaps a consistent submenu would be better overall. Also, the “Insert from iPhone and iPad” label is long and makes the entire menu slightly wider.

But as a pattern, it’s worth considering. (Just for completeness’s sake, you could also half-streamline by adding a submenu for the iPhone and another one for the iPad. But in this particular case, it’d also likely be a bad idea.)
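If you were building this pattern yourself, the logic is tiny. A sketch with hypothetical types and labels – not Apple’s actual menu code:

```typescript
// A sketch of the "soft consistency" pattern: flatten when there's one
// device, introduce hierarchy only once there are several.
type MenuItem = { title: string; children?: MenuItem[] };

// Illustrative action labels, not necessarily the real ones.
const actions = ["Take Photo", "Scan Documents", "Add Sketch"];

function insertFromDeviceItems(devices: string[]): MenuItem[] {
  if (devices.length === 1) {
    // One device: the actions sit directly on the menu's "ground floor".
    return actions.map((title) => ({ title }));
  }
  // More than one device: everything gets ejected into a single submenu,
  // with the actions repeated per device.
  return [{
    title: `Insert from ${devices.join(" or ")}`,
    children: devices.flatMap((device) =>
      actions.map((title) => ({ title: `${title} (${device})` }))),
  }];
}

console.log(insertFromDeviceItems(["iPhone"]));
console.log(insertFromDeviceItems(["iPhone", "iPad"]));
```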

System shock

I occasionally move older writing that still feels interesting to my new site, and today I republished the 2015 story about a strange bug that brought back an old pixel font from beyond the grave:

Some of the technical details inside are obsolete, but the story might still be fun. (Plus, it seems like at every job I have, I eventually stumble upon a bug that brings back something from the annals of history. Here’s one from 2019.)

Some more placeholder misuse

I mentioned placeholders before in the context of Dropbox Paper

…and I wanted to share a response by Nikita Prokopov, because he had a great point about those Dropbox Paper placeholders that I didn’t consider:

For me it’s […] confusing placement. Like if somebody writes “Have a nice day” on a door instead of “Push” or “Pull”. I don’t mind seeing “Have a nice day” message somewhere neutral, in a place not occupied by any other function, but not where I expect very specific help.

I was reminded of Prokopov’s comment when I saw this at the airport yesterday:

I remember, eons ago, how impressed I was when one of the Chrome designers was telling me how all of these error pages were specifically designed to feel like liminal spaces and not like destinations. These were, in a way, placeholder content.

But “Press space to play” feels like a strange thing to put here. (Previously, the message said “No internet” or “There is no Internet connection.”) I understand that this is Chrome’s popular mascot, but this is still an error page whose purpose is to tell me what’s wrong, rather than to serve as an entry point to a minigame.

Also, just a few days ago, I stumbled upon this fun example of a placeholder collapse – where a temporary text becomes permanent:

If you are curious, this is what it looks like if you don’t forget to set the message. And funnily enough, given where we started, it says “Have a nice day”:

“Publishers aren’t evil, but they are desperate.”

A meandering and messy, but absolutely worthwhile, essay from Shubham Bose about the bloat and hostile behaviours on news sites:

I went to the New York Times to glimpse at four headlines and was greeted with 422 network requests and 49 megabytes of data. […]

Almost all modern news websites are guilty of some variation of anti-user patterns. As a reminder, the NNgroup defines interaction cost as the sum of mental and physical efforts a user must exert to reach their goal. In the physical world, hostile architecture refers to a park bench with spikes that prevent people from sleeping. In the digital world, we can call it a system carefully engineered to extract metrics at the expense of human cognitive load. Let’s also cover some popular user-hostile design choices that have gone mainstream.

Bose has a knack for naming some of these hostile patterns: The Pre-Read Ambush stands for distracting you even before you start reading, Z-Index Warfare is about multiple pop-ups competing with each other, and Viewport Suffocation is about covering so much screen with crap you can barely see the content. You can almost see those names fly by on the massive screens in the final scenes of WarGames:

By the way, I didn’t know that the ad bidding is actually happening on my computer, using my CPU, and clobbering my interface speed:

Before the user finishes reading the headline, the browser is forced to process dozens of concurrent bidding requests to exchanges like Rubicon Project […] and Amazon Ad Systems. While these requests are asynchronous over the network, their payloads are incredibly hostile to the browser’s main thread. To facilitate this, the browser must download, parse and compile megabytes of JS. As a publisher, you shouldn’t run compute cycles to calculate ad yields before rendering the actual journalism.
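For the non-web-developer readers, here’s a deliberately simplified, hypothetical sketch of what that in-browser auction amounts to – no real exchange’s API, just the shape of the work your CPU ends up doing before the article renders:

```typescript
// Hypothetical client-side header-bidding sketch: the fan-out to exchanges
// and the "pick the winner" logic both run in the reader's browser.
type Bid = { exchange: string; cpm: number };

async function auctionInBrowser(exchanges: string[]): Promise<Bid | null> {
  // One request per exchange, all kicked off before the page is readable.
  const bids = await Promise.all(
    exchanges.map(async (exchange): Promise<Bid> => {
      const res = await fetch(`https://${exchange}/bid`, { method: "POST" });
      const { cpm } = await res.json(); // hypothetical response shape
      return { exchange, cpm };
    }),
  );
  // The winning bid is computed on the visitor's CPU, not the publisher's.
  return bids.reduce<Bid | null>(
    (best, bid) => (best && best.cpm >= bid.cpm ? best : bid),
    null,
  );
}
```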

The essay ends on a call to action:

No individual engineer at the Times decided to make reading miserable. This architecture emerged from a thousand small incentive decisions, each locally rational yet collectively catastrophic.

They built a system that treats your attention as an extractable resource. The most radical thing you can do is refuse to be extracted. Close the tab. Use RSS. Let the bounce rate speak for itself.

Funny you should say that. There is another user-hostile pattern not mentioned in the article, as it happens on your way out of the site; the swipe-back gesture on a mobile phone gets hijacked to insert a frustrating “Keep on reading” page, rather than taking you back where you came from:

It’s there on many sites, from Slate to Ars Technica.

It usually shows cheap, attention-grabbing headlines (in the case of Ars Technica, the Linus Torvalds article was over a decade old!). I originally thought this was just a last-ditch attempt to keep me on the site, but when I asked on social, a reader suggested there is another reason:

It’s an SEO play. If you land on a site because of a Google search and swipe back to Google, it sends a signal to Google that it wasn’t the result you were looking for. So by forcing users to click a link on the page to read more than two paragraphs, it means the user is unable to swipe back to Google and send that negative SEO signal.

Even the bounce rate is not allowed to speak for itself.

Bear’s seamless OCR integration

I feel like social media and, more recently, the slate of AI-powered “tell me what’s here” features continue to show us the power and longevity of screenshots. After all, nothing beats a more or less approachable shortcut and a file format that works literally everywhere.

But screenshots have issues, and I liked how Bear (a note-taking app) brilliantly integrated OCR of text inside images into its flows. It just worked with regular ⌘F finding, without me having to do anything:

The recognized text also appears when you search through notes, and so on. It’s just great peace of mind that you’re not going to miss text just because you happened to screenshot it.

Apple operating systems have had detection of text inside images for a while – I know on iOS in particular it sometimes gets in the way of normal gestures – so I thought it was just that, but curiously this doesn’t work as nicely in Apple’s own Notes.

Two nice moments from MoMA in New York

To be fair, I am traveling and haven’t looked for solid evidence or a citation that this works for people, but I personally like this approach: in lieu of a separate language selector button, each option here is itself both a language selector and a commit button.

The labels themselves are not the names of the languages, but calls to action; I imagine recognizing the one label that means something to you should be easy if the other nine look like gibberish.

And a thoughtful moment by one exhibit: not only showing you where you are in the sequence of three videos, but even within the currently-playing video.

(I’m less of a fan of stretched type, though.)

“It takes an airplane to bring out the worst in a pilot.”

Speaking of fly-by-wire… William Langewiesche is one of my favourite technical writers. He finds a way to explain complex aviation aspects really well, and then add a certain amount of beauty and poetry on top of that. His style was a big influence on my book, and I like him so much I once compiled links to his writing so that others could find it more easily.

Here’s Langewiesche’s essay from 2014 about the 2009 crash of Air France Flight 447, where an implementation of fly-by-wire – which means disconnecting the flight stick and attendant levers from immediately controlling the flight surfaces via physical linkage, and instead putting motors and software in between – contributed to a fatal accident, as the pilots’ mental model of the system diverged too far from what was happening:

The [Airbus] A330 is a masterpiece of design, and one of the most foolproof airplanes ever built. How could a brief airspeed indication failure in an uncritical phase of the flight have caused these Air France pilots to get so tangled up? And how could they not have understood that the airplane had stalled? The roots of the problem seem to lie paradoxically in the very same cockpit designs that have helped to make the last few generations of airliners extraordinarily safe and easy to fly.

It’s an interesting read today in the context of robotaxis and self-driving, but also AI changing software writing:

This is another unintended consequence of designing airplanes that anyone can fly: anyone can take you up on the offer. Beyond the degradation of basic skills of people who may once have been competent pilots, the fourth-generation jets have enabled people who probably never had the skills to begin with and should not have been in the cockpit. As a result, the mental makeup of airline pilots has changed. On this there is nearly universal agreement—at Boeing and Airbus, and among accident investigators, regulators, flight-operations managers, instructors, and academics. A different crowd is flying now, and though excellent pilots still work the job, on average the knowledge base has become very thin.

It seems that we are locked into a spiral in which poor human performance begets automation, which worsens human performance, which begets increasing automation.

I was devastated to discover, while writing this post, that Langewiesche died last year. Rest in peace.

“This thing that Tamron’s doing is actually very cool.”

This 9-minute video from PetaPixel probably won’t make much sense to non-photographers, but there is something refreshing about the idea that there are still places where adding software is seen as a positive:

The video talks about Tamron lenses that have their own software (independent of the camera), and even their own USB-C port.

In a camera-lens equivalent of fly-by-wire, the software lets you fine-tune the behaviour of the hardware: what the soft buttons should do, whether the focus ring should respond linearly or not, or even which direction it should rotate. There are also more complex behaviours – like time lapses with focus pulls – with an interesting interface that’s definitely not beautiful, but I think still worth checking out for how it uses skeuomorphism.

“So, what makes 3D so scary and different?”

It is common knowledge that Luigi is just a palette-swapped Mario, and that the characters facing left are the same characters as those facing right, only rendered mirrored.

This interesting 9-minute video from Core-A Gaming explains how this can be kind of tricky for fighting games in particular:

Suddenly, a character with a claw on one hand, or a patch on one eye, becomes a more complex situation – without redrawing, the claw or the patch moves from one side of the body to the other. Then there’s the issue of open stance toward the player, turning left-handed characters into right-handed ones the moment they switch to the other side.

3D fighting games can, in theory, fix all of this more easily – instead of redrawing hundreds of sprites they can just introduce one change to a model… but they often choose not to. Enter the issues of 2.5D fighters vs. 3D fighters, 2D characters in 3D spaces, and lateralized control schemes.

It’s a small thing that quickly becomes a huge thing.

Here’s an object in Figma with one rounded corner. Notice how the UI always tries to match the rounded-corner value to where the corner physically is on the screen…

…which makes for a fun demo and feels smart, but: why don’t width and height do the same?

Turns (heh) out that this is a similar set of considerations to the ones in fighting games: thinking deeply both about what is an intrinsic vs. a derived property of an object, and about what is the least confounding thing to present to the user. Since objects usually have a noticeable orientation – text inside, or another visual property – width still feels like width and height like height even if they’re rotated. The same, however, isn’t necessarily true for four rounded corners. Or, perhaps, the remapping of four “physical” corners to four “logical” corners can be more error-prone.

Then, of course, there’s a question of what to do when the object doesn’t have a noticeable orientation. Like with many of the things on this blog, there are no “correct” answers. This too is a small thing that quickly becomes a huge thing.
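If you squint, the Figma side of this is an index-remapping problem. Here’s a hypothetical sketch – not Figma’s actual implementation, and assuming positive rotation means clockwise – of how four logical corner radii get reshuffled into screen order:

```typescript
// Corner radii are stored in the object's own (logical) clockwise order;
// the UI wants to show them in the order they appear on screen after rotation.
type Radii = [topLeft: number, topRight: number, bottomRight: number, bottomLeft: number];

function toScreenOrder(logical: Radii, rotationDegrees: number): Radii {
  const quarterTurns = ((Math.round(rotationDegrees / 90) % 4) + 4) % 4;
  // After one clockwise quarter turn, the logical top-left corner is shown
  // in the screen's top-right slot, and so on around the rectangle.
  return logical.map((_, screenSlot) =>
    logical[(screenSlot - quarterTurns + 4) % 4]) as Radii;
}

const radii: Radii = [16, 0, 0, 0];   // only the logical top-left is rounded
console.log(toScreenOrder(radii, 0)); // [16, 0, 0, 0]
console.log(toScreenOrder(radii, 90)); // [0, 16, 0, 0] – same corner, new slot;
                                       // pick the wrong sign convention and you get
                                       // exactly the off-by-one-corner bug this invites
```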

One big step forward, three small steps back

This is a typical iOS Gmail dialog that allows you to snooze an email so it resurfaces later:

If you invoke that function on an email that’s an order receipt, a new option appears:

It’s great to see this clever and thoughtful button which is likely the best option here. But:

  • It reshuffles everything else, preventing motor memory from building. At this point, you can no longer rely on “bottom left” to always be “custom date,” and so on with other buttons. (One idea would be to put it at the back but draw attention to it visually, or at least make it span the entire row.)
  • It doesn’t show you the inferred date, even though there already is a precedent for doing that – especially important here as the feature seems to be powered by AI, which can get things wrong.
  • The icon heavily promotes the AI association, which is not that useful. It would probably be better to show a truck or some other visual signifier of “delivery.”

“I don’t like it but at least I know. Thanks.”

The search for the strangest Adobe setting continues in Lightroom, where the first option in the Interface section is… end marks:

Presently, there’s only one option…

…but at least back in 2012 there were many more:

What does it do? It adds an old-timey glyph at the end of either the left or right panel.

The internet is rife with people perplexed by this option, and I cannot deny it – I’m one of them. (The title of this post is one user’s reaction.) It feels like such a peculiar way to add delight.

You are not limited to the pre-existing (one) flourish, as you can upload your own. Some people add a logo of their production studio, but John Beardsworth found a more creative use:

Alternatively, with a tiny bit of imagination you can exploit an often-forgotten detail of Lightroom’s interface – the “panel end marks”. These decorations at the bottom of Lightroom’s panels have often been derided as a waste of programming time, but in fact they can be made to serve more than their somewhat-trivial purpose. And as you can see in the examples on this page, they can serve as a reminder of star ratings, colour labels and even keyboard shortcuts for flags.

This is a fascinating hack, and an example of William Gibson’s famous “the street finds its own uses for things.” It made me curious: why didn’t onscreen interfaces ever evolve to let you annotate them easily? You see stuff like this a lot in real life…

…but the Lightroom end mark hack is the only thing that comes to mind where an onscreen UI got this kind of treatment – and the feature wasn’t even intended for that use.

“Michael here will handle the bullshitting.”

I linked to this opaquely on Thursday, but it deserves its own entry. Michael Bierut’s 2005 essay called “On (design) bullshit” is one of my favourite design essays:

It follows that every design presentation is inevitably, at least in part, an exercise in bullshit. The design process always combines the pursuit of functional goals with countless intuitive, even irrational decisions. The functional requirements — the house needs a bathroom, the headlines have to be legible, the toothbrush has to fit in your mouth — are concrete and often measurable. The intuitive decisions, on the other hand, are more or less beyond honest explanation. These might be: I just like to set my headlines in Bodoni, or I just like to make my products blobby, or I just like to cover my buildings in gridded white porcelain panels. In discussing design work with their clients, designers are direct about the functional parts of their solutions and obfuscate like mad about the intuitive parts, having learned early on that telling the simple truth — “I don’t know, I just like it that way” — simply won’t do.

So into this vacuum rushes the bullshit: theories about the symbolic qualities of colors or typefaces; unprovable claims about the historical inevitability of certain shapes, fanciful forced marriages of arbitrary design elements to hard-headed business goals. As [Harry G.] Frankfurt points out, it’s beside the point whether bullshit is true or false: “It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction.” There must only be the desire to conceal one’s private intentions in the service of a larger goal: getting your client to do it the way you like it.

“I don’t know, I just like it that way” is such a tricky part of craft.

Design is more

During my first year at Figma, I designed and printed a run of posters for the office titled “Design is more.” The idea was to highlight that UX design is more than people expect, and connects in interesting ways to other domains. Today, they feel like a spiritual predecessor to this blog.

The first series was three posters:

I still (mostly) like them. I do believe that software can learn more about conveyance from video games; a lot of first-run experiences and particularly new feature onboarding still feel like a series of random pop-ups floating around the screen without much understanding of me as a user.

I would rewrite these posters, however, and particularly the Fitts’s Law examples: they’re generic and probably not as relevant to today’s applications.

After series one, we also collaboratively started working on series two, but the pandemic put a halt to the effort, and these posters were never finished/​printed. But the two below were perhaps closest to ready, and they seem fun today; I particularly liked the joke on the Hick’s Law one.

Jon Yablonski, the author of “Laws of UX,” made some posters in a similar vein and they’re available for purchase. His are slightly more on the visual side, but I was delighted to discover today that we both chose a rather similar approach to visualizing the Zeigarnik Effect.

(200th blog post here!)

The curse of the cursor

I had no idea it was Alan Kay himself who was responsible for the mouse pointer’s distinctive shape. In 2020, James Hill-Khurana emailed him and got this answer:

The Parc mouse cursor appearance was done (actually by me) because in a 16x16 grid of one-bit pixels (what the Alto at Parc used for a cursor) this gives you a nice arrowhead if you have one side of the arrow vertical and the other angled (along with other things there, I designed and made many of the initial bitmap fonts).

Then it stuck, as so many things in computing do.

And boy, did it stick.

But let’s rewind slightly. The first mouse pointer, during Doug Engelbart’s 1968 Mother Of All Demos, was an arrow facing straight up, which was the obvious symmetrical choice:

(You can see two of them, because Engelbart didn’t just invent a mouse – he also thought of a few steps after that, including multiple people collaborating via mice.)

But Kay’s argument was that on a pixelated screen, it’s impossible to do this shape justice, as both slopes of the arrow will be jagged and imprecise. (A second, unvoiced argument is that the tip of the arrow needs to be a sharp, solitary pixel, which makes it hard to design a matching tail for the cursor: the tail’s width is limited to 1 or 3 or 5 pixels, and the number you want is probably 2.)

Kay’s solution was straightening the left edge rather than the tail, and that shape landed in the Xerox Alto in the 1970s:

Interestingly enough, the upward-facing cursor returned as one of the variants in the Xerox Star, the 1981 commercialized version of the Alto…

…but the Star failed, and Apple’s Lisa in 1983 and Mac in 1984 followed in the Alto’s footsteps instead. Then, 1985’s Windows 1.0 grabbed a similar shape – only with inverted colors – and the cursor has looked the same ever since.
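For the pixel-curious, here’s a toy sketch of the geometry that made this shape stick – an illustration of the argument, not the actual Alto or Mac bitmap. With the left edge vertical and the right edge at 45°, every scanline of the arrowhead grows by exactly one pixel, and the tail no longer has to sit centred under the tip, so it can be two pixels wide:

```typescript
// Print a crude one-bit arrowhead on a 16x16 grid: vertical left edge,
// 45° right edge, and a simple 2-pixel-wide tail.
const SIZE = 16;
const rows: string[] = [];
for (let y = 0; y < SIZE; y++) {
  let row = "";
  for (let x = 0; x < SIZE; x++) {
    const insideArrowhead = y < 11 && x <= y;       // grows by one pixel per row
    const insideTail = y >= 11 && x >= 3 && x <= 4; // tail doesn't need to be centred
    row += insideArrowhead || insideTail ? "█" : "·";
  }
  rows.push(row);
}
console.log(rows.join("\n"));
```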

That’s not to say there haven’t been innovations since (mouse trails, useful on the slow LCD displays of the 1990s; shake-to-locate, which Apple added in 2015), or the more recent battles over the hand pointer popularized by the web.

But the only substantial attempt at redesigning the mouse pointer that I am aware of came from Apple in 2020, with the introduction of trackpad and mouse support to the iPad. The mouse pointer a) was now a circle, b) morphed into other shapes, and c) occasionally morphed into the hovered objects themselves, too:

The 40-minute deep dive video is, today, a fascinating artifact. On one hand, it’s genuinely exciting to see someone take a stab at something that’s been around forever. Evolving some of the physics first tried in Apple TV’s interface feels smart, and the new inertia and magnetism mechanics are fun to think about.

But the high production value and Apple’s detached style rob the video of some authenticity. This is “Capital D Design,” and one always has to remain slightly suspicious of highly polished design videos and the inherent propensity for bullshit that comes with the territory. Strip away the budget and the arguments don’t fully coalesce (why would the same principles that made the text pointer snap vertically not extend to its horizontal movement?), and one has to wonder about things left unsaid (wouldn’t the pointer transitions be distracting and slow people down?).

Yet I am speaking with the immense benefit of hindsight. Actually using that edition of the mouse pointer on my iPad didn’t feel like the revolution the video suggested, and barely even like an evolution. (Seeing Apple TV’s tilting buttons for the first time was a lot more enthralling.) And Apple ended up undoing a bunch of the changes five years later anyway. The pointer went back to a familiar Alan Kay-esque shape…

…and lost its most advanced morphing abilities:

Watching the 2025 WWDC video mentioning the change (the relevant parts start at 8:40) is another interesting exercise:

2020:

We looked at just bringing the traditional arrow pointer over from the Mac, but that didn’t feel quite right on iPadOS. […] There’s an inconsistency between the precision of the pointer and the precision required by the app. So, while people generally think about the pointer in terms of giving you increased precision compared to touch, in this case, it’s helpful to actually reduce the precision of the pointer to match the user interface.

2025:

Everything on iPad was designed for touch. So the original pointer was circular in shape, to best approximate your finger in both size and accuracy. But under the hood, the pointer is actually capable of being much more precise than your finger. So in iPadOS 26, the pointer is getting a new shape, unlocking its true potential. The new pointer somehow feels more precise and responsive because it always tracks your input directly 1 to 1.

(That “somehow” in the second video is an interesting slip-up.)

I hope this doesn’t come across as making fun of the presenters, or even of the to-me-overdesigned 2020 approach. We try things, sometimes they don’t work, and we go back to what worked before.

I just wish Apple opened itself up a bit more; there are limits to the “we’ve always been at war with Eastasia” PR approach they practice in these moments, and I would genuinely be curious what happened here: Did people hate the circular pointer? Was it hard for app developers to adopt? Was it just a random casualty of Liquid Glass’s visual style, or perhaps the person who was the biggest proponent of it simply left Apple? We could all learn from this.

But the most interesting part to me is the resilience of the slanted mouse pointer shape. In a post-retina world, one could imagine a sharp edge at any angle, and yet we’re stuck with Kay’s original sketch – refined to be sure, but still sporting its slightly uncomfortable asymmetry.

The always-excellent Posy covered this in the first 7 minutes of his YouTube video:

But one comment in particular under that video caught my attention:

Honestly, I’ve never thought of the mouse cursor as an arrow, but rather its own shape. My mind was blown when I realized that it was just an arrow the whole time.

…because maybe this is actually the answer. Maybe the mouse pointer went on the same journey the floppy disk icon did, and transcended its origins. It’s not an arrow shape anymore. It’s the mouse pointer shape, and it forever will be.

User interface sugar crash

I think of some aspects of interface design as sugar.

This is how you adjust a photo in the Photos app in the previous version of iOS:

And this is the same view in the current version:

The difference is in the delayed/​animated falling of the notches.

I don’t think it’s great. It’s “delightful” in a rudimentary and naïve sense, but like sugar, you cannot just add it to your daily diet without consequences. This extra animation serves no functional purpose, and the sugar high wears off quickly. What remains is constant distraction and overstimulation, the feeling of inherent slowness, and maybe even a bit of confusion.

It pairs nicely with the previous post about avoiding complexity and rewarding simplicity. I often see this kind of stuff as related to a designer’s experience. Earlier on in your career, you are proud you’ve thought about this extra detail, you’ve figured out how to make this animation work and how to fine-tune the curves, and you’ve learned how to implement it or convince an engineer to get excited about it.

Later in your career, you are proud you resisted it.

“And to make matters worse, complexity sells better.”

A smart post by Matheus Lima at his Terrible Software blog:

What you just learned is that complexity impresses people. The simple answer wasn’t wrong. It just wasn’t interesting enough. And you might carry that lesson with you into your career. […]

It also shows up in design reviews. An engineer proposes a clean, simple approach and gets hit with “shouldn’t we future-proof this?” So they go back and add layers they don’t need yet, abstractions for problems that might never materialize, flexibility for requirements nobody has asked for. Not because the problem demanded it, but because the room expected it.

I nodded along to a lot of it. There are some parallels to design, too. Perhaps in design, “future-proofed” gets replaced by “bespoke” – everyone wants a custom interface with a novel thing that doesn’t exist anywhere else in the app. That feels better. Tailor-made. Special. It’s hard to resist that and go back to making your UI out of reusable parts – consistent, and boring in all the best possible ways.

This advice about how to talk about simplicity feels eminently universal:

If you’re an engineer, learn that simplicity needs to be made visible. The work doesn’t speak for itself; not because it’s not good, but because most systems aren’t designed to hear it. […] The decision not to build something is a decision, an important one! Document it accordingly. […]

If you’re an engineering leader, this one’s on you more than anyone else. You set the incentives, whether you realize it or not. And the problem is that most promotion criteria are basically designed to reward complexity, even when they don’t intend to. “Impact” gets measured by the size and scope of what someone built, which more often than not matters! But what they avoided should also matter.

One more thing: pay attention to what you celebrate publicly. If every shout-out in your team channel is for the big, complex project, that’s what people will optimize for. Start recognizing the engineer who deleted code. The one who said “we don’t need this yet” and was right.

Mar 18, 2026

“I like to use Soviet control panels as a starting point.”

One of my favourite genres is “I’m going to teach you something secretly while you’re having fun.”

This 2020 post by George Cave is ostensibly about Lego interface panels, but quietly sneaks in some stuff about shape coding and other kinds of coding:

The Lego interface panels seem to have a certain hold on people. Artist Love Hultén recreated some of them at a more human-compatible scale and even made them interactive:

It was fun to see one of the best-crafted early arcade games, Tempest, in this kind of view, with the stud reimagined as a paddle controller:

Just earlier this month, designer Paul Stall announced his project M2x2 (the page itself is beautiful and interesting to visit – I particularly loved the horizontal galleries):

The M2x2 is a functional homage to the classic Lego computer brick, upscaled and re-imagined as a high-performance workstation. […]

If our tools could look as playful as the things we built as kids, would we approach our work with more joy? The M2x2 is just the beginning of a workspace that feels less like an office and more like a laboratory for breakthroughs.

But both of these are enlarged Lego bricks. Three years ago, James Brown a.k.a. Ancient made an effort to embed an LCD screen in a regular-size Lego brick. It’s a fun 12-minute video of the construction process:

If you are into that kind of stuff, Brown followed it up 2 months later by putting a playable Doom inside a Lego brick:

But the most amazing outcome, to me, was this video, called “Busy little screens”:

A lot of the diversity of the original bricks is gone, but it’s hard to expect Brown to recreate and animate them all. It’s a mesmerizing thing to watch nonetheless; one can almost taste a future where technology will allow Lego bricks to be animated but otherwise look exactly as they originally did.