What you just learned is that complexity impresses people. The simple answer wasn’t wrong. It just wasn’t interesting enough. And you might carry that lesson with you into your career. […]
It also shows up in design reviews. An engineer proposes a clean, simple approach and gets hit with “shouldn’t we future-proof this?” So they go back and add layers they don’t need yet, abstractions for problems that might never materialize, flexibility for requirements nobody has asked for. Not because the problem demanded it, but because the room expected it.
I nodded to a lot of it. There are parallels to design, too. Perhaps in design, “future-proofed” gets replaced by “bespoke” – everyone wants a custom interface with a novel thing that doesn’t exist anywhere else in the app. That feels better. Tailor-made. Special. It’s hard to resist that and go back to making your UI out of reusable parts – consistent, and boring in all the best possible ways.
This advice about how to talk about simplicity feels eminently universal:
If you’re an engineer, learn that simplicity needs to be made visible. The work doesn’t speak for itself; not because it’s not good, but because most systems aren’t designed to hear it. […] The decision not to build something is a decision, an important one! Document it accordingly. […]
If you’re an engineering leader, this one’s on you more than anyone else. You set the incentives, whether you realize it or not. And the problem is that most promotion criteria are basically designed to reward complexity, even when they don’t intend to. “Impact” gets measured by the size and scope of what someone built, which more often than not matters! But what they avoided should also matter.
One more thing: pay attention to what you celebrate publicly. If every shout-out in your team channel is for the big, complex project, that’s what people will optimize for. Start recognizing the engineer who deleted code. The one who said “we don’t need this yet” and was right.
In short, the way I think about software quality is the number of meaningful problems. […]
There are problems in Finder — resizing columns, renaming or deleting files synced with a FileProvider-based app, and different views not reflecting immediate reality. There are problems with resizing windows. AirPlay randomly drops its connection. AirDrop and other “continuity” services do not always work or, in an interesting twist I experienced a couple days ago, work fine but display an error anyway. The AirPlay and AirDrop menus shuffle options just in time for you to tap the wrong one. […]
These are the products and features I actually use. There are plenty of others I do not. I assume syncing my music collection over iCloud remains untrustworthy. Shortcuts seems largely forgotten. Meanwhile, any app augmented by one of Apple’s paid services — Fitness, News, TV — has turned into an upselling experience.
As I’m reading this and thinking about my own Apple usage patterns and a similar litany of problems, I keep returning to Apple TV, which feels like by far the most stable and least troubled platform. I wish I had a better explanation for it: Is Apple magically really good at TV interfaces? Are they benefitting from it being a “hobby project”? But I think the Occam’s razor here is this: tvOS is just a lot simpler.
And just like that, a thought appears: Is what we’re seeing really just Apple losing the battle with complexity?
Apple won once, in the late 1990s, when on the hardware side all the Performas and Newtons and LaserWriters were cut ruthlessly, and on the software front Mac OS X displaced Classic as the operating system. The situation was different then, however, because there was no other choice. Today, Apple seems successful on paper, so the pressure needs to come from inside – from someone high up enough to recognize that what Apple is doing vis-à-vis software quality is not sustainable, and hasn’t been for some time now. That the bill has already come due on all of the decisions where systems thinking and deep testing and focus and preventative maintenance and paying off design debt have been deprioritized in favour of another shiny launch event that stretches the teams and platforms even thinner.
When thinking about complexity, a different go-to framework I have is: “can I explain a situation in a short paragraph?” This can help separate regular bugs (where the explanation is typically: I am doing the thing that used to work, and it’s no longer working, so something broke) from bigger problems that require a serious, long-term systems-thinking approach. Off the top of my head, there are many things I can no longer explain:
I cannot explain Apple’s widget strategy
I cannot explain what is going on with the Fn/Globe key
I cannot explain the long-term thinking surrounding icons in Tahoe menus
Of course, it’s not me who should be explaining those things. And I haven’t done this exercise before, so I don’t know for sure whether things are getting worse here. It feels like it, though. I wonder if Apple just hit some sort of limit on being able to deal with complex things, and the first course of action should be: don’t throw even more complex things onto your plate.
It’s probably impossible to tell the upper echelon of Apple that it’s breaking revenue records in spite of its software design, not because of it. I hope the next regime knows better.
Over the past few years, I’ve found myself relying on TextEdit more as every other app has grown more complicated, adding cloud uploads, collaborative editing, and now generative A.I. TextEdit is not connected to the internet, like Google Docs. It is not part of a larger suite of workplace software, like Microsoft Word. You can write in TextEdit, and you can format your writing with a bare minimum of fonts and styling. […]
I trust in TextEdit. It doesn’t redesign its interface without warning, the way Spotify does; it doesn’t hawk new features, and it doesn’t demand I update the app every other week, as Google Chrome does.
But I get the feeling that Chayka would be better served switching from TextEdit to Apple Notes for most of these things he’s creating. Saving a whole pile of notes to yourself as text files on your desktop, with no organization into sub-folders, isn’t wrong. The whole point of “just put it on the desktop” is to absolve yourself of thinking about where to file something properly. That’s friction, and if you face a bit of friction every time you want to jot something down, it increases the likelihood that you won’t jot it down because you didn’t want to deal with the friction.
Part of me agrees with this vehemently – for casual text wrangling, Notes is by far the best iteration of what both the old Stickies app and TextEdit attempted.
But Notes is still evolving. The UI keeps changing. I’ve had a note shared by a friend hanging alongside my own notes for years, without me ever asking for it. I remember the moment when tags were introduced, and suddenly copy/paste from Slack started populating things in the sidebar. Then there was this scary asterisked dialog that slid so well into planned-obsolescence worries that it felt like a self-own:
And the attendant warning, ostensibly well-intentioned, adorned my notes for months, just because I had an older Mac mini I barely touch doing menial things in a dusty closet:
On top of that, the latest version of Apple Notes on my Mac occasionally breaks copy/paste (!), which has led to some lost writing on my part. (If you cut from one note intending to paste into another, and then realize nothing was saved to the clipboard, the text is gone forever.)
These are not showstoppers. But they too are friction that has to be weighed against what Gruber lists in his essay. They’re also friction of the unexpected, new, stochastic flavour. TextEdit’s challenges, on the other hand, are known knowns. In this context, TextEdit is in that rare – and maybe increasingly treasured – place where it no longer gets updates, but it doesn’t feel abandoned, or falling apart, or at risk of outright cancellation. (I think on the inside of tech companies this is called being “maintenanced” – not actually staffed to be improved, but still eligible for fixes to breaking bugs and for security updates.)
We need to normalize declaring software as finished. Not everything needs continuous updates to function. In fact, a minority of software needs this. Most software works as it is written. The code does not run out of date. I want more projects that are actually just finished, without the need to be continuously mutated and complexified ad infinitum.
Personally I would be very happy to live in a postcapitalist world where it was 100% FINE that desktop operating systems had “stopped evolving” because they were good enough to meet basically everyone’s needs, and there was no stock price to crash from an old monopoly having clawed its way to the top with nowhere else to go. “Let [certain] software be finished” has always felt to me like oblique pining for humanity to outgrow our current political-economic system.
The fact that in the 10+ years I’ve been using it, there’s only been a single major overhaul update is a feature, not a bug to me.
I have seen this sentiment grow in recent years, as AI is seemingly shoved into every crevice of everything whether or not it even had crevices to begin with. Liquid Glass on the Mac side and incessant ads plus bugs on the Windows side add to the malaise.
But I’ve also been in technology so long that, even outside the tensions of capitalism, it’s hard for me to imagine software not changing. Code does run out of date, even if you try very hard. So I don’t know yet how to square all this.
Bear is not finished/“maintenanced,” but it doesn’t seem to be changing the way some other software is changing, either. I’m excited reading its blog – even when there are features or updates that don’t pertain to me, they don’t bother me, and I’m excited for the others who benefit. Its innovation feels considered, not reckless.
In a week when I’m praising products I didn’t expect to praise, I feel similarly about Lightroom Classic. When Adobe forked Lightroom Classic off from the newly refreshed Lightroom in 2017, a lot of us got worried about the “Classic” tag having “dead man walking” connotations. But nine years later, Lightroom Classic is still being lightly updated with fixes, camera presets, and – occasionally – feature changes that largely feel welcome. Lightroom Classic appears, to once again use industry jargon, “stable.”
Maybe the answers are somewhere in this post: celebrate and fund “maintenanced” apps, fork apps into “stable” and “modern” paths, or encourage and practice slow, considered growth. I bet there are other approaches and altogether new ideas to try, too. (There used to be a tradition, when software was physical, to list all the new stuff at the back of the box. What if we started writing out the things we didn’t add?) But I like at least talking about it to begin with. There are apps in my life I want to feel like TextEdit, there are apps that I want to feel like Notes, and there are ones I’m happy to put on the cutting edge/beta/canary path, where bugs are a promise, and motor memory a distant dream.
I yearn for a software ecosystem that allows all of these types of apps to blossom.
This menu in Chrome feels like a surface running away from its creators:
I think cerebrally I understand the subtle difference between Show and Always Show, but is that difference worth it? Because at some point the repetitiveness and heaviness of that top section is casting a huge shadow over the rest of the menu.
I have an internal rule for when a new menu item happens to result in the longest string yet: think about the volume – the literal number of pixels – you’re adding to the whole surface. Big menus are scarier, wide menus separate items from their shortcuts, submenus become harder to jump into, and so on. The economy of words pays off in more ways than just the obvious ones.
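Here’s a quick back-of-the-envelope sketch of why the longest string is the expensive one. This is purely illustrative – the MenuItem shape and the pixel constants are my assumptions, not anything a real menu system exposes:

```ts
// Hypothetical sketch of the "menu volume" rule. The constants and the
// MenuItem shape are made up for illustration.

interface MenuItem {
  label: string;
  shortcut?: string;
}

const ROW_HEIGHT_PX = 22; // assumed height of one menu row
const CHAR_WIDTH_PX = 7;  // assumed average glyph width
const PADDING_PX = 48;    // assumed margins, icons, shortcut column

// A menu's width is set by its longest label; its height by its row count.
function menuAreaPx(items: MenuItem[]): number {
  if (items.length === 0) return 0;
  const longestLabel = Math.max(...items.map((item) => item.label.length));
  const width = longestLabel * CHAR_WIDTH_PX + PADDING_PX;
  const height = items.length * ROW_HEIGHT_PX;
  return width * height;
}

// The cost of a new item is not just one extra row: if its label is the
// longest yet, every existing row gets wider along with it.
function addedAreaPx(items: MenuItem[], newItem: MenuItem): number {
  return menuAreaPx([...items, newItem]) - menuAreaPx(items);
}

const menu: MenuItem[] = [{ label: "Copy" }, { label: "Paste" }];
console.log(addedAreaPx(menu, { label: "Cut" }));
console.log(addedAreaPx(menu, { label: "Always Show Full URLs" }));
```

The second call reports a much larger number than the first: the record-setting label widens every existing row, not just its own.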
But what made me a little nervous were the two grayed-out options. What does it mean for something starting with Always Show to be grayed out here? What does it mean for something to be grayed out and yet enabled? My guess is that someone wired these up without thinking too much about all the states, but the result is a stressful tension. Software should make it very clear what is under my control, and what is not.
Lastly, and this is almost funny: Full Screen is either Globe-F or ⌃⌘F, in all standard Mac apps. This alone is already confusing, as is Apple’s entire horrible Globe/Fn strategy (a story for another time), and I verified that both work independently in Chrome. How they got conflated into one shortcut from hell is probably a really interesting bug somewhere – but it’s also a sign that, seemingly, no one is paying attention.
I recently learned of the OG App from 2022, which offered an ad-free, simpler experience to users frustrated with Instagram changes.
The app didn’t last – it couldn’t last – but it was a fascinating statement.
In a different corner of the internet, Michael Leggett, one of the former Gmail designers, created Simplify – an alternative “shell” to Gmail:
Hundreds of improvements (small and large) to streamline, simplify, and enhance Gmail’s design and functionality. Hide the features you don’t use; customize the ones you do, including setting the list and message width and fonts.
Bad design can occur for a number of reasons, including but not limited to:
Our needs as users are not well understood, prioritized, or aligned with the company’s goals.
Entropy: The natural decline of products over time as the vision decays or blurs and new features are conceived without consideration of the whole and added faster than the system’s overall design and architecture can evolve to support them.
Good design is hard. Good design is more than making a product pretty. It is about having the right capabilities in an intuitive, respectful, and well-crafted offering. I hope to expand on this topic in future posts.
I know ad blockers and “reader modes” exist, but these alternative shells go much further and change the original app’s design. I wonder what other examples of that are out there.
One of the most potent themes in Stanisław Lem’s writing was the fallacy of first contact.
Lem argued that we are just not ready for an actual meeting with something truly alien. That the most open-minded of us are close-minded on a cosmic scale. That sci-fi made us think aliens will look like humans with prosthetics when good, and insect-like creatures when evil – but sci-fi needs to be self-constrained for all the same reasons; showing us something actually inhuman would immediately render it utterly incomprehensible.
He wrote about it in Eden, and Solaris, and The Invincible, and Fiasco. The last of these is a book I was once so angry at that I threw it at the wall.
It also happens to be my favourite book, ever.
Anyway. This is a diagram for Andúril 2, the UI of a single-button flashlight (larger version):
I saw it for the first time earlier this week. I was speechless. Maybe a little bit in awe. I know I’m supposed to hate this, but this feels so profoundly… alien, that I don’t know if anything I know applies here. I don’t want to judge it by the wrong set of rules. I want to understand the dividing lines between the UI and its explanation. I want to study it more.
Oh, and because I was curious too – this is the flashlight:
I was surprised at this little thing that appeared in my Chrome Canary this morning.
It is rare to see an interface clean up after itself this way. This flew by quickly and wasn’t communicated very well, but I believe this changed my new tab page from this…
…to this:
Now, I said “surprised” and not “delighted” not just because the implementation felt a bit rough. I am also suspicious of the motivations, as Google’s sister iOS app played very fast and loose with this surface, literally moving the search bar out from under my thumb in order to create room for features I would never use and could never remove. I suspect this is preparation for something else that will take its place.
But until that day comes, this was an interesting gesture, and it’s really welcome to see a new tab page harking back to the simplicity of the Google of days past.
Edward Tufte has this visual rule that 1+1=3: with a single line on the screen, you have just that single object, but adding a second line does something interesting – it adds a third “object” to the screen, the negative space between the two. All good visual designers deeply understand this effect.
In UX design we have a cognitive equivalent: if you have two buttons, there is a third “object” created – the decision a user must make about which button to tap.
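One back-of-the-napkin way to generalize the visual half of this – my extrapolation, not Tufte’s – is that n parallel elements produce n − 1 strips of negative space between them, so the count of perceived objects is n + (n − 1) = 2n − 1. At n = 2 that recovers 1+1=3; every element past the first arrives with an uninvited companion.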