This 25-minute segment on MKBHD’s Waveform podcast (video or audio, segment starts at 40:21) is from November 2024, and is a nice counterpart to the post about favourite well-made apps and sites from a few months back.
The original theme is “what is an app that you use all the time, and like to use, but is actually a bad app?” but it quickly moves to a more general conversation about good and bad mobile apps.
It’s always interesting to me to see what themes emerge and what other people think is important. Here’s the list, with links to the relevant apps wherever I could find them:
Bad apps:
Google Messages – dinged for unreliable spam detection and a lack of organization/filtering
Notion (on mobile) – hard to orient yourself and some direct manipulation is wonky
many smart home accessory apps – bad and redundant with Google Home, but have to keep for emergencies
Netgear Orbi (network router) – specific functionality and bad password recovery
Hatch (white noise machine for babies) – simple things are hard to discover
Multibowl is one of my favourite emulation projects because it’s a rare example of using emulators creatively, rather than for nostalgia or research.
It’s a 2016 game by Bennett Foddy and AP Thompson that reimagines older existing games as smaller pieces of a new, Super Mario Party-like experience. Two players randomly join one of 300 games – sometimes in medias res – with a small explicit goal that can be accomplished in about 30 seconds, after which a point is awarded, another game is loaded, and so on.
All of this is done through actual emulation and fast switching of the games’ original code:
Regarding the game choices, at the outset, I wanted to curate a list of moments of gameplay that would be meaningful if played for just a short period of time. Sometimes it’s obvious – you can take a moment from a fighting game where both players are low on health, or play a sports game from the start until the first point is scored. So that’s where I started. Over time, I figured out that you could make exciting moments in games that are not otherwise interesting for a competitive duel. For example, in Dodonpachi (a bullet hell game) we take away the player’s guns and challenge them to stay alive in a huge hail of bullets.
For games that were designed as cooperative experiences, I eventually gravitated toward the structure ‘score more points but do not die’, which forces the players to calibrate how much risk they take relative to the other player.
This excerpt is from a 2017 interview of Foddy by Seb Chan from ACMI. There are many interesting moments in that interview, such as the issue of curation:
Multibowl is not a very precise historical curation like you might make for a museum exhibition, where you can only show a couple of dozen things at most. It’s a huge driftnet of games. There is no quality or historical significance standard, and no attempt to balance out the games in terms of nationality or gender. The only curatorial instinct that it follows is to find the most diverse set of game ideas. With each piece distilled down to a randomly-selected 30-second slice, there’s room for an infinite number of them.
In fact, contrary to a museum curation, the point of Multibowl is to have too many games for a single player to see. It’s best when it feels too big to grasp. I think, now that there are 300 games in there, it’s starting to feel that way.
Unfortunately, it is not possible to actually play Multibowl outside of special events, given copyright issues. In addition to general emulation copyright murkiness, Foddy adds, “I don’t think the actual bits of actual games have ever been used as the fabric of a larger game before.”
However, a really fun introduction to Multibowl is another art project, from the now-defunct comedy duo Auralnauts, who actually played Multibowl pretending to be Kylo Ren and Bane, with hilarious results:
A thoughtful 26-minute talk by Imani Joy, the solitary full-time designer on Mastodon, reflecting on her nine months there:
It’s an interesting peek behind the curtain at designing for this particular space, and the many unenviable constraints: lack of data, care for privacy, tension between Mastodon’s power-user early adopters (“they are values-driven, they want control, they’ll tolerate a lot of the clunkiness of the Fediverse”) and “mainstream audience [that] expects polish.”
At some point, design needs to be authoritative, but how do you combine that with wanting the process to be as inclusive as possible? The product itself is a federation of various servers that can exert their own control – so how do you bring it all together under one neat umbrella for the user? (Also a challenge for Android in comparison with iOS.) Mainstream design has certain fashion-y tendencies. How do you make sure you don’t lose yourself while chasing them, but also don’t stay ossified out of fear of making changes? (Wikipedia, Internet Archive, and other similar places look and behave a certain way, after all, and it’s not usually because of a lack of talent to “modernize” them.)
The most interesting thing to me was this:
It’s easy to talk in terms of who to optimize for. Things get harder when you start to articulate who you won’t optimize for, what trade-offs you must make in pursuit of your goal, and who you’re going to risk letting down along the way. What the team needed from me more than anything was not the probabilities, not the usability findings, not the story of who we’re making happy. They needed to hear who we’ll choose to disappoint and why. And I told them that building the best experience on Mastodon means that we’ll solve for the extremes, but we won’t center them. And sure, we do risk frustrating some power users who want absolute control over their profiles, but that risk is necessary to optimize the experience also for browsing users.
When we were working at Figma in 2019, shipping an update to text line height algorithms (moving them from the way print does things to the way the web does things), I started an internal document called “The new line height and its discontents,” where the team and I deliberately wrote out who would be most annoyed about the changes, and why. We listed our arguments, workarounds, even “deal sweeteners” (“but look at this other thing that will get better as a result!”), but we also tried very hard to be candid with ourselves. Some people were not going to be happy no matter what we did or said. Did we know precisely who those people were, and were we okay with that? I’d recommend that approach for any change-management project, rather than keeping your fingers crossed or resorting to toxic positivity.
So far, Joy has worked on quote posts and new profiles, and I appreciated her ending the talk on a note of recognition for these kinds of projects in these kinds of settings:
I know that we’re building something that will continue to be imperfect, but it doesn’t have to be perfect to make a positive difference in the world.
What I liked about it is that the author goes beyond cheap shots and deeper into both storytelling aspects (drawing from his experience)…
Now, as you can tell, the big problem with the design and execution of this video is that the producers failed to recognize the importance of point of view in telling this story. Now, perspective is already very important in any film, but it’s doubly important in a film for which one’s point of view in reality is also the subject. But this failure is present even in some of the more mundane parts of the film, like the interviews that Mark does with various Meta staff members. Now, as it’s plain to see, these are not real interviews. They’re fully scripted and staged – again, a classic mistake in corporate film. You can even tell that they’re not looking at each other. They’re clearly reading from a teleprompter. Yikes.
Of course, the entire premise of an interview is that two people are speaking candidly. So watching an obviously fake interview can be deeply unsettling as the speakers try to act out natural conversation and inevitably fail. This is why so many people in this video, including Mark, seem to not know what to do with their hands while speaking. It’s because they’ve been told to act naturally in a social situation that does not normally exist.
…and the meaning of these kinds of propaganda-esque announcements:
They are joined by some friends who are calling from Soho to tell them about some cool augmented reality street art that they’ve just discovered. […] And with a wave of his hand, Mark teleports the artwork into his spaceship so that he can appreciate it for himself, thus extracting this street art from any sense of place and context, which is the point of street art. I know this might sound like a nitpick, but I think it’s just worth lingering on the fact that, you know, in this high concept tech demo about how this technology will empower people to appreciate art in new ways, nobody paused to ask what the social and cultural function of street art actually is.
The entire introduction video comes across as thoughtless and careless – “It’s not a product launch or even a demo. It’s just a cartoon about the world Mark Zuckerberg is telling you that you will one day live in.” – and some of the observations here will be relevant to other things, even in other mediums: UI redesign minisites, font announcement articles, rebrand unveils, and so on.
I would love similar analyses of Apple’s stuff – not just the most obvious parallel which would be the 1987 Knowledge Navigator vision video, but some of the more recent scripted virtual keynotes, too.
This 9-minute video from PetaPixel probably won’t make much sense to non-photographers, but there is something refreshing about the idea that there are still places where adding software is seen as a positive:
The video talks about Tamron lenses, which have their own software (independent of the camera), and even their own USB-C port.
In a camera-lens equivalent of fly-by-wire, the software lets you fine-tune the behaviour of the hardware: what soft buttons should do, whether the focus ring should respond linearly or not, or even which direction it should rotate. However, there are also more complex behaviours – like time lapses with focus pulls – with an interesting interface that’s definitely not beautiful, but I think still worth checking out for how it uses skeuomorphism.
It is common knowledge that Luigi is just a palette-swapped Mario, and that the characters facing left are the same characters as those facing right, only rendered mirrored.
Suddenly, a character with a claw on one hand, or a patch over one eye, becomes a more complicated proposition – without redrawing, the claw or the patch moves from one side of the body to the other. Then there’s the issue of the open stance toward the player, which turns left-handed characters into right-handed ones the moment they switch sides.
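The side-swap is easy to see in a toy sketch (the sprite data below is made up for illustration): mirroring a sprite just reverses each row of pixels, so any asymmetric detail travels to the other side of the body along with everything else:

```javascript
// Mirroring a sprite = reversing each row of pixels.
function mirrorSprite(rows) {
  return rows.map((row) => [...row].reverse().join(""));
}

// A made-up face with an eyepatch ("P") over the left eye
// and a visible eye ("o") on the right:
const facingRight = [
  ".##.",
  "Po..",
  ".##.",
];

// Flip the character to face the other way...
const facingLeft = mirrorSprite(facingRight);
console.log(facingLeft[1]); // "..oP" – the patch has switched eyes
```

Symmetric rows survive the flip unchanged, which is exactly why the trick is invisible for most characters – and glaring for the asymmetric ones.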
3D fighting games can, in theory, fix all of this with more ease, as instead of redrawing hundreds of sprites they can just introduce one change to a model… but they often choose not to. Enter the issues of 2.5D fighters vs. 3D fighters, 2D characters in 3D spaces, and lateralized control schemes.
It’s a small thing that quickly becomes a huge thing.
Here’s an object in Figma with one rounded corner. Notice how the UI always tries to match the rounded corner value based on where it is physically on the screen…
…which makes for a fun demo and feels smart, but: why don’t width and height do the same?
Turns (heh) out that this involves a similar set of considerations as fighting games: thinking deeply about what is an intrinsic vs. a derived property of an object, and about what is the least confounding thing to present to the user. Since objects usually have a noticeable orientation – text inside, or another visual property – width still feels like width and height like height even when they’re rotated. The same, however, isn’t necessarily true for the four rounded corners. Or, perhaps, the remapping of four “physical” corners to four “logical” corners is simply more error-prone.
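For rotations in multiples of 90°, that remapping can be sketched like this (the names and structure here are mine, not Figma’s internals):

```javascript
// Radii are stored per *logical* corner; after rotating the object,
// the UI can either present them as stored, or remap them to the
// *physical* (on-screen) corners.
const CORNERS = ["topLeft", "topRight", "bottomRight", "bottomLeft"]; // clockwise

function physicalRadii(radii, quarterTurnsClockwise) {
  const out = {};
  CORNERS.forEach((corner, i) => {
    // The corner now on screen at position i started out one step
    // back in the clockwise ring for every quarter turn.
    const from = CORNERS[(i + 4 - (quarterTurnsClockwise % 4)) % 4];
    out[corner] = radii[from];
  });
  return out;
}

// One rounded corner, stored at the logical top left...
const stored = { topLeft: 16, topRight: 0, bottomRight: 0, bottomLeft: 0 };

// ...shows up at the physical top right after a 90° clockwise turn.
console.log(physicalRadii(stored, 1).topRight); // 16
```

Width and height skip this dance entirely – they stay attached to the logical object – which matches the observation that they keep feeling like width and height even under rotation.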
Then, of course, there’s a question of what to do when the object doesn’t have a noticeable orientation. Like with many of the things on this blog, there are no “correct” answers. This too is a small thing that quickly becomes a huge thing.
The Parc mouse cursor appearance was done (actually by me) because in a 16x16 grid of one-bit pixels (what the Alto at Parc used for a cursor) this gives you a nice arrowhead if you have one side of the arrow vertical and the other angled (along with other things there, I designed and made many of the initial bitmap fonts).
Then it stuck, as so many things in computing do.
And boy, did it stick.
But let’s rewind slightly. The first mouse pointer, during Doug Engelbart’s 1968 Mother Of All Demos, was an arrow facing straight up – the obvious symmetrical choice:
(You can see two of them, because Engelbart didn’t just invent a mouse – he also thought of a few steps after that, including multiple people collaborating via mice.)
But Kay’s argument was that on a pixelated screen, it’s impossible to do this shape justice, as both slopes of the arrow will be jagged and imprecise. (A second unvoiced argument is that the tip of the arrow needs to be a sharp solitary pixel, but that makes it hard to design a matching tail of the cursor since it limits your options to 1 or 3 or 5 pixels, and the number you want is probably 2.)
Kay’s solution was straightening the left edge rather than the tail, and that shape landed in Xerox Alto in the 1970s:
Interestingly enough, the top-facing cursor returned as one of the variants in the Xerox Star, the 1981 commercialized version of the Alto…
…but Star failed, and Apple’s Lisa in 1983 and Mac in 1984 followed in Alto’s footsteps instead. Then, 1985’s Windows 1.0 grabbed a similar shape – only with inverted colors – and the cursor has looked the same ever since.
That’s not to say there weren’t innovations since (mouse trails useful on slow LCD displays of the 1990s, shake to locate that Apple added in 2015), or the more recent battles with the hand mouse pointer popularized by the web.
But the only substantial attempt at redesigning the mouse pointer that I am aware of came from Apple in 2020, during the introduction of trackpad and mouse support to the iPad. The mouse pointer a) was now a circle, b) morphed into other shapes, and c) occasionally morphed into the hovered objects themselves, too:
The 40-minute deep dive video is, today, a fascinating artifact. On one hand, it’s genuinely exciting to see someone take a stab at something that’s been around forever. Evolving some of the physics first tried in Apple TV’s interface feels smart, and the new inertia and magnetism mechanics are fun to think about.
But the high production value and Apple’s detached style robs the video of some authenticity. This is “Capital D Design” and one always has to remain slightly suspicious of highly polished design videos and the inherent propensity for bullshit that comes with the territory. Strip away the budget and the arguments don’t fully coalesce (why would the same principles that made text pointer snap vertically not extend to its horizontal movement?), and one has to wonder about things left unsaid (wouldn’t the pointer transitions be distracting and slow people down?).
Yet, I am speaking with the immense benefit of hindsight. Actually using that edition of the mouse pointer on my iPad didn’t feel like the revolution suggested, and barely even like an evolution. (Seeing Apple TV’s tilting buttons for the first time was a lot more enthralling.) And, Apple ended up undoing a bunch of the changes five years later anyway. The pointer went back to a familiar Alan Kay-esque shape…
We looked at just bringing the traditional arrow pointer over from the Mac, but that didn’t feel quite right on iPadOS. […] There’s an inconsistency between the precision of the pointer and the precision required by the app. So, while people generally think about the pointer in terms of giving you increased precision compared to touch, in this case, it’s helpful to actually reduce the precision of the pointer to match the user interface.
2025:
Everything on iPad was designed for touch. So the original pointer was circular in shape, to best approximate your finger in both size and accuracy. But under the hood, the pointer is actually capable of being much more precise than your finger. So in iPadOS 26, the pointer is getting a new shape, unlocking its true potential. The new pointer somehow feels more precise and responsive because it always tracks your input directly 1 to 1.
(That “somehow” in the second video is an interesting slip-up.)
I hope this doesn’t come across as making fun of the presenters, or even of the to-me-overdesigned 2020 approach. We try things, sometimes they don’t work, and we go back to what worked before.
I just wish Apple opened itself up a bit more; there are limits to the “we’ve always been at war with Eastasia” PR approach they practice in these moments, and I would genuinely be curious what happened here: Did people hate the circular pointer? Was it hard to adopt by app developers? Was it just a random casualty of Liquid Glass’s visual style, or perhaps the person who was the biggest proponent of it simply left Apple? We could all learn from this.
But the most interesting part to me is the resilience of the slanted mouse pointer shape. In a post-retina world, one could imagine a sharp edge at any angle, and yet we’re stuck with Kay’s original sketch – refined to be sure, but still sporting its slightly uncomfortable asymmetry.
But one comment under that video specifically caught my attention:
Honestly, I’ve never thought of the mouse cursor as an arrow, but rather its own shape. My mind was blown when I realized that it was just an arrow the whole time.
…because maybe this is actually the answer. Maybe the mouse pointer went on the same journey the floppy disk icon did, and transcended its origins. It’s not an arrow shape anymore. It’s the mouse pointer shape, and it forever will be.
It was fun to see one of the most well-crafted of early arcade games, Tempest, in this kind of view, with the stud reimagined as a paddle controller:
The M2x2 is a functional homage to the classic Lego computer brick, upscaled and re-imagined as a high-performance workstation. […]
If our tools could look as playful as the things we built as kids, would we approach our work with more joy? The M2x2 is just the beginning of a workspace that feels less like an office and more like a laboratory for breakthroughs.
But both of these are enlarged Lego bricks. Three years ago, James Brown a.k.a. Ancient made an effort to embed an LCD screen in a regular-size Lego brick. It’s a fun 12-minute video of the construction process:
But the outcome most amazing to me was this video, called “Busy little screens”:
A lot of the diversity of the original bricks is gone, but it’s hard to expect Brown to recreate and animate them all. It’s a mesmerizing thing to watch nonetheless; one can almost taste a future where technology will allow Lego bricks to be animated, but look exactly as they originally did.
I mentioned before how the old-fashioned pixels on CRT screens have little in common with the pixels of today. The old pixels were huge and imprecise, blending into each other and requiring a very different design approach.
Some years ago, the always-excellent Technology Connections also had a great video about how, in the era of analog television, pixels didn’t even exist.
But earlier this month, MattKC published a fun 8-minute video arguing that for early video games it wasn’t just pixels that were imprecise. It was also colors.
What was Mario’s original reference palette? Which shade of blue is the correct one? Turns out… there isn’t one.
Come to learn some details about how the American NTSC TV standard (“Never The Same Color”) worked, stay for a cruel twist about PAL, its European equivalent.
One of my favorite bits of trivia about the 1983 movie WarGames is that all the computer typing scenes were faked in a clever way: the actors (many of whom might never have typed before, as home computers were only slowly becoming popular) were allowed to press any key they wanted, but the interface would still proceed as if the correct letter had been typed.
This allowed the computer to respond to keystrokes, making it all feel real, while reducing the burden on the actors to type things properly – and it also made proper sight lines easier, since the actors didn’t have to constantly look at the keyboard.
WarGames used it really well, showing all sorts of face reflections in the CRT screens, as if people literally talked to the machines, which must have been hell to film:
I have never seen this demoed or mentioned outside of the anecdote. However, yesterday, Cathode Ray Dude released an excellent video about the challenges of filming computer screens. The whole video is worth watching, although at this point mostly off-topic for this blog. But starting at 1:32 and ending around 1:37, there’s an actual demo of a similar piece of auto-typing software used in the 1996 movie Scream:
You might think this is just a piece of old-computer trivia, but I’ve actually used that in at least two of my talks, for some of the similar reasons! I run most of my talks from HTML/CSS/JS; it’s nice for the audience to see things being typed and responding properly to (audible, and occasionally visible) key presses – but it’s also nice as a speaker not to worry about messing things up under pressure.
For extra realism, make sure Backspace goes back in the script – you might occasionally press it instinctively – and for extra extra verisimilitude, actually bake in a typo or two into the predefined sequence. (And an escape hatch if you actually change your mind and want to go manual.)
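Stripped of presentation concerns, the logic is tiny. Here’s a minimal sketch of such an auto-typer as a pure function (the names and the Escape-as-escape-hatch choice are mine, not from any of the movies’ software):

```javascript
// The output is scripted; any key advances it by one character.
function makeAutoTyper(script) {
  let pos = 0;
  return function onKey(key) {
    if (key === "Escape") {
      // Escape hatch: hand control back to real typing.
      return { text: script.slice(0, pos), manual: true };
    }
    if (key === "Backspace") {
      pos = Math.max(0, pos - 1); // going back keeps the illusion intact
    } else {
      pos = Math.min(script.length, pos + 1);
    }
    return { text: script.slice(0, pos), manual: false };
  };
}

// Mash any keys you like; the “right” text comes out regardless.
const onKey = makeAutoTyper("ls -la");
console.log(onKey("q").text);         // "l"
console.log(onKey("w").text);         // "ls"
console.log(onKey("Backspace").text); // "l"
```

Wire onKey to a keydown listener and render the text into the fake terminal; baking a typo or two into the script is then just part of the data.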
Then, of course, there’s the classic 2011 piece of software called HackerTyper. Has someone already married this idea with an LLM? Seems like a logical next step.
I know some of you are whispering, “he’s posting all of these hour-long YouTube videos, when am I supposed to find time to watch them?” I hear you loud and clear, and I’m going to make it better…
Seriously, though, this is an extremely enjoyable deep dive into Disney’s failed Galactic Starcruiser hotel.
I don’t know much about Disney, but it was engrossing, as half of the failures were actually software-related: from the flawed UI in various spaces in the hotel and the screen-laden space windows in the rooms, to poor integration with the physical elements of the scenery, an “immersive” interactive game that felt untested and gave you poor feedback, and a general trend of laziness and cheapness that could never fully be remedied by the performers going above and beyond.
Nicholson spends a lot of the video trying to debug what actually happened to make her experience so miserable, and it’s really refreshing to see debugging in a different context than the ones I usually see it in.
Many of you have probably heard the repeated story of the first Moon landing in 1969 almost getting undone by a bunch of onboard computer glitches:
There could not be a worse time in the flight to have computer problems. At the time, the press gleefully reported how Armstrong seized manual control from a crippled and failing onboard computer and managed to heroically and single-handedly land the spaceship on the surface of the Moon against all odds.
Robert Wills argues against this narrative in this 2020 talk, wanting to shine a spotlight away from Neil Armstrong and toward the people who designed the software (among them Margaret Hamilton), and mission control’s Steve Bales, who made the decision not to abort the landing as the 1201 and 1202 errors were piling up.
The argument: the computer was working as intended, it fixed itself over and over again owing to its clever software, and it actually helped Buzz Aldrin understand (at least subconsciously) what led to the seemingly random and distracting computer errors.
The above is more of a traditional talk than the videos I usually share – a bit more technical, taking up an entire hour, and with generic slides – but it’s buoyed by Wills’s enthusiasm and knowledge.
Besides, it’s the lunar landing! Did you know about the DSKY and its fascinating keyboard and UI? Did you know the spacecraft’s window was part of the interface, too? Or that its software was woven into the hardware? Or that Apollo 11 had a… guillotine in it?
An unsung hero of the decision not to abort the landing is Richard Koos, a NASA simulation supervisor who […] 11 days before the launch of Apollo 11, put the team of controllers including Bales […] through a simulation that intentionally triggered a 1201 alarm. […] Unable to figure out what the 1201 was, Bales aborted that simulated landing. He and Flight Director Gene Kranz were dressed down for it by Koos, who put the team through four more hours of training the next day specifically on program alarms. When the 1202 and 1201 alarms occurred during the actual landing, Garman, Bales, and even Duke recognized them immediately.
RuneScape is a popular MMORPG that reached its peak popularity in the late 2000s.
In the game, combat – colloquially known as PvP, or player vs. player – is limited to a specific map area (called the Wilderness) and, otherwise, to people’s houses.
On 6/6/6 (sic!) a bug in RuneScape made it possible for a few players to start killing others outside of designated areas, without them being able to defend themselves. One of these players, Durial321, gained a lot of notoriety:
A player called Cursed You had invited some friends to his in-game house once he had maxed his construction skill, but decided to eject them all from the premises. Things turned sour, however, as a group of players marked as PvP in the house didn’t lose this PvP flag when ejected, allowing them to storm through Falador and massacre whoever they pleased. The most notorious of these players was named Durial321.
This event went down in internet infamy and meant that many players lost their items when killed as well as the banning of those involved.
I don’t have any RuneScape context, and I found it really funny to learn about this event from different retellings of the story.
Several others were able to use this glitch, but Durial321 abused it the most. His rampage lasted for about an hour, starting at Rimmington, where the house party was, then proceeding to Falador and subsequently Edgeville. At Edgeville, he gave Voodoolegion the green partyhat, who never gave it back to him. Soon after, he finally encountered a Jagex Moderator, Mod Murdoch, who disconnected him and locked his account. Durial321 was later permanently banned from RuneScape. In a 2006 interview, he said that player killing outside of the Wilderness was exciting, although he felt bad for the players who lost their belongings.
The 2006 incident later became known as the Falador Massacre.
There is also this more modern retelling that feels like scary story time by the campfire:
Reactions from players were initially kind of incredulous. Plenty of people were shocked and found the whole incident quite funny. Durial had essentially broken the game, after all. Some players wanted to be like him, whipping strangers to death and taking their items. But soon, as more players started hearing about what had happened and seeing the video, the mood shifted. Players wanted Durial321 hung, drawn and quartered, with his head displayed on a pike outside Lumbridge Castle.
Without spoiling too much, the bug was a classic Swiss cheese situation involving a new untested item, a race condition, peculiar timing, and a player with an unusually high uptime and a whole lotta luck.
This video from Marblr about adding fall damage to Overwatch is really intense – 45 minutes long, with a lot of footage of frantic gameplay – but really informative, too.
It’s a great case study of how something seemingly really simple – deducting health from the player as they fall from height – can be a complicated thing to figure out in all the detail.
I never played Overwatch and rarely play videogames anymore, but many of the lessons here are more universal, applying to any sort of UI and system design:
You will have to introduce tactical inconsistencies for the system to feel consistent, but be careful, as there might be a point where those inconsistencies start to outweigh the whole thing.
Wanna learn how you and others feel about something? Overcrank it to make the feelings come out more easily. (And to find bugs.)
There will always be tensions between what the data says and how you feel about something. (I was surprised how often the word “intuitive” entered the picture.)
Also, it’s just a really well-made video, filled with little presentation and storytelling details that elevate it. I wish more videos like this existed for UI mechanics.
But maybe the most important takeaway? You don’t have to choose between rigor and fun. You can have both.
The year is 1981. Your IBM PC is equipped with a tragic speaker that sounds awful for anything except occasional beeps. (Those beeps sound awful, too.)
You can’t afford a sound card and besides, sound cards for your PC have not been invented yet. You can’t even afford a floppy drive, so you’re one of the rare people who actually uses an audio cassette player as a storage device – a technique usually reserved for more primitive machines that have half the bits your new PC does.
But there’s a silver lining. Your cassette player has a little relay that controls its motor. You can engage and disengage the relay at will.
So, someone figured out that toggling the relay kind of sounds like a metronome. Like percussion. It’s a hack, but in the sonic landscape inhabited solely by your sorry speaker, it’s a breath of fresh air (scroll to 7:26 if you don’t land there automatically):
The year is 2026. Your computer itself is the size of an audio cassette, fits in your pocket, has better storage, graphics, sound, pretty much everything compared to a 1981 PC. It even has a special haptic motor. Except, that motor can only be controlled by native apps, and there is no official API to do it from a browser.
But there’s a silver lining. Tapping any checkbox on a site generates a haptic pulse. And that apparently works even if the checkbox is hidden and if the computer is doing the tapping.
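My guess at how the trick gets wired up – hypothetical code, not taken from the original post – is something like this: create a checkbox, keep it out of sight, and have the script do the tapping:

```javascript
// Hypothetical sketch: per the post, toggling a checkbox on iOS Safari
// reportedly produces a haptic pulse – even when the "tap" comes from
// script. The element juggling below is my assumption, not the
// original author's code.
function hapticTick(doc) {
  const box = doc.createElement("input");
  box.type = "checkbox";
  box.style.position = "absolute"; // keep it out of sight
  box.style.left = "-9999px";      // (visually hidden, but still "tappable")
  doc.body.appendChild(box);
  box.click(); // the programmatic tap that triggers the pulse
  doc.body.removeChild(box);
}
```

In a browser you’d call `hapticTick(document)` on whatever event should buzz; whether Safari keeps honoring script-driven toggles is, of course, the open question.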
I love these kinds of hacks, and I wonder what’s going to happen to this one. Will it fly under the radar, or will some websites start abusing it? If so, will Safari clamp down on it, or will it actually give people a proper API for haptics?
From May last year, a 21-minute video by Linus Boman about font piracy, specifically during the era of personal computing and early internet:
The nuances of what separates font piracy from non-pirated revivals or general inspiration are too much even for me, but I liked how the video moved on from the obvious and cheap “haha, you wouldn’t pirate a font” story to cover a few of the more complex issues with panache.
My small contribution to the discourse is that I just scanned an interesting booklet from 1979 called Typeface Analogue, which catalogs various names different phototypesetting manufacturers used for their “replica” fonts – a sort of translation table between once-relevant parallel type ecosystems.
Some are pretty uninspired: CS for Century Schoolbook, OP for Optima, Eurostyle for Eurostile, and so on. Others are more interesting: a version of Palatino called Patina, American Classic becoming Colonial, or Futura renamed to Twentieth Century. Absolute fav? Helvetica becoming Megaron.
The display fonts you see on this blog are my vector conversion and slight improvement (kerning pairs!) over a bitmap PC/GEOS font called University, which itself was inspired by the original Macintosh’s Geneva. Inspired or downright stolen? You decide:
As a former ISP employee, I occasionally like dipping my toes into some networking stuff, and this 25-minute video from The Serial Port is a good retelling of the day in 2014 when one of the internet’s important routing tables crossed a threshold of 512K, which caused all sorts of trouble:
What I appreciate about The Serial Port is that they always seem to actually test the vintage hardware or rebuild the old software they’re commenting on, and this time was no exception: they grabbed a classic unsung hero of ISPs, a Cisco Catalyst 6500-series router, and then recreated “The 512K Day” in their studio.
This was a nice comment under the video:
Have absolutely no knowledge about networking, but watched this video as if a thriller movie. Thanks for opening my world of tech to networking.
Yeah, the video is kind of nerdy and intense, but maybe you’ll enjoy it; even a classic aging piece of hardware with an arbitrary ticking-bomb limit deserves some respect.
Also, the funniest comment:
I had a 2.4k day a couple days ago when I realized Farm Sim 22 only allows a max of 2400 bales. Couldn’t load into my saved game. Had to go into items.xml and temp remove a hundred bales.
Before computer graphics, movies relied on matte paintings to extend or flesh out the background. This is perhaps my favourite matte painting, from the end credits of Die Hard 2:
Turns out, videogames do something similar, except the result is called a skybox, since it has to encompass the player from all sides. It’s another way to use cheap trickery to pretend the world is larger than it is.
This 9-minute video by 3kliksphilip shows a few more advanced skybox tricks from Counter Strike games using the Source engine:
I particularly liked two discoveries:
In the real world, you wouldn’t style backfacing parts, because the player will never be allowed to see them from the other side. Here, you don’t even have to render them.
Modern skyboxes have layers and layers of deceptions: more realistic 3D buildings closer to you, and completely flat bitmaps far away. It almost feels like each skybox contains the history of skybox technology that preceded it.
On the other hand, seeing clouds as flat bitmaps was really disappointing.
This was a fun 15-minute architectural video from Stewart Hicks (absolutely worth a follow otherwise) that mapped precisely into the same kind of tension and internal debate I sometimes feel when talking about minimalism in UX design: Minimalism is good! Until it’s not!
One interesting lesson here is that the famous “less is more” was actually – surprise! – perverted from the original poem, where it meant “less technical perfection means more emotional impact.”
I wasn’t fully sure why Hicks decided to incorporate commentary into his own story this way – maybe he was afraid that the sarcasm of “steel wanting to share its joy” and “lessness” and “simplificity” wouldn’t land well? Or perhaps it was just the introduction that didn’t quite work for me, as it muddled the entire joke.
But it was fun to watch it twice anyway. Those stories are never easy. I am not ready to draw too many parallels between architecture and UX design, even if Hicks lightly does so at the end. There’s no gentrification and displacement when Liquid Glass takes over Aqua, although I think a lot of people would love to see Apple’s recent design decisions meeting the business end of a wrecking ball.
My favourite recent saying to replace “less is more” is this, by Paul Valéry (another poet!):
Everything simple is false. Everything complex is unusable.
You can see it as unsolvable, cynical, maybe even nihilistic. I do too, on a dark day. But more often, I see it as a great challenge. “Less is more” has this simplistic seductiveness that feels naïve. “More” is not an option, but often in my work on complex systems “less” is neither, and a lot of UX design is finding the perfect shade of gray.
A really interesting 28-minute video by daivuk about making QUOD, a first-person shooter game that fits in just 64 kilobytes:
I found watching it strangely enthralling and even nerve-racking. The creator keeps adding stuff that seemingly has no chance of fitting into such a small space – textures! sound effects! music! his own language! – and somehow finds a way to squeeze them all in.
This is inspiring, but also practically useful: even though you and I will likely never need optimization this extreme, some of these techniques could come in handy in tight quarters like a load-bearing CSS file, or embedded software.
As an example, the author wrote his own “music tracker,” which is a clever way to reduce the weight of music: instead of the tune being one big audio file, only the instruments are sampled, and then arranged in repeating patterns.
Except in his case, there were no instruments… just audio effects already existing in the game. And audio effects themselves were generated in a similar way, by combining smaller waves and effects.
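The tracker idea can be sketched in a few lines of Python. This is a hedged illustration of the general technique, not the author’s actual code; the sample names and patterns are made up:

```python
# A rough sketch of a music tracker: instead of storing one big audio
# file, store short samples once and describe the tune as small
# repeating patterns that reference them.

samples = {"kick": [0.9, 0.5], "hat": [0.2]}  # tiny stand-in waveforms

# A pattern is a list of sample names, one per step (None = silence).
patterns = {
    "A": ["kick", None, "hat", None],
    "B": ["kick", "hat", "kick", "hat"],
}

# The whole song is just a pattern order - this is where the savings
# come from: repeats cost almost nothing.
song_order = ["A", "A", "B", "A"]

def expand(song_order, patterns):
    """Flatten the pattern order into one long step sequence."""
    steps = []
    for name in song_order:
        steps.extend(patterns[name])
    return steps

steps = expand(song_order, patterns)  # 16 steps from 2 stored patterns
```

The playback engine then only needs to mix the referenced samples at each step, which is why classic tracker modules weigh kilobytes instead of megabytes.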
The same was done for textures: the creator wrote a bespoke texture editor that saves each texture as smaller pieces and combination instructions – a sort of a “PDF” of a texture rather than a more costly scan of the printed page – and re-generates it on entry.
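The piece-plus-instructions idea looks roughly like this in Python. Again, a hedged sketch: the tile contents and layout here are invented for illustration, not taken from the video:

```python
# Store a texture as a few small tiles plus a layout describing how to
# combine them, then stitch the full pixel grid back together at load
# time. Repeated tiles are stored only once.

tiles = {
    "brick": [[1, 0], [0, 1]],
    "mortar": [[0, 0], [0, 0]],
}

# The "combination instructions": which tile goes where.
layout = [["brick", "mortar"],
          ["mortar", "brick"]]

def generate(layout, tiles):
    """Stitch the tiles into one full pixel grid, row by row."""
    rows = []
    for tile_row in layout:
        grids = [tiles[name] for name in tile_row]
        for y in range(len(grids[0])):
            rows.append([px for g in grids for px in g[y]])
    return rows

texture = generate(layout, tiles)  # a 4x4 texture from two 2x2 tiles
```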
Lastly, this debug view of “cost” was really interesting. (Good debug views, in my opinion, are generally underrated.)
I’m not going to spoil the surprise. Am I fully supportive of the approach? Not sure. PlayStation’s region protection complicates my feelings, and any sort of DRM-esque approach eventually backfires when it comes to software preservation. But you can’t deny that what the Spyro developers did is a really fascinating and weird approach.
The quote in the title of this post refers to the hackers who eventually did conquer Spyro’s copy protection system. I guess – and I apologize in advance – game recognize game.
Palette cycling is an interesting technique borne out of limitations of old graphics cards. Today, any pixel can have any color it wants. In the 1970s and 1980s, you were limited to just a few fixed colors: as few as 2 for monochrome displays, or 4, or 8, or – if you were lucky – 16. Some of those fixed palettes, like CGA’s, became iconic:
But there was an interesting hybrid period in between then and now where you still were only allowed 4 or 8 or 16 or 256 color choices in total, but you could assign any of these at will from a much bigger palette.
So, as an example, each one of these three is made out of 16 colors, but each one is 16 different colors:
Moving pixels was slow. But palette swaps were so fast and easy that it led to a technique known as palette cycling. This is probably the best-known example, from an Atari ST program called NEOchrome.
Despite so much apparent movement, no pixels are changing location, as that’d be prohibitively slow in 1985. Only the palette is changing. If you watch the same animation with the UI visible, you can clearly see which colors are “static,” and which are moving around:
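The mechanism is simple enough to sketch in Python. The pixel data is a grid of palette indices and never changes; only the palette entries rotate each frame. The tiny four-color palette and names here are illustrative, not from NEOchrome:

```python
# Palette cycling: the image stores indices into a palette, and
# animation comes from rotating the palette entries, not moving pixels.

def cycle_palette(palette, start, end):
    """Rotate the palette entries in [start, end) by one position."""
    segment = palette[start:end]
    return palette[:start] + segment[-1:] + segment[:-1] + palette[end:]

def render(pixels, palette):
    """Resolve each palette index to its current color."""
    return [[palette[i] for i in row] for row in pixels]

# A "waterfall" strip: the pixel indices stay fixed forever...
pixels = [[0, 1, 2, 3]]
palette = ["dark", "mid", "light", "white"]

frame1 = render(pixels, palette)
palette = cycle_palette(palette, 0, 4)   # ...but the colors march along
frame2 = render(pixels, palette)
```

On real hardware the “render” step happened for free – the video chip resolved indices to colors on every scanline – so a one-instruction palette write animated thousands of pixels at once.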
But this was 1985, so why am I mentioning it 40 years later?
I like looking at old computers for a few reasons. Some of these seemingly ancient techniques are inspiring and remind me that the limitations are often in the eye of the beholder. Seeing someone really good pushing a platform to its limits is just a good thing to load into your neurons – this could be you next time! And, believe it or not, some tips and tricks can still be relevant.
For example, this is a 9-minute video by Steffest from just earlier this year that walks through a modern attempt to make a palette cycling animation, including starting on an iPad:
The end result goes much harder than I expected. It was interesting to see again the technique of dithering to simulate transparency (we’ve seen it before, but this one is more advanced). But what particularly stood out to me here was the artist making his own little tools to aid in the creative process; I’ve always loved the notion that a computer is really just meant to be an accelerant, making it easy for you to avoid drudgery.
San Andreas was released in 2004, but the game started breaking only after Windows got updated… in 2024. Turns out the bug was sort of a ticking time bomb just waiting for the right set of conditions. We covered one similar bug before, in Half-Life 2 – but this investigation goes deeper, and shines a light on the difficulty of making Windows, whose backwards compatibility comes at a price.
I am pretty sure this is nothing new for heavy command-line gurus (and heavy Raycast users, and so on), but I found it delightful to see someone so excited about creative uses of the terminal, and it made me realize how much time I do waste going through the browser, then Google Search, then scrolling. I am sure tightening some of these loops would feel great.
There is also something interesting in the argument about the terminal being the ultimate “reading mode” of any website, chiefly because it cannot be anything else.
Mostly, this and Strudel before make me excited to see some new (to me) stuff happening with text-based user interfaces.
I’m slightly suspicious of this story that Unix commands were made so short (cp instead of copy, mv instead of move, ls instead of list, and so on) because the console keyboard had really unpleasant keys.
I imagine it must be a confluence of many things, not just this one. Shorter means faster even with amazing keyboards. Shorter also means the commands travel quicker over the slow modems of the era. The downsides were limited: the early nerdy user base of Unix could handle the extra confusion.
On the other hand – no pun intended – I typed on the keyboard in the picture and I can confirm it is absolutely, positively atrocious, with the tallest keys you have ever seen:
At any rate, it’s a good reminder of the power of motor memory, and the difficulty of change management. Even the worst keyboards imaginable are so much better now, and the modems so much faster. And yet, the short and confusing commands remain to this day.
It serves as a bit of design history and even critique of early Mario games, and then in the middle it turns into an analysis of the Mario port on Game & Watch – an obsolete technology even in the 1980s, and something that could have been an easy cash grab, except someone cared.
Translating Mario’s mechanics to a much inferior tech is an interesting design challenge, plus there’s just this universal pleasure of seeing someone go extra. And the video has a nice ending message, too.
An entertaining 9-minute video by Shloop that starts with a common mistake of typing in an English mode on a Korean keyboard, but then goes through a bunch of other fun and light input internationalization stories:
This is of course competence porn, made even better by the dry Polish lektor-like delivery. But it’s also a puzzle. I watched this so many times. There are so many great UI lessons in here:
You can absolutely put graphics inside a textbox
Sparklines rule
Slider is still the best UI element in history
Previews don’t have to feel like training wheels
Synchronizing sounds to visuals is so powerful (see: turn signals on a car dashboard)
I found myself thinking about how you’d design something that feels real-time, but also needs to be resilient against typos, and has a distinct “commit” moment (which is what I think those yellow flashes are); some of the best moments in the video are the quick fixes that aren’t narrated.
Ultimately, this also shows how powerful and underrated plain text can be as interface. It’s a bit like designing straight in CSS, operating at the weird intersection of motor memory, creativity, and abstraction. (Is there a CSS editor that feels more like this?)
On top of all of this, the act of building the track this way is also what the finished track would sound like. Amazing stuff.
Remember all these jokes that went like this?
[God looking at a pug dog for the first time] What the hell did you humans do with my bad ass wolf I gave you?
Imagine sitting the creators of the typewriter in front of YouTube and having them watch this video.
I’d guess a lot of people know that the original 1980 Pac-Man ends accidentally with an iconic, glitchy, and impassable “kill screen.” Many people will also nod with recognition at hearing the kill screen is level 256, a number that immediately gives some ideas on what might have happened.
But this fun 11-minute video from 2017 by Retro Game Mechanics Explained doesn’t stop there. It shows, step by step, exactly what is going on when you reach level 256, and how each one of the glitchy things appear on the screen.
It’s a little mesmerizing, like watching a building demolition in slow motion.
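The root cause can be sketched in Python. This is a simplified reconstruction of the byte-wraparound, not the actual Z80 routine – the exact details are in the video – but the shape of the bug is this: the level number lives in a single byte, and the fruit-drawing loop decrements after drawing, so a count of zero loops through all 256 byte values:

```python
# A hedged sketch of the Pac-Man level 256 bug: 8-bit wraparound in the
# level counter plus a draw loop that checks its counter after drawing.

def fruit_count(level):
    """How many fruits the (simplified) routine ends up drawing."""
    counter = (level - 1) & 0xFF   # internal 0-based counter, one byte
    shown = (counter + 1) & 0xFF   # 1-based level, still one byte:
                                   # level 256 wraps around to 0!
    if shown >= 8:
        return 7                   # normally capped at 7 fruits
    # The drawing loop decrements *after* drawing, so a count of 0
    # underflows and runs through all 256 byte values.
    drawn = 0
    count = shown
    while True:
        drawn += 1
        count = (count - 1) & 0xFF
        if count == 0:
            break
    return drawn
```

Those 256 “fruits” are what overwrite half the screen with garbage tiles, since the fruit table runs far past its intended end.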
Ross designed Input, a coding font superfamily which was very inspiring to me back in the day, and taught me that coding fonts could be a place of surprising creativity and innovation.
First of all, Input has four width options, from Regular through Narrow and Condensed to Compressed – this not only helps avoid the “blocky/squarish” nature of many coding fonts, but also, pragmatically, squeezes more stuff onto mobile screens.
Secondly, since a lot of coding environments didn’t (and maybe still don’t) allow for fine-tuned typography settings, you can bake them into a font upon download – choose a different default line height to be there in the font itself, or have your favorite style of zero just hanging there in the default slot.
Thirdly, serif versions of Input coexist with sans serif, and so does italic, and you can mix them together.
But the most important thing comes at the end: you can imagine coding in non-monospaced fonts! What seemed like blasphemy before made so much sense once I put it to use – I still code in Input Sans Narrow (non-monospaced) to this day:
Of course, since the release of Input in 2014 a few other coding fonts did interesting creative things in this (mono)space. But to me this will always be the original that opened my eyes to what’s possible, and the talk captures so well a lot of deep thinking that went into the font. To quote Ross:
Type design is design and design is about solving problems.
It taught me many things and it clarified that things were more complicated than they seemed. Windows Vista (widely seen as a failure) perhaps wasn’t so bad, and 7 (cited by many as the best Windows ever) was not that far away from Vista, down to its internal version number being 6.1 to Vista’s 6.0.
It’s also interesting to reflect on this today, when macOS is having its own Vista moment.
There is also a follow-up video on Windows 8, the possibly most consequential Windows release of that era, with product decisions that reverberate still today.
Main takeaway: An entire book could be written and a lifetime of lessons learned from Microsoft’s “.1” releases.
I mentioned speedrunning before in the context of mastery, but there is the other side of speedrunning that’s equally interesting: that utilizing bugs (or, glitches) to get the fastest possible time.
This 17-minute video by Msushi covers “one of the most loved and broken glitches in Portal 2” and the strange relationship the community has with following a bug to its conclusion – which, in this case, is not fixing it, but creatively using it to shave off speedrunning time. (There is an element of mastery there too, with spawning and despawning, but I don’t want to spoil the surprise.)
A 16-minute video from Ahoy from last year about Chris Sawyer, creator of Transport Tycoon and Rollercoaster Tycoon games from the late 1990s.
The video focuses more on the economics of the industry and some technical details, but what’s interesting to me was how tight those two games felt in terms of UI. They have a shared custom GUI, they are assembly-coded, and they feel perhaps like the last instance of a graphical user interface where nothing stood between you and the pixels.
I know those are games and not productivity apps, but they can be inspiring for those, too. You can download OpenTTD, which is a modern recreation of Transport Tycoon Deluxe that doesn’t require emulation, and it still captures the snappy and tight feeling very well.
I’m thinking about it in particular because the web took a lot of that away. The web loves latency and loose interactions and reflow and temporary fonts and CSS leaks and text sticking out of the box and many other papercuts. It’s nice to be reminded of the world where things were closer to the metal, and how that felt as a user.
When home computers were new, there was this enduring myth of “killer poke.” POKE was a pretty low-level BASIC command that allowed you to write any number to any place in the memory, as there was no memory protection. From that developed a set of myths of the right magical pairs of numbers that could be input and cause permanent damage to the hardware of the computer, shared in nerd circles almost like campfire stories.
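For anyone who never used BASIC, POKE worked roughly like this – a Python sketch, where the 64K address space and the C64 border-color address are just illustrative:

```python
# POKE in a nutshell: BASIC exposed the machine's memory as one flat
# array you could write to freely, with no memory protection at all.

memory = bytearray(65536)          # a 64K address space, wide open

def poke(address, value):
    memory[address] = value & 0xFF  # values were single bytes (0-255)

def peek(address):
    return memory[address]

# On a Commodore 64, address 53280 mapped to the video chip's border
# color register; here it's just a plain byte in our pretend memory.
poke(53280, 0)
```

The interesting part is that many addresses were memory-mapped hardware registers, so a single POKE could change the screen color, make a sound, or – per the campfire stories – supposedly break something for good.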
Wikipedia has a pretty dry set of those. The most exciting one there is annotated with [citation needed], and the message seems to be: by the 1980s, this was no longer possible. Even in the earlier version of this idea, Halt and Catch Fire, the “catch fire” was an exaggeration. Before then? Sure, I bet some user actions could damage the hardware – computers of that era, with their high-voltage vector CRTs, electromechanical parts, and even liquid mercury tanks early on, were not that hard to damage.
Unsurprisingly, there are more modern versions of “killer poke,” too. At this point, the best they can do is crash or hang your operating system, but they are still chased, and coveted, and mysterious.
This 10-minute 2021 video from Mrwhosetheboss is a fun story of a wallpaper that could crash your Android OS. I’m not going to spoil the surprise, but it’s not what I expected – although the moment you see the wallpaper in question, you might figure it out.
It’s a fun video, and of that good kind that actually teaches you something.
A 6-minute video from JHR about the 1980s British game Jet Set Willy, a big prize for its completion, the bug that made it unplayable, the copy protection, the hackers, and the mess of it all.
Perhaps the only ever musical that’s about a buggy piece of software. From the inimitable Cabel Sasser, this 2006 video about Saints Row, with three songs and a goddamn reprise at the end.
It’s very good.
my car door’s freaking out
it seems to be forever in the concrete barricade
I wonder how I’m ever gonna drive away
this really isn’t my day
the sparks are flying
people dying
metal frying
and I wonder if there’s more to life
or if I’ll find that this is really it
this game is a piece of work
A funny 12-minute video by Chris Spargo about why traffic signs in the world are standardized only to some extent. This was interesting to me generally in the context of Europe being more iconographic, and America being more “word-y” in their sign design, which extends to devices, keyboards, and (presumably?) software.
The story why [the old STOP sign] got replaced by the American version is also the story why the rest of our signs still look different, and why they probably always will.
September 6, 2014, was a landmark day in speedrunning history.
I like Summoning Salt’s videos about speedrunners because they manage to add a great dose of storytelling to what otherwise would be boring, mundane events, and this one about Punch-Out is no exception. It’s Rocky meets Moneyball, in a way.
This pairs well with the previous review of the “Pilgrim in the microworld” book because speedrunning feels very connected to mastery and to quality – whether it’s because of the old-fashioned grind to be better, or by exploiting all sorts of glitches in the game to shave off sometimes mere milliseconds. The video above is in the former category, or what speedrunners would call “glitchless.” It’s also just really fun to watch. (The book wasn’t fun to read.)
When you make dialogue in a video game you have a distinct file that has all the possible text that can pop up in your game. This is usually a CSV file, or a JSON, and you can think of it as basically a database for text. So then at different parts in your code, you extract specific parts of this file, and that’ll depend on what character you’re talking to, if you have a certain item, whatever, and that’s one of the most efficient and common ways to do it.
But the way that Undertale handles dialogue is much worse. All of the dialogue in the entire game, every text box that pops up, is handled in one massive if statement. […] case 737 out of what must have been at least 1,000 lines.
This reminded me a little of my first week with my personal computer, when I didn’t yet know you can write IF X <> 3 THEN, so I spent half a day writing statements like IF X = 1 OR X = 2 OR X = 4 OR X = 5…
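The contrast in the quote can be sketched in Python. This is a hedged illustration of the two approaches, with made-up line IDs and text (only case 737 comes from the quote):

```python
# The data-driven way: a dict standing in for a CSV/JSON "database"
# of every line of dialogue, keyed by character and game state.
DIALOGUE = {
    ("sans", "met_before"): "heya. i know you.",
    ("sans", "default"): "hey buddy.",
}

def get_line(character, flag):
    """Look up a line, falling back to the character's default."""
    return DIALOGUE.get((character, flag), DIALOGUE[(character, "default")])

# The Undertale way, roughly: every line hardcoded into one giant
# branch chain inside the game's code.
def get_line_hardcoded(case):
    if case == 736:
        return "heya. i know you."
    elif case == 737:
        return "hey buddy."
    # ...hundreds more elif branches...
    return ""
```

The lookup version keeps text out of the code, which is also what makes translation and editing tractable – the if-chain version forces a code change for every tweak.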
An absolutely eviscerating 18-minute walkthrough of Apple Music for macOS Catalina, from a few years ago. More funny than anything else, but a reminder to test the “boring” edges of your app – like a state with a lapsed subscription, or coming back after a few months.
There’s no way to drag and drop. […] If I want to add this to here, I have to go through this bullshit, and when I do, it takes seconds again.
Also, an ode to a well-functioning back button, and well-behaving loading states. Those things add up so quickly.
(My debugging brain understood what populated the confusing History entries – I bet it was the early play sequences that went through a bunch of stuff without playing.)