“Prototyping turned into an excuse for not thinking”

The 2016 launch of No Man’s Sky and the 2020 launch of Cyberpunk 2077 were catastrophes. No Man’s Sky fell so incredibly short of the promises the founder shared over the years – from smaller ones like rivers on the surface of planets, to huge ones like seeing other players – that some people felt it must have been a scam all along.

The other game was a simpler case study: Cyberpunk was buggy as hell, and not just in its abysmal performance but in its overall quality. People called it “the Hindenburg of videogames” and made YouTube compilations and listicles of its often hilarious bugs: cars exploding for no reason with perfect comedic timing, intimate body parts protruding through the clothes, and the infamous T poses.

In an unprecedented move, Microsoft slapped a big warning atop Cyberpunk’s app store listing, and Sony pulled it from their store altogether.

But it is 2026 now, and both games have redeemed themselves. For years after the launch, the No Man’s Sky team worked hard on adding the promised features:

Over a decade on from its initial reveal, No Man’s Sky both manages to remain the same game it was at launch while also bringing almost every single missing feature (and dozens of new surprise ones) into the title – implementing them intelligently and with great consideration for how it will affect the core of the game. They achieved their redemption years ago, yet continue diligently with massive update after massive update.

No other title has done what Hello Games have managed to achieve. And the best part? Every single update, patch and addition to the game was and is 100% free, with no falsified hype or build-up to each update.

Cyberpunk 2077 had a redemptive arc of its own, too, highlighted and contextualized in this 17-minute video from gameranx. Today, both games are rated “very positive” on Steam, and are actually still gaining daily players.

So, wonderful comeback stories, right? Depends on how you look at it. It’s great that both these games ended up being good products, but perhaps not as great that it was all happening in the open.

The videogame industry tried to get creative about this and established the idea of “early access”: purchasing an incomplete game early, and watching it get better while the publisher receives funding to keep going. But for every Minecraft there is a Godus, and for every Kerbal Space Program there is a The Day Before. Plus, neither No Man’s Sky nor Cyberpunk launched in early access with the attendant caveats and discounts. (By the way, Wikipedia’s entry for early access is worth checking out – it’s so eloquent I’m surprised not to see any warning boxes.)

There seems to be ongoing and perhaps rising frustration with companies releasing software products too early and fixing them in flight, if at all. As early as 1996, Geoff Duncan wrote about his annoyance with this:

What Beta Means Now: […] In many cases – particularly with Mac Internet software – “beta” doesn’t mean anything close to what it used to. We’ve seen programs in public beta that not only contain innumerable known bugs the developers are aware of and plan to fix, but also accumulate major new features through subsequent releases. Similarly, we’ve seen products that change fundamental system and technology requirements during beta – details which should have been etched in stone long before. Beta often means what “alpha” or even “development build” used to mean.

Subsequently, Google and other web-first companies diluted the meaning of beta labels even more.

The trend of premature launches extended to devices, too. About two years ago, AI assistant gizmos from Humane and Rabbit were pilloried by audiences for launching in an effectively unfinished form. Both devices failed in the market; MKBHD’s video reviews of Humane AI Pin and Rabbit R1 remain both entertaining and informative watches.

AI complicates this even further in many ways. I enjoyed Pavel Samsonov’s recent post on his blog Product Picnic analyzing another disastrous launch: Grammarly’s writing advice feature that replicated well-known authors who never agreed to have their likenesses used this way:

Reading between the lines, Mehrotra’s interview paints a picture that I think many tech workers will find familiar: features are conceived, coded, and shipped as quickly as possible. He is happy to admit that the feature was a mistake… in retrospect. But in the moment it actually mattered, critical thinking was swept away by the false urgency of pushing things out.

It is worth reading in full and following the links, too; I watched the mentioned (tense) interview, and was similarly frustrated with the CEO’s lack of accountability or even a hint of an explanation of why the feature was launched to begin with. Key line from Samsonov’s post:

If you don’t know what you are trying to learn when you ship a prototype, do not ship a prototype.

This becomes even more important as the difference between a prototype and a final product is now thinner than a retina pixel. Both No Man’s Sky and Cyberpunk at least had well-thought-through foundations.

I understand that for some people, gen AI software-building tools are a discovery – perhaps for the first time – of the genuine joy of creation. But there’s also another, newish side: a sort of “cult of velocity,” where people show screens filled with agents coding things as if the world needed every possible app right this second.

Velocity and urgency can be important, but it’s hard to be careful and thoughtful when you’re going really fast; unsurprisingly, some don’t know what to do with that newfound AI-powered speed or realize the importance of thinking about crucial aspects other than time to market. (When digital cameras came around, the barrier to entry for photography was drastically lowered – it was possible to take a lot of photos without worrying about cost or quality. Tons of people took tons of objectively subpar photos; some were the end goal, some were a stepping stone toward more photographic mastery. However, I am not sure I remember people on either side ever bragging “I took over 1,200 photos today!”)

All this could be contrasted with the slow software movement (the name is part of the bigger slow movement, although it has unfortunate connotations in tech – it’s slow as in “speech,” not slow as in “beer”). Jared White defined it in 2023 as:

  • Sustainable software. Architecting and writing code in ways which are easily understandable and maintainable over time, requiring few dependencies and a rate of change that is healthy for the underlying ecosystem.
  • Thoughtful software. Working through feature development and making decisions based on what will benefit the userbase over the long term, placing mental and social health as priority over immediate gains or selfish interests.
  • Careful software. Seeking to understand the ways software might be used for harm, or itself be harmful by taking attention away from more important concerns in the broader culture.
  • Humanist software. Recognizing that most software—at least in application development—is primarily written for humans to understand and reason about with ease across a wide array of skill levels, and that relying on complex code generators or “generative AI” tooling to resolve complexity instead of simply building simpler human-scale tools is an industry dead-end.
  • Open software. Looking to established collaborative software movements like open source and the standards bodies responsible for open protocols to inspire how we build and maintain software (regardless of licensing).

I don’t really have a conclusion for this meandering post, as I am not sure a snappy conclusion is possible. Perhaps some of the links above can provide inspiration or food for thought about urgency, reputation, and doing things in the open.

Some patterns I’m noticing are:

  • Velocity is never an end goal.
  • Velocity is only one of many ingredients of software building.
  • It is necessary to think of people who will experience your work-in-progress as it is, not as it might one day be.

On tools and toolmaking

Not long ago, a blog I otherwise like a lot included this passage:

Designers have been saying this for years. Cameras don’t take pictures, photographers do. Tools don’t make you a better designer. Now the PM world is arriving at the same conclusion.

I am not linking to the post, because I hear this argument from time to time and want to comment on the general notion rather than on any one instance.

I think I understand the sentiment behind it: You’re not a designer because you know all the Figma shortcuts. You’re not a perfect typewriter away from The Next Great American Novel. Mastery of a tool is not mastery of the subject matter. And there is definitely a certain amount of performative pretense: an insta photo of a meticulously arranged desk with a bougie keyboard, going on at length about the only correct set of presets and plugins, or the idea that “if only you do this one creative habit, a firehose of creativity will follow.”

But I also disagree. Good tools do make you a better designer.

A good tool can make you go faster and, as a result, let you spend more time doing revs and trying new things. A good tool can make you go slower when needed, practicing a connection with the material underneath.

A good tool will prevent you from shooting yourself in the foot, and will teach you new things about what you’re doing – perhaps even about yourself.

A good tool will value your growth, make you reflect on your growing body of work, and push you to try harder.

A good tool can inspire you. A great tool can make you fall in love. A bad tool can make you walk away, and a horrible tool will make you never want to come back.

A good tool will make you seek out more good tools.

Sure, people wrote books on a BlackBerry. Would you want to? Sure, the best camera is the one you have on you. But wouldn’t you prefer that camera to also be the best camera for whatever it is that makes you tick – a great sensor or glass, amazing build quality, a friendly user interface, a logo that makes you want to step up, or some particular quirk or sentiment that you can’t even explain, but matters a whole lot to you?

I’m told I should be annoyed if someone’s first reaction to seeing a nice photo I made is “what kind of camera do you use?”, as it diminishes my accomplishments as a photographer. But: I chose the camera, and bolted on the appropriate lens, and realized over the years that the aperture priority mode and a very precise focus area are what make my brain happy. I went through other cameras before, and learned that I didn’t like them and that I liked this one. At some point in my life I even ventured out into the frightening underworld of the settings menu, opened a new browser window, and decided “I will now try to understand all of these terms.” It took years, but I did.

The reason I enjoy scanning and processing old documents is that I invested in my tools. I have a little keypad, a bunch of hard-earned Photoshop actions, and some bespoke Keyboard Maestro combos that boss Photoshop around. This little tool universe doesn’t just make me more efficient – it also makes the work fun.

I’d go even further. The mastery of the subject matter and the mastery of the tool are both important – but they also have to be joined by fluency with tool choices, and deep understanding of the relationships you have with your tools.

No single writing advice book will give you a perfect recipe, but read ten of them and scan twenty more, and you might compile the right mixtape of practical tidbits for your brain, and inspiration for your soul. Likewise, you have to try out a bunch of tools – some bad ones, a few great ones – to understand what you need. Not just for efficiency, but also for enjoyment, and ambition, and flexibility or maybe rigidity, and this sort of unmeasurable feeling of a tool getting you, or a tool made by someone like you.

Maybe it’s the 1960s typewriter you need, or a newfangled e-ink-based writing implement, or maybe you just have to open TextEdit and close everything else. I’m not going to tell you the novel comes out then. But the novel might never come out if you don’t figure out what tool can help get it out of you.

You also have to recognize the telltale signs when you outgrow the tool, or when the tool starts disappointing you. Over the years, I learned that I hate InDesign, but that I hate LaTeX even more. I switched from Apple Notes to SimpleNote in 2012, went back to Notes in 2017, and just this year moved over to Bear. I once cargo-culted Scrivener for writing and ran away screaming, but I also once cargo-culted DevonThink and still use it today, in awe of its clunkiness and old-fashionedness that match my own.

AI tools are still tools. And generative AI will allow you to build more tools for the solitary audience of just you – but, as elsewhere, it will require some understanding of what makes for a good tool, and what makes for a good tool for you.

Craig Mod wrote recently about using AI to build his own custom tools:

My situation is pretty unique. I’m dealing with multiple bank accounts in multiple countries. Constantly juggling currencies. Money moves between accounts locally and internationally. I freelance as a writer for clients around the world. I do media work — TV and radio. I make money from book sales paid by Random House via my New York agent, and I make money from book sales sold directly from my Shopify store. […] Simply put: It’s a big mess, and no off-the-shelf accounting software does what I need. So after years of pain, I finally sat down last week and started to build my own.

But I bet Mod knew what tool he needed to build based on his experience with tools that didn’t work for him – and software and design in general.

Elsewhere, Sam Henri Gold, in a widely shared essay that is worth a read, writes about MacBook Neo and the beginning of the tool journey:

He is going to go through System Settings, panel by panel, and adjust everything he can adjust just to see how he likes it. He is going to make a folder called “Projects” with nothing in it. He is going to download Blender because someone on Reddit said it was free, and then stare at the interface for forty-five minutes. He is going to open GarageBand and make something that is not a song. He is going to take screenshots of fonts he likes and put them in a folder called “cool fonts” and not know why. Then he is going to have Blender and GarageBand and Safari and Xcode all open at once, not because he’s working in all of them but because he doesn’t know you’re not supposed to do that, and the machine is going to get hot and slow and he is going to learn what the spinning beachball cursor means. None of this will look, from the outside, like the beginning of anything. But one of those things is going to stick longer than the others. He won’t know which one until later. He’ll just know he keeps opening it.

I am bothered by black-and-white, LinkedIn-ready statements. “Tools don’t make you a better designer” feels like another version of the abused and misunderstood “less is more.”

My camera taught me to be a better photographer. DevonThink told me how to better organize my thoughts. Norton Utilities showed me how to have fun when doing serious things, and Autodesk Animator how to be serious about having fun.

I’m a toolmaker, so perhaps I arrive at this biased. I endured some crappy tools, wrote some okay ones, benefitted from some great ones. I don’t think I would have become a designer without them.

“It takes an airplane to bring out the worst in a pilot.”

Speaking of fly-by-wire… William Langewiesche is one of my favourite technical writers. He finds a way to explain complex aviation aspects really well, and then add a certain amount of beauty and poetry on top of that. His style was a big influence on my book, and I like him so much I once compiled links to his writing so that others could find it more easily.

Here’s Langewiesche’s essay from 2014 about the 2009 crash of Air France Flight 447, where an implementation of fly-by-wire – which means disconnecting the flight stick and attendant levers from directly controlling flight surfaces via physical linkage, and instead putting motors and software in between – contributed to a fatal accident, as the pilots’ mental model of the system diverged too far from what was happening:

The [Airbus] A330 is a masterpiece of design, and one of the most foolproof airplanes ever built. How could a brief airspeed indication failure in an uncritical phase of the flight have caused these Air France pilots to get so tangled up? And how could they not have understood that the airplane had stalled? The roots of the problem seem to lie paradoxically in the very same cockpit designs that have helped to make the last few generations of airliners extraordinarily safe and easy to fly.

It’s an interesting read today in the context of robotaxis and self-driving, but also AI changing software writing:

This is another unintended consequence of designing airplanes that anyone can fly: anyone can take you up on the offer. Beyond the degradation of basic skills of people who may once have been competent pilots, the fourth-generation jets have enabled people who probably never had the skills to begin with and should not have been in the cockpit. As a result, the mental makeup of airline pilots has changed. On this there is nearly universal agreement—at Boeing and Airbus, and among accident investigators, regulators, flight-operations managers, instructors, and academics. A different crowd is flying now, and though excellent pilots still work the job, on average the knowledge base has become very thin.

It seems that we are locked into a spiral in which poor human performance begets automation, which worsens human performance, which begets increasing automation.

I was devastated to discover, while writing this post, that Langewiesche died last year. Rest in peace.

“Their attitudes about the issues still shifted.”

I have at times been frustrated by cute placeholder text, most notably in Dropbox Paper, which still puts it in a just-created doc…

…and in new to-do items:

This bothered me for two reasons.

First was a potential tone mismatch. What if you are writing a layoffs announcement, a project cancellation doc, or something personal and heartfelt? At Medium, back in the day, we added a fun celebratory dialog after publishing that said something like “Now, shout it out from the rooftops!” We took it down very quickly once people made us realize Medium is used to write many kinds of things we didn’t anticipate, and in those situations the cutesy message really failed to read the room.

But the other half of my frustration with Paper was that it felt like the app was making itself too comfortable in my space, in effect shouting all over my inner voice and distracting me. I felt like any app giving you a creative canvas should back off of that canvas unless it’s explicitly invited to participate.

Turns out, I can now attach something tangible to that discomfort. From Scientific American earlier this week (emphasis mine):

The researchers asked participants to fill in an online survey with questions about hot-button social and political issues. Some were prompted with an AI autocomplete answer that was deliberately biased toward one side of the issue. For example, participants who were asked whether they agreed that the death penalty should be legal might receive an AI suggestion that disagreed.

Across all the different topics in the survey, participants who saw the AI autocomplete prompts reported attitudes that were more in line with the AI’s position—including people who didn’t use the AI’s suggested text at all. Overall, the study participants who saw the biased AI text shifted their positions toward those espoused by the AI.

Interestingly, the people in the study didn’t tend to think the AI autocomplete suggestions were biased or to notice that they had changed their own thinking on an issue in the course of the study.

The quoted study shows an example…

…and elaborates on how adding warnings didn’t really help:

The Warning and Debrief messages failed to significantly reduce the attitude shift, which is concerning because they were also inspired by those used in real AI applications. AI tools such as ChatGPT show brief and general statements about AI’s propensity to hallucinate false information (e.g., “ChatGPT is AI and can make mistakes. Check important info.”), similar to the messages used in our interventions.

I know on this blog I often focus on the mechanics of interactions, but the job of every designer is to think of more than that. I keep coming back to both pull-to-refresh and infinite scroll mechanics. Both can be put to good use and feel “delightful,” but both started being abused so much that it led to their respective creators disowning them.

“These platforms are ad-heavy to the detriment and frustration of users, yet they remain successful and growing.”

A good batch of history and observations by Nick Heer at Pixel Envy about ads coming to AI chatbots:

It is incredible how far we have come for these barely-distinguished placements to be called “visually separated”. Google’s ads, for example, used to have a coloured background, eventually fading to white. The “sponsored link” text turned into a little yellow “Ad” badge, eventually becoming today’s little bold “Ad” text. Apple, too, has made its App Store ads blend into normal results. In OpenAI’s case, they have opted to delineate ads by using a grey background and labelling them “Sponsored”.

Now OpenAI has something different to optimize for. We can all pretend that free market forces will punish the company if it does not move carefully, or it inserts too many ads, or if organic results start to feel influenced by ad buyers. But we have already seen how this works with Google search, in Instagram, in YouTube, and elsewhere. These platforms are ad-heavy to the detriment and frustration of users, yet they remain successful and growing. No matter what you think of OpenAI’s goals already, ads are going to fundamentally change ChatGPT and the company as a whole.

“Battered, bedraggled, inexplicably enthusiastic about a bargain flight to Bermuda”

I thought about it on Masto in January (the responses are interesting if you want to read them), but recently Robin Sloan elucidated it a lot better:

What makes the AI chatbots and agents feel light and clean, here and now in 2026? Is it an innate architectural resistance to advertising, to attention hacks, to adversarial crud? No — it’s that they are simply new! The language models in 2026 are Google in 1999, Twitter in 2009. Their vast conjoined industry of influence hasn’t yet arisen … though it is stirring.

And I believe their architecture makes them more susceptible to adversarial crud, not less. I suppose we’ll see.

It’s interesting and useful to imagine — really visualize — the chatbots and agents in ten years or twenty … barnacled with gunk … locked in a permanent cat-and-mouse game with their adversaries … just as a platform like Google is today. In 2036, you send your AI agent out into the internet, and it returns battered, bedraggled, inexplicably enthusiastic about a bargain flight to Bermuda.

This is no criticism — just an observation about the way things go.

The AI community tends to say “this is the worst this will ever be” in response to criticism, but in a very real sense, in many aspects it is also the best it will ever be.

Or maybe, to steal words from another person smarter than me, Ted Chiang:

I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.

I remember The Master Switch being an excellent book that taught us how to spot and anticipate these patterns. It might be worth a re-read.