On tools and toolmaking

Not long ago, a blog I otherwise like a lot included this passage:

Designers have been saying this for years. Cameras don’t take pictures, photographers do. Tools don’t make you a better designer. Now the PM world is arriving at the same conclusion.

I am not linking to the post, because this isn’t about that one post: I hear the argument from time to time, and I want to comment on the general notion.

I think I understand the sentiment behind it: You’re not a designer because you know all the Figma shortcuts. You’re not a perfect typewriter away from The Next Great American Novel. Mastery of a tool is not mastery of the subject matter. And there is definitely a certain amount of performative pretense in the insta photo of a meticulously arranged desk with a bougie keyboard, in going on at length about the only correct set of presets and plugins, or in the idea that “if only you adopt this one creative habit, a firehose of creativity will follow.”

But I also disagree. Good tools do make you a better designer.

A good tool can make you go faster and, as a result, let you spend more time doing revs and trying new things. A good tool can make you go slower when needed, practicing a connection with the material underneath.

A good tool will prevent you from shooting yourself in the foot, and will teach you new things about what you’re doing – perhaps even about yourself.

A good tool will value your growth, make you reflect on your growing body of work, and push you to try harder.

A good tool can inspire you. A great tool can make you fall in love. A bad tool can make you walk away, and a horrible tool will make you never want to come back.

A good tool will make you seek out more good tools.

Sure, people wrote books on a BlackBerry. Would you want to? Sure, the best camera is the one you have on you. But wouldn’t you prefer that camera to also be the best camera for whatever it is that makes you tick – a great sensor or glass, amazing build quality, a friendly user interface, a logo that makes you want to step up, or some particular quirk or sentiment that you can’t even explain, but that matters a whole lot to you?

I’m told I should be annoyed if someone’s first reaction to seeing a nice photo I made is “what kind of camera do you use?”, as it diminishes my accomplishments as a photographer. But: I chose the camera, and bolted on the appropriate lens, and realized over the years that aperture priority mode and a very precise focus area are what make my brain happy. I went through other cameras before, and learned that I didn’t like them and I liked this one. At some point in my life I even ventured out into the frightening underworld of the settings menu, opened a new browser window, and decided “I will now try to understand all of these terms.” It took years, but I did.

The reason I enjoy scanning and processing old documents is that I invested in my tools. I have a little keypad, a bunch of hard-earned Photoshop actions, and some bespoke Keyboard Maestro combos that boss Photoshop around. This little tool universe doesn’t just make me more efficient; it also makes the work fun.

I’d go even further. The mastery of the subject matter and the mastery of the tool are both important – but they also have to be joined by fluency with tool choices, and deep understanding of the relationships you have with your tools.

No single writing advice book will give you a perfect recipe, but read ten of them and scan twenty more, and you might compile the right mixtape of practical tidbits for your brain, and inspiration for your soul. Likewise, you have to try out a bunch of tools – some bad ones, a few great ones – to understand what you need. Not just for efficiency, but also for enjoyment, and ambition, and flexibility or maybe rigidity, and this sort of unmeasurable feeling of a tool getting you, or a tool made by someone like you.

Maybe it’s the 1960s typewriter you need, or a newfangled e-ink-based writing implement, or maybe you just have to open TextEdit and close everything else. I’m not going to tell you the novel comes out then. But the novel might never come out if you don’t figure out what tool can help get it out of you.

You also have to recognize the telltale signs when you outgrow the tool, or when the tool starts disappointing you. Over the years, I learned that I hate InDesign, but that I hate LaTeX even more. I switched from Apple Notes to Simplenote in 2012, went back to Notes in 2017, and just this year moved over to Bear. I once cargo-culted Scrivener for writing and ran away screaming, but I also once cargo-culted DevonThink and still use it today, in awe of its clunkiness and old-fashionedness that match my own.

AI tools are still tools. And generative AI will allow you to build more tools for the solitary audience of just you – but, like elsewhere, it will require some understanding of what makes for a good tool, and what makes for a good tool for you.

Craig Mod wrote recently about using AI to build his own custom tools:

My situation is pretty unique. I’m dealing with multiple bank accounts in multiple countries. Constantly juggling currencies. Money moves between accounts locally and internationally. I freelance as a writer for clients around the world. I do media work — TV and radio. I make money from book sales paid by Random House via my New York agent, and I make money from book sales sold directly from my Shopify store. […] Simply put: It’s a big mess, and no off-the-shelf accounting software does what I need. So after years of pain, I finally sat down last week and started to build my own.

But I bet Mod knew what tool he needed to build based on his experience with tools that didn’t work for him – and software and design in general.

Elsewhere, Sam Henri Gold, in a widely shared essay that is worth a read, writes about MacBook Neo and the beginning of the tool journey:

He is going to go through System Settings, panel by panel, and adjust everything he can adjust just to see how he likes it. He is going to make a folder called “Projects” with nothing in it. He is going to download Blender because someone on Reddit said it was free, and then stare at the interface for forty-five minutes. He is going to open GarageBand and make something that is not a song. He is going to take screenshots of fonts he likes and put them in a folder called “cool fonts” and not know why. Then he is going to have Blender and GarageBand and Safari and Xcode all open at once, not because he’s working in all of them but because he doesn’t know you’re not supposed to do that, and the machine is going to get hot and slow and he is going to learn what the spinning beachball cursor means. None of this will look, from the outside, like the beginning of anything. But one of those things is going to stick longer than the others. He won’t know which one until later. He’ll just know he keeps opening it.

I am bothered by black-and-white, LinkedIn-ready statements. “Tools don’t make you a better designer” feels like another version of the abused and misunderstood “less is more.”

My camera taught me to be a better photographer. DevonThink told me how to better organize my thoughts. Norton Utilities showed me how to have fun when doing serious things, and Autodesk Animator how to be serious about having fun.

I’m a toolmaker, so perhaps I arrive at this biased. I endured some crappy tools, wrote some okay ones, benefitted from some great ones. I don’t think I would have become a designer without them.

“It takes an airplane to bring out the worst in a pilot.”

Speaking of fly-by-wire… William Langewiesche is one of my favourite technical writers. He finds a way to explain complex aspects of aviation really well, and then adds a certain amount of beauty and poetry on top of that. His style was a big influence on my book, and I like him so much I once compiled links to his writing so that others could find it more easily.

Here’s Langewiesche’s essay from 2014 about the 2009 Air France Flight 447, where an implementation of fly-by-wire – which means the flight stick and attendant levers no longer control the flight surfaces directly via physical linkage, with motors and software put in between instead – contributed to a fatal accident, as the pilots’ mental model of the system diverged too far from what was actually happening:

The [Airbus] A330 is a masterpiece of design, and one of the most foolproof airplanes ever built. How could a brief airspeed indication failure in an uncritical phase of the flight have caused these Air France pilots to get so tangled up? And how could they not have understood that the airplane had stalled? The roots of the problem seem to lie paradoxically in the very same cockpit designs that have helped to make the last few generations of airliners extraordinarily safe and easy to fly.

It’s an interesting read today in the context of robotaxis and self-driving, but also AI changing software writing:

This is another unintended consequence of designing airplanes that anyone can fly: anyone can take you up on the offer. Beyond the degradation of basic skills of people who may once have been competent pilots, the fourth-generation jets have enabled people who probably never had the skills to begin with and should not have been in the cockpit. As a result, the mental makeup of airline pilots has changed. On this there is nearly universal agreement—at Boeing and Airbus, and among accident investigators, regulators, flight-operations managers, instructors, and academics. A different crowd is flying now, and though excellent pilots still work the job, on average the knowledge base has become very thin.

It seems that we are locked into a spiral in which poor human performance begets automation, which worsens human performance, which begets increasing automation.

I was devastated to discover, while writing this post, that Langewiesche died last year. Rest in peace.

“Their attitudes about the issues still shifted.”

I have at times been frustrated by cute placeholder text, most notably in Dropbox Paper, which still puts it in a just-created doc…

…and in new to-do items:

This bothered me for two reasons.

First was a potential tone mismatch. What if you are writing a layoffs announcement, a project cancellation doc, or something personal and heartfelt? At Medium back in the day, at some point we added a fun celebratory dialog after publishing that said something like “Now, shout it out from the rooftops!” We took it down very quickly as people made us realize Medium is used to write many kinds of things we didn’t anticipate, and in those situations the cutesy message really failed to read the room.

But the other half of my frustration with Paper was that it felt like the app was making itself too comfortable in my space, in effect shouting all over my inner voice and distracting me. I felt like any app giving you a creative canvas should back off of that canvas unless it’s explicitly invited to participate.

Turns out, I can now attach something tangible to that discomfort. From Scientific American earlier this week (emphasis mine):

The researchers asked participants to fill in an online survey with questions about hot-button social and political issues. Some were prompted with an AI autocomplete answer that was deliberately biased toward one side of the issue. For example, participants who were asked whether they agreed that the death penalty should be legal might receive an AI suggestion that disagreed.

Across all the different topics in the survey, participants who saw the AI autocomplete prompts reported attitudes that were more in line with the AI’s position—including people who didn’t use the AI’s suggested text at all. Overall, the study participants who saw the biased AI text shifted their positions toward those espoused by the AI.

Interestingly, the people in the study didn’t tend to think the AI autocomplete suggestions were biased or to notice that they had changed their own thinking on an issue in the course of the study.

The quoted study shows an example…

…and elaborates on how adding warnings didn’t really help:

The Warning and Debrief messages failed to significantly reduce the attitude shift, which is concerning because they were also inspired by those used in real AI applications. AI tools such as ChatGPT show brief and general statements about AI’s propensity to hallucinate false information (e.g., “ChatGPT is AI and can make mistakes. Check important info.”), similar to the messages used in our interventions.

I know on this blog I often focus on the mechanics of interactions, but the job of every designer is to think of more than that. I keep coming back to both pull-to-refresh and infinite scroll mechanics. Both can be put to good use and feel “delightful,” but both started being abused so much that it led to their respective creators disowning them.

“These platforms are ad-heavy to the detriment and frustration of users, yet they remain successful and growing.”

A good batch of history and observations by Nick Heer at Pixel Envy about ads coming to AI chatbots:

It is incredible how far we have come for these barely-distinguished placements to be called “visually separated”. Google’s ads, for example, used to have a coloured background, eventually fading to white. The “sponsored link” text turned into a little yellow “Ad” badge, eventually becoming today’s little bold “Ad” text. Apple, too, has made its App Store ads blend into normal results. In OpenAI’s case, they have opted to delineate ads by using a grey background and labelling them “Sponsored”.

Now OpenAI has something different to optimize for. We can all pretend that free market forces will punish the company if it does not move carefully, or it inserts too many ads, or if organic results start to feel influenced by ad buyers. But we have already seen how this works with Google search, in Instagram, in YouTube, and elsewhere. These platforms are ad-heavy to the detriment and frustration of users, yet they remain successful and growing. No matter what you think of OpenAI’s goals already, ads are going to fundamentally change ChatGPT and the company as a whole.

“Battered, bedraggled, inexplicably enthusiastic about a bargain flight to Bermuda”

I thought about it on Masto in January (the responses are interesting if you want to read them), but recently Robin Sloan elucidated it a lot better:

What makes the AI chatbots and agents feel light and clean, here and now in 2026? Is it an innate architectural resistance to advertising, to attention hacks, to adversarial crud? No — it’s that they are simply new! The language models in 2026 are Google in 1999, Twitter in 2009. Their vast conjoined industry of influence hasn’t yet arisen … though it is stirring.

And I believe their architecture makes them more susceptible to adversarial crud, not less. I suppose we’ll see.

It’s interesting and useful to imagine — really visualize — the chatbots and agents in ten years or twenty … barnacled with gunk … locked in a permanent cat-and-mouse game with their adversaries … just as a platform like Google is today. In 2036, you send your AI agent out into the internet, and it returns battered, bedraggled, inexplicably enthusiastic about a bargain flight to Bermuda.

This is no criticism — just an observation about the way things go.

The AI community tends to say “this is the worst this will ever be” in response to criticism, but in a very real sense, in many respects it is also the best it will ever be.

Or maybe, to steal words from another person smarter than me, Ted Chiang:

I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.

I remember The Master Switch being an excellent book that taught us how to spot and anticipate these patterns. It might be worth a re-read.