“Their attitudes about the issues still shifted.”

Mar 14, 2026

I have at times been frustrated by cute placeholder text, most notably in Dropbox Paper, which still puts it in a just-created doc…

…and in new to-do items:

This bothered me for two reasons.

The first was a potential tone mismatch. What if you are writing a layoffs announcement, a project cancellation doc, or something personal and heartfelt? At Medium, back in the day, we added a fun celebratory dialog after publishing that said something like “Now, shout it out from the rooftops!” We took it down very quickly once people made us realize that Medium is used to write many kinds of things we hadn’t anticipated, and in those situations the cutesy message really failed to read the room.

But the other half of my frustration with Paper was that the app seemed to be making itself too comfortable in my space, in effect shouting over my inner voice and distracting me. I felt that any app giving you a creative canvas should back off of that canvas unless it’s explicitly invited to participate.

Turns out, I can now attach something tangible to that discomfort. From Scientific American earlier this week (emphasis mine):

The researchers asked participants to fill in an online survey with questions about hot-button social and political issues. Some were prompted with an AI autocomplete answer that was deliberately biased toward one side of the issue. For example, participants who were asked whether they agreed that the death penalty should be legal might receive an AI suggestion that disagreed.

Across all the different topics in the survey, participants who saw the AI autocomplete prompts reported attitudes that were more in line with the AI’s position—including people who didn’t use the AI’s suggested text at all. Overall, the study participants who saw the biased AI text shifted their positions toward those espoused by the AI.

Interestingly, the people in the study didn’t tend to think the AI autocomplete suggestions were biased or to notice that they had changed their own thinking on an issue in the course of the study.

The quoted study shows an example…

…and elaborates on how adding warnings didn’t really help:

The Warning and Debrief messages failed to significantly reduce the attitude shift, which is concerning because they were also inspired by those used in real AI applications. AI tools such as ChatGPT show brief and general statements about AI’s propensity to hallucinate false information (e.g., “ChatGPT is AI and can make mistakes. Check important info.”), similar to the messages used in our interventions.

I know that on this blog I often focus on the mechanics of interactions, but the job of every designer is to think about more than that. I keep coming back to the pull-to-refresh and infinite scroll mechanics. Both can be put to good use and feel “delightful,” but both came to be abused so widely that their respective creators ended up disowning them.