As the midterm election season kicks into high gear, platforms across the web will begin rolling out enhanced protections to guard against digital threats to the democratic process. While every platform has different policies and approaches—from warnings and educational reminders at the top of news feeds to limitations on replies and reposts—a common strategy lies at the heart of many of the features being rolled out across the web: they’re all prompting users to slow down a bit. These efforts are reversing a long-held course, and they reflect a wider reconsideration of what was once the industry’s enemy number one: friction.
In the technology industry, we consider “friction” to be anything that stands between an individual and their goals. And eliminating it entirely was once a near-universal objective. Teams worked for years to shave milliseconds off page load times and system responses, and companies invested millions in developing and testing designs and user flows, all to ensure that every interaction would be as fast and effortless as possible.
The emphasis on speed and ease of use makes sense—technology has always served to help us complete complex tasks faster and more easily. But as our tools have become more refined, and the information environment more complex, the speed at which information can reach us at times outpaces the rate at which we can fully process it.
This point was driven home for me by the results of a study conducted several years ago by scholars from MIT and published in Nature last year. In a survey of American adults, individuals claimed that it was far more important to them that what they shared online was accurate than that it was surprising, funny, aligned with their political views, or even just interesting. What’s more, respondents were extremely good at identifying accurate and inaccurate headlines, even when those headlines ran counter to their political beliefs. Despite this, when presented with a set of both truthful and misleading headlines and asked which they’d consider sharing online, the accuracy of the headline had virtually no impact on what participants said they’d consider sharing.
A simple design change, however, can substantially alter people’s likelihood of sharing information they believe to be false. Serving individuals “accuracy prompts,” which ask them to evaluate the accuracy of an unrelated headline before they share, can shift their attention from a knee-jerk reaction to their underlying values, including their own commitments to accuracy.
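To make the mechanism concrete, here is a minimal sketch of how an accuracy prompt might be interposed in a sharing flow. Everything here is illustrative: the function names, the headline list, and the `get_rating` callback are assumptions for the sketch, not any platform’s real API, and the user’s rating does not gate the share—the prompt’s only job is to shift attention toward accuracy.

```python
import random

# Toy pool of unrelated, neutral headlines for the user to rate.
# (Illustrative content only.)
NEUTRAL_HEADLINES = [
    "New bridge opens downtown after two years of construction",
    "Local library extends weekend opening hours",
]

def accuracy_prompt() -> str:
    """Pick an unrelated headline and phrase the accuracy question."""
    headline = random.choice(NEUTRAL_HEADLINES)
    return f"To the best of your knowledge, is this headline accurate? \"{headline}\""

def share_flow(post: str, get_rating) -> dict:
    """Interpose the accuracy question, then proceed with the share.

    `get_rating` stands in for the UI callback that collects the user's
    judgment. Note the rating is recorded but never blocks the share:
    the intervention works by priming, not by gatekeeping.
    """
    prompt = accuracy_prompt()
    rating = get_rating(prompt)  # user answers the unrelated question first
    return {"shared": True, "post": post, "prompt_shown": prompt, "rating": rating}

# Example: a user who rates the unrelated headline, then shares as planned.
result = share_flow("Check out this article!", get_rating=lambda q: "accurate")
```

The key design choice, consistent with the research described above, is that the prompt is about an *unrelated* headline: it nudges users into an accuracy mindset without passing judgment on the specific post they are about to share.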
A meta-analysis of 20 experiments that primed individuals to think of accuracy found that these types of interventions can reduce sharing of misleading information by 10 percent. Subsequent research produced by our team at Jigsaw, in partnership with academics from MIT, Macquarie University, and the Universities of Regina and Nottingham, further found that these prompts are effective across 16 countries and all six inhabited continents.
Prompts can also encourage individuals to engage more deeply with information in other ways. A feature rolled out by Twitter prompting users to read an article before retweeting if they hadn’t previously visited the site led to a 40 percent increase in individuals clicking through to the piece before sharing it with their networks.
Once you start looking, you’ll notice these small instances of friction everywhere, and there’s strong evidence they work. In 2020, Twitter began experimenting with a feature that prompted individuals replying to others with rude or abusive language to reconsider their tweet before posting it. According to Twitter, 34 percent of those who received these prompts either edited their original reply or decided not to reply at all. What’s more, users who received the prompt were 11 percent less likely to post harsh replies again in the future. While these numbers may not seem earth-shattering, with over 500 million tweets sent every day, they add up to a substantially healthier online environment.
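The reply nudge can be sketched in the same spirit. This is a deliberately simplified illustration under stated assumptions: real systems like Twitter’s use trained classifiers to flag potentially harmful language, not a word list, and the `confirm` callback here is a hypothetical stand-in for the UI dialog that lets the user edit, keep, or withdraw the reply.

```python
from typing import Callable, Optional

# Toy stand-in for a harmful-language classifier. Real deployments use
# machine-learned models; a static word list is only for illustration.
HARSH_WORDS = {"idiot", "stupid", "moron"}

def needs_nudge(draft: str) -> bool:
    """Return True if the draft reply contains potentially harsh language."""
    return any(word in draft.lower().split() for word in HARSH_WORDS)

def post_reply(draft: str, confirm: Callable[[str], Optional[str]]) -> Optional[str]:
    """Post directly, or route flagged drafts through a confirmation step.

    `confirm` represents the prompt shown to the user; it returns the
    (possibly edited) reply, or None if the user decides not to post.
    """
    if not needs_nudge(draft):
        return draft  # no friction for ordinary replies
    return confirm(draft)  # flagged: user may edit, keep, or withdraw

# A user who reconsiders and softens the reply after seeing the prompt:
softened = post_reply("you idiot", confirm=lambda d: "I disagree with this")
# An ordinary reply passes through with no added friction:
ordinary = post_reply("great point!", confirm=lambda d: d)
```

Note that the friction is applied only to flagged drafts, which mirrors the design trade-off in the Twitter experiment: the vast majority of interactions stay as fast as ever, while a pause is inserted exactly where a moment of reflection is most likely to change the outcome.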