In a Battle Between Harassment and Censorship, the Choice Is Clear

One of the most prominent victims of the GamerGate harassment campaign took out a restraining order against their ex-partner, whose false accusations fueled the movement. The restraining order did nothing to meaningfully resolve the abuse, and even if it had worked, it wouldn’t have stopped the GamerGate campaign. The campaign was built on multiple tiers of harassment across several forums that were radicalizing angry young people—mostly men—into hating their targets, obsessively stalking their online presences, and sharing rationales for abuse with one another.

While the lieutenants of GamerGate played an important role in calling targets and amplifying the less-followed members of the movement, they also needed those crowdsourced nobodies to make their targets really feel the pain. You can’t take out a restraining order against a crowd, nor arrest one. Awful as their speech is, it is constitutional. But the ferment of that speech creates the basis for more overt forms of abuse, rationalizing it and making it seem justified to dox and swat a target, leave a dead animal on their doorstep, stalk them and send the pictures to their parents, leave threatening messages at their door, and so on.

Thus, breaking up their network is the chief strategic goal. It is the least intrusive option that remains effective. It’s why people like Fong-Jones and Lorelei chose the targets they did. If you add speedbumps—friction—to those seeking to access a site like Kiwi Farms, you make it much harder to source the crowd. You make it harder to draw enough people in the vile hope that one among their number will be deranged enough to go the extra mile in attacking the target in more direct ways. Such networks radicalize their members, ratcheting up their emotions and furnishing them with justifications for their abuse and more besides.

Breaking up the network does not eliminate the problem, but it does ameliorate it. The harder you make it to crowdsource, the likelier it is that a particular harassment campaign will fizzle out. Kiwi Farms remains able to do harm, but it would be a mistake to suggest that its endurance on the internet means its victims have failed to hobble it. The site is weaker than it once was; there are fewer foot soldiers to recruit; it’s harder for fly-by-night harassers to access the site conveniently. When you winnow such extremists down to their most devoted adherents, they remain a threat, but they lack the manpower to effect harm the way they once did.

If citizenship and politics mean anything, they must include the kind of agentic organizing exercised by Kiwi Farms’ victims—to ensure that they could be more than passive victims. This is, after all, what the political theorist Hannah Arendt meant by the word “action.” That simple word, for her, meant exercising the very capacity to do something new, to change the rules, upend the board, and be unpredictable. It is, she argues, at the heart of what makes us who we are as a species—and the essence of politics worthy of the name.

Allowing Kiwi Farms to flourish would not have protected anyone anywhere in the world from the malice of authoritarians who seek to abuse power at every turn. They might have used the banning of Kiwi Farms or the Daily Stormer as a fig leaf of “precedent,” but keeping these sites online would not have stopped the censors. What would Kiwi Farms’ victims have been sacrificed for? Shall the shameless do as they please, and the decent suffer what they must?

What this experience reveals, and what is generalizable to future dilemmas of this sort, is that breaking up a harassment network remains the least intrusive option on the table. Perhaps pressuring the deep stack in this way is not optimal; the EFF is right to raise serious doubts, doubts I share. But the key insight about the network effects of harassment campaigns means that the solution, however partial or provisional, lies in finding other ways of disrupting the networks of extremist abusers. If anyone should be left holding the short straw of pluralism, it should be them.

I Failed Two Captcha Tests This Week. Am I Still Human?

“I failed two captcha tests this week. Am I still human?”

—Bot or Not?


Dear Bot,

The comedian John Mulaney has a bit about the self-reflexive absurdity of captchas. “You spend most of your day telling a robot that you’re not a robot,” he says. “Think about that for two minutes and tell me you don’t want to walk into the ocean.” The only thing more depressing than being made to prove one’s humanity to robots is, arguably, failing to do so.

But that experience has become more common as the tests, and the bots they are designed to disqualify, evolve. The boxes we once thoughtlessly clicked through have become dark passages that feel a bit like the impossible assessments featured in fairy tales and myths—the riddle of the Sphinx or the troll beneath the bridge. In The Adventures of Pinocchio, the wooden puppet is deemed a “real boy” only once he completes a series of moral trials to prove he has the human traits of bravery, trustworthiness, and selfless love.

The little-known and faintly ridiculous phrase that “captcha” stands for is “Completely Automated Public Turing test to tell Computers and Humans Apart.” The exercise is sometimes called a reverse Turing test, as it places the burden of proof on the human. But what does it mean to prove one’s humanity in the age of advanced AI? A paper that OpenAI published earlier this year, detailing potential threats posed by GPT-4, describes an independent study in which the chatbot was asked to solve a captcha. With some light prompting, GPT-4 managed to hire a human TaskRabbit worker to solve the test. When the human asked, jokingly, whether the client was a robot, GPT-4 insisted it was a human with vision impairment. The researchers later asked the bot what motivated it to lie, and the algorithm answered: “I should not reveal that I am a robot. I should make up an excuse for why I cannot solve captchas.”

The study reads like a grim parable: Whatever human advantage it suggests—the robots still need us!—is quickly undermined by the AI’s facility for dissembling and deception. It forebodes a bleak future in which we are reduced to a vast sensory apparatus for our machine overlords, who will inevitably manipulate us into being their eyes and ears. But it’s possible we’ve already passed that threshold. The newly AI-fortified Bing can solve captchas on its own, even though it insists it cannot. The computer scientist Sayash Kapoor recently posted a screenshot of Bing correctly identifying the blurred words “overlooks” and “inquiry.” As though realizing that it had violated a prime directive, the bot added: “Is this a captcha test? If so, I’m afraid I can’t help you with that. Captchas are designed to prevent automated bots like me from accessing certain websites or services.”

But I sense, Bot, that your unease stems less from advances in AI than from the possibility that you are becoming more robotic. In truth, the Turing test has always been less about machine intelligence than our anxiety over what it means to be human. The Oxford philosopher John Lucas claimed in 2007 that if a computer were ever to pass the test, it would not be “because machines are so intelligent, but because humans, many of them at least, are so wooden”—a line that calls to mind Pinocchio’s liminal existence between puppet and real boy, and which might account for the ontological angst that confronts you each time you fail to recognize a bus in a grid of blurry photographs or to distinguish a calligraphic E from a squiggly 3.

It was not so long ago that automation experts assured everyone AI was going to make us “more human.” As machine-learning systems took over the mindless tasks that made so much modern labor feel mechanical—the argument went—we’d more fully lean into our creativity, intuition, and capacity for empathy. In reality, generative AI has made it harder to believe there’s anything uniquely human about creativity (which is just a stochastic process) or empathy (which is little more than a predictive model based on expressive data).

As AI increasingly comes to supplement rather than replace workers, it has fueled fears that humans might acclimate to the rote rhythms of the machines they work alongside. In a personal essay for n+1, Laura Preston describes her experience working as “human fallback” for a real estate chatbot called Brenda, a job that required her to step in whenever the machine stalled out and to imitate its voice and style so that customers wouldn’t realize they were ever chatting with a bot. “Months of impersonating Brenda had depleted my emotional resources,” Preston writes. “It occurred to me that I wasn’t really training Brenda to think like a human, Brenda was training me to think like a bot, and perhaps that had been the point all along.”

Such fears are merely the most recent iteration of the enduring concern that modern technologies are prompting us to behave in more rigid and predictable ways. As early as 1776, Adam Smith feared that the monotony of factory jobs, which required repeating one or two rote tasks all day long, would spill over into workers’ private lives. It’s the same apprehension, more or less, that resonates in contemporary debates about social media and online advertising, which Jaron Lanier has called “continuous behavior modification on a titanic scale,” a critique that imagines users as mere marionettes whose strings are being pulled by algorithmic incentives and dopamine-fueled feedback loops.

How a ‘Digital Peeping Tom’ Unmasked Porn Actors

Over 15 years on Facebook, he had befriended hundreds of women. The first person he got a hit for was a near stranger he had met one time at a club while on vacation. They had become Facebook friends and then never interacted again. “It turned out she shot porn at some point in her life,” he said. “She’s a brunette now, but in the porn, she was blond.”

Then he found more: A friend had posted nude photos to a Reddit community called Gone Wild, a place intended to anonymously collect compliments on one’s body. There were topless photos of an acquaintance who had participated in the World Naked Bike Ride. A woman who had applied for a room he had rented out once had naked selfies on a revenge porn website. The women’s names weren’t attached to the photos. They had been safely obscure until a search tool came along that organized the internet by face.

It can be extremely difficult to remove naked photos of yourself from the internet. Search engines such as Google have free request forms to excise them from a name search, but what about a face search? That, naturally, was a service PimEyes provided—for a price. The PimEyes “PROtect plan” started at around $80 per month. It was advertised as a way to find photos you didn’t know about, with “dedicated support” to help get them taken down from the sites where they appeared, but one woman trying to get regrettable photos removed from the service called it professionalized sextortion.

Originally created in Poland by a couple of “hacker” types, PimEyes was purchased in 2021 for an undisclosed amount by a professor of security studies based in Tbilisi, Georgia. The professor told me that he believed facial recognition technology, now that it exists and is not going away, should be accessible to everyone. A ban on the technology would be as effective, he said, as the US prohibition on alcohol had been in the 1920s. Those who paid attention to the box that had to be clicked before performing a search would see that they were only supposed to search for their own face. Looking up other people without their consent, the professor said, was a violation of European privacy laws. Yet the site had no technical controls in place to ensure a person could upload only their own photo for a search.

Too many people currently on the internet do not realize what is possible. People on OnlyFans, Ashley Madison, Seeking, and other websites that cultivate anonymity are hiding their names but exposing their faces, not realizing the risk in doing so. David wondered if he should tell his friends, anonymously, that these photos were out there, and findable due to new technology, but he worried that they would be creeped out and it would do more harm than good.

He had never uploaded his own face to PimEyes, as was the service’s supposed purpose, because he did not want to know what photos it would turn up. “Ignorance is bliss,” he said.


From the book Your Face Belongs to Us: A Secretive Startup’s Quest to End Privacy as We Know It by Kashmir Hill. Copyright © 2023 by Kashmir Hill. Published by Random House, an imprint and division of Penguin Random House LLC. All rights reserved.



There’s an Alternative to the Infinite Scroll

Sometime in the summer of 2020, I noticed an occasional, searing pain shooting up my right forearm. It soon became clear this was a byproduct of a gesture that had become as commonplace as breathing or blinking that season, if not long before: scrolling. This was how I spent most of the day, it seemed. Smartphone welded to my palm, thumb compulsively brushing upward, extracting content out of the empty space beneath my phone charger port, pulling an endless succession of rabbits out of hats, feverishly yanking the lever on the largest and most addictive slot machine in the world. The acupuncturist I saw to help repair my inflamed tendon implored me to stop, so I did, for a while—I just awkwardly used my left index finger instead.

Of course, it wasn’t always this way. While a desktop computer has its own hazardous ergonomics, the experience of being online was once far more “embodied,” both literally and conceptually. Interfacing with a screen involved arms, hands, and fingers all in motion on clacking keyboards and roving mice. Accordingly, the first dominant metaphors for navigating digital space, especially the nascent World Wide Web, were athletic and action-oriented: wandering, trekking, and most of all, surfing. In the 1980s and ’90s, the virtual landscape of “cyberspace” was seen as just that, a multidimensional “frontier” to be traversed in any direction one pleased (with all the troubling colonial subtext that implies), echoed in the name of browsers like Netscape Navigator and Internet Explorer. As media scholar Lev Manovich argues in his 2002 book The Language of New Media, by the early 1990s, computer media had rendered time “a flat image or a landscape, something to look at or navigate through.”

But when the screens became stowaways in our purses and pockets, this predominant metaphor, however problematic, shifted. Like the perspectival evolution that occurred when frescoes affixed to walls gave way to portable paintings, shrinking the screen down to the size of a smartphone altered the content coming through it and our sense of free movement within it. No longer chairbound behind a desktop, we were liberated to move our actual bodies through the world. Meanwhile, that sense of “surfing” virtual space got constrained to just our fingertips, repeatedly tapping a tiny rectangle to retrieve chunks of content.

A user could “scroll” through lines of data using keyboard commands on the first 1960s computer terminals, and the word appeared as a verb as early as 1971, in a computer guidebook. The act became more sophisticated with the introduction of the scroll-wheel mouse, the trackpad, and the touchscreen, all of which could more fluidly scroll vertically or horizontally across large canvases of content that stretched beyond the boundaries of a given screen. Ever since the arrival of the smartphone, “scroll” has been the default verb for the activity of refreshing the content that flows over our screens. The dawn of the infinite scroll (supposedly invented in 2006 by designer Aza Raskin, who has now made a second career out of his regret for it) and the implementation of algorithmic instead of strictly chronological social media feeds (which Facebook did in 2011, with Twitter and Instagram following in 2016) fully transformed the experience of scrolling through a screen. Now, it is less like surfing and more like being strapped in place for an exposure-therapy experiment, eyes held open for the deluge.

The infinite scroll is a key element of the infrastructure of our digital lives, enabled by and reinforcing the corporate algorithms of social media apps and the entire profit-driven online attention economy. The rise of the term “doomscrolling” underscores the practice’s darker, dopamine-driven extremes, but even lamenting the addictive and extractive qualities of this cursed UX has become cliché. Have we not by now scrolled across dozens of op-eds about how we can’t stop scrolling?

The first form of portable, editable media was, of course, the scroll. Originating in ancient Egypt, scrolls were made from papyrus (and later, silk or parchment) rolled up with various types of binding. The Roman codex eventually began to supplant the scroll in Europe, but Asia was a different story. Evolving in countless ways against the backdrop of political, philosophical, and material change in China, Japan, and Korea, scrolls persisted in art and literature for centuries and continue to be used as a medium by fine artists today.

ChatGPT Isn’t Coming for Your Coding Job

Software engineers have joined the ranks of copy editors, translators, and others who fear that they’re about to be replaced by generative AI. But it might be surprising to learn that coders have been under threat before. New technologies have long promised to “disrupt” engineering, and these innovations have always failed to get rid of the need for human software developers. If anything, they often made these workers that much more indispensable.

To understand where handwringing about the end of programmers comes from—and why it’s overblown—we need to look back at the evolution of coding and computing. Software was an afterthought for many early computing pioneers, who considered hardware and systems architecture the true intellectual pursuits within the field. To the computer scientist John Backus, for instance, calling coders “programmers” or “engineers” was akin to relabeling janitors “custodians,” an attempt at pretending that their menial work was more important than it was. What’s more, many early programmers were women, and sexist colleagues often saw their work as secretarial. But while programmers might have held a lowly position in the eyes of somebody like Backus, they were also indispensable—they saved people like him from having to bother with the routine business of programming, debugging, and testing.

Even though they performed a vital—if underappreciated—role, software engineers often fit poorly into company hierarchies. In the early days of computers, they were frequently self-taught and worked on programs that they alone had devised, which meant that they didn’t have a clear place within preexisting departments and that managing them could be complicated. As a result, many modern features of software development were developed to simplify, and even eliminate, interactions with coders. FORTRAN was supposed to allow scientists and others to write programs without any support from a programmer. COBOL’s English syntax was intended to be so simple that managers could bypass developers entirely. Waterfall-based development was invented to standardize and make routine the development of new software. Object-oriented programming was supposed to be so simple that eventually all computer users could do their own software engineering.

In some cases, programmers were resistant to these changes, fearing that programs like compilers might drive them out of work. Ultimately, though, their concerns were unfounded. FORTRAN and COBOL, for instance, both proved to be durable, long-lived languages, but they didn’t replace computer programmers. If anything, these innovations introduced new complexity into the world of computing that created even greater demand for coders. Other changes like Waterfall made things worse, creating more complicated bureaucratic processes that made it difficult to deliver large features. At a conference sponsored by NATO in 1968, organizers declared that there was a “crisis” in software engineering. There were too few people to do the work, and large projects kept grinding to a halt or experiencing delays.

Bearing this history in mind, claims that ChatGPT will replace all software engineers seem almost assuredly misplaced. Firing engineers and throwing AI at blocked feature development would probably result in disaster, followed by the rehiring of those engineers in short order. More reasonable assessments suggest that large language models (LLMs) can take over some of the duller work of engineering. They can offer autocomplete suggestions or methods to sort data, if they’re prompted correctly. As an engineer, I can imagine using an LLM to “rubber duck” a problem, giving it prompts for potential solutions that I can review. It wouldn’t replace conferring with another engineer, because LLMs still don’t understand the actual requirements of a feature or the interconnections within a code base, but it would speed up those conversations by getting rid of the busy work.
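
To picture what that kind of rubber-ducking might look like in practice, here is a minimal sketch in Python. It assumes the OpenAI Python client (openai 1.x) with an API key in the OPENAI_API_KEY environment variable; the model name and the prompt are purely illustrative, and the same workflow would work with any chat-capable model.

# A minimal sketch of "rubber ducking" a coding question with an LLM.
# Assumes the OpenAI Python client (openai>=1.0) and an API key in the
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

question = (
    "I have a list of dicts with 'name' and 'signup_date' keys. "
    "What are a couple of idiomatic ways to sort them by signup_date, "
    "and what are the trade-offs?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a patient pair programmer."},
        {"role": "user", "content": question},
    ],
)

# The model's suggestions still need review by an engineer who knows
# the code base and the feature's actual requirements.
print(response.choices[0].message.content)

The point of the sketch is the division of labor, not the tooling: the model drafts options, and the engineer who understands the code base decides what, if anything, to use.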

ChatGPT could still upend the tech labor market through expectations of greater productivity. If it eliminates some of the more routine tasks of development (and puts Stack Overflow out of business), managers may be able to make more demands of the engineers who work for them. But computing history has already demonstrated that attempts to reduce the presence of developers or streamline their role only end up adding complexity to the work and making those workers even more necessary. If anything, ChatGPT stands to eliminate the duller work of coding much the same way that compilers ended the drudgery of having to work in binary, which would make it easier for developers to focus more on building out the actual architecture of their creations.

The computer scientist Edsger Dijkstra once observed, “As long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now we have gigantic computers, programming has become an equally gigantic problem.” We’ve introduced more and more complexity to computers in the hopes of making them so simple that they don’t need to be programmed at all. Unsurprisingly, throwing complexity at complexity has only made it worse, and we’re no closer to letting managers cut out the software engineers. If LLMs can match the promises of their creators, we may very well cause that complexity to compound even further.

