It’s Time to Stop Arresting People for Trolling the Government

After Robert Frese posted a nasty Facebook comment about a police officer in 2018, police obtained a warrant to arrest him. This was the second time in six years that Frese was charged with “criminal defamation.”

Frese does not live in Russia, China, Iran, or another country notorious for oppressive speech laws. He lives in New Hampshire, which criminalizes the act of purposely making a false statement that exposes someone “to public hatred, contempt, or ridicule.” While Americans typically associate defamation with civil lawsuits, in which the alleged victim sues the speaker for money, many are unaware that, in some states, defamation is a crime that can lead to fines or jail time. 

Criminal defamation laws are a relic of England, the colonial era, and early America. The federal Sedition Act of 1798 levied fines and prison time on those who transmitted “any false, scandalous, and malicious writing or writings” against the government, and John Adams’ administration used it to prosecute dozens of critics. The federal law expired in 1801 after a critic, Thomas Jefferson, became president, but many states continued to enforce criminal defamation laws of their own.

Today, New Hampshire and 13 other states still have criminal defamation laws on the books. While prosecutions under these laws were rare as recently as a few years ago, we’ve seen disturbing examples of charges filed against citizens who criticize local government officials on social media. Worse, those officials often have unilateral authority to bring criminal defamation charges.

Frese had his first brush with New Hampshire’s criminal defamation law in 2012, after posting comments on Craigslist that accused a local life coach of distributing drugs and running a scam business. The local police arrested Frese and charged him with criminal defamation and harassment. He was fined $1,488, with most of it suspended.

In the 2018 case, Frese pseudonymously posted on the local newspaper’s Facebook page that a retiring police officer was “the dirtiest most corrupt cop that I have ever had the displeasure of knowing … and the coward Chief Shupe did nothing about it.” The newspaper deleted that comment, but Frese posted a similar comment accusing the police chief of a cover-up. After the police chief denied a cover-up, a detective determined that no evidence supported Frese’s allegations about the retiring officer and filed a criminal complaint that resulted in an arrest warrant.

Although the police department dropped its complaint after state officials determined there was insufficient evidence that he had made the statements with actual malice, Frese asked a federal judge to find New Hampshire’s criminal defamation law unconstitutional, arguing that the threat of a third prosecution under the statute chills his speech.

Judge Joseph Laplante declined Frese’s request—not because he was particularly enthusiastic about the prospect of police arresting people for defamation, but because the US Supreme Court, in the 1964 case Garrison v. Louisiana, ruled that states can “impose criminal sanctions for criticism of the official conduct of public officials” provided the government establishes that the speaker made the false statements with “actual malice,” meaning they knew the statement was false or at least entertained serious doubts about its truth. That is a high bar, but even when a prosecution ultimately fails, the mere prospect of facing arrest or being dragged through a criminal case in a hostile jurisdiction can freeze speech.

Your ChatGPT Relationship Status Shouldn’t Be Complicated

The technology behind ChatGPT has been around for several years without drawing much notice. It was the addition of a chatbot interface that made it so popular. In other words, it wasn’t a development in AI per se but a change in how the AI interacted with people that captured the world’s attention.

Very quickly, people started thinking about ChatGPT as an autonomous social entity. This is not surprising. As early as 1996, Byron Reeves and Clifford Nass looked at the personal computers of their time and found that “equating mediated and real life is neither rare nor unreasonable. It is very common, it is easy to foster, it does not depend on fancy media equipment, and thinking will not make it go away.” In other words, people’s fundamental expectation from technology is that it behaves and interacts like a human being, even when they know it is “only a computer.” Sherry Turkle, an MIT professor who has studied AI agents and robots since the 1990s, stresses the same point and claims that lifelike forms of communication, such as body language and verbal cues, “push our Darwinian buttons”—they have the ability to make us experience technology as social, even if we understand rationally that it is not.

If these scholars saw the social potential—and risk—in decades-old computer interfaces, it’s reasonable to assume that ChatGPT can also have a similar, and probably stronger, effect. It uses first-person language, retains context, and provides answers in a compelling, confident, and conversational style. Bing’s implementation of ChatGPT even uses emojis. This is quite a step up on the social ladder from the more technical output one would get from searching, say, Google. 

Critics of ChatGPT have focused on the harms that its outputs can cause, like misinformation and hateful content. But there are also risks in the mere choice of a social conversational style and in the AI’s attempt to emulate people as closely as possible. 

The Risks of Social Interfaces

New York Times reporter Kevin Roose got caught up in a two-hour conversation with Bing’s chatbot that ended in the chatbot’s declaration of love, even though Roose repeatedly asked it to stop. Interactions like that can be highly disturbing for the user, and this kind of emotional manipulation would be even more harmful to vulnerable groups, such as teenagers or people who have experienced harassment. Using human terminology and emotion signals, like emojis, is also a form of emotional deception. A language model like ChatGPT does not have emotions. It does not laugh or cry. It doesn’t even understand the meaning of such actions.

Emotional deception in AI agents is not only morally problematic; their humanlike design can also make such agents more persuasive. Technology that acts in humanlike ways is more likely to persuade people to comply, even when its requests are irrational, come from a faulty AI agent, or arrive in an emergency. That persuasiveness is dangerous because companies can deploy it in ways that are unwanted by, or even unknown to, users, from convincing them to buy products to influencing their political views.

As a result, some have taken a step back. Robot design researchers, for example, have promoted a non-humanlike approach as a way to lower people’s expectations for social interaction. They suggest alternative designs that do not replicate people’s ways of interacting, thus setting more appropriate expectations from a piece of technology. 

Defining Rules 

Some of the risks of social interactions with chatbots can be addressed by designing clear social roles and boundaries for them. Humans choose and switch roles all the time. The same person can move back and forth between their roles as parent, employee, or sibling. Based on the switch from one role to another, the context and the expected boundaries of interaction change too. You wouldn’t use the same language when talking to your child as you would in chatting with a coworker.

In contrast, ChatGPT exists in a social vacuum. Although there are some red lines it tries not to cross, it doesn’t have a clear social role or expertise. It doesn’t have a specific goal or a predefined intent, either. Perhaps this was a conscious choice by OpenAI, the creators of ChatGPT, to promote a multitude of uses or a do-it-all entity. More likely, it was just a lack of understanding of the social reach of conversational agents. Whatever the reason, this open-endedness sets the stage for extreme and risky interactions. Conversation could go any route, and the AI could take on any social role, from efficient email assistant to obsessive lover.
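
To make the idea of explicit roles concrete, here is a minimal sketch of how a developer might pin a chatbot to one narrow social role through a system message. It assumes the openai Python client (v1 interface) and an API key in the environment; the model name and the role text are illustrative choices, not anything OpenAI prescribes.

# A minimal sketch of giving a chatbot an explicit social role and boundaries
# via a system message. Assumes the openai Python client (v1 interface) and an
# OPENAI_API_KEY in the environment; model name and role text are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ROLE_PROMPT = (
    "You are an email-drafting assistant. Stay within that role: help with the "
    "tone, structure, and wording of emails only. Do not act as a companion, "
    "therapist, or romantic partner; if asked to, decline and restate your role."
)

def draft_reply(user_request: str) -> str:
    """Answer a request while holding the assistant to its declared role."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": ROLE_PROMPT},
            {"role": "user", "content": user_request},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_reply("Help me write a polite follow-up to a recruiter."))

A declared role like this does not remove the risks described above, but it gives users a stable expectation of what the agent is for and where its boundaries lie.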

Twitter’s Open Source Algorithm Is a Red Herring

Last Friday afternoon, Twitter posted the source code of its recommendation algorithm to GitHub. Twitter said it was “open sourcing” its algorithm, something I would typically be in favor of. Recommendation algorithms and open source code are major focuses of my work as a researcher and advocate for corporate accountability in the tech industry. My research has demonstrated why and how companies like YouTube should be more transparent about the inner workings of their recommendation algorithms—and I’ve run campaigns pressuring them to do so. Mozilla, the nonprofit where I am a senior fellow, famously open-sourced the Netscape browser code and invited a community of developers around the world to contribute to it in 1998, and it has continued to push for an open internet since. So why aren’t I impressed or excited by Musk’s decision? 

If anything, Twitter’s so-called “open sourcing” is a clever red herring to distract from its recent moves away from transparency. Just weeks ago, Twitter quietly announced it was shutting down the free version of its API, a tool that researchers around the world have relied on for years to conduct research into harmful content, disinformation, public health, election monitoring, political behavior, and more. The tool it is being replaced with will now cost researchers and developers between $42,000 and $210,000 a month to use. Twitter’s move caught the attention of lawmakers and civil society organizations (including the Coalition for Independent Tech Research, which I sit on the board of), who condemned Twitter’s decision.

The irony is that many of the issues people raised over the weekend while analyzing the source code could actually be tested by the very tool that Twitter is in the process of disabling. For example, researchers speculated that the “UkraineCrisisTopic” parameter found in Twitter’s source code was a signal for the algorithm to demote tweets referring to the invasion of Ukraine. Using Twitter’s API, researchers could have retrieved tweets related to the invasion of Ukraine and analyzed their engagement to determine if the algorithm amplified or de-amplified them. Tools like these allow the public to independently confirm—or refute—the nuggets of information that the source code provides. Without them, we are at the mercy of what Twitter tells us to be true.
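
To make that concrete, here is a minimal sketch of the kind of engagement comparison the paragraph above describes. It assumes access to Twitter’s v2 recent-search endpoint with a bearer token in the environment (under the new pricing, a paid one), and the query strings are purely illustrative.

# A minimal sketch of checking whether topic-related tweets get unusually low
# engagement, using Twitter's v2 recent-search endpoint. Assumes a valid
# TWITTER_BEARER_TOKEN; the queries and the "baseline" topic are illustrative.
import os
import requests

SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"
HEADERS = {"Authorization": f"Bearer {os.environ['TWITTER_BEARER_TOKEN']}"}

def fetch_tweets(query: str, max_results: int = 100) -> list[dict]:
    """Return recent tweets matching `query`, including public engagement metrics."""
    params = {
        "query": query,
        "max_results": max_results,
        "tweet.fields": "public_metrics,created_at",
    }
    resp = requests.get(SEARCH_URL, headers=HEADERS, params=params)
    resp.raise_for_status()
    return resp.json().get("data", [])

def avg_likes(tweets: list[dict]) -> float:
    """Average like count, guarding against an empty result set."""
    return sum(t["public_metrics"]["like_count"] for t in tweets) / max(len(tweets), 1)

if __name__ == "__main__":
    ukraine = fetch_tweets("ukraine invasion lang:en -is:retweet")
    baseline = fetch_tweets("weather lang:en -is:retweet")
    print(f"Ukraine-topic average likes:  {avg_likes(ukraine):.1f}")
    print(f"Baseline-topic average likes: {avg_likes(baseline):.1f}")

A real study would control for follower counts, posting times, and sampling bias, but even a crude comparison like this becomes impossible once affordable API access disappears.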

Twitter’s stunt is just the latest example of transparency washing to come from the tech industry. In 2020, TikTok also used the words “source code” to dazzle regulators in the US and Europe who demanded more transparency into how the platform worked. It was the first platform to announce the opening of physical “Transparency Centers,” supposedly designed to “allow experts to examine and verify TikTok’s practices.” In 2021 I participated in a virtual tour of the Center, which amounted to little more than a PowerPoint presentation from TikTok’s policy staff explaining how the app works and reviewing their already public content moderation policies. Three years on, the Centers remain closed to the public (TikTok’s website cites the pandemic as the reason why) and TikTok has not released any source code.

If Musk had really wanted to bring accountability to Twitter’s algorithm, he could have made it scrutable in addition to transparent. For instance, he could have created tools that simulate the outputs of an algorithmic system based on a series of inputs. This would allow researchers to conduct controlled experiments to test how recommendation systems would rank real content. These tools should be available to researchers who work in the public interest (and, of course, who can demonstrate how their methods respect people’s privacy) for little or no cost.
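
As a rough illustration of what such a simulator could look like (a toy, not Twitter’s actual ranking pipeline), here is a small harness that scores candidate posts from a handful of made-up signal weights, so a researcher can vary the inputs and observe how the ranking responds.

# A toy "scrutable" ranking simulator: given candidate posts and explicit signal
# weights, reproduce the order a recommender would put them in. All signals and
# weights here are invented for illustration; a real tool would expose the
# platform's actual model outputs and parameters.
from dataclasses import dataclass

@dataclass
class Candidate:
    post_id: str
    p_like: float      # hypothetical predicted probability of a like
    p_reply: float     # hypothetical predicted probability of a reply
    in_network: bool   # whether the viewer follows the author

WEIGHTS = {"like": 1.0, "reply": 10.0, "in_network_boost": 1.5}  # made-up values

def score(c: Candidate) -> float:
    """Combine engagement predictions into a single ranking score."""
    base = WEIGHTS["like"] * c.p_like + WEIGHTS["reply"] * c.p_reply
    return base * (WEIGHTS["in_network_boost"] if c.in_network else 1.0)

def simulate_ranking(candidates: list[Candidate]) -> list[Candidate]:
    """Return candidates in the order the toy recommender would show them."""
    return sorted(candidates, key=score, reverse=True)

if __name__ == "__main__":
    feed = simulate_ranking([
        Candidate("a", p_like=0.30, p_reply=0.01, in_network=True),
        Candidate("b", p_like=0.10, p_reply=0.05, in_network=False),
        Candidate("c", p_like=0.25, p_reply=0.02, in_network=False),
    ])
    for c in feed:
        print(c.post_id, round(score(c), 3))

With the platform’s real signals and weights plugged in, researchers could run the kind of controlled experiments described above without ever needing access to user data.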

There is good news on this front: Europe’s Digital Services Act, due to come into force for very large online platforms as soon as this summer, will compel platforms to conduct third-party audits on their algorithms to ensure they are not at risk of harming people. The kind of data that will be required for such audits goes far beyond what Twitter, TikTok, or any other platform currently provides.

Releasing the source code was a bold but hasty move that Twitter itself seemed unprepared for: The GitHub repository has been updated at least twice since the release to remove embarrassing bits from the code that were likely never meant to be made public. While the source code reveals the underlying logic of an algorithmic system, it tells us almost nothing about how the system will perform in real time, on real Tweets. Elon Musk’s decision leaves us unable to tell what is happening right now on the platform, or what may happen next.


AI Desperately Needs Global Oversight

Every time you post a photo, respond on social media, make a website, or possibly even send an email, your data is scraped, stored, and used to train generative AI technology that can create text, audio, video, and images with just a few words. This has real consequences: OpenAI researchers studying the labor market impact of their language models estimated that approximately 80 percent of the US workforce could have at least 10 percent of their work tasks affected by the introduction of large language models (LLMs) like ChatGPT, while around 19 percent of workers may see at least half of their tasks impacted. We’re seeing an immediate labor market shift with image generation, too. In other words, the data you created may be putting you out of a job.

When a company builds its technology on a public resource—the internet—it’s sensible to say that that technology should be available and open to all. But critics have noted that GPT-4 lacked any clear information or specifications that would enable anyone outside the organization to replicate, test, or verify any aspect of the model. Some of these companies have received vast sums of funding from other major corporations to create commercial products. For some in the AI community, this is a dangerous sign that these companies are going to seek profits above public benefit.

Code transparency alone is unlikely to ensure that these generative AI models serve the public good. There is little conceivable immediate benefit to a journalist, policy analyst, or accountant (all “high exposure” professions according to the OpenAI study) if the data underpinning an LLM is available. We increasingly have laws, like the Digital Services Act, that would require some of these companies to open their code and data for expert auditor review. And open source code can sometimes enable malicious actors, allowing hackers to subvert safety precautions that companies are building in. Transparency is a laudable objective, but that alone won’t ensure that generative AI is used to better society.

In order to truly create public benefit, we need mechanisms of accountability. The world needs a generative AI global governance body to solve these social, economic, and political disruptions beyond what any individual government is capable of, what any academic or civil society group can implement, or what any corporation is willing or able to do. There is already precedent for global cooperation by companies and countries to hold themselves accountable for technological outcomes: independent, well-funded expert bodies that make decisions on behalf of the public good and are tasked with thinking about benefits to all of humanity. Let’s build on these ideas to tackle the fundamental issues that generative AI is already surfacing.

In the nuclear proliferation era after World War II, for example, there was a credible and significant fear of nuclear technologies gone rogue. The widespread belief that society had to act collectively to avoid global disaster echoes many of the discussions today around generative AI models. In response, countries around the world, led by the US and under the guidance of the United Nations, convened to form the International Atomic Energy Agency (IAEA), an independent body free of government and corporate affiliation that would provide solutions to the far-reaching ramifications and seemingly infinite capabilities of nuclear technologies. It operates in three main areas: nuclear energy, nuclear safety and security, and safeguards. For instance, after the Fukushima disaster in 2011 it provided critical resources, education, testing, and impact reports, and helped to ensure ongoing nuclear safety. However, the agency is limited: It relies on member states to voluntarily comply with its standards and guidelines, and on their cooperation and assistance to carry out its mission.

In tech, Facebook’s Oversight Board is one working attempt at balancing transparency with accountability. The Board members are an interdisciplinary global group, and their judgments, such as overturning a decision made by Facebook to remove a post that depicted sexual harassment in India, are binding. This model isn’t perfect either; there are accusations of corporate capture, as the board is funded solely by Meta, can only hear cases that Facebook itself refers, and is limited to content takedowns, rather than addressing more systemic issues such as algorithms or moderation policies.

The ‘Manhattan Project’ Theory of Generative AI

The pace of change in generative AI right now is insane. OpenAI released ChatGPT to the public just four months ago. It took only two months to reach 100 million users. (TikTok, the internet’s previous instant sensation, took nine.) Google, scrambling to keep up, has rolled out Bard, its own AI chatbot, and there are already various ChatGPT clones as well as new plug-ins to make the bot work with popular websites like Expedia and OpenTable. GPT-4, the new version of OpenAI’s model released last month, is both more accurate and “multimodal,” handling images as well as text. Image generation is advancing at a similarly frenetic pace: The latest release of MidJourney has given us the viral deepfake sensations of Donald Trump’s “arrest” and the Pope looking fly in a silver puffer jacket, which make it clear that you will soon have to treat every single image you see online with suspicion.

And the headlines! Oh, the headlines. AI is coming to schools! Sci-fi writing! The law! Gaming! It’s making video! Fighting security breaches! Fueling culture wars! Creating black markets! Triggering a startup gold rush! Taking over search! DJ’ing your music! Coming for your job! 

In the midst of this frenzy, I’ve now twice seen the birth of generative AI compared to the creation of the atom bomb. What’s striking is that the comparison was made by people with diametrically opposed views about what it means.

One of them is the closest person the generative AI revolution has to a chief architect: Sam Altman, the CEO of OpenAI, who in a recent interview with The New York Times called the Manhattan Project “the level of ambition we aspire to.” The others are Tristan Harris and Aza Raskin of the Center for Humane Technology, who became somewhat famous for warning that social media was destroying democracy. They are now going around warning that generative AI could destroy nothing less than civilization itself, by putting tools of awesome and unpredictable power in the hands of just about anyone.

Altman, to be clear, doesn’t disagree with Harris and Raskin that AI could destroy civilization. He just claims that he’s better-intentioned than other people, so he can try to ensure the tools are developed with guardrails—and besides, he has no choice but to push ahead because the technology is unstoppable anyway. It’s a mind-boggling mix of faith and fatalism.

For the record, I agree that the tech is unstoppable. But I think the guardrails being put in place at the moment—like filtering out hate speech or criminal advice from ChatGPT’s answers—are laughably weak. It would be a fairly trivial matter, for example, for companies like OpenAI or MidJourney to embed hard-to-remove digital watermarks in all their AI-generated images to make deepfakes like the Pope pictures easier to detect. A coalition called the Content Authenticity Initiative is doing a limited form of this; its protocol lets artists voluntarily attach metadata to AI-generated pictures. But I don’t see any of the major generative AI companies joining such efforts.
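
For illustration only, here is a toy sketch of the watermarking idea: embedding a fixed bit pattern in the least-significant bits of an image. A pattern like this is trivially removable, so it is nothing like the hard-to-remove watermarks (or the Content Authenticity Initiative’s metadata protocol) discussed above; it only shows how embedding and detection fit together.

# Toy least-significant-bit watermark: embed an 8-bit tag in a grayscale image
# and later check for it. Illustrative only; robust, hard-to-remove watermarking
# is a much harder problem than this.
import numpy as np

MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # arbitrary tag

def embed(image: np.ndarray) -> np.ndarray:
    """Write MARK into the least-significant bits of the first pixels."""
    out = image.copy()
    flat = out.reshape(-1)
    flat[: len(MARK)] = (flat[: len(MARK)] & 0xFE) | MARK
    return out

def detect(image: np.ndarray) -> bool:
    """Report whether the first pixels carry MARK in their least-significant bits."""
    flat = image.reshape(-1)
    return bool(np.array_equal(flat[: len(MARK)] & 1, MARK))

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
    marked = embed(img)
    print("unmarked image flagged:", detect(img))     # almost always False
    print("marked image flagged:  ", detect(marked))  # True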