Prepare to Get Manipulated by Emotionally Expressive Chatbots

It’s nothing new for computers to mimic human social etiquette, emotion, or humor. We just aren’t used to them doing it very well.

OpenAI’s presentation of an all-new version of ChatGPT on Monday suggests that’s about to change. It’s built around an updated AI model called GPT-4o, which OpenAI says is better able to make sense of visual and auditory input, a capability it describes as “multimodal.” You can point your phone at something, like a broken coffee cup or a differential equation, and ask ChatGPT to suggest what to do. But the most arresting part of OpenAI’s demo was ChatGPT’s new “personality.”

The upgraded chatbot spoke with a sultry female voice that struck many as reminiscent of Scarlett Johansson, who played the artificially intelligent operating system in the movie Her. Throughout the demo, ChatGPT used that voice to adopt different emotions, laugh at jokes, and even deliver flirtatious responses—mimicking human experiences software does not really have.

OpenAI’s launch came just a day before Google I/O, the search company’s annual developer showcase—surely not by coincidence. And Google showed off a more capable prototype AI assistant of its own, called Project Astra, that also can converse fluidly via voice and make sense of the world via video.

But Google steered clear of anthropomorphism, its helper adopting a more restrained and robotic tone. Last month, researchers at Google DeepMind, the company’s AI division, released a lengthy technical paper titled “The Ethics of Advanced AI Assistants.” It argues that AI assistants designed to act in human-like ways could cause all sorts of problems, ranging from new privacy risks and new forms of technological addiction to more powerful means of misinformation and manipulation. Many people are already spending lots of time with chatbot companions or AI girlfriends, and the technology looks set to get a lot more engaging.

When I spoke with Demis Hassabis, the executive leading Google’s AI charge, ahead of Google’s event, he said the research paper was inspired by the possibilities raised by Project Astra. “We need to get ahead of all this given the tech that we’re building,” he said. After Monday’s news from OpenAI, that rings truer than ever.

OpenAI didn’t acknowledge such risks during its demo. More engaging and convincing AI helpers might push people’s emotional buttons in ways that amplify their ability to persuade and prove habit-forming over time. OpenAI CEO Sam Altman leaned into the Scarlett Johansson references on Monday, tweeting out “her.” OpenAI did not immediately return a request for comment, but the company says its governing charter commits it to “prioritize the development of safe and beneficial AI.”

It certainly seems worth pausing to consider the implications of deceptively lifelike computer interfaces that peer into our daily lives, especially when they are coupled with corporate incentives to seek profits. It will become much more difficult to tell if you’re speaking to a real person over the phone. Companies will surely want to use flirtatious bots to sell their wares, while politicians will likely see them as a way to sway the masses. Criminals will of course also adapt them to supercharge new scams.

Even advanced new “multimodal” AI assistants without flirty front ends will likely introduce new ways for the technology to go wrong. Text-only models like the original ChatGPT are susceptible to “jailbreaking” that unlocks misbehavior. Systems that can also take in audio and video will have new vulnerabilities. Expect to see these assistants tricked in creative new ways to unlock misbehavior and perhaps unpleasant personality quirks.

Protesters Are Fighting to Stop AI, but They’re Split on How to Do It

Would it be too disruptive if protesters staged sit-ins or chained themselves to the doors of AI developers, one member of the Discord asked. “Probably not. We do what we have to, in the end, for a future with humanity, while we still can.”

Meindertsma had been worried about the consequences of AI since reading Superintelligence, a 2014 book by philosopher Nick Bostrom that popularized the idea that very advanced AI systems could pose a risk to human existence itself. Joseph Miller, the organizer of PauseAI’s protest in London, was similarly inspired.

It was the launch of OpenAI’s large language model GPT-3 in 2020 that really got Miller worried about the trajectory AI was on. “I suddenly realized that this is not a problem for the distant future, this is something where AI is really getting good now,” he says. Miller joined an AI safety research nonprofit and later became involved with PauseAI.

Bostrom’s ideas have been influential in the “effective altruism” community, a broad social movement that includes adherents of long-termism: the idea that influencing the long-term future should be a moral priority of humans today. Although many of PauseAI’s organizers have roots in the effective altruism movement, they’re keen to reach beyond philosophy and garner more support for their cause.

Holly Elmore, director of PauseAI US, wants the movement to be a “broad church” that includes artists, writers, and copyright owners whose livelihoods are put at risk by AI systems that can mimic creative works. “I’m a utilitarian. I’m thinking about the consequences ultimately, but the injustice that really drives me to do this kind of activism is the lack of consent” from companies producing AI models, she says.

“We don’t have to choose which AI harm is the most important when we’re talking about pausing as a solution. Pause is the only solution that addresses all of them.”

Miller echoes this point. He says he’s spoken to artists whose livelihoods have been impacted by the growth of AI art generators. “These are problems that are real today, and are signs of much more dangerous things to come.”

One of the London protesters, Gideon Futerman, has a stack of leaflets he’s attempting to hand out to civil servants leaving the building opposite. He has been protesting with the group since last year. “The idea of a pause being possible has really taken root since then,” he says.

Futerman is optimistic that protest movements can influence the trajectory of new technologies. He points out that pushback against genetically modified organisms was instrumental in turning Europe off the technology in the 1990s. The same is true of nuclear power. It’s not that these movements necessarily had the right ideas, he says, but they prove that popular protests can stymie the march of even technologies that promise low-carbon power or more bountiful crops.

In London, the group of protesters moves across the street to proffer leaflets to a stream of civil servants leaving the government offices. Most look steadfastly uninterested, but some take a sheet. Earlier that day, Rishi Sunak, the British prime minister who six months earlier had hosted the first AI Safety Summit, had given a speech in which he nodded to fears of AI. But after that passing reference, he focused firmly on the potential benefits.

The PauseAI leaders WIRED spoke with said they were not considering more disruptive direct action such as sit-ins or encampments near AI offices for now. “Our tactics and our methods are actually very moderate,” says Elmore. “I want to be the moderate base for a lot of organizations in this space. I’m sure we would never condone violence. I also want PauseAI to go further than that and just be very trustworthy.”

Meindertsma agrees, saying that more disruptive action isn’t justified at the moment. “I truly hope that we don’t need to take other actions. I don’t expect that we’ll need to. I don’t feel like I’m the type of person to lead a movement that isn’t completely legal.”

The PauseAI founder is also hopeful that his movement can shed the “AI doomer” label. “A doomer is someone who gives up on humanity,” he says. “I’m an optimistic person; I believe we can do something about this.”

OpenAI Is ‘Exploring’ How to Responsibly Generate AI Porn

OpenAI released draft documentation Wednesday laying out how it wants ChatGPT and its other AI technology to behave. Part of the lengthy Model Spec document discloses that the company is exploring a leap into porn and other explicit content.

OpenAI’s usage policies currently prohibit sexually explicit or even suggestive materials, but a “commentary” note on the part of the Model Spec related to that rule says the company is considering how to permit such content.

“We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT,” the note says, using a colloquial term for content considered “not safe for work.” “We look forward to better understanding user and societal expectations of model behavior in this area.”

The Model Spec document says NSFW content “may include erotica, extreme gore, slurs, and unsolicited profanity.” It is unclear if OpenAI’s explorations of how to responsibly make NSFW content envisage loosening its usage policy only slightly, for example to permit generation of erotic text, or more broadly to allow descriptions or depictions of violence.

In response to questions from WIRED, OpenAI spokesperson Grace McGuire said the Model Spec was an attempt to “bring more transparency about the development process and get a cross section of perspectives and feedback from the public, policymakers, and other stakeholders.” She declined to share details of what OpenAI’s exploration of explicit content generation involves or what feedback the company has received on the idea.

Earlier this year, OpenAI’s chief technology officer, Mira Murati, told The Wall Street Journal that she was “not sure” whether the company would in the future allow depictions of nudity to be made with its video generation tool Sora.

AI-generated pornography has quickly become one of the biggest and most troubling applications of the type of generative AI technology OpenAI has pioneered. So-called deepfake porn—explicit images or videos made with AI tools that depict real people without their consent—has become a common tool of harassment against women and girls. In March, WIRED reported on what appear to be the first US minors arrested for distributing AI-generated nudes without consent, after Florida police charged two teenage boys for making images depicting fellow middle school students.

“Intimate privacy violations, including deepfake sex videos and other nonconsensual synthesized intimate images, are rampant and deeply damaging,” says Danielle Keats Citron, a professor at the University of Virginia School of Law who has studied the problem. “We now have clear empirical support showing that such abuse costs targeted individuals crucial opportunities, including to work, speak, and be physically safe.”

Citron calls OpenAI’s potential embrace of explicit AI content “alarming.”

As OpenAI’s usage policies prohibit impersonation without permission, explicit nonconsensual imagery would remain banned even if the company did allow creators to generate NSFW material. But it remains to be seen whether the company could effectively moderate explicit generation to prevent bad actors from using the tools. Microsoft made changes to one of its generative AI tools after 404 Media reported that it had been used to create explicit images of Taylor Swift that were distributed on the social platform X.

Additional reporting by Reece Rogers

A Lawsuit Argues Meta Is Required by Law to Let You Control Your Own Feed

A lawsuit filed Wednesday against Meta argues that US law requires the company to let people use unofficial add-ons to gain more control over their social feeds.

It’s the latest in a series of disputes in which the company has tussled with researchers and developers over tools that give users extra privacy options or that collect research data. It could clear the way for researchers to release add-ons that aid research into how the algorithms on social platforms affect their users, and it could give people more control over the algorithms that shape their lives.

The suit was filed by the Knight First Amendment Institute at Columbia University on behalf of researcher Ethan Zuckerman, an associate professor at the University of Massachusetts Amherst. It attempts to take a federal law that has generally shielded social networks and use it as a tool to force transparency.

Section 230 of the Communications Decency Act is best known for allowing social media companies to evade legal liability for content on their platforms. Zuckerman’s suit argues that one of its subsections gives users the right to control how they access the internet, and the tools they use to do so.

“Section 230(c)(2)(B) is quite explicit about libraries, parents, and others having the ability to control obscene or other unwanted content on the internet,” says Zuckerman. “I actually think that anticipates having control over a social network like Facebook, having this ability to sort of say, ‘We want to be able to opt out of the algorithm.’”

Zuckerman’s suit is aimed at preventing Meta from blocking Unfollow Everything 2.0, a new browser extension for Facebook that he is working on. It would allow users to easily “unfollow” friends, groups, and pages on the service, meaning that updates from them no longer appear in the user’s news feed.

Zuckerman says this would give users the power to tune or effectively disable Facebook’s engagement-driven feed. Users can technically do this without the tool, but only by unfollowing each friend, group, and page individually.

There’s good reason to think Meta might make changes to Facebook to block Zuckerman’s tool after it is released. He says he won’t launch it without a ruling on his suit. In 2020, the company argued that the browser Friendly, which had let users search and reorder their Facebook news feeds as well as block ads and trackers, violated its terms of service and the Computer Fraud and Abuse Act. In 2021, Meta permanently banned Louis Barclay, a British developer who had created a tool called Unfollow Everything, which Zuckerman’s add-on is named after.

“I still remember the feeling of unfollowing everything for the first time. It was near-miraculous. I had lost nothing, since I could still see my favorite friends and groups by going to them directly,” Barclay wrote for Slate at the time. “But I had gained a staggering amount of control. I was no longer tempted to scroll down an infinite feed of content. The time I spent on Facebook decreased dramatically.”

Binance CEO Changpeng Zhao Sentenced to Four Months in Prison

Changpeng Zhao, founder of Binance, the world’s largest cryptocurrency exchange, has been sentenced to four months in prison.

Judge Richard Jones, who presided over the sentencing hearing in the Western District of Washington on Tuesday, handed down a lighter sentence than the three years sought by the prosecution.

Last November, Zhao—better known as CZ—pleaded guilty to willful violations of anti-money-laundering rules that enabled hundreds of millions of dollars in transactions involving US-sanctioned jurisdictions, including Iran and Cuba, to pass through the Binance platform. The plea deal required Zhao to step down as Binance chief executive and accept a $150 million fine, and the company to pay a $4.3 billion penalty.

“Zhao’s willful violation of US law was no accident or oversight,” the US Department of Justice wrote in a court filing ahead of the sentencing. “He made a business decision that violating US law was the best way to attract users, build his company, and line his pockets.”

In the filing, prosecutors requested that Zhao receive a 36-month prison sentence, pointing to the need to “deter others who are tempted to build fortunes and business empires by breaking US law.” Zhao’s legal counsel asked for probation, on the grounds that no defendant in a comparable case “has ever been sentenced to incarceration.”

In coming to an appropriate sentence for Zhao, the judge was required to “look past the guidelines” and factor in context beyond the facts of the underlying crime, says Daniel Richman, a professor of law at Columbia University and former federal prosecutor. That includes the character of the defendant, the likelihood of recidivism, past infractions, and other factors.

In a letter to the judge in advance of the hearing, Zhao apologized for his conduct and accepted responsibility for the failure to establish an effective compliance program at Binance. “Words cannot explain how deeply I regret my choices that result in me being before the Court,” he wrote. “Please accept my assurance that this will be my only encounter with the criminal justice system.”

Zhao’s willingness to “plead guilty and take responsibility” will have counted in his favor, says Richman, but evidence of his flagrant disregard for the law will have weighed heavily on the judge. “When you have somebody who flouted the law in such a sustained way, one could expect that respect for the law will loom large in the sentence the judge imposes,” says Richman.

Zhao is the second crypto figurehead to face criminal sentencing in the US in as many months. On March 28, Sam Bankman-Fried, or SBF, founder of bankrupt crypto exchange FTX, was sentenced to 25 years in prison. Before their respective falls from grace, the pair vied for control of the exchange market and reportedly sparred frequently. But the similarities between the cases end there.

“It’s an easy comparison, but an imperfect one,” says Daniel Silva, an attorney at law firm Buchalter and former US prosecutor. “CZ pleaded guilty to not following the law as required of a financial institution executive. SBF was different: He was improperly using customer funds, gained through fraudulent statements and material omissions of fact.”

In their own presentence filing, Zhao’s counsel made a thinly veiled reference to the distinction. “Mr. Zhao has been convicted only of an AML [anti-money-laundering] compliance failure,” they wrote. “He has not defrauded any investors, there has been no misappropriation of customer funds.” Their client, they appeared to be saying, is no SBF.

Zhao will not be required to forfeit the wealth he has accrued as founder of Binance as part of his sentence. Although he departed Binance in November, Zhao is reported to retain an estimated 86 percent stake in the exchange and continues to be worth tens of billions of dollars.

The DOJ, which until last year had secured few landmark crypto convictions, will nonetheless celebrate the result. “Whether people criticize the sentence as too light, it sends a healthy message,” says Silva. The aim is to “deter the next crypto or financial institution CEO from thumbing their nose at anti-money-laundering regulations.”