Prepare to Get Manipulated by Emotionally Expressive Chatbots

It’s nothing new for computers to mimic human social etiquette, emotion, or humor. We just aren’t used to them doing it very well.

OpenAI’s presentation of an all-new version of ChatGPT on Monday suggests that’s about to change. It’s built around an updated AI model called GPT-4o, which OpenAI says is better able to make sense of visual and auditory input, describing it as “multimodal.” You can point your phone at something, like a broken coffee cup or a differential equation, and ask ChatGPT to suggest what to do. But the most arresting part of OpenAI’s demo was ChatGPT’s new “personality.”

The upgraded chatbot spoke with a sultry female voice that struck many as reminiscent of Scarlett Johansson, who played the artificially intelligent operating system in the movie Her. Throughout the demo, ChatGPT used that voice to adopt different emotions, laugh at jokes, and even deliver flirtatious responses—mimicking human experiences software does not really have.

OpenAI’s launch came just a day before Google I/O, the search company’s annual developer showcase—surely not by coincidence. And Google showed off a more capable prototype AI assistant of its own, called Project Astra, that can also converse fluidly via voice and make sense of the world via video.

But Google steered clear of anthropomorphism, its helper adopting a more restrained and robotic tone. Last month, researchers at Google DeepMind, the company’s AI division, released a lengthy technical paper titled “The Ethics of Advanced AI Assistants.” It argues that AI assistants designed to act in more human-like ways could cause all sorts of problems, ranging from new privacy risks and new forms of technological addiction to more powerful means of misinformation and manipulation. Many people are already spending lots of time with chatbot companions or AI girlfriends, and the technology looks set to get a lot more engaging.

When I spoke with Demis Hassabis, the executive leading Google’s AI charge, ahead of Google’s event, he said the research paper was inspired by the possibilities raised by Project Astra. “We need to get ahead of all this given the tech that we’re building,” he said. After Monday’s news from OpenAI, that rings truer than ever.

OpenAI didn’t acknowledge such risks during its demo. More engaging and convincing AI helpers might push people’s emotional buttons in ways that amplify their ability to persuade and prove habit-forming over time. OpenAI CEO Sam Altman leaned into the Scarlett Johansson references on Monday, tweeting out “her.” OpenAI did not immediately return a request for comment, but the company says its governing charter requires it to “prioritize the development of safe and beneficial AI.”

It certainly seems worth pausing to consider the implications of deceptively lifelike computer interfaces that peer into our daily lives, especially when they are coupled with corporate incentives to seek profits. It will become much more difficult to tell if you’re speaking to a real person over the phone. Companies will surely want to use flirtatious bots to sell their wares, while politicians will likely see them as a way to sway the masses. Criminals will of course also adapt them to supercharge new scams.

Even advanced new “multimodal” AI assistants without flirty front ends will likely introduce new ways for the technology to go wrong. Text-only models like the original ChatGPT are susceptible to “jailbreaking,” which unlocks misbehavior. Systems that can also take in audio and video will have new vulnerabilities. Expect to see these assistants tricked in creative new ways to unlock inappropriate behavior, and perhaps to develop unpleasant personality quirks.

OpenAI Is ‘Exploring’ How to Responsibly Generate AI Porn

OpenAI released draft documentation Wednesday laying out how it wants ChatGPT and its other AI technology to behave. Part of the lengthy Model Spec document discloses that the company is exploring a leap into porn and other explicit content.

OpenAI’s usage policies currently prohibit sexually explicit or even suggestive materials, but a “commentary” note on part of the Model Spec related to that rule says the company is considering how to permit such content.

“We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT,” the note says, using a colloquial term for content considered “not safe for work.” “We look forward to better understanding user and societal expectations of model behavior in this area.”

The Model Spec document says NSFW content “may include erotica, extreme gore, slurs, and unsolicited profanity.” It is unclear if OpenAI’s explorations of how to responsibly make NSFW content envisage loosening its usage policy only slightly, for example to permit generation of erotic text, or more broadly to allow descriptions or depictions of violence.

In response to questions from WIRED, OpenAI spokesperson Grace McGuire said the Model Spec was an attempt to “bring more transparency about the development process and get a cross section of perspectives and feedback from the public, policymakers, and other stakeholders.” She declined to share details of what OpenAI’s exploration of explicit content generation involves or what feedback the company has received on the idea.

Earlier this year, OpenAI’s chief technology officer, Mira Murati, told The Wall Street Journal that she was “not sure” if the company would in the future allow depictions of nudity to be made with its video generation tool Sora.

AI-generated pornography has quickly become one of the biggest and most troubling applications of the type of generative AI technology OpenAI has pioneered. So-called deepfake porn—explicit images or videos made with AI tools that depict real people without their consent—has become a common tool of harassment against women and girls. In March, WIRED reported on what appear to be the first US minors arrested for distributing AI-generated nudes without consent, after Florida police charged two teenage boys for making images depicting fellow middle school students.

“Intimate privacy violations, including deepfake sex videos and other nonconsensual synthesized intimate images, are rampant and deeply damaging,” says Danielle Keats Citron, a professor at the University of Virginia School of Law who has studied the problem. “We now have clear empirical support showing that such abuse costs targeted individuals crucial opportunities, including to work, speak, and be physically safe.”

Citron calls OpenAI’s potential embrace of explicit AI content “alarming.”

As OpenAI’s usage policies prohibit impersonation without permission, explicit nonconsensual imagery would remain banned even if the company did allow creators to generate NSFW material. But it remains to be seen whether the company could effectively moderate explicit generation to prevent bad actors from using the tools. Microsoft made changes to one of its generative AI tools after 404 Media reported that it had been used to create explicit images of Taylor Swift that were distributed on the social platform X.

Additional reporting by Reece Rogers

The Biggest Deepfake Porn Website Is Now Blocked in the UK

Two of the biggest deepfake pornography websites have now started blocking people trying to access them from the United Kingdom. The move comes days after the UK government announced plans for a new law that will make creating nonconsensual deepfakes a criminal offense.

Nonconsensual deepfake pornography websites and apps that “strip” clothes off of photos have been growing at an alarming rate—causing untold harm to the thousands of women they are used to target.

Clare McGlynn, a professor of law at Durham University, says the move is a “hugely significant moment” in the fight against deepfake abuse. “This ends the easy access and the normalization of deepfake sexual abuse material,” McGlynn tells WIRED.

Since deepfake technology first emerged in December 2017, it has consistently been used to create nonconsensual sexual images of women—swapping their faces into pornographic videos or allowing new “nude” images to be generated. As the technology has improved and become easier to access, hundreds of websites and apps have been created. Most recently, schoolchildren have been caught creating nudes of classmates.

The blocks on the deepfake websites in the UK were first spotted today, with two of the most prominent services displaying notices on their landing pages saying they are no longer accessible to people visiting from the country. WIRED is not naming the two websites because they enable abuse.

One of the websites with the restriction in place is the biggest deepfake pornography website operating today. Its homepage, when visited from the UK, displays a message saying access is denied. “Due to laws or (upcoming) legislation in your country or state, we are unfortunately obligated to deny you access to this website,” the message says. It also shows the visitor’s IP address and country.

The other website, which also has an app, displays a similar message. “Access to the service in your country is blocked,” it says, before hinting there may be ways to get around the geographic restriction. The websites do not appear to have any restrictions in place when visited from the United States, although they may also be restricted in other countries.
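
The sites haven’t said how their blocks are implemented, but geographic restrictions of this kind are typically enforced by resolving each visitor’s IP address to a country before serving a page. Below is a minimal sketch of that pattern; the lookup table, addresses, and messages are invented for illustration, and a real site would query a GeoIP database such as MaxMind’s rather than a hardcoded table.

```python
# Minimal sketch of IP-based geoblocking (illustrative only).

BLOCKED_COUNTRIES = {"GB"}  # hypothetical: deny visitors from the UK


def lookup_country(ip_address: str) -> str:
    """Stub for a GeoIP lookup; a real site would query a GeoIP database."""
    demo_table = {"203.0.113.7": "GB", "198.51.100.2": "US"}
    return demo_table.get(ip_address, "UNKNOWN")


def handle_request(ip_address: str) -> str:
    country = lookup_country(ip_address)
    if country in BLOCKED_COUNTRIES:
        # Mirror the sites' behavior: deny access and echo back the
        # visitor's IP address and country.
        return (f"Access denied due to laws or (upcoming) legislation "
                f"in your country. Your IP: {ip_address} ({country})")
    return "200 OK"


print(handle_request("203.0.113.7"))   # denied
print(handle_request("198.51.100.2"))  # allowed
```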

It is not immediately clear why the sites have introduced the location blocks or whether they have done so in response to any legal orders or notices. Nor is it clear whether the blocks are temporary. Messages sent to the websites through email addresses and contact forms went unanswered. The creators of the websites have not posted any public messages on the websites or their social media channels about the blocks.

Ofcom, the UK’s communications regulator, has the power to pursue action against harmful websites under the UK’s controversial, sweeping online safety laws that came into force last year. However, these powers are not yet fully operational, and Ofcom is still consulting on them.

The restrictions are likely to significantly limit the number of people in the UK seeking out or trying to create deepfake sexual abuse content. Data from Similarweb, a digital intelligence company, shows the bigger of the two websites had 12 million global visitors last month, while the other website had 4 million visitors. In the UK, they had around 500,000 and 50,000 visitors, respectively.

To Build a Better AI Supercomputer, Let There Be Light

GlobalFoundries, a company that makes chips for others, including AMD and General Motors, previously announced a partnership with Lightmatter. Lightmatter CEO Nick Harris says his company is “working with the largest semiconductor companies in the world as well as the hyperscalers,” referring to the largest cloud companies like Microsoft, Amazon, and Google.

If Lightmatter or another company can reinvent the wiring of giant AI projects, a key bottleneck in the development of smarter algorithms might fall away. The use of more computation was fundamental to the advances that led to ChatGPT, and many AI researchers see the further scaling-up of hardware as being crucial to future advances in the field—and to hopes of ever reaching the vaguely specified goal of artificial general intelligence, or AGI, meaning programs that can match or exceed biological intelligence in every way.

Linking a million chips together with light might allow for algorithms several generations beyond today’s cutting edge, says Harris. “Passage is going to enable AGI algorithms,” he confidently suggests.

The large data centers that are needed to train giant AI algorithms typically consist of racks filled with tens of thousands of computers running specialized silicon chips and a spaghetti of mostly electrical connections between them. Maintaining training runs for AI across so many systems—all connected by wires and switches—is a huge engineering undertaking. Converting between electronic and optical signals also places fundamental limits on chips’ abilities to run computations as one.

Lightmatter’s approach is designed to simplify the tricky traffic inside AI data centers. “Normally you have a bunch of GPUs, and then a layer of switches, and a layer of switches, and a layer of switches, and you have to traverse that tree” to communicate between two GPUs, Harris says. In a data center connected by Passage, Harris says, every GPU would have a high-speed connection to every other chip.
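
To see why that matters, compare the number of network hops separating two GPUs in each design. The sketch below contrasts a conventional switch tree with a direct optical mesh; the switch radix and GPU counts are illustrative assumptions, not Lightmatter’s actual figures.

```python
import math

# Worst-case hops between two GPUs: climbing a switch tree versus a
# direct all-to-all optical mesh. Numbers are illustrative assumptions.


def tree_hops(num_gpus: int, switch_radix: int = 64) -> int:
    """Switch layers traversed up to a common ancestor and back down."""
    levels = math.ceil(math.log(num_gpus, switch_radix))
    return 2 * levels


def mesh_hops(num_gpus: int) -> int:
    """Every GPU has a direct optical link to every other GPU."""
    return 1


for n in (1_000, 100_000, 1_000_000):
    print(f"{n:>9,} GPUs: switch tree ~{tree_hops(n)} hops, "
          f"optical mesh {mesh_hops(n)} hop")
```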

Lightmatter’s work on Passage is an example of how AI’s recent flourishing has inspired companies large and small to try to reinvent key hardware behind advances like OpenAI’s ChatGPT. Nvidia, the leading supplier of GPUs for AI projects, held its annual conference last month, where CEO Jensen Huang unveiled the company’s latest chip for training AI: a GPU called Blackwell. Nvidia will sell the GPU in a “superchip” consisting of two Blackwell GPUs and a conventional CPU, all connected using the company’s new high-speed communications technology called NVLink-C2C.

The chip industry is famous for finding ways to wring more computing power from chips without making them larger, but Nvidia chose to buck that trend. The Blackwell GPUs inside the company’s superchip are twice as powerful as their predecessors but are made by bolting two chips together, meaning they consume much more power. That trade-off, in addition to Nvidia’s efforts to glue its chips together with high-speed links, suggests that upgrades to other key components for AI supercomputers, like the one proposed by Lightmatter, could become more important.

Google DeepMind’s Latest AI Agent Learned to Play ‘Goat Simulator 3’

Goat Simulator 3 is a surreal video game in which players take domesticated ungulates on a series of implausible adventures, sometimes involving jetpacks.

That might seem an unlikely venue for the next big leap in artificial intelligence, but Google DeepMind today revealed an AI program capable of learning how to complete tasks in a number of games, including Goat Simulator 3.

Most impressively, when the program encounters a game for the first time, it can reliably perform tasks by adapting what it learned from playing other games. The program is called SIMA, for Scalable Instructable Multiworld Agent, and it builds upon recent AI advances that have seen large language models produce remarkably capable chatbots like ChatGPT.

“SIMA is greater than the sum of its parts,” says Frederic Besse, a research engineer at Google DeepMind who was involved with the project. “It is able to take advantage of the shared concepts in the game, to learn better skills and to learn to be better at carrying out instructions.”

Google DeepMind’s SIMA software tries its hand at Goat Simulator 3.

Courtesy of Google DeepMind

As Google, OpenAI, and others jostle to gain an edge in building on the recent generative AI boom, broadening out the kind of data that algorithms can learn from offers a route to more powerful capabilities.

DeepMind’s latest video game project hints at how AI systems like OpenAI’s ChatGPT and Google’s Gemini could soon do more than just chat and generate images or video, by taking control of computers and performing complex commands. That’s a dream being chased by both independent AI enthusiasts and big companies including Google DeepMind, whose CEO, Demis Hassabis, recently told WIRED that the company is “investing heavily in that direction.”

A New Way to Play

SIMA shows DeepMind putting a new twist on game-playing agents, an AI technology the company has pioneered in the past.

In 2013, before DeepMind was acquired by Google, the London-based startup showed how a technique called reinforcement learning, which involves training an algorithm with positive and negative feedback on its performance, could help computers play classic Atari video games. In 2016, as part of Google, DeepMind developed AlphaGo, a program that used the same approach to defeat a world champion of Go, an ancient board game that requires subtle and instinctive skill.
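
For readers unfamiliar with the technique, here is reinforcement learning in its simplest tabular form: an agent tries actions, receives positive or negative feedback as a reward signal, and gradually learns which action is best in each state. The toy “walk to the goal” environment below is invented for illustration; DeepMind’s Atari and Go systems paired this idea with deep neural networks rather than a lookup table.

```python
import random

# Tabular Q-learning on a toy corridor: reach state 4 from state 0.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left, step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise pick the best-known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        # Positive feedback for reaching the goal, a small penalty otherwise.
        reward = 1.0 if nxt == GOAL else -0.01
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# After training, the learned policy steps right toward the goal.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})
```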

For the SIMA project, the Google DeepMind team collaborated with several game studios to collect keyboard and mouse data from humans playing 10 different games with 3D environments, including No Man’s Sky, Teardown, Hydroneer, and Satisfactory. DeepMind later added descriptive labels to that data to associate the clicks and taps with the actions users took, for example whether they were a goat looking for its jetpack or a human character digging for gold.
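
That setup amounts to imitation learning: supervised training on pairs of (observation, instruction) labeled with the action a human took. The sketch below is a deliberately simplified stand-in with invented observations and actions; SIMA itself maps raw video frames and language instructions to keyboard and mouse outputs using neural networks that can generalize to unseen contexts.

```python
from collections import Counter

# Hypothetical labeled gameplay data: (observation, instruction) -> action.
dataset = [
    (("goat_near_jetpack", "find the jetpack"), "move_forward"),
    (("goat_near_jetpack", "find the jetpack"), "interact"),
    (("gold_vein_visible", "dig for gold"), "use_tool"),
    (("gold_vein_visible", "dig for gold"), "use_tool"),
]


def predict_action(observation: str, instruction: str) -> str:
    """Return the most common human action for this context; a stand-in
    for a learned policy that would generalize to unseen situations."""
    matches = [action for (obs, ins), action in dataset
               if obs == observation and ins == instruction]
    if not matches:
        return "noop"  # unseen context: a real agent must generalize
    return Counter(matches).most_common(1)[0][0]


print(predict_action("gold_vein_visible", "dig for gold"))  # -> use_tool
```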