For Smarter Robots, Just Add Humans

Teleoperating a physical robot could become an important job in the future, according to Sanctuary AI, based in Vancouver, Canada. The company also believes this could provide a way to train robots to perform tasks that are currently well out of their (mechanical) reach, and to imbue machines with the physical sense of the world that some argue is needed to unlock human-level artificial intelligence.

Industrial robots are powerful and precise, but mostly stubbornly stupid. They lack the dexterity and responsiveness needed to perform delicate manipulation tasks. That’s partly why the use of robots in factories is still relatively limited, and why an army of human workers is still needed to assemble all the fiddly bits into the guts of iPhones.

But if such work is trivial for humans, why not forgo the complexity of designing an algorithm to do the job?

Here’s one of Sanctuary’s robots—the top half of a humanoid—doing a range of sophisticated manipulation tasks. Offscreen, a human wearing a VR headset and sensor-laden gloves is operating the robot remotely.

Sanctuary recently ran what it calls the first “real world” test of one of its robots, by having a humanoid like this one work in a store not far from the startup’s headquarters. The company believes that making it possible to do physical work remotely could help address the labor shortages that many companies are seeing today.

Some robots already get some remote assistance from humans when they get stuck, as I’ve written about. The limits of AI mean that robots working in restaurants, offices, and on the street as delivery mules are flummoxed by unusual situations. The difficulty of pulling off fully autonomous driving, for example, means that some firms are working to put remotely piloted trucks on the roads. 

Sanctuary’s founders, Geordie Rose and Suzanne Gildert, ran Kindred, another company doing robotic teleoperation, which was acquired in 2020 by Ocado, a UK supermarket firm that uses automation extensively. In this video the pair talk about the company’s history and plans for the future.

The aim is ultimately to use data from humans teleoperating the robots to teach algorithms to do more tasks autonomously. Gildert, Sanctuary’s CTO, believes that achieving humanlike intelligence in machines will require them to interact with and learn from the physical world. (Sorry, ChatGPT.)

OpenAI, the company behind ChatGPT, is also taking an interest in teleoperated humanoids. It is leading a $23.5 million investment in 1X, a startup developing a human-like robot. “The OpenAI Startup Fund believes in the approach and impact that 1X can have on the future of work,” says Brad Lightcap, OpenAI’s COO and manager of the OpenAI Startup Fund.

The ALOHA teleoperation system. Courtesy of Tony Zhao/Stanford University

For humans to help robots with teleoperation, AI might also need to be developed to ease the collaboration between person and machine. Chelsea Finn, an assistant professor at Stanford University, recently shared details of a fascinating research project that uses machine learning to let cheap teleoperated robot arms work smoothly and accurately. The technology may make it easier for humans to operate robots remotely in more situations.

I don’t think I’d much enjoy teleoperating a robot all day—especially if I knew that robot would someday turn around and kick me out the door. But it might make working from home a possibility for more people, and also make certain types of jobs more widely accessible. Alternatively, we may have just gotten a glimpse of a potentially dystopian future of the workplace.

This is an edition of WIRED’s Fast Forward newsletter, a weekly dispatch from the future by Will Knight, exploring AI advances and other technology set to change our lives.

Roblox Is Bringing Generative AI to Its Gaming Universe

Roblox is testing a tool that could accelerate the process of building and altering in-game objects by getting artificial intelligence to write the code. The tool lets anyone playing Roblox create items such as buildings, terrain, and avatars, change the appearance and behavior of those things, and give them new interactive properties by typing what they want to achieve in natural language rather than complex code.

“Say I need a gleaming metal sword for an experience I’m creating,” says Daniel Sturman, CTO at Roblox. “It should be really easy to create that.”

Sturman showed WIRED the new Roblox tool generating the code needed to create objects and modify their appearance and behavior. In the demo, typing “red paint, reflective metal finish” or “purple foil, crushed pattern, reflective” into a chat window changed the appearance of a sports car in the game. It was also possible to add new game behavior by entering “Blink the headlights every time the user presses B” and “Make it float.”

Technology dubbed generative AI has captured attention and investment over the past year by showing that algorithms can produce seemingly coherent text and aesthetically pleasing images when given a short text prompt. The technology relies on AI models trained with lots of data, in the form of text or images scraped from the web, and is also at work in the viral chatbot ChatGPT. Some AI researchers are experimenting with generative techniques for video and 3D content, but this work is mostly at an early stage.

Producing computer code was one of the first practical applications for generative AI, and Microsoft and Amazon already sell tools that can autowrite useful blocks of software. But Roblox’s announcement shows how companies can adapt code-writing capabilities to create their own generative AI products aimed at people who may not be experienced coders.

Sturman says the approach holds promise for Roblox because so many of the games on its platform are made by individuals or small teams. “We have everything on our platform from studios, down to 12-year-olds who have had an incredible idea come out of a summer camp,” Sturman says.

Roblox says the code-making AI it uses relies on a combination of in-house technology and capabilities from outside, although it is not disclosing where from. Currently the company is only training its AI using game content that is in the public domain. Sturman says Roblox will tread carefully to ensure that users do not object to having their creations fed into generative AI algorithms.

Microsoft was the first to harness the latest generation of AI for coding, through a deal with OpenAI, which adapted its general-purpose language technology, GPT, to power a code generator called Codex. Microsoft enhanced Codex’s coding abilities by feeding it more data from GitHub, a popular repository for software development, and has made it available through its Visual Studio programming application.

Visual Studio and other AI-enabled programming environments typically write code in response to a developer’s comment or when the user starts typing. The startup Replit, which makes a popular online programming tool, recently launched a chatbot-like interface that will not only write code but answer programming questions. 

What Chatbot Bloopers Reveal About the Future of AI

What a difference seven days makes in the world of generative AI.

Last week Satya Nadella, Microsoft’s CEO, was gleefully telling the world that the new AI-infused Bing search engine would “make Google dance” by challenging its long-standing dominance in web search. 

The new Bing uses a little thing called ChatGPT—you may have heard of it—which represents a significant leap in computers’ ability to handle language. Thanks to advances in machine learning, it essentially figured out for itself how to answer all kinds of questions by gobbling up trillions of lines of text, much of it scraped from the web. 

Google did, in fact, dance to Satya’s tune by announcing Bard, its answer to ChatGPT, and promising to use the technology in its own search results. Baidu, China’s biggest search engine, said it was working on similar technology.

But Nadella might want to watch where his company’s fancy footwork is taking it.

In demos Microsoft gave last week, Bing seemed capable of using ChatGPT to offer complex and comprehensive answers to queries. It came up with an itinerary for a trip to Mexico City, generated financial summaries, offered product recommendations that collated information from numerous reviews, and offered advice on whether an item of furniture would fit into a minivan by comparing dimensions posted online. 

WIRED had some time during the launch to put Bing to the test, and while it seemed skilled at answering many types of questions, it was decidedly glitchy and even unsure of its own name. And as one keen-eyed pundit noticed, some of the results that Microsoft showed off were less impressive than they first seemed. Bing appeared to make up some information on the travel itinerary it generated, and it left out some details that no person would be likely to omit. The search engine also mixed up Gap’s financial results by mistaking gross margin for unadjusted gross margin—a serious error for anyone relying on the bot to perform what might seem the simple task of summarizing the numbers. 

More problems have surfaced this week as the new Bing has been made available to more beta testers. These include the bot arguing with a user about what year it is and having an existential crisis when pushed to prove its own sentience. Google’s market cap dropped by a staggering $100 billion after someone noticed errors in the answers Bard generated in the company’s demo video.

Why are these tech titans making such blunders? It has to do with the weird way that ChatGPT and similar AI models really work—and the extraordinary hype of the current moment.

What’s confusing and misleading about ChatGPT and similar models is that they answer questions by making highly educated guesses. ChatGPT generates what it thinks should follow your question based on statistical representations of characters, words, and paragraphs. The startup behind the chatbot, OpenAI, honed that core mechanism to provide more satisfying answers by having humans provide positive feedback whenever the model generates answers that seem correct.
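That guessing process can be sketched with a toy model. Real systems like ChatGPT use enormous neural networks trained on web-scale text, but even a tiny bigram counter (a hypothetical illustration, not how OpenAI actually builds its models) shows the core idea: continue a piece of text by sampling statistically likely next words.

```python
import random
from collections import Counter, defaultdict

# A toy next-word predictor: count which word tends to follow each word
# in a tiny corpus, then generate text by sampling from those counts.
corpus = "the cat sat on the mat and the cat slept on the sofa".split()

follow_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow_counts[current][nxt] += 1

def generate(start, length, rng=random.Random(0)):
    """Extend `start` by sampling up to `length` likely next words."""
    word, out = start, [start]
    for _ in range(length):
        options = follow_counts.get(word)
        if not options:  # dead end: this word never had a successor
            break
        words = list(options)
        weights = [options[w] for w in words]
        word = rng.choices(words, weights=weights)[0]
        out.append(word)
    return " ".join(out)

print(generate("the", 5))
```

The output reads vaguely sentence-like without the model "knowing" anything, which is the heart of the confusion: scale that statistical trick up by many orders of magnitude and the results can look like understanding.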

ChatGPT can be impressive and entertaining, because that process can produce the illusion of understanding, which can work well for some use cases. But the same process will “hallucinate” untrue information, an issue that may be one of the most important challenges in tech right now. 

The intense hype and expectation swirling around ChatGPT and similar bots enhances the danger. When well-funded startups, some of the world’s most valuable companies, and the most famous leaders in tech all say chatbots are the next big thing in search, many people will take it as gospel—spurring those who started the chatter to double down with more predictions of AI omniscience. And chatbots aren’t the only ones that can be led astray by pattern matching without fact-checking.

Audiobook Narrators Fear Spotify Used Their Voices to Train AI

Starling believes Findaway has misused the material that authors and narrators entrusted it with. “This is immoral and illegal,” Starling told WIRED. “Rights holders have the copyrights for the audiobook production only, but no claim on the narrator’s voice.” She’s pausing the release of three upcoming titles she had planned to distribute via Findaway.

Interest in automating the art of book narration has grown in recent years for business and technology reasons. Audiobook revenue has continued to grow even as book and ebook revenue has dipped, and synthetic voice technology has improved dramatically. A range of tools have cropped up that let anyone clone voices for synthetic narration with a click, but to advance them, companies still need vast troves of data.

Contracts that require voice actors to let tech companies train voice-generating AI models on their work have become increasingly common across industries like entertainment and gaming, says Tim Friedlander, president of the US-based National Association of Voice Actors. Adobe, maker of Photoshop and other image software, recently began training its own AI algorithms on visual creatives’ work unless they opted out.

“The voice is how voice actors make a living,” Friedlander added, “and this is literally taking the words out of our mouths without our consent.”

Google began offering free synthetic narration for books in 2020. When Apple announced its own set of digital audiobook narrators in January, the company said it hoped to eliminate the “cost and complexity” that producing a human-narrated audiobook can represent for small publishers and independent authors. The company’s Books app lists titles with AI narration as “narrated by digital voice based on a human narrator.”

Apple has used synthetic voice technology for years, including for the Siri virtual assistant, driving directions, and accessibility features. But some authors and narrators suspect that audio from their ebooks helped the company hone its technology to the complex task of narrating books. The length of audiobooks, the complexity of the material, and the impressive skills of talented narrators make voicing books arguably the toughest challenge for synthetic voice technology.

Applying synthetic voices to books also brings new business and cultural challenges. “Most of the companies developing these AI technologies come from the technology sector, rather than the entertainment sector,” says SAG-AFTRA’s Love. “They lack the relationships, history of protections, and reliance on approval rights voice actors have come to expect.” 

Several authors told WIRED that Findaway has emerged as a reliable distributor, offering lucrative deals to list audiobooks across several platforms. But they also say that Findaway frequently prompts people to agree to updated agreements, usually with minor changes, when they log in to their accounts. The company added the machine learning clause to its distribution agreements in 2019.

Many suspect they signed off on the machine learning clause without realizing it. “It’s on me for not initially noticing the addition and what it fully meant,” says Laura VanArendonk Baugh, an author based in Indianapolis, Indiana. “But the placement was kinda sneaky, too.”

The Chatbot Search Wars Have Begun

This week the world’s largest search companies leaped into a contest to harness a powerful new breed of “generative AI” algorithms.

Most notably Microsoft announced that it is rewiring Bing, which lags some way behind Google in terms of popularity, to use ChatGPT—the insanely popular and often surprisingly capable chatbot made by the AI startup OpenAI. 

Unless you’ve been living in outer space for the past few months, you know that people are losing their minds over ChatGPT’s ability to answer questions in strikingly coherent and seemingly insightful and creative ways. Want to understand quantum computing? Need a recipe for whatever’s in the fridge? Can’t be bothered to write that high school essay? ChatGPT has your back.

The all-new Bing is similarly chatty. Demos that the company gave at its headquarters in Redmond, and a quick test drive by WIRED’s Aarian Marshall, who attended the event, show that it can effortlessly generate a vacation itinerary, summarize the key points of product reviews, and answer tricky questions, like whether an item of furniture will fit in a particular car. It’s a long way from Microsoft’s hapless and hopeless Office assistant Clippy, which some readers may recall bothering them every time they created a new document.

Not to be outdone by Bing’s AI reboot, Google said this week that it would release a competitor to ChatGPT called Bard. (The name was chosen to reflect the creative nature of the algorithm underneath, one Googler tells me.) The company, like Microsoft, showed how the underlying technology could answer some web searches and said it would start making the AI behind the chatbot available to developers. Google is apparently unsettled by the idea of being upstaged in search, which provides the majority of parent Alphabet’s revenue. And its AI researchers may be understandably a little miffed since they actually developed the machine learning algorithm at the heart of ChatGPT, known as a transformer, as well as a key technique used to make AI imagery, known as diffusion modeling.

Last but by no means least in the new AI search wars is Baidu, China’s biggest search company. It joined the fray by announcing another ChatGPT competitor, Wenxin Yiyan (文心一言), or “Ernie Bot” in English. Baidu says it will release the bot after completing internal testing this March.

These new search bots are examples of generative AI, a trend fueled by algorithms that can generate text, craft computer code, and dream up images in response to a prompt. The tech industry might be experiencing widespread layoffs, but interest in generative AI is booming, and VCs are imagining whole industries being rebuilt around this new creative streak in AI.

Generative language tools like ChatGPT will surely change what it means to search the web, shaking up an industry worth hundreds of billions of dollars annually, by making it easier to dig up useful information and advice. A web search may become less about clicking links and exploring sites and more about leaning back and taking a chatbot’s word for it. Just as importantly, the underlying language technology could transform many other tasks too, perhaps leading to email programs that write sales pitches or spreadsheets that dig up and summarize data for you. To many users, ChatGPT also seems to signal a shift in AI’s ability to understand and communicate with us. 

But there is, of course, a catch.