Teleoperating a physical robot could become an important job in the future, according to Sanctuary AI, based in Vancouver, Canada. The company also believes that teleoperation could provide a way to train robots to perform tasks that are currently well out of their (mechanical) reach, and to imbue machines with the physical sense of the world that some argue is needed to unlock human-level artificial intelligence.
Industrial robots are powerful, precise, and stubbornly stupid. They lack the dexterity and responsiveness needed for delicate manipulation tasks. That’s partly why the use of robots in factories is still relatively limited, and why an army of human workers is still required to assemble all the fiddly bits that go into the guts of iPhones.
But such work comes naturally to humans, so why not forgo the complexity of trying to design an algorithm to do the job?
Here’s one of Sanctuary’s robots—the top half of a humanoid—doing a range of sophisticated manipulation tasks. Offscreen, a human wearing a VR headset and sensor-laden gloves is operating the robot remotely.
Sanctuary recently ran what it calls the first “real world” test of one of its robots, by having a humanoid like this one work in a store not far from the startup’s headquarters. The company believes that making it possible to do physical work remotely could help address the labor shortages that many companies are seeing today.
Some robots already get some remote assistance from humans when they get stuck, as I’ve written about. The limits of AI mean that robots working in restaurants, offices, and on the street as delivery mules are flummoxed by unusual situations. The difficulty of pulling off fully autonomous driving, for example, means that some firms are working to put remotely piloted trucks on the roads.
Sanctuary’s founders, Geordie Rose and Suzanne Gildert, previously ran Kindred, another robotic teleoperation company, which was acquired in 2020 by Ocado, a UK supermarket firm that uses automation extensively. In this video the pair talk about the company’s history and plans for the future.
The aim is ultimately to use data from humans teleoperating the robots to teach algorithms to do more tasks autonomously. Gildert, Sanctuary’s CTO, believes that achieving humanlike intelligence in machines will require them to interact with and learn from the physical world. (Sorry, ChatGPT.)
OpenAI, the company behind ChatGPT, is also taking an interest in teleoperated humanoids. It is leading a $23.5 million investment in 1X, a startup developing a human-like robot. “The OpenAI Startup Fund believes in the approach and impact that 1X can have on the future of work,” says Brad Lightcap, OpenAI’s COO and manager of the OpenAI Startup Fund.
The ALOHA teleoperation system. Courtesy of Tony Zhao/Stanford University
For humans to help robots with teleoperation, AI might also need to be developed to ease the collaboration between person and machine. Chelsea Finn, an assistant professor at Stanford University, recently shared details of a fascinating research project that involves using machine learning to allow cheap teleoperated robot arms to work smoothly and accurately. The technology may make it easier for humans to operate robots remotely in more situations.
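To make the idea of teaching robots from teleoperation data a little more concrete, here is a minimal, hypothetical sketch of behavior cloning, the simplest form of imitation learning, in Python. The observation and action sizes, the network, and the randomly generated “demonstrations” are illustrative assumptions only; this is not code from Sanctuary or the research project above.

```python
# Illustrative sketch only: behavior cloning on logged teleoperation data.
# Assumptions (not from the source): observations are flattened joint/gripper
# readings, actions are the human operator's commanded joint targets.
import torch
import torch.nn as nn

OBS_DIM = 64      # assumed size of one robot observation
ACTION_DIM = 14   # assumed size of one operator command (e.g., two 7-DoF arms)

# Stand-in for a logged dataset of (observation, operator_action) pairs.
observations = torch.randn(1000, OBS_DIM)
operator_actions = torch.randn(1000, ACTION_DIM)

# A small feedforward policy that maps an observation to an action.
policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACTION_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Behavior cloning: nudge the policy's output toward what the human did.
for epoch in range(10):
    predicted = policy(observations)
    loss = nn.functional.mse_loss(predicted, operator_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: imitation loss {loss.item():.4f}")
```

Real systems train on many hours of demonstrations and far richer inputs such as camera images, but the core loop, regressing the policy’s output toward the human operator’s actions, is the basic idea behind turning teleoperation logs into autonomy.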
I don’t think I’d much enjoy teleoperating a robot all day—especially if I knew that robot would someday turn around and kick me out the door. But it might make working from home a possibility for more people, and also make certain types of job more widely accessible. Alternatively, we may have just gotten a glimpse of a potentially dystopian future of the workplace.
This is an edition of WIRED’s Fast Forward newsletter, a weekly dispatch from the future by Will Knight, exploring AI advances and other technology set to change our lives.
Idaho senator Jim Risch, the top Republican on the Foreign Relations Committee—who also serves on the Intelligence Committee—says he’d be surprised if foreign adversaries didn’t mimic the digital pressure campaign that experts say caused the bank runs. “We see all kinds of input from foreign actors trying to do harm to the country, so it’s really an obvious avenue for somebody to try to do that,” Risch says.
Some experts think the threat is real. “The fear is not overblown,” Peter Warren Singer, strategist and senior fellow at New America, a Washington-based think tank, told WIRED via email. “Most cyber threat actors, whether criminals or states, don’t create new vulnerabilities, but notice and then take advantage of existing ones. And it is clear that both stock markets and social media are manipulatable. Add them together and you multiply the manipulation potential.”
In the aftermath of the GameStop meme-driven rally—which was partly fueled by a desire to wipe out hedge funds shorting the stock—experts warned the same techniques could be used to target banks. In a paper for the Carnegie Endowment, published in November 2021, Claudia Biancotti, a director at the Bank of Italy, and Paolo Ciocca, an Italian finance regulator, warned that financial institutions were vulnerable to similar market manipulation.
“Finance-focused virtual communities are growing in size and potential economic and social impact, as demonstrated by the role played by online groups of retail traders in the GameStop case,” they wrote. “Such communities are highly exposed to manipulation, and may represent a prime target for state and nonstate actors conducting malicious information operations.”
The government’s response to the Silicon Valley Bank collapse—depositors’ money was quickly protected—shows banks can be hardened against this kind of event, says Cristián Bravo Roman—an expert on AI, banking, and contagion risk at Western University in Ontario. “All the measures that were taken to restore trust in the banking system limit the ability of a hostile attacker,” he says.
Roman says federal officials now see, or at least should see, the real cyberthreat of mass digital hysteria clearly, and may strengthen provisions designed to protect smaller banks against runs. “It completely depends on what happens after this,” Roman says. “The truth is, the banking system is just as political as it is economic.”
Preventing the swell of online panic, whether real or fabricated, is far more complicated. Social media sites in the US can’t be easily compelled to remove content, and they are protected by Section 230 of the Communications Decency Act of 1996, which shields tech companies from liability for what others write on their platforms. While that provision is currently being challenged in the US Supreme Court, it’s unlikely lawmakers would want to limit what many see as free speech.
“I don’t think that social media can be regulated to censor talk about a bank’s financial condition unless there is deliberate manipulation or misinformation, just as that might be in any other means of communicating,” says Senator Richard Blumenthal, a Connecticut Democrat.
“I don’t think we should offer a systemic response to a localized problem,” says North Dakota Republican senator Kevin Cramer—although he adds that he wants to hear “all the arguments.”
“We need to be very cautious to not get in the way of speech,” Cramer says. “But when speech becomes designed specifically to short a market, for example, or to lead to an unnecessary run on the bank, we have to be reasonable about it.”
While some members of Congress are using the run on Silicon Valley Bank to revive conversations about the regulation of social media platforms, other lawmakers are, once again, looking to tech companies themselves for solutions. “We need to be better at discovering and exposing bots. We need to understand the source,” says Senator Angus King, a Maine Independent.
King, a member of the Senate Intelligence Committee, says Washington can’t solve all of Silicon Valley’s problems, especially when it comes to cleaning up bots. “That has to be them,” he says. “We can’t do that.”
Screening social media content to remove abuse or other banned material is one of the toughest jobs in tech, but also one of the most undervalued. Content moderators for TikTok and Meta in Germany have banded together to demand more recognition for workers who are employed to keep some of the worst content off social platforms, in a rare moment of coordinated pushback by tech workers across companies.
The combined group met in Berlin last week to demand that the two platforms provide higher pay, more psychological support, and the ability to unionize and organize. The workers say the job’s low pay and low prestige mean moderators are unfairly treated as low-skilled workers under German employment rules; one moderator who spoke to WIRED says that forced them to endure more than a year of immigration red tape to be able to stay in the country.
“We want to see recognition of moderation not as an easy job, but an extremely difficult, highly skilled job that actually requires a large amount of cultural and language expertise,” says Franziska Kuhles, who has worked as a content moderator for TikTok for four years. She is one of 11 members elected to represent workers at the company’s Berlin office as part of its works council. “It should be recognized as a real career, where people are given the respect that comes with that.”
Last week’s meeting marked the first time that moderators from different companies have formally met with each other in Germany to exchange experiences and collaborate on unified demands for workplace changes.
TikTok, Meta, and other platforms rely on moderators like Kuhles to ensure that violent, sexual, and illegal content is removed. Although algorithms can help filter some content, more sensitive and nuanced tasks fall to human moderators. Much of this work is outsourced to third-party companies around the world, and moderators have often complained of low wages and poor working conditions.
Germany, which is a hub for moderating content across Europe and the Middle East, has relatively progressive labor laws that allow the creation of elected works councils, or Betriebsräte, inside companies, legally recognized structures similar to but distinct from trade unions. Works councils must be consulted by employers over major company decisions and can have their members elected to company boards. TikTok workers in Germany formed a works council in 2022.
Hikmat El-Hammouri, regional organizer at Ver.di, a Berlin-based union that helped facilitate the meeting, calls the summit “the culmination of work by union organizers in the workplaces of social media companies to help these key online safety workers—content moderators—fight for the justice they deserve.” He hopes that TikTok and Meta workers teaming up can help bring new accountability to technology companies with workers in Germany.
TikTok, Meta, and Meta’s local moderation contractor did not respond to a request for comment.
Moderators from Kenya to India to the United States have often complained that their work is grueling, with demanding quotas and little time to make decisions on the content; many have reported suffering from post-traumatic stress disorder (PTSD) and psychological damage. In recognition of that, many companies offer some form of psychological counseling to moderation staff, but some workers say it is inadequate.
“Nobody’s defending CSAM,” says Barbora Bukovská, senior director for law and policy at Article 19, a digital rights group, referring to child sexual abuse material. “But the bill has the chance to violate privacy and legislate wild surveillance of private communication. How can that be conducive to democracy?”
The UK Home Office, the government department that is overseeing the bill’s development, did not supply an attributable response to a request for comment.
Children’s charities in the UK say that it’s disingenuous to portray the debate around the bill’s CSAM provisions as a black-and-white choice between privacy and safety. The technical challenges posed by the bill are not insurmountable, they say, and forcing the world’s biggest tech companies to invest in solutions makes it more likely the problems will be solved.
“Experts have demonstrated that it’s possible to tackle child abuse material and grooming in end-to-end encrypted environments,” says Richard Collard, associate head of child safety online policy at the British children’s charity NSPCC, pointing to a July paper published by two senior technical directors at GCHQ, the UK’s cyber intelligence agency, as an example.
Companies have started selling off-the-shelf products that claim the same. In February, London-based SafeToNet launched its SafeToWatch product that, it says, can identify and block child abuse material from ever being uploaded to messengers like WhatsApp. “It sits at device level, so it’s not affected by encryption,” says the company’s chief operating officer, Tom Farrell, who compares it to the autofocus feature in a phone camera. “Autofocus doesn’t allow you to take your image until it’s in focus. This wouldn’t allow you to take it before it proved that it was safe.”
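For readers wondering what “sits at device level” can mean in practice, here is a generic, hypothetical sketch of pre-upload screening in Python. It is not SafeToNet’s actual method, which the company does not detail here; the hash-list check below is simply one illustrative way an app can vet media on the phone before it is encrypted and sent.

```python
# Illustrative sketch only: checking media on the device before it reaches an
# end-to-end encrypted messenger, so no encryption needs to be broken in transit.
import hashlib

# Assumed: a set of hashes of known prohibited images, distributed to the device.
# The value below is a placeholder, not a real entry.
BLOCKED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_allowed_to_send(image_bytes: bytes) -> bool:
    """Return True if the image does not match any known prohibited hash."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest not in BLOCKED_HASHES

def send_image(image_bytes: bytes) -> None:
    # Mirrors the autofocus analogy: the app refuses to send until the
    # content passes the on-device check.
    if not is_allowed_to_send(image_bytes):
        print("Blocked: image matches a known prohibited hash.")
        return
    print("Image passed the on-device check; encrypting and sending...")

send_image(b"example image bytes")
```

Deployed systems generally rely on perceptual hashing or on-device classifiers rather than exact cryptographic hashes, which a single changed pixel would defeat; the point of the sketch is only that the check happens before encryption, not how the matching works.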
WhatsApp head Will Cathcart has called for private messaging to be excluded entirely from the Online Safety Bill. He says that his platform already reports more CSAM to the National Center for Missing and Exploited Children (NCMEC) than Apple, Google, Microsoft, Twitter, and TikTok combined.
Supporters of the bill disagree. “There’s a problem with child abuse in end-to-end encrypted environments,” says Michael Tunks, head of policy and public affairs at the British nonprofit Internet Watch Foundation, which has license to search the internet for CSAM.
WhatsApp might be doing better than some other platforms at reporting CSAM, but it doesn’t compare favorably with other Meta services that are not encrypted. Although Instagram and WhatsApp have roughly the same number of users worldwide, according to data platform Statista, the NCMEC says Instagram made 3 million reports versus WhatsApp’s 1.3 million.
“The bill does not seek to undermine end-to-end encryption in any way,” says Tunks, who supports the bill in its current form, believing it puts the onus on companies to tackle the internet’s child abuse problem. “The online safety bill is very clear that scanning is specifically about CSAM and also terrorism,” he adds. “The government has been pretty clear they are not seeking to repurpose this for anything else.”
Nearly four hours into Tesla’s marathon Investor Day, someone in the audience tried again to bring Elon Musk, the Tesla (and Twitter and SpaceX) CEO, back to the present day. From a stage at the Gigafactory in Austin, Texas, Musk had announced an ambitious “Master Plan 3” to save the world. For $10 trillion in manufacturing investment, Musk said, the world could move wholesale to a renewable electricity grid, powering electric cars, planes, and ships.
“Earth can and will move to a sustainable energy economy, and will do so in your lifetime,” Musk proclaimed. More details will be revealed in a forthcoming white paper, he said. But the presentation was short on specifics on the one part of the electric transition that is in Tesla’s gift: the next-generation vehicle it has been teasing for years, promising something that is more affordable, more efficient, and more efficiently built than anything in its current lineup. The vehicle, or group of vehicles, will be crucial to hitting Tesla’s goal of selling 20 million vehicles in 2030; it sold 1.3 million in 2022.
What, an investor asked the company’s executives, would that vehicle be? Musk declined to share. “We’d be jumping the gun if we answered your question,” he said, explaining that the company would hold a separate event to roll out the mystery vehicle somewhere down the line. Slides shown during the presentation included only images of car-shaped forms under gray sheets.
Instead, 17 company executives shared some tidbits on the vehicle during a round robin of presentations focusing on everything from design to supply chains to manufacturing to environmental impacts and legal affairs.
The next-generation vehicle won’t be just one car, but an approach to building vehicles focusing on “affordability and desirability,” said Lars Moravy, Tesla’s vice president of vehicle engineering. It will be built at a new factory near Monterrey, Mexico, which was announced at the event Wednesday and will be Tesla’s sixth battery and electric vehicle plant. Executives said the next-gen vehicle would have a 40 percent smaller manufacturing footprint and would cut production costs by 50 percent.
Wall Street appears to have expected a bit more detail. By Thursday morning, the company’s stock price was down 5 percent.
“The much-anticipated theme of Master Plan 3 left me with more questions than answers,” Gene Munster, managing partner at Deepwater Asset Management, said in a note to investors.
“Musk and company failed to put the cherry on top—an actual look at a lower-priced Tesla, if only just conceptually,” Jessica Caldwell, executive director of insights at Edmunds, an auto industry research firm, said in an emailed commentary.
A truly affordable electric car has long been a target for the company. Tesla’s first Master Plan—published in 2006, before Musk was CEO—was simple but, at the time, radical: Build an electric sports car, and use that money to build cheaper and cheaper electric cars. The company touted its second electric sedan, the Model 3, as the battery-powered ride for the masses, but the car only sold at its target price of $35,000 for a limited time. Its base model now sells for $43,000. In the meantime, legacy automakers inspired by Tesla’s vision have stepped into the gap: The Chevrolet Bolt today starts at $26,500, and the Nissan Leaf at $28,000.
A second Master Plan, published in 2016, promised self-driving cars and shared robotaxis, and it promoted the carmaker’s (now struggling) solar panel business. The robots on wheels haven’t shown up yet—though Wednesday’s events did include a cameo from Optimus, a still-clunky prototype of a humanoid robot also being built by Tesla.
Musk rarely meets his self-imposed deadlines, but he’s always excelled at marshaling others to his cause with grand pronouncements and sprawling visions. Now he’s looking beyond cars, and even robots. “I really want today to be not only about investors who own Tesla stock, but anyone who is an investor in Earth,” he said.