To Build a Better AI Supercomputer, Let There Be Light

GlobalFoundries, a company that makes chips for others, including AMD and General Motors, previously announced a partnership with Lightmatter. Harris says his company is “working with the largest semiconductor companies in the world as well as the hyperscalers,” referring to the largest cloud companies like Microsoft, Amazon, and Google.

If Lightmatter or another company can reinvent the wiring of giant AI projects, a key bottleneck in the development of smarter algorithms might fall away. The use of more computation was fundamental to the advances that led to ChatGPT, and many AI researchers see further scaling-up of hardware as crucial to future advances in the field—and to hopes of ever reaching the vaguely specified goal of artificial general intelligence, or AGI, meaning programs that can match or exceed biological intelligence in every way.

Linking a million chips together with light might allow for algorithms several generations beyond today’s cutting edge, says Lightmatter’s CEO Nick Harris. “Passage is going to enable AGI algorithms,” he confidently suggests.

The large data centers that are needed to train giant AI algorithms typically consist of racks filled with tens of thousands of computers running specialized silicon chips and a spaghetti of mostly electrical connections between them. Maintaining training runs for AI across so many systems—all connected by wires and switches—is a huge engineering undertaking. Converting between electronic and optical signals also places fundamental limits on chips’ abilities to run computations as one.

Lightmatter’s approach is designed to simplify the tricky traffic inside AI data centers. “Normally you have a bunch of GPUs, and then a layer of switches, and a layer of switches, and a layer of switches, and you have to traverse that tree” to communicate between two GPUs, Harris says. In a data center connected by Passage, Harris says, every GPU would have a high-speed connection to every other chip.
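
As a rough illustration of the difference, the Python sketch below compares worst-case hop counts between two GPUs in a simple multi-tier switch tree against a direct all-to-all interconnect. The 64-port switch radix and the tree arithmetic are illustrative assumptions, not Lightmatter's specifications.

```python
import math

# Back-of-envelope comparison of GPU-to-GPU "hops" in a conventional
# multi-tier switch fabric versus an all-to-all optical interconnect.
# The switch radix and tree layout are illustrative assumptions, not
# Lightmatter's specifications.

def tree_hops(num_gpus: int, radix: int = 64) -> int:
    """Worst-case switch hops between two GPUs in a simple switch tree."""
    tiers = math.ceil(math.log(num_gpus, radix))  # switch layers required
    return 2 * tiers - 1                          # up to a common switch, then back down

def all_to_all_hops() -> int:
    """With a direct optical link between every pair, one hop suffices."""
    return 1

for n in (1_024, 65_536, 1_000_000):
    print(f"{n:>9,} GPUs: tree ~{tree_hops(n)} hops, optical all-to-all = {all_to_all_hops()} hop")
```

Fewer hops per message means fewer electrical-to-optical conversions and less switching latency on every GPU-to-GPU exchange, which is where an all-to-all fabric would pay off at scale.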

Lightmatter’s work on Passage is an example of how AI’s recent flourishing has inspired companies large and small to try to reinvent key hardware behind advances like OpenAI’s ChatGPT. Nvidia, the leading supplier of GPUs for AI projects, held its annual conference last month, where CEO Jensen Huang unveiled the company’s latest chip for training AI: a GPU called Blackwell. Nvidia will sell the GPU in a “superchip” consisting of two Blackwell GPUs and a conventional CPU processor, all connected using the company’s new high-speed communications technology called NVLink-C2C.

The chip industry is famous for finding ways to wring more computing power from chips without making them larger, but Nvidia chose to buck that trend. The Blackwell GPUs inside the company’s superchip are twice as powerful as their predecessors but are made by bolting two chips together, meaning they consume much more power. That trade-off, in addition to Nvidia’s efforts to glue its chips together with high-speed links, suggests that upgrades to other key components for AI supercomputers, like the one proposed by Lightmatter, could become more important.

Craig Wright Is Not Bitcoin Creator Satoshi Nakamoto, Judge Declares

A judge in the UK High Court has declared that Australian computer scientist Craig Wright is not Satoshi Nakamoto, the creator of Bitcoin, marking the end of a years-long debate.

“The evidence is overwhelming,” said Honourable Mr. Justice James Mellor, delivering a surprise ruling at the close of the trial. “Dr. Wright is not the author of the Bitcoin white paper. Dr. Wright is not the person that operated under the pseudonym Satoshi Nakamoto. Dr. Wright is not the person that created the Bitcoin system. Nor is Dr. Wright the author of the Bitcoin software,” he said.

The ruling brings to a close a six-week trial, in which the Crypto Open Patent Alliance, a nonprofit consortium of crypto companies, asked the court to declare that Wright is not Satoshi on the basis that he had allegedly fabricated his evidence and contorted his story repeatedly as new inconsistencies came to light. “After all the evidence in this remarkable trial, it is clear beyond doubt that Craig Wright is not Satoshi Nakamoto,” claimed Jonathan Hough, legal counsel for COPA, as he began his closing submissions on Tuesday. “Wright has lied, and lied, and lied.”

In the last five years, Wright has used his claim to be the creator of Bitcoin to bring multiple lawsuits of his own against developers and other parties he has accused of violating his intellectual property rights. COPA is seeking an injunction that would prevent Wright from further brandishing the claim. “We are seeking to enjoin Dr. Wright from ever claiming to be Satoshi Nakamoto again and in doing so avoid further litigation terror campaigns,” says a COPA spokesperson, who asked to remain nameless for fear of legal retaliation from Wright.

The parties will have to wait a month or more for a formal judgement to be published, detailing the specific findings and forms of relief Wright will be required to submit to. The judgement will “be ready when it’s ready and not before,” said Mellor.

Until the snap ruling, the trial appeared as if it would end less with a bang than a whimper. The courtroom, packed out for the opening week, was by the end only half-full. One onlooker, who had in the waiting area introduced himself as Satoshi Nakamoto, nodded off to sleep in the public gallery, chin resting on chest. Not even Wright was in attendance.

This story is developing; please check back for updates.

Google DeepMind’s Latest AI Agent Learned to Play ‘Goat Simulator 3’

Goat Simulator 3 is a surreal video game in which players take domesticated ungulates on a series of implausible adventures, sometimes involving jetpacks.

That might seem an unlikely venue for the next big leap in artificial intelligence, but Google DeepMind today revealed an AI program capable of learning how to complete tasks in a number of games, including Goat Simulator 3.

Most impressively, when the program encounters a game for the first time, it can reliably perform tasks by adapting what it learned from playing other games. The program is called SIMA, for Scalable Instructable Multiworld Agent, and it builds upon recent AI advances that have seen large language models produce remarkably capable chatbots like ChatGPT.

“SIMA is greater than the sum of its parts,” says Frederic Besse, a research engineer at Google DeepMind who was involved with the project. “It is able to take advantage of the shared concepts in the game, to learn better skills and to learn to be better at carrying out instructions.”

Google DeepMind’s SIMA software tries its hand at Goat Simulator 3. Courtesy of Google DeepMind

As Google, OpenAI, and others jostle to gain an edge in building on the recent generative AI boom, broadening out the kind of data that algorithms can learn from offers a route to more powerful capabilities.

DeepMind’s latest video game project hints at how AI systems like OpenAI’s ChatGPT and Google’s Gemini could soon do more than just chat and generate images or video, by taking control of computers and performing complex commands. That’s a dream being chased by both independent AI enthusiasts and big companies including Google DeepMind, whose CEO, Demis Hassabis, recently told WIRED the company is “investing heavily in that direction.”

A New Way to Play

SIMA shows DeepMind putting a new twist on game-playing agents, an AI technology the company has pioneered in the past.

In 2013, before DeepMind was acquired by Google, the London-based startup showed how a technique called reinforcement learning, which involves training an algorithm with positive and negative feedback on its performance, could help computers play classic Atari video games. In 2016, as part of Google, DeepMind developed AlphaGo, a program that used the same approach to defeat a world champion of Go, an ancient board game that requires subtle and instinctive skill.
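
To make that feedback loop concrete, here is a minimal tabular Q-learning sketch: an agent on a ten-cell line learns, from reward alone, to walk to the goal. It illustrates the idea of learning from positive and negative feedback; DeepMind's Atari work used deep neural networks (DQN) rather than a lookup table like this.

```python
import random

# Tabular Q-learning on a toy task: an agent on a 10-cell line must reach
# the rightmost cell. Reward is +1 at the goal and 0 elsewhere, so the
# only feedback the agent gets is whether its choices eventually paid off.
N_STATES, GOAL = 10, 9
ACTIONS = (-1, +1)                       # step left, step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2    # learning rate, discount, exploration

def greedy(s):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(s, a)] == best])

for _ in range(300):                     # training episodes
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        # Nudge the estimate toward the reward plus discounted future value.
        q[(s, a)] += alpha * (reward + gamma * max(q[(s_next, b)] for b in ACTIONS) - q[(s, a)])
        s = s_next

print([greedy(s) for s in range(GOAL)])  # should settle on +1 (move right) everywhere
```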

For the SIMA project, the Google DeepMind team collaborated with several game studios to collect keyboard and mouse data from humans playing 10 different games with 3D environments, including No Man’s Sky, Teardown, Hydroneer, and Satisfactory. DeepMind later added descriptive labels to that data to associate the clicks and taps with the actions users took, for example whether they were a goat looking for its jetpack or a human character digging for gold.
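
The result is, in effect, a dataset of gameplay demonstrations paired with text. One record might look something like the hypothetical sketch below; the field names are illustrative assumptions, since DeepMind has not published SIMA's data schema.

```python
from dataclasses import dataclass, field

# A hypothetical shape for one labeled demonstration record, showing how
# raw keyboard-and-mouse traces could be paired with a text description.
# Field names are illustrative assumptions, not DeepMind's actual schema.
@dataclass
class Demonstration:
    game: str                                              # e.g. "Goat Simulator 3"
    frames: list[bytes] = field(default_factory=list)      # captured video frames
    inputs: list[tuple[float, str]] = field(default_factory=list)  # (time, event)
    instruction: str = ""                                  # label added afterward

demo = Demonstration(
    game="Goat Simulator 3",
    inputs=[(0.00, "key W down"), (0.45, "mouse move +12,-3"), (1.10, "key W up")],
    instruction="find your jetpack",
)
print(demo.instruction, "->", len(demo.inputs), "input events")
```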

The Quest to Give AI Chatbots a Hand—and an Arm

Peter Chen, CEO of the robot software company Covariant, sits in front of a chatbot interface resembling the one used to communicate with ChatGPT. “Show me the tote in front of you,” he types. In reply, a video feed appears, revealing a robot arm over a bin containing various items—a pair of socks, a tube of chips, and an apple among them.

The chatbot can discuss the items it sees—but also manipulate them. When WIRED suggests Chen ask it to grab a piece of fruit, the arm reaches down, gently grasps the apple, and then moves it to another bin nearby.

This hands-on chatbot is a step toward giving robots the kind of general and flexible capabilities exhibited by programs like ChatGPT. The hope is that AI could finally crack the long-standing problem of programming robots to do more than a narrow set of chores.

“It’s not at all controversial at this point to say that foundation models are the future of robotics,” Chen says, using a term for large-scale, general-purpose machine-learning models developed for a particular domain. The handy chatbot he showed me is powered by a model developed by Covariant called RFM-1, for Robot Foundation Model. Like those behind ChatGPT, Google’s Gemini, and other chatbots, it has been trained with large amounts of text, but it has also been fed video, hardware control data, and motion data from tens of millions of examples of robot movements gathered from robots laboring in the physical world.

Including that extra data produces a model that is fluent not only in language but also in action, and that is able to connect the two. RFM-1 can not only chat and control a robot arm but also generate videos showing robots doing different chores. When prompted, RFM-1 will show how a robot should grab an object from a cluttered bin. “It can take in all of these different modalities that matter to robotics, and it can also output any of them,” says Chen. “It’s a little bit mind-blowing.”
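
What “any modality in, any modality out” might look like as an interface is sketched below. The class and method names are hypothetical, since Covariant has not published a public API for RFM-1.

```python
from typing import Union

# Hypothetical interface for a multimodal robot foundation model.
# Names (RobotFoundationModel, generate) are illustrative assumptions;
# Covariant has not published a public RFM-1 API.
Text = str
Video = bytes                 # an encoded video clip
MotorCommands = list[float]   # joint or gripper targets
Modality = Union[Text, Video, MotorCommands]

class RobotFoundationModel:
    def generate(self, context: list[Modality], want: type) -> Modality:
        """Condition on any mix of modalities and emit the requested kind."""
        raise NotImplementedError  # placeholder: no public implementation exists

# Usage sketch: text (plus a camera frame) in, motor commands out.
# model = RobotFoundationModel()
# commands = model.generate(["pick the apple from the bin", camera_frame],
#                           want=MotorCommands)
```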

Video generated by the RFM-1 AI model. Courtesy of Covariant

The model has also shown it can learn to control similar hardware not in its training data. With further training, this might even mean that the same general model could operate a humanoid robot, says Pieter Abbeel, cofounder and chief scientist of Covariant, who has pioneered robot learning. In 2010 he led a project that trained a robot to fold towels—albeit slowly—and he also worked at OpenAI before it stopped doing robot research.

Covariant, founded in 2017, currently sells software that uses machine learning to let robot arms pick items out of bins in warehouses, but those systems are usually limited to the tasks they’ve been trained for. Abbeel says that models like RFM-1 could allow robots to turn their grippers to new tasks much more fluently. He compares Covariant’s strategy to how Tesla uses data from cars it has sold to train its self-driving algorithms. “It’s kind of the same thing here that we’re playing out,” he says.

Abbeel and his Covariant colleagues are far from the only roboticists hoping that the capabilities of the large language models behind ChatGPT and similar programs might bring about a revolution in robotics. Projects like RFM-1 have shown promising early results. But how much data may be required to train models that give robots much more general abilities—and how to gather it—remains an open question.

5 Years After San Francisco Banned Face Recognition, Voters Ask for More Surveillance

San Francisco made history in 2019 when its Board of Supervisors voted to ban city agencies including the police department from using face recognition. About two dozen other US cities have since followed suit. But on Tuesday, San Francisco voters appeared to turn against the idea of restricting police technology, backing a ballot proposition that will make it easier for city police to deploy drones and other surveillance tools.

Proposition E passed with 60 percent of the vote and was backed by San Francisco mayor London Breed. It gives the San Francisco Police Department new freedom to install public security cameras and deploy drones without oversight from the city’s Police Commission or Board of Supervisors. It also loosens a requirement that SFPD get clearance from the Board of Supervisors before adopting new surveillance technology, allowing approval to be sought any time within the first year.

Matt Cagle, a senior staff attorney with the American Civil Liberties Union of Northern California, says those changes leave the existing ban on face recognition in place but loosen other important protections. “We’re concerned that Proposition E will result in people in San Francisco being subject to unproven and dangerous technology,” he says. “This is a cynical attempt by powerful interests to exploit fears about crime and shift more power to the police.”

Mayor Breed and other backers have positioned it as an answer to concern about crime in San Francisco. Crime figures have broadly declined, but fentanyl has recently driven an increase in overdose deaths, and commercial downtown neighborhoods are still struggling with pandemic-driven office and retail vacancies. The proposition was also supported by groups associated with the tech industry, including the campaign group GrowSF, which did not respond to a request for comment.

“By supporting the work of our police officers, expanding our use of technology, and getting officers out from behind their desks and onto our streets, we will continue in our mission to make San Francisco a safer city,” Mayor Breed said in a statement on the proposition passing. She noted that 2023 saw the lowest crime rates in a decade in the city—except for a pandemic blip in 2020—with rates of property crime and violent crime continuing to decline further in 2024.

Proposition E also gives police more freedom to pursue suspects in car chases and reduces paperwork obligations, including when officers resort to use of force.

Caitlin Seeley George, managing director and campaign director for Fight for the Future, a nonprofit that has long campaigned against the use of face recognition, calls the proposition “a blow to the hard-fought reforms that San Francisco has championed in recent years to rein in surveillance.”

“By expanding police use of surveillance technology, while simultaneously reducing oversight and transparency, it undermines people’s rights and will create scenarios where people are at greater risk of harm,” George says.

Although Cagle of the ACLU shares her concerns that San Francisco citizens will be less safe, he says the city should retain its reputation for having catalyzed a US-wide pushback against surveillance. San Francisco’s 2019 ban on face recognition was followed by about two dozen other cities, many of which also added new oversight mechanisms for police surveillance.