This incessant surveillance is antidemocratic, and it’s also a loser’s game. The price of accurate intel rises without bound; there’s no way to know everything about natural systems, forcing guesses and assumptions; and just when a complete picture is starting to coalesce, some new player intrudes and changes the situational dynamic. Then the AI breaks. The near-perfect intelligence veers into psychosis, labeling dogs as pineapples, treating innocents as wanted fugitives, and barreling eighteen-wheelers into kindergarten buses that it sees as highway overpasses.
The dangerous fragility inherent to optimization is why the human brain did not, itself, evolve to be an optimizer. The human brain is data-light: It draws hypotheses from a few data points. And it never strives for 100 percent accuracy. It’s content to muck along at the threshold of functionality. If it can survive by being right 1 percent of the time, that’s all the accuracy it needs.
The brain’s strategy of minimal viability is a notorious source of cognitive biases that can have damaging consequences: closed-mindedness, conclusion jumping, recklessness, fatalism, panic. Which is why AI’s rigorously data-driven method can help illuminate our blind spots and debunk our prejudices. But in counterbalancing our brain’s computational shortcomings, we don’t want to stray into the greater problem of overcorrection. There can be enormous practical upside to a good-enough mentality: It wards off perfectionism’s destructive mental effects, including stress, worry, intolerance, envy, dissatisfaction, exhaustion, and self-judgment. A less-neurotic brain has helped our species thrive in life’s punch and wobble, which demands workable plans that can be flexed, via feedback, on the fly.
These antifragile neural benefits can all be translated into AI. Instead of pursuing faster machine-learners that crunch ever-vaster piles of data, we can focus on making AI more tolerant of bad information, user variance, and environmental turmoil. That AI would exchange near-perfection for consistent adequacy, upping reliability and operational range while sacrificing nothing essential. It would suck less energy, haywire less randomly, and place less psychological burden on its mortal users. It would, in short, possess more of the earthly virtue known as common sense.
Here are three specs for how.
Building AI to Brave Ambiguity
Five hundred years ago, Niccolò Machiavelli, the guru of practicality, pointed out that worldly success requires a counterintuitive kind of courage: the heart to venture beyond what we know with certainty. Life, after all, is too fickle to permit total knowledge, and the more that we obsess over ideal answers, the more that we hamper ourselves with lost initiative. So, the smarter strategy is to concentrate on intel that can be rapidly acquired—and to advance boldly in the absence of the rest. Much of that absent knowledge will prove unnecessary, anyway; life will bend in a different direction than we anticipate, resolving our ignorance by rendering it irrelevant.
We can teach AI to operate this same way by flipping our current approach to ambiguity. Right now, when a Natural Language Processor encounters a word—suit—that could denote multiple things—an article of clothing or a legal action—it devotes itself to analyzing ever greater chunks of correlated information in an effort to pinpoint the word’s exact meaning.
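To make the contrast concrete, here is a deliberately toy Python sketch of that widening-context strategy. The sense lexicon, cue words, and disambiguate function are invented for illustration; a real language model relies on vastly larger statistical machinery, but the shape of the loop is the same: keep buying more context until one interpretation clearly wins.

```python
# Toy illustration only (not any production NLP system): disambiguate "suit"
# by widening the context window until one sense clearly outscores the rest.

SENSE_CUES = {
    "clothing": {"tie", "tailor", "wear", "jacket", "fabric", "dressed"},
    "lawsuit":  {"court", "judge", "plaintiff", "filed", "damages", "legal"},
}

def disambiguate(tokens, target_index, max_window=50):
    """Score each sense of the word at target_index by counting cue words
    in progressively larger windows of surrounding text."""
    scores = {sense: 0 for sense in SENSE_CUES}
    for window in range(5, max_window + 1, 5):
        lo = max(0, target_index - window)
        hi = min(len(tokens), target_index + window + 1)
        context = {t.lower() for t in tokens[lo:hi]}
        scores = {sense: len(cues & context) for sense, cues in SENSE_CUES.items()}
        best, runner_up = sorted(scores.values(), reverse=True)[:2]
        if best > runner_up:       # one sense clearly wins, so stop gathering context
            return max(scores, key=scores.get), scores
    return "undecided", scores     # still ambiguous after the largest window

sentence = "She filed the suit in federal court seeking damages".split()
print(disambiguate(sentence, sentence.index("suit")))  # ('lawsuit', {...})
```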
Last month, Stanford researchers declared that a new era of artificial intelligence had arrived, one built atop colossal neural networks and oceans of data. They said a new research center at Stanford would build—and study—these “foundation models” of AI.
Critics of the idea surfaced quickly—including at the workshop organized to mark the launch of the new center. Some object to the limited capabilities and sometimes freakish behavior of these models; others warn of focusing too heavily on one way of making machines smarter.
“I think the term ‘foundation’ is horribly wrong,” Jitendra Malik, a professor at UC Berkeley who studies AI, told workshop attendees in a video discussion.
Malik acknowledged that one type of model identified by the Stanford researchers—large language models that can answer questions or generate text from a prompt—has great practical use. But he said evolutionary biology suggests that language builds on other aspects of intelligence like interaction with the physical world.
“These models are really castles in the air; they have no foundation whatsoever,” Malik said. “The language we have in these models is not grounded, there is this fakeness, there is no real understanding.” He declined an interview request.
A research paper coauthored by dozens of Stanford researchers describes “an emerging paradigm for building artificial intelligence systems” that it labeled “foundation models.” Ever-larger AI models have produced some impressive advances in AI in recent years, in areas such as perception and robotics as well as language.
Large language models are also foundational to big tech companies like Google and Facebook, which use them in areas like search, advertising, and content moderation. Building and training large language models can require millions of dollars’ worth of cloud computing power; so far, that’s limited their development and use to a handful of well-heeled tech companies.
But big models are problematic, too. Language models inherit bias and offensive text from the data they are trained on, and they have zero grasp of common sense or what is true or false. Given a prompt, a large language model may spit out unpleasant language or misinformation. There is also no guarantee that these large models will continue to produce advances in machine intelligence.
The Stanford proposal has divided the research community. “Calling them ‘foundation models’ completely messes up the discourse,” says Subbarao Kambhampati, a professor at Arizona State University. There is no clear path from these models to more general forms of AI, Kambhampati says.
Thomas Dietterich, a professor at Oregon State University and former president of the Association for the Advancement of Artificial Intelligence, says he has “huge respect” for the researchers behind the new Stanford center, and he believes they are genuinely concerned about the problems these models raise.
But Dietterich wonders if the idea of foundation models isn’t partly about getting funding for the resources needed to build and work on them. “I was surprised that they gave these models a fancy name and created a center,” he says. “That does smack of flag planting, which could have several benefits on the fundraising side.”
Stanford has also proposed the creation of a National AI Cloud to make industry-scale computing resources available to academics working on AI research projects.
Emily M. Bender, a professor in the linguistics department at the University of Washington, says she worries that the idea of foundation models reflects a bias toward investing in the data-centric approach to AI favored by industry.
Bender says it is especially important to study the risks posed by big AI models. She coauthored a paper, published in March, that drew attention to problems with large language models and contributed to the departure of two Google researchers. But she says scrutiny should come from multiple disciplines.
“There are all of these other adjacent, really important fields that are just starved for funding,” she says. “Before we throw money into the cloud, I would like to see money going into other disciplines.”
Despite the executive orders and congressional hearings of the “Biden antitrust revolution,” the most profound anti-competitive shift is happening under policymakers’ noses: the cornering of artificial intelligence and automation by a handful of tech companies. This needs to change.
There is little doubt that the impact of AI will be widely felt. It is shaping product innovations, creating new research, discovery, and development pathways, and reinventing business models. AI is making inroads in the development of autonomous vehicles, which may eventually improve road safety, reduce urban congestion, and help drivers make better use of their time. AI recently predicted the molecular structure of almost every protein in the human body, and it helped develop and roll out a Covid vaccine in record time. The pandemic itself may have accelerated AI’s incursion—in emergency rooms for triage; in airports, where robots spray disinfecting chemicals; in increasingly automated warehouses and meatpacking plants; and in our remote workdays, with the growing presence of chatbots, speech recognition, and email systems that get better at completing our sentences.
Exactly how AI will affect the future of human work, wages, or productivity overall remains unclear. Though service and blue-collar wages have lately been on the rise, they’ve stagnated for three decades. According to MIT’s Daron Acemoglu and Boston University’s Pascual Restrepo, 50 to 70 percent of this languishing can be attributed to the loss of mostly routine jobs to automation. White-collar occupations are also at risk as machine learning and smart technologies take on complex functions. According to McKinsey, while only about 10 percent of these jobs could disappear altogether, 60 percent of them may see at least a third of their tasks subsumed by machines and algorithms. Some researchers argue that while AI’s overall productivity impact has been so far disappointing, it will improve; others are less sanguine. Despite these uncertainties, most experts agree that on net, AI will “become more of a challenge to the workforce,” and we should anticipate a flat to slightly negative impact on jobs by 2030.
Without intervention, AI could also help undermine democracy by amplifying misinformation or enabling mass surveillance. The past year and a half has also underscored the impact of algorithmically powered social media, not just on the health of democracy, but on health care itself.
The overall direction and net impact of AI sits on a knife’s edge, unless AI R&D and applications are appropriately channeled with wider societal and economic benefits in mind. How can we ensure that?
A handful of US tech companies, including Amazon, Alphabet, Facebook, and Netflix, along with Chinese mega-players such as Alibaba and Baidu, are responsible for $2 of every $3 spent globally on AI. They’re also among the top AI patent holders. Not only do their outsize budgets for AI dwarf others’, including the federal government’s, they also emphasize building internally rather than buying AI. Even though they buy comparatively little, they’ve still cornered the AI startup acquisition market. Many of these are early-stage acquisitions, meaning the tech giants integrate the products from these companies into their own portfolios or take the IP off the market if it doesn’t suit their strategic purposes and redeploy the talent. According to research from my Digital Planet team, US AI talent is intensely concentrated. The median number of AI employees in the field’s top five employers—Amazon, Google, Microsoft, Facebook, and Apple—is some 18,000, while the median for companies six to 24 is about 2,500—and it drops significantly from there. Moreover, these companies have near-monopolies of data on key behavioral areas. And they are setting the stage to become the primary suppliers of AI-based products and services to the rest of the world.
Each key player has areas of focus consistent with its business interests: Google/Alphabet spends disproportionately on natural language and image processing and on optical character, speech, and facial recognition. Amazon does the same on supply chain management and logistics, robotics, and speech recognition. Many of these investments will yield socially beneficial applications, while others, such as IBM’s Watson—which aspired to become the go-to digital decision tool in fields as diverse as health care, law, and climate action—may not deliver on initial promises, or may fail altogether. Moonshot projects, such as level 4 driverless cars, may have an excessive amount of investment put against them simply because the Big Tech players choose to champion them. Failures, disappointments, and pivots are natural to developing any new technology. We should, however, worry about the concentration of investments in a technology so fundamental and ask how investments are being allocated overall. AI, arguably, could have more profound impact than social media, online retail, or app stores—the current targets of antitrust. Google CEO Sundar Pichai may have been a tad overdramatic when he declared that AI will have more impact on humanity than fire, but that alone ought to light a fire under the policy establishment to pay closer attention.
In recent years, researchers have used artificial intelligence to improve translation between programming languages or automatically fix problems. The AI system DrRepair, for example, has been shown to solve most issues that spawn error messages. But some researchers dream of the day when AI can write programs based on simple descriptions from non-experts.
On Tuesday, Microsoft and OpenAI shared plans to bring GPT-3, one of the world’s most advanced models for generating text, to programming based on natural language descriptions. This is the first commercial application of GPT-3 undertaken since Microsoft invested $1 billion in OpenAI last year and gained exclusive licensing rights to GPT-3.
“If you can describe what you want to do in natural language, GPT-3 will generate a list of the most relevant formulas for you to choose from,” said Microsoft CEO Satya Nadella in a keynote address at the company’s Build developer conference. “The code writes itself.”
Microsoft VP Charles Lamanna told WIRED the sophistication offered by GPT-3 can help people tackle complex challenges and empower people with little coding experience. GPT-3 will translate natural language into PowerFx, a fairly simple programming language similar to Excel commands that Microsoft introduced in March.
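To give a rough sense of how such a feature might be wired up (this is an illustrative sketch, not Microsoft’s implementation), the pattern below shows few-shot prompting: a handful of worked examples followed by the user’s request, with a placeholder call_completion_model function standing in for whatever GPT-3-style completion endpoint is available. The table names and Excel-like formulas are invented for this example.

```python
# Illustrative only: NOT Microsoft's implementation, just the general few-shot
# pattern for steering a text-completion model toward formula output.
# call_completion_model is a placeholder for a GPT-3-style endpoint.

FEW_SHOT_PROMPT = """Translate the request into a spreadsheet-style formula.

Request: show only customers whose balance is over 100
Formula: Filter(Customers, Balance > 100)

Request: count the orders placed this week
Formula: CountRows(Filter(Orders, OrderDate >= Today() - 7))

Request: {request}
Formula:"""

def call_completion_model(prompt: str) -> str:
    """Placeholder: swap in a real large-language-model completion call."""
    raise NotImplementedError("wire this to your completion endpoint")

def natural_language_to_formula(request: str) -> str:
    prompt = FEW_SHOT_PROMPT.format(request=request)
    completion = call_completion_model(prompt)
    # Keep only the first line; the model may keep generating further examples.
    return completion.strip().splitlines()[0]
```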
This is the latest demonstration of applying AI to coding. Last year at Microsoft’s Build, OpenAI CEO Sam Altman demoed a language model fine-tuned with code from GitHub that automatically generates lines of Python code. As WIRED detailed last month, startups like SourceAI are also using GPT-3 to generate code. IBM last month showed how its Project CodeNet, with 14 million code samples from more than 50 programming languages, could reduce the time needed to update a program with millions of lines of Java code for an automotive company from one year to one month.
Microsoft’s new feature is based on a neural network architecture known as the Transformer, used by big tech companies including Baidu, Google, Microsoft, Nvidia, and Salesforce to create large language models using text training data scraped from the web. These language models continually grow larger. The largest version of Google’s BERT, a language model released in 2018, had 340 million parameters, the adjustable values that serve as a neural network’s building blocks. GPT-3, which was released one year ago, has 175 billion parameters.
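For a feel for what that architecture actually computes, here is a minimal numpy sketch of the self-attention step at the core of a Transformer block. The dimensions are toy values; real models add multiple attention heads, feed-forward layers, normalization, and the billions of trained parameters cited above.

```python
# Stripped-down sketch of Transformer self-attention (toy sizes, random weights).
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (sequence_length, d_model) token embeddings.
    w_q, w_k, w_v: learned projection matrices, shape (d_model, d_k)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])           # how much each token attends to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over positions
    return weights @ v                                # context-mixed representations

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
x = rng.normal(size=(seq_len, d_model))
w_q, w_k, w_v = (rng.normal(size=(d_model, d_k)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)         # (4, 8)
```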
Such efforts have a long way to go, however. In one recent test, the best model succeeded only 14 percent of the time on introductory programming challenges compiled by a group of AI researchers.
The greatest failure of the digital age is how far removed it is from nature. The microchip has no circadian rhythm, nor has the computer breath. The network is incorporeal. This may represent an existential risk for life on Earth. I believe we have to make a decision: Succumb to pushing more of our brain time and economy into unnatural online constructs, or build the digital anew in a way that is rooted in nature.
Nature is excessive, baroque. Its song is not ours alone. We share this planet with 8 million nonhuman species, yet we scarcely think of how they move through the world. There is no way for wild animals, trees, or other species to make themselves known to us online or to express their preferences to us. The only value most of them have is the sum value of their processed body parts. Those that are not eaten are forgotten, or perhaps never remembered: Only 2 million of them are recorded by science.
This decade will be the most destructive for nonhuman life in recorded history. It could also be the most regenerative. Nonhuman life-forms may soon gain some agency in the world. I propose the invention of an Interspecies Money. I’m not talking about Dogecoin, the meme of a Shiba Inu dog that’s become a $64 billion cryptocurrency (as of today). I’m talking about a digital currency that could allow several hundred billion dollars to be held by other beings simply on account of being themselves and no other and being alive in the world. It is possible they will be able to spend and invest this digital currency to improve their lives. And because the services they ask for—recognition, security, room to grow, nutrition, even veterinary care—will often be provided by poor communities in the tropics, human lives will also be improved.
Money needs to cross the species divide. Whoa, I know. King Julien with a credit card. Flower grenades into the meaning of life. Bear with me. If money, as some economic theorists suggest, is a form of memory, it is obvious that nonhuman species are unseen by the market economy because no money has ever been assigned to them. To ensure the survival of some species it is necessary in some situations, usually when they are in direct competition with humans, to give them an economic advantage. An orchid, a baobab tree, a dugong, an orangutan, even at some future point the trace lines of a mycelial network—all of these should hold money.
We have the technology to start building Interspecies Money now. Indeed, it sometimes seems to me that the living system (Gaia or otherwise) is in fact producing the tools needed to protect complex life at precisely the moment it is most needed: fintech solutions in mobile money, digital wallets, and cryptocurrencies, which have shown that it is possible to address micropayments accurately and cheaply; cloud computing firms, which have demonstrated that large amounts of data can be stored and processed, even in countries that favor data sovereignty; hardware, which has become smarter and cheaper. Single-board computers (Raspberry Pis), camera traps, microphones, and other cheap sensors, energy solutions in solar arrays and batteries, internet connectivity, flying and ground robots, low-orbit satellite systems, and the pervasiveness of smartphones make it plausible to build a verification system in the wild that is trusted by the markets.
The first requirement of Interspecies Money is to provide a digital identity of an individual animal, or a herd, or a type (depending on size, population dynamics, and other characteristics of the organisms). This can be done through many methods. Birds may be identified by sound, insects by genetics, trees by probability. For most wild animals it will be done by sight. Some may be observed constantly, others only glimpsed. For instance, the digital identity of rare Hirola antelopes in Kenya and Somalia, of which there are only 500 in existence, will be minted from images gathered on mobile phones, camera traps, and drones by community rangers. The identity serves as a digital twin, which in legal and practical terms holds the money and releases it based on the services the life-form requires.
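As a purely hypothetical illustration of how such a twin might be represented in software, the Python sketch below models an identity record that holds a balance and releases funds for verified services. The field names, species, and service types are invented for this example and do not describe any existing Interspecies Money system.

```python
# Hypothetical sketch only: fields and services are invented, not part of any
# real Interspecies Money implementation.
from dataclasses import dataclass

@dataclass
class DigitalTwin:
    species: str                  # e.g. "Hirola antelope"
    identity_evidence: list[str]  # hashes or URIs of camera-trap images, audio, or DNA records
    balance: float = 0.0          # funds held on behalf of the animal, herd, or type

    def deposit(self, amount: float) -> None:
        """Add funds assigned to this life-form."""
        self.balance += amount

    def pay_for_service(self, provider: str, service: str, amount: float) -> dict:
        """Release funds to a local provider for a verified service
        (ranger patrols, veterinary care, habitat protection, ...)."""
        if amount > self.balance:
            raise ValueError("insufficient funds held by this twin")
        self.balance -= amount
        return {"provider": provider, "service": service, "amount": amount}

herd = DigitalTwin("Hirola antelope", ["camera-trap:img-00231"], balance=500.0)
receipt = herd.pay_for_service("community ranger unit", "patrol", 75.0)
print(receipt, herd.balance)  # remaining balance: 425.0
```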