Last month, Stanford researchers declared that a new era of artificial intelligence had arrived, one built atop colossal neural networks and oceans of data. They said a new research center at Stanford would build—and study—these “foundation models” of AI.
Critics of the idea surfaced quickly—including at the workshop organized to mark the launch of the new center. Some object to the limited capabilities and sometimes freakish behavior of these models; others warn of focusing too heavily on one way of making machines smarter.
“I think the term ‘foundation’ is horribly wrong,” Jitendra Malik, a professor at UC Berkeley who studies AI, told workshop attendees in a video discussion.
Malik acknowledged that one type of model identified by the Stanford researchers—large language models that can answer questions or generate text from a prompt—has great practical use. But he said evolutionary biology suggests that language builds on other aspects of intelligence like interaction with the physical world.
“These models are really castles in the air; they have no foundation whatsoever,” Malik said. “The language we have in these models is not grounded, there is this fakeness, there is no real understanding.” He declined an interview request.
A research paper coauthored by dozens of Stanford researchers describes “an emerging paradigm for building artificial intelligence systems,” which it labels “foundation models.” Ever-larger models have produced some impressive advances in recent years, in areas such as perception and robotics as well as language.
Large language models are also foundational to big tech companies like Google and Facebook, which use them in areas like search, advertising, and content moderation. Building and training large language models can require millions of dollars’ worth of cloud computing power; so far, that’s limited their development and use to a handful of well-heeled tech companies.
But big models are problematic, too. Language models inherit bias and offensive text from the data they are trained on, and they have zero grasp of common sense or what is true or false. Given a prompt, a large language model may spit out unpleasant language or misinformation. There is also no guarantee that these large models will continue to produce advances in machine intelligence.
The Stanford proposal has divided the research community. “Calling them ‘foundation models’ completely messes up the discourse,” says Subbarao Kambhampati, a professor at Arizona State University. There is no clear path from these models to more general forms of AI, Kambhampati says.
Thomas Dietterich, a professor at Oregon State University and former president of the Association for the Advancement of Artificial Intelligence, says he has “huge respect” for the researchers behind the new Stanford center, and he believes they are genuinely concerned about the problems these models raise.
But Dietterich wonders if the idea of foundation models isn’t partly about getting funding for the resources needed to build and work on them. “I was surprised that they gave these models a fancy name and created a center,” he says. “That does smack of flag planting, which could have several benefits on the fundraising side.”
Stanford has also proposed the creation of a National AI Cloud to make industry-scale computing resources available to academics working on AI research projects.
Emily M. Bender, a professor in the linguistics department at the University of Washington, says she worries that the idea of foundation models reflects a bias toward investing in the data-centric approach to AI favored by industry.
Bender says it is especially important to study the risks posed by big AI models. She coauthored a paper, published in March, that drew attention to problems with large language models and contributed to the departure of two Google researchers. But she says scrutiny should come from multiple disciplines.
“There are all of these other adjacent, really important fields that are just starved for funding,” she says. “Before we throw money into the cloud, I would like to see money going into other disciplines.”
“I don’t know how you square all of that analysis, and all of the pro-competitive justifications Apple has for its closed ecosystem, with the judge then saying, ‘But I’m going to force Apple to permit competitors to put up signposts in Apple’s ecosystem,’” says Paul Swanson, an antitrust attorney in Denver. “I don’t see how those two things go together.”
Epic Games CEO Tim Sweeney might agree. In a pugnacious tweet Friday, Sweeney said, “Today’s ruling isn’t a win for developers or for consumers. Epic is fighting for fair competition among in-app payment methods and app stores for a billion consumers.” The Verge reports that Epic plans to appeal the verdict. (Epic Games did not respond to a request for comment.) Fortnite won’t be back on iOS until “Epic can offer in-app payment in fair competition with Apple in-app payment, passing along the savings to consumers,” Sweeney tweeted.
Games industry and antitrust experts say the ruling is significant, but not surprising. “It was very much an uphill battle for Epic to win the case,” says Florian Ederer, associate professor of economics at the Yale School of Management. At the same time, he says, the ruling was foreshadowed by growing international scrutiny over Apple’s anti-steering provisions. In August, South Korean regulators approved a bill forcing Apple and Google, a defendant in another Epic-led case, to allow payment systems other than their own. Days later, Japan’s Fair Trade Commission closed its investigation into Apple’s App Store, determining that Apple must let so-called reader apps—which include the likes of Netflix, Spotify, and Amazon Kindle—encourage users to sign up, and potentially make payments, through those companies’ own websites. Rogers’ ruling could have a much bigger financial impact, however, because, as her opinion notes, the vast majority of App Store payments come from gaming apps.
Within 90 days, App Store developers will be able to circumvent the 30 percent commission by adding in-app buttons or links to their own websites with their own payment systems. “Developers aren’t going to get all of that—they’re not going to entirely circumvent that 30 percent,” says Ederer. “But that’s a big win for developers.” He theorizes that the extra revenue could give developers an incentive to ship more products or maintain them for longer, even if some users choose to take the easy route and go through Apple’s in-app payment system.
More payment systems can bring confusion, the stated enemy of Apple’s streamline-obsessed enterprise. “In the long term, with the absence of a vertically integrated platform, you’re going to have lots of different payment providers trying to get your business,” says Joost van Dreunen, a New York University Stern School of Business lecturer and author of One Up, a book on the global games business. “They’re all going to be fighting on the margin. There will be a growing number of transactors and payment processors trying to get a piece.” That may confuse users accustomed to “click and go” or “swipe here, done” systems. And with new payment processing systems, users may feel there is less transparency and trust in an already opaque, complicated digital market.
While Epic Games won a major on-the-ground battle, Apple may have won its moral one: Apple can claim users are not trapped in its iOS ecosystem so much as inhabiting it. “Today the Court has affirmed what we’ve known all along: the App Store is not in violation of antitrust law,” an Apple spokesperson said in a statement. “Apple faces rigorous competition in every segment in which we do business, and we believe customers and developers choose us because our products and services are the best in the world.”
The ruling is another crack in Apple’s walled garden. “It’s starting to show some wear and tear,” van Dreunen says. “It’s not the pristine, impervious organization it thought it would be.” And if today’s ruling is indeed appealed, the fight isn’t over yet.
Additional reporting by Gilad Edelman.
First came the statements from reproductive organizations. Then came the tech companies.
The day after the US Supreme Court decided not to block a law in Texas banning most abortions after six weeks, Dallas-based Match Group, which owns Tinder, OkCupid, and Hinge, sent a memo to its employees. “The company generally does not take political stands unless it is relevant to our business,” CEO Shar Dubey wrote. “But in this instance, I personally, as a woman in Texas, could not keep silent.” The company set up a fund to cover travel expenses for employees seeking care outside of Texas. Bumble, headquartered in Austin, set up a similar fund.
Senate Bill 8, which took effect last week, enables private citizens to sue anyone “aiding and abetting” an abortion, including providers, counselors, or even rideshare drivers providing transportation to a clinic. Uber and Lyft, which are based in California, said they would cover legal costs for drivers implicated by the law. “This law is incompatible with people’s basic rights to privacy, our community guidelines, the spirit of rideshare, and our values as a company,” Lyft wrote in a statement to drivers. The company also said it would donate $1 million to Planned Parenthood.
“We are deeply concerned about how this law will impact our employees in the state,” wrote Jeremy Stoppelman, the CEO of Yelp, which has some employees in Texas. Stoppelman had previously signed a 2019 open letter calling abortion bans “bad for business,” along with the CEOs of Twitter, Slack, Postmates, and Zoom.
Such overtures have become more common in recent years, particularly among prominent technology companies. Businesses in 2021 are required to have a point of view, it seems, and have used their platforms to advocate for policies on immigration, gay rights, and climate change. Last summer, in the wake of the Black Lives Matter protests, nearly every major tech company put out a statement denouncing racism and vowing to support anti-racist work. “To be silent is to be complicit,” the official Netflix account tweeted. (Speaking out has not shielded companies from criticism of their own records, particularly on diversity and inclusion.)
One could say that corporate opinions have become the norm, at least among a certain kind of company. Companies that have remained silent on SB 8—including a number of major Texas-based employers—have been criticized for not taking a stand. Hewlett-Packard, which moved its headquarters from Silicon Valley to Houston last year, encouraged employees “to engage in the political process where they live and work and make their voices heard through advocacy and at the voting booth.” Abortion rights have become one of the most divisive issues in the United States: Six in 10 Americans say it should be legal in all or most cases, according to a recent Pew survey; nearly four in 10 believe the opposite.
Few major companies have come out with full-throated praise of the Texas law, which is among the most restrictive in the country. (On Thursday, the Justice Department sued Texas to stop it.) When the head of Georgia-based video game company Tripwire Interactive tweeted in support of the Supreme Court’s decision, he was criticized by thousands online, including some of his own employees. He soon stepped down from his role; the company issued a statement apologizing and committing to fostering “a more positive environment.”
For a tech company, a strong stance on social issues can be an extension of its brand, and even a recruiting tool. One LinkedIn survey, from 2018, found that the majority of people would take a pay cut to work somewhere that aligned with their values.
Later that week, in a video now viewed tens of thousands of times, Jada Brooke fanned the flames. She’d spoken to a family member of Dylan’s, she said, who was “on our side and agrees that something’s not right here.” “I had a vision of him being kicked down a set of stairs … That was actually verified to me,” she told viewers, providing no evidence. She said she’d had a vision of a shallow grave between two trees, 5 or 6 feet apart, on a property that also held a red and white truck. That led a Truro resident named Dawn to a field that held a red and white horse trailer. Inspired, a band of residents broke into the trailer. They found a pile of dry hay, which Brooke called suspicious for its lack of mold. Brooke triumphantly pointed out that the trailer, which sat in front of a stand of trees, was proof her vision had been accurate. “If I go quiet or something in the group for a while, just remember, I have six kids of my own, I home-school four. I’m a very involved mother. My kids don’t go missing, you know what I mean?”
The abuse spilled beyond accusations about the couple’s parenting. Jason received scam ransom notes from online trolls; one included a doctored picture of Dylan’s face, battered with bruises over his right eye and a deep gash on his lip. “You must transfer 3 bitcoins,” the message read, “within 72 hours.” The sender, a Facebook account under the name Brad, told Jason he’d release his son once the transfer was made, and if he didn’t, he’d never see him again. “You have 3 days to save Dylan’s life,” he wrote.
After six days, with no new evidence—no footprints or debris or credible sightings—the police called off their search. Nothing but rain boots. But Jason didn’t stop. He walked the creek bed day after day, drawing dozens of locals to help. The GoFundMe page would raise about $12,500 for the family. Ashley and Jason offered it up as a reward for any information.
Jason handed out lapel pins, a blue ribbon and a green ribbon intertwined. He gave away key chains bearing his son’s face. He ordered bumper stickers of Dylan looking upward, mismatched eyes scanning the sky. “Do you want some swag?” he asked me sadly, the first time we met. He handed me a green and blue bracelet and a sticker. Maybe, he said, if I put it on my car back home, two provinces over, someone there would see it and call in a sighting.
In Canada, parents receive a benefit if one of their children goes missing or dies as the likely result of a crime. Because local police didn’t label the incident a crime, Ashley and Jason didn’t qualify. “No one gives you a pamphlet on how to be a missing child’s mother,” Ashley says. By October, with the province’s lockdown lifted and the dealership fully open again, she went back to work.
For months, Facebook group members examined the case’s scant evidence, gnashing details like bolts of hardening chewing gum. It was a dizzying, dystopian fun house of rumor and speculation. Theories raged: To many, the grandmother’s story didn’t track. Others believed she was covering for her daughter. That the family was collecting money on a GoFundMe page meant they’d gotten rid of Dylan because they needed the money—for booze or drugs or both. At one point, the groups’ ranks topped 23,000 people, the same as the entire population of Truro.
By the end of September 2020, the harassment and threats had gotten so bad that one group member began to research the laws that govern cyberbullying in the province and even contacted a local lawyer named Allison Harris. Harris knew about the missing boy—Dylan’s story was in the news for weeks after his disappearance—but she was shocked to learn about the abuse the online sleuthing community had spawned. Just a year and a half out of law school, Harris exudes an air of utter unflappability. She speaks in clipped, exacting sentences, and even her smile seems precise when it reveals a perfectly centered gap between her front teeth. Harris was one of just two lawyers in the province who had argued online personal injury cases in court. She told the group member to have Ashley and Jason get in touch and, after hearing their story, offered her services pro bono.
Together the three of them set to work documenting thousands of abusive screenshots, hundreds of awful messages, dozens of death threats. They wrote letters to the administrators of two of the Facebook groups, asking them to shut down. At first, both refused, though one changed her mind after becoming the target of a harassment campaign within her own group. “This case has surprised me,” Harris says. “Instead of appreciating that they’re doing damage and harm, they seem to feel they have a right to have these groups.” (Still, the groups were like a hydra: When one shut down, Ashley and Jason’s most vocal detractors simply started others under untraceable noms de plume like “Holiday Precious.”)
The administrators of the second group were local Truro residents: a couple named April Moulton and Tom Hurley who lived down the road from the backyard where Dylan was last seen. Moulton, who has dyed red hair and Cheshire-cat eyes, was certain she was doing critical work, her stout hands weighed down with silver rings on almost every finger as she examined the minutiae of the case, parsing rumored fiction from rumored fact, Hurley shuffling back and forth behind her. They didn’t know Jason or Ashley before Dylan’s story hit headlines, but they emerged as two of the most vocal proponents demanding justice for the boy. They knew as well as anyone what it was to lose a child.
Police around the country have drastically increased their use of geofence warrants, a widely criticized investigative technique that collects data from any user’s device that was in a specified area within a certain time range, according to new figures shared by Google. Law enforcement has served geofence warrants to Google since 2016, but the company has detailed for the first time exactly how many it receives.
The report shows that requests have spiked dramatically in the past three years, rising as much as tenfold in some states. In California, law enforcement made 1,909 requests in 2020, compared to 209 in 2018. Similarly, geofence warrants in Florida leaped from 81 requests in 2018 to more than 800 last year. In Ohio, requests rose from seven to 400 in that same time.
Across all 50 states, geofence requests to Google increased from 941 in 2018 to 11,033 in 2020 and now make up more than 25 percent of all data requests the company receives from law enforcement.
A single geofence request could include data from hundreds of bystanders. In 2019, a single warrant in connection with an arson resulted in nearly 1,500 device identifiers being sent to the Bureau of Alcohol, Tobacco, Firearms, and Explosives. Dozens of civil liberties groups and privacy advocates have called for banning the technique, arguing it violates Fourth Amendment protections against unreasonable searches, particularly for protesters. Now, Google’s transparency report has revealed the scale at which people nationwide may have faced the same violation.
“There’s always collateral damage,” says Jake Laperruque, senior policy counsel for the Constitution Project at the nonprofit Project on Government Oversight. Because of their inherently wide scope, geofence warrants can give police access to location data from people who have no connection to criminal activities.
“We vigorously protect the privacy of our users while supporting the important work of law enforcement,” Google said in a statement to WIRED. “We developed a process specifically for these requests that is designed to honor our legal obligations while narrowing the scope of data disclosed.”
Just this week, Forbes revealed that Google granted police in Kenosha, Wisconsin, access to user data from bystanders who were near a library and a museum that was set on fire last August, during the protests that followed the murder of George Floyd. Google handed over the “GPS coordinates and data, device data, device IDs,” and time stamps for anyone at the library for a period of two hours; at the museum, for 25 minutes. Similarly, Minneapolis police requested Google user data from anyone “within the geographical region” of a suspected burglary at an AutoZone store last year, two days after protests began.
Laperruque argues that geofence warrants could have a “chilling effect,” as people forgo their right to protest because they fear being targeted by surveillance. Just this week, Kenosha lawmakers debated a bill that would make attending a “riot” a felony. Critics noted that such a bill could penalize anyone attending peaceful demonstrations that, because of someone else’s actions, become violent. Similarly, geofence data could be used as evidence of guilt, implicating people not just through loose association with someone else in a crowd but simply for being there in the first place.
Geofence warrants work differently from typical search warrants. Usually, officers identify a suspect or person of interest, then obtain a warrant from a judge to search the person’s home or belongings.
With geofence warrants, police start with the time and location that a suspected crime took place, then request data from Google for the devices surrounding that location at that time, usually within a one- to two-hour window. If Google complies, it will supply a list of anonymized data about the devices in the area: GPS coordinates, the time stamps of when they were in the area, and an anonymized identifier, known as a reverse location obfuscation identifier, or RLOI.
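The filtering step described above can be sketched conceptually. To be clear, this is an illustration only, not Google’s actual system: the `PingRecord` type, its field names, and the `geofence_query` function are all hypothetical, standing in for the process the reporting outlines (a circular fence, a time window, and a list of anonymized device records returned).

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

@dataclass
class PingRecord:
    # Hypothetical record; Google's real schema is not public. Fields mirror
    # only what the reporting describes: coordinates, a timestamp, and an
    # anonymized identifier (the "RLOI").
    rloi: str
    lat: float
    lon: float
    ts: datetime

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    R = 6371000.0  # mean Earth radius, meters
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def geofence_query(records, center_lat, center_lon, radius_m, start, end):
    """Return only the records whose ping falls inside the circular fence
    and inside the warrant's time window."""
    return [
        r for r in records
        if start <= r.ts <= end
        and haversine_m(r.lat, r.lon, center_lat, center_lon) <= radius_m
    ]
```

The key property the sketch captures is the one critics object to: the query starts from a place and a time, not a suspect, so anyone whose device pinged inside the fence during the window is swept into the results.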