Despite the magnitude of that potential finding, federal science has mostly left SETI out of its spreadsheets for decades. But in a 2018 reversal, NASA hosted a workshop to determine how best to search for alien technology. And outside of that support, scientists have also started a slew of new projects, and begun training more fledgling researchers. The alien hunt, in other words, is having a bit of a moment.
The History of the Hunt for Aliens
It all began in the middle of nowhere: Green Bank, West Virginia. The site’s remoteness is precisely why, in the 1950s, astronomers decided to build radio telescopes way out here, far from the contaminating influence of human technology. One of Green Bank’s early employees was a man named Frank Drake. Drake, like many scientists, read a 1959 Nature paper by physicists Giuseppe Cocconi and Philip Morrison, who suggested that if a person wanted to find intelligent aliens (here, “intelligent” means capable of using technology to transmit an identifiable signal) they might try picking up radio broadcasts, and they suggested a range of frequencies scientists could search. This fired Drake up, and in 1960 the observatory’s director agreed to let him point an 85-foot telescope at two sun-esque stars, tuning it in to the kinds of transmissions that could come from technology and not from stars, gas, or galaxies.
The search turned up nothing, but the effort, called Project Ozma, kicked off the modern SETI enterprise. A year later, Green Bank hosted a secret National Academy of Sciences meeting at which Drake presented the now-famous and now-eponymous Drake Equation. It posits that if you know how often stars are born in the galaxy, what percentage have planets, what number of those planets are habitable, what fraction of habitable planets are inhabited, what fraction of inhabitants are intelligent, what fraction develop interstellar communication, and how long technologically intelligent civilizations survive, you could figure out how many extraterrestrial societies await your discovery. It was never meant to be precise math: It was just a meeting agenda.
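The equation itself is just seven factors multiplied together, which makes it easy to play with. A minimal sketch follows; the parameter values are purely illustrative placeholders (every one of them is still hotly debated), not measurements:

```python
# A minimal sketch of the Drake Equation:
#   N = R* * fp * ne * fl * fi * fc * L
# All input values below are illustrative assumptions, not data.

def drake_equation(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimate N, the number of detectable civilizations in the galaxy,
    as the product of seven factors."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Placeholder inputs: one new star per year, half with planets,
# two habitable worlds per planetary system, and guesses thereafter.
n = drake_equation(
    r_star=1.0,       # rate of star formation (stars per year)
    f_p=0.5,          # fraction of stars with planets
    n_e=2.0,          # habitable planets per star that has planets
    f_l=0.1,          # fraction of habitable planets that develop life
    f_i=0.01,         # fraction of living worlds that develop intelligence
    f_c=0.1,          # fraction of those that transmit detectable signals
    lifetime=10_000,  # years a civilization keeps transmitting
)
print(n)  # the product of all seven factors
```

Change any single factor by an order of magnitude and the answer swings accordingly, which is the whole point: the equation organizes our ignorance rather than resolving it.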
Around a decade later, NASA convened a study called “Project Cyclops.” In it, scientists laid out what alien contact might look like and how, engineering-wise, they might accomplish it. They devised a hypothetical radio telescope made of many antennas that work together as one. Though at full scale it would have cost between $36 billion and $60 billion in 2018 dollars, the attendees suggested it be built in modular fashion—a few antennas here, check for aliens. No aliens? Add a few more, look some more. Still nothing? Break ground again. Etc.
The project never happened, but it did inspire Berkeley professor Stuart Bowyer and Berkeley student Jill Tarter to start a smaller-scale program called SERENDIP: the Search for Extraterrestrial Radio Emissions from Nearby Developed Intelligent Populations. Berkeley now has a SETI Research Center, and SERENDIP still exists—in its sixth iteration, using both the Green Bank Telescope and the Arecibo telescope in Puerto Rico. Since 1999, the program has also let you help process data through the SETI@Home program, which uses your idle CPU to hunt for potential communications.
In the 70s, Ohio State started a SETI project at its Big Ear Observatory, and caught the famous Wow! signal—a mysterious burst of radio waves that has captured attention for decades but, sorry, is not aliens. For a while, NASA had a nascent SETI program, and when it officially began operations in 1992, astronomers had good reason to hope for a “hello” from light-years away: Around this time, scientists discovered the first-ever planet beyond our solar system, around a pulsar, and would soon find another one, this time orbiting a star like the Sun. All those aliens, turns out, might have at least a few places to live. Plus, back on Earth, scientists were learning more about the badass microbes that live in hot, cold, acidic, basic, salty, radioactive, and just generally unpleasant spots. If life could find a way in all that mess, why not around Zeta Reticuli?
But politicians did not always favor pursuit of such extraterrestrials, extremophile and/or intelligent and/or otherwise. And the next year, Congress voted to terminate the NASA project’s funding.
Since then—a little more than 25 years ago—NASA has had no SETI programs. Scientists, though, aren’t easy to stop—especially not when the end result of their quest could be some kind of cosmic salvation. And so the former NASA team privatized their efforts, and began a program called Project Phoenix, backed by some of Silicon Valley’s early tycoons. For nine years, from 1995 to 2004, they did the work they’d planned to do under NASA’s banner on their own terms, through the nonprofit SETI Institute. From Arecibo, Green Bank, Jodrell Bank Observatory in England, and the Parkes radio telescope in Australia, they searched for radio broadcasts from the great beyond.
Humanity just can’t make up its mind about cannabis. For thousands of years, humans have used the stuff as medicine or to travel on spiritual quests. That, though, didn’t quite suit the British, who banned cannabis in colonial India. Then in the 20th century, the United States government declared war on marijuana, and most of the world followed suit.
But today, state after state is calling out the federal government on its absurd claim that weed should be a schedule I drug—an extreme danger with no medical benefits—in the same category as heroin. Even on the federal level, members of Congress like Senator Elizabeth Warren are fighting to end the criminalization of cannabis use. The fact is, scientists have shown cannabis can treat a range of ills and that it’s actually much safer than alcohol. The twisty-turny journey of cannabis has landed us back at a central truth: It’s actually a powerful medicine that can help treat what ails the human body.
Yet as governments come around to the fact that the war on cannabis—which has had a massively disproportionate effect on black Americans—is both insane and unwinnable, the drug remains largely mysterious. The root of the problem: Unlike a relatively simple drug like alcohol, cannabis is made up of hundreds of compounds in addition to THC, all interacting in ways scientists are just beginning to understand.
But therein lies the beauty of it. Things are getting real nerdy with cannabis science. So let us guide you through the haze.
The History of Cannabis
The cannabis plant probably originated in Central Asia, and may have been one of the first plants cultivated by humans. In addition to its psychoactive charms, cannabis gave early growers nutritious seeds to eat and useful fibers for rope. (Today, the industry makes rope out of hemp, a variety of the plant with little to no THC, and therefore no psychoactivity. Hemp fibers are even making their way into construction materials.) And our ancestors were aware of some of the medicinal benefits of cannabis: The ancient Chinese deity Shennong, or “God Farmer,” recommended that cultivators grow “hemp elixir” to treat the sick. Cannabis has a particularly rich history in India, where it has been used for thousands of years as a spiritual aid.
Even as great societies of metal and stone formed, cannabis remained an indispensable crop. Ancient Rome, for instance, wouldn’t have been the sea power it was without super-strong hemp sails and ropes. The British and Spanish, too, powered their world-spanning empires with hemp riggings. George Washington grew the bejesus out of cannabis.
All the while, it wasn’t like humanity had forgotten that cannabis was also good for getting high. Mexico in particular emerged as a major cultivator of psychoactive strains in the early 1900s, and that cannabis wafted over the border into the United States. Then, in 1937, the US passed the Marihuana Tax Act, which effectively criminalized the drug. And in 1970 the Controlled Substances Act branded cannabis a schedule I drug, essentially equating it with the devil himself.
As with the prohibition of alcohol, banning the consumption of cannabis just drove the drug underground. Which brings us to the legend of Northern California, mecca of cannabis production. Over the last few decades, cultivators have hidden themselves in the wildlands, producing perhaps 75 percent of the domestically grown cannabis consumed in the US. Growers here have selected plant generation after plant generation for high THC content, to the point where you can now regularly find flower with 25, even 30 percent THC, whereas a few decades ago the average was around 5 percent.
While Northern California’s growers were proving themselves masters of cannabis cultivation, the plant remained—and to a large degree still remains—mysterious. That’s because it’s extremely difficult for researchers to study a schedule I drug. Until 2016, for instance, the DEA claimed a monopoly on the official supply of research cannabis, licensing a single farm at the University of Mississippi that produced legendarily crappy weed. (Like, literally: it’s so bad it doesn’t even look or smell like the cannabis consumers actually buy.)
That regulatory wall, though, is crumbling, and science is rejoicing.
The Future of Cannabis
Throughout history, humans have used cannabis as a medicine without the confirmation of methodical scientific studies. The Aka people of the Congo River basin, for example, use the drug to ward off intestinal worms. Anecdotally, cannabis is great for treating pain as well.
By the 1960s, the US government was using powerful mainframe computers to store and process an enormous amount of data on nearly every American. Corporations also used the machines to analyze sensitive information including consumer purchasing habits. There were no laws dictating what kind of data they could collect. Worries over supercharged surveillance soon emerged, especially after the publication of Vance Packard’s 1964 book, The Naked Society, which argued that technological change was causing the unprecedented erosion of privacy.
The next year, President Lyndon Johnson’s administration proposed merging hundreds of federal databases into one centralized National Data Bank. Congress, concerned about possible surveillance, pushed back and organized a Special Subcommittee on the Invasion of Privacy. Lawmakers worried the data bank, which would “pool statistics on millions of Americans,” could “possibly violate their secret lives,” The New York Times reported at the time. The project was never realized. Instead, Congress passed a series of laws governing the use of personal data, including the Fair Credit Reporting Act in 1970 and the Privacy Act in 1974. The regulations mandated transparency but did nothing to prevent the government and corporations from collecting information in the first place, argues technology historian Margaret O’Mara.
Toward the end of the 1960s, some scholars, including MIT political scientist Ithiel de Sola Pool, predicted that new computer technologies would continue to facilitate even more invasive personal data collection. The reality they envisioned began to take shape in the mid-1990s, when many Americans started using the internet. By the time most everyone was online, though, one of the first privacy battles over digital data brokers had already been fought: In 1990, Lotus Corporation and the credit bureau Equifax teamed up to create Lotus MarketPlace: Households, a CD-ROM marketing product that was advertised to contain names, income ranges, addresses, and other information about more than 120 million Americans. It quickly caused an uproar among privacy advocates on digital forums like Usenet; over 30,000 people contacted Lotus to opt out of the database. It was ultimately canceled before it was even released. But the scandal didn’t stop other companies from assembling massive data sets of consumer information.
Several years later, ads began permeating the web. In the beginning, online advertising remained largely anonymous. While you may have seen ads for skiing if you looked up winter sports, websites couldn’t connect you to your real identity. (HotWired.com, the online version of WIRED, was the first website to run a banner ad in 1994, as part of a campaign for AT&T.) Then, in 1999, digital ad giant DoubleClick ignited a privacy scandal when it tried to de-anonymize its ads by merging with the enormous data broker Abacus Direct.
Privacy groups argued that DoubleClick could have used personal information collected by the data broker to target ads based on people’s real names. They petitioned the Federal Trade Commission, arguing that the practice would amount to unlawful tracking. As a result, DoubleClick sold the firm at a loss in 2006, and the Network Advertising Initiative was created, a trade group that developed standards for online advertising, including requiring companies to notify users when their personal data is being collected.
The Future of Personal Data Collection
Personal information is currently collected primarily through screens, when people use computers and smartphones. The coming years will bring the widespread adoption of new data-guzzling devices, like smart speakers, sensor-embedded clothing, and wearable health monitors. Even those who refrain from using these devices will likely have their data gathered, by things like facial recognition-enabled surveillance cameras installed on street corners. In many ways, this future has already begun: Taylor Swift fans have had their face data collected, and Amazon Echos are listening in on millions of homes.
We haven’t decided, though, how to navigate this new data-filled reality. Should colleges be permitted to digitally track their teenage applicants? Do we really want health insurance companies monitoring our Instagram posts? Governments, artists, academics, and citizens will grapple with these questions, and plenty more.
And as scientists push the boundaries of what’s possible with artificial intelligence, we will also need to learn to make sense of personal data that isn’t even real, at least in that it didn’t come from humans. For example, algorithms are already generating “fake” data for other algorithms to train on. So-called deepfake technology allows propagandists and hoaxers to leverage social media photos to make videos depicting events that never happened. AI can now create millions of synthetic faces that don’t belong to anyone, altering the meaning of stolen identity. This fraudulent data could further distort social media and other parts of the internet. Imagine trying to discern whether a Tinder match or the person you followed on Instagram actually exists.
But all of these enterprises are businesses, not philanthropic vision boards. Is making life casually spacefaring and seriously interplanetary actually a plausible financial prospect? And—more important—is it actually a desirable one?
Let’s start with low-key suborbital space tourism, of the type Virgin Galactic and Blue Origin would like to offer. Some economists see this as fairly feasible: If we know one thing about the world, it’s that some subset of the population will always have too much money and will get to spend it on cool things unattainable for the plebs. If such flights become routine, though, their price could go down, and space tourism could follow the trajectory of the commercial aviation industry, which used to be for the wealthy and is now home to Spirit Airlines. Some also speculate that longer, orbital flights—and sleepovers in cushy six-star space hotels (the extra star is for the space part)—could follow.
After there’s a market for space hotels, more infrastructure could follow. And if you’re going to build something for space, it might be easier and cheaper to build it in space, with materials from space, rather than spending billions to launch all the materials you need. Maybe moon miners and manufacturers could establish a proto-colony, which could lead to some people living there permanently.
Or not. Who knows? I can’t see the future, and neither can you, and neither can these billionaires.
But with long journeys or permanent residence come problems more complicated than whether money is makeable or whether it’s possible to build a cute town square out of moon dust. The most complicated part of human space exploration will always be the human.
We weak creatures evolved in the environment of this planet. Mutations and adaptations cropped up to make us uniquely suited to living here—and so uniquely not suited to living in space, or in Valles Marineris. It’s too cold or too hot; there’s no air to breathe; you can’t eat potatoes grown in your own shit for the rest of your unnatural life. Your personal microbes may influence everything from digestion to immunity to mood, in ways scientists don’t yet understand, and although they also don’t understand how space affects that microbiome, it probably won’t be the same if you live on an extraterrestrial crater as it would be in your apartment.
Plus, in lower gravity, your muscles go slack. The fluids inside you pool strangely. Drugs don’t always work as expected. The shape of your brain changes. Your mind goes foggy. The backs of your eyeballs flatten. And then there’s the radiation, which can deteriorate tissue, cause cardiovascular disease, mess with your nervous system, give you cancer, or just induce straight-up radiation sickness till you die. If your body holds up, you still might lose it on your fellow crew members, get homesick (planetsick), and you will certainly be bored out of your skull on the journey and during the tedium and toil to follow.
Maybe there’s a technological future in which we can mitigate all of those effects. After all, many things that were once unimaginable—from vaccines to quantum mechanics—are now fairly well understood. But the billionaires don’t, for the most part, work on the people problems: When they speak of space cities, they leave out the details—and their money goes toward the physics, not the biology.
They also don’t talk so much about the cost or the ways to offset it. But Blue Origin and SpaceX both hope to collaborate with NASA (i.e. use federal money) for their far-off-Earth ventures, making this particular kind of private spaceflight more of a public-private partnership. They’ve both already gotten many millions in contracts with NASA and the Department of Defense for nearer-term projects, like launching national-security satellites and developing more infrastructure to do so more often. Virgin, meanwhile, has a division called Virgin Orbit that will send up small satellites, and SpaceX aims to create its own giant smallsat constellation to provide global internet coverage. And at least for the foreseeable future, it’s likely their income will continue to flow more from satellites than from off-world infrastructure. In that sense, even though they’re New Space, they’re just conventional government contractors.
So, if the money is steadier nearby, why look farther off than Earth orbit? Why not stick to the lucrative business of sending up satellites or enabling communications? Yes, yes, the human spirit. OK, sure, survivability. Both noble, energizing goals. But the backers may also be interested in creating international-waters-type space states, full of the people who could afford the trip (or perhaps indentured workers who will labor in exchange for the ticket). Maybe the celestial population will coalesce into a utopian society, free of the messes we’ve made of this planet. Humans could start from scratch somewhere else, scribble something new and better on extraterrestrial tabula rasa soil. Or maybe, as it does on Earth, history would repeat itself, and human baggage will be the heaviest cargo on the colonial ships. After all, wherever you go, there you are.
Maybe we’d be better off as a species if we stayed home and looked our problems straight in the eye. That’s the conclusion science fiction author Gary Westfahl comes to in an essay called “The Case Against Space.” Westfahl doesn’t think innovation happens when you switch up your surroundings and run from your difficulties, but rather when you stick around and deal with the situation you created.
Besides, most Americans don’t think big-shot human space travel is a national must-do at all, at least not with their money. According to a 2018 Pew poll, more than 60 percent of people say NASA’s top priorities should be to monitor the climate and watch for Earth-smashing asteroids. Just 18 and 13 percent think the same of a human trip to Mars or the moon, respectively. The People, in other words, are more interested in caring for this planet, and preserving the life on it, than they are in making some other world livable.
But maybe that doesn’t matter: History is full of billionaires who do what they want, and it’s full of societal twists and turns dictated by their direction. Besides, if even a fraction of a percent of the US population signed on to a long-term space mission, their spaceship would still carry the biggest extraterrestrial settlement ever to travel the solar system. And even if it wasn’t an oasis, or a utopia, it would still be a giant leap.
Last updated January 30, 2019
In the past five years, autonomous driving has gone from “maybe possible” to “definitely possible” to “inevitable” to “how did anyone ever think this wasn’t inevitable?” to “now commercially available.” In December 2018, Waymo, the company that emerged from Google’s self-driving-car project, officially started its commercial self-driving-car service in the suburbs of Phoenix. The details of the program—it’s available only to a few hundred vetted riders, and human safety operators will remain behind the wheel—may be underwhelming but don’t erase its significance. People are now paying for robot rides.
And it’s just a start. Waymo will expand the service’s capability and availability over time. Meanwhile, its onetime monopoly has evaporated. Smaller startups like May Mobility and Drive.ai are running small-scale but revenue-generating shuttle services. Every significant automaker is pursuing the tech, eager to rebrand and rebuild itself as a “mobility provider” before the idea of car ownership goes kaput. Ride-hailing companies like Lyft and Uber are hustling to dismiss the profit-gobbling human drivers who now shuttle their users about. Tech giants like Apple, IBM, and Intel are looking to carve off their slice of the pie. Countless hungry startups have materialized to fill niches in a burgeoning ecosystem, focusing on laser sensors, compressing mapping data, setting up service centers, and more.
This 21st-century gold rush is motivated by the intertwined forces of opportunity and survival instinct. By one account, driverless tech will add $7 trillion to the global economy and save hundreds of thousands of lives in the next few decades. Simultaneously, it could devastate the auto industry and its associated gas stations, drive-thrus, taxi drivers, and truckers. Some people will prosper. Most will benefit. Many will be left behind.
It’s worth remembering that when automobiles first started rumbling down manure-clogged streets, people called them horseless carriages. The moniker made sense: Here were vehicles that did what carriages did, minus the hooves. By the time “car” caught on as a term, the invention had become something entirely new. Over a century, it reshaped how humanity moves and thus how (and where and with whom) humanity lives. This cycle has restarted, and the term “driverless car” will soon seem as anachronistic as “horseless carriage.” We don’t know how cars that don’t need human chauffeurs will mold society, but we can be sure a similar gear shift is on the way.
The First Self-Driving Cars
Just over a decade ago, the idea of being chauffeured around by a string of zeros and ones was ludicrous to pretty much everybody who wasn’t at an abandoned Air Force base outside Los Angeles, watching a dozen driverless cars glide through real traffic. That event was the Urban Challenge, the third and final competition for autonomous vehicles put on by Darpa, the Pentagon’s skunkworks arm.
At the time, America’s military-industrial complex had already sunk vast sums and years of research into making unmanned trucks. It had laid a foundation for this technology, but stalled when it came to making a vehicle that could drive at practical speeds, through all the hazards of the real world. So, Darpa figured, maybe someone else—someone outside the DOD’s standard roster of contractors, someone not tied to a list of detailed requirements but striving for a slightly crazy goal—could put it all together. It invited the whole world to build a vehicle that could drive across California’s Mojave Desert, and whoever’s robot did it the fastest would get a million-dollar prize.
The 2004 Grand Challenge was something of a mess. Each team grabbed some combination of the sensors and computers available at the time, wrote their own code, and welded their own hardware, looking for the right recipe that would take their vehicle across 142 miles of sand and dirt of the Mojave. The most successful vehicle went just seven miles. Most crashed, flipped, or rolled over within sight of the starting gate. But the race created a community of people—geeks, dreamers, and lots of students not yet jaded by commercial enterprise—who believed the robot drivers people had been craving for nearly forever were possible, and who were suddenly driven to make them real.
They came back for a follow-up race in 2005 and proved that making a car drive itself was indeed possible: Five vehicles finished the course. By the 2007 Urban Challenge, the vehicles were not just avoiding obstacles and sticking to trails but following traffic laws, merging, parking, even making safe, legal U-turns.
When Google launched its self-driving car project in 2009, it started by hiring a team of Darpa Challenge veterans. Within 18 months, they had built a system that could handle some of California’s toughest roads (including the famously winding block of San Francisco’s Lombard Street) with minimal human involvement. A few years later, Elon Musk announced Tesla would build a self-driving system into its cars. And the proliferation of ride-hailing services like Uber and Lyft weakened the link between being in a car and owning that car, helping set the stage for a day when actually driving that car falls away too. In 2015, Uber poached dozens of scientists from Carnegie Mellon University—a robotics and artificial intelligence powerhouse—to get its effort going.
Digital data breaches started long before widespread use of the internet, yet they were similar in many respects to the leaks we see today. One early landmark incident occurred in 1984, when the credit reporting agency TRW Information Systems (now Experian) realized that one of its database files had been breached. The trove was protected by a numeric passcode that someone lifted from an administrative note at a Sears store and posted on an “electronic bulletin board”—a sort of rudimentary Google Doc that people could access and alter using their landline phone connection. From there, anyone who knew how to view the bulletin board could have used the password to access the data stored in the TRW file: personal data and credit histories of 90 million Americans. The password was exposed for a month. At the time, TRW said that it changed the database password as soon as it found out about the situation. Though the incident is dwarfed by last year’s breach of the credit reporting agency Equifax (discussed below), the TRW lapse was a warning to data firms everywhere—one that many clearly didn’t heed.
Large-scale breaches like the TRW incident occurred sporadically as years went by and the internet matured. By the early 2010s, as mobile devices and the Internet of Things greatly expanded interconnectivity, the problem of data breaches became especially urgent. Stealing username/password pairs or credit card numbers—even breaching a trove of data aggregated from already public sources—could give attackers the keys to someone’s entire online life. And certain breaches in particular helped fuel a growing dark web economy of stolen user data.
One of these incidents was a breach of LinkedIn in 2012 that initially seemed to expose 6.5 million passwords. The data was hashed, or cryptographically scrambled, as a protection to make it unintelligible and therefore difficult to reuse, but hackers quickly started “cracking” the hashes to expose LinkedIn users’ actual passwords. Though LinkedIn itself took precautions to reset impacted account passwords, attackers still got plenty of mileage out of them by finding other accounts around the web where users had reused the same password. That all too common lax password hygiene means a single breach can haunt users for years.
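The mechanics of that “cracking” are worth spelling out: the LinkedIn hashes were reportedly unsalted, meaning the same password always produces the same hash, so attackers can simply hash a wordlist of common passwords and look for matches. A minimal sketch of such a dictionary attack follows; the leaked hashes and wordlist here are invented for illustration, and SHA-1 is used as an example of a fast, crackable hash:

```python
import hashlib

# Hypothetical "leaked" unsalted SHA-1 hashes, for illustration only.
leaked_hashes = {
    hashlib.sha1(b"linkedin1").hexdigest(),
    hashlib.sha1(b"correcthorse").hexdigest(),
}

# A (tiny) wordlist of common password guesses.
wordlist = ["password", "123456", "linkedin1", "letmein"]

# Dictionary attack: hash each candidate and check for a match.
# Because the hashes are unsalted, one precomputed table of
# hash -> password cracks every account that chose that password.
cracked = {}
for candidate in wordlist:
    digest = hashlib.sha1(candidate.encode()).hexdigest()
    if digest in leaked_hashes:
        cracked[digest] = candidate

print(sorted(cracked.values()))  # the recovered passwords
```

This is also why salting (mixing a random per-user value into each hash) and deliberately slow hash functions matter: they force attackers to start the guessing work over for every single account instead of cracking millions at once.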
The LinkedIn hack also turned out to be even worse than it first appeared. In 2016 a hacker known as “Peace” started selling account information, particularly email addresses and passwords, from 117 million LinkedIn users. Data stolen from the LinkedIn breach has been repurposed and re-sold by criminals ever since, and attackers still have some success exploiting the data to this day, since so many people reuse the same passwords across numerous accounts for years.
Data breaches didn’t truly become dinner table fodder, though, until the end of 2013 and 2014, when major retailers Target, Neiman Marcus, and Home Depot suffered massive breaches one after the other. The Target hack, first publicly disclosed in December 2013, impacted the personal information (like names, addresses, phone numbers, and email addresses) of 70 million Americans and compromised 40 million credit card numbers. Just a few weeks later, in January 2014, Neiman Marcus admitted that its point-of-sale systems had been hit by the same malware that infected Target, exposing about 1.1 million credit and debit card numbers. Then, after months of fallout from those two breaches, Home Depot announced in September 2014 that hackers had stolen 56 million credit and debit card numbers from its systems by installing malware on the company’s payment terminals.
An even more devastating and sinister attack was taking place at the same time, though. The Office of Personnel Management is the administrative and HR department for US government employees. The department manages security clearances, conducts background checks, and keeps records on every past and present federal employee. If you want to know what’s going on inside the US government, this is the department to hack. So China did.
Hackers linked to the Chinese government infiltrated OPM’s network twice, first stealing the technical blueprints for the network in 2013, then initiating a second attack shortly thereafter in which they gained control of the administrative server that managed the authentication for all other server logins. In other words, by the time OPM fully realized what had happened and acted to remove the intruders in 2015, the hackers had been able to steal tens of millions of detailed records about every aspect of federal employees’ lives, including 21.5 million Social Security numbers and 5.6 million fingerprint records. In some cases, victims weren’t even federal employees, but were simply connected in some way to government workers who had undergone background checks. (Those checks include all sorts of extremely specific information, like maps of a subject’s family, friends, associates, and children.)
Pilfered OPM data never circulated online or showed up on the black market, likely because it was stolen for its intelligence value rather than its street value. Reports indicated that Chinese operatives may have used the information to supplement a database cataloging US citizens and government activity.
Today, data breaches are so common that the cybersecurity industry even has a phrase—“breach fatigue”—to describe the indifference that can come from such an overwhelming and seemingly hopeless string of events. And while tech companies, not to mention regulators, are starting to take data protection more seriously, the industry has yet to turn the corner. In fact, some of the most disheartening breaches yet have been disclosed in the last couple of years.
Yahoo lodged repeated contenders for the distinction of all-time biggest data breach when it made an extraordinary series of announcements beginning in September 2016. First, the company disclosed that an intrusion in 2014 compromised personal information from 500 million user accounts. Then, two months later, Yahoo added that it had suffered a separate breach in August 2013 that exposed a billion accounts. Sounds like a pretty unassailable lead in the race to the data-breach bottom, right? And yet! In October 2017, the company said that after further investigation it was revising its estimate of 1 billion accounts to 3 billion—or every Yahoo account that existed in August 2013.
There are few companies that even have billions of user accounts to lose, but there are still other ways for a breach to be worse than the Yahoo debacles. For example, the credit reporting agency Equifax disclosed a massive breach at the beginning of September 2017, which exposed personal information for 147.9 million people. The data included birth dates, addresses, some driver’s license numbers, about 209,000 credit card numbers, and Social Security numbers—meaning that almost half the US population potentially had their crucial secret identifier exposed. Because the information stolen from Equifax was so sensitive, it’s widely considered the worst corporate data breach ever. At least for now.