The WIRED Guide to Net Neutrality

The Bush-era FCC took a first pass at anti-discrimination rules for the internet in a policy statement in 2005. It prohibited internet service providers from blocking legal content or preventing customers from connecting the devices of their choosing to their internet connections. Under this policy, the FCC ordered Comcast in 2008 to stop slowing connections that used the peer-to-peer file-sharing software BitTorrent, which was often used for digital piracy but also had legitimate uses. Comcast sued the FCC, arguing the agency had overstepped its bounds. A federal court agreed, ruling that the FCC had failed to make the legal case that it had the authority to enforce the 2005 policy statement.

In 2010, the Obama-era FCC passed a more detailed net neutrality order that it hoped would stand up to legal scrutiny. But the agency was sued again, this time by Verizon, and in 2014 the same court ruled the agency didn’t have the authority to impose net neutrality regulations on services that weren’t considered common carriers under Title II of the Communications Act, like traditional telephone services.

Later that year, the FCC floated a new proposal that net neutrality proponents worried would allow internet “fast lanes.” The idea drew the ire of comedian John Oliver, who encouraged viewers of his show Last Week Tonight to file comments to express their support for net neutrality. The flood of comments crashed the FCC’s website. The agency eventually received 21.9 million comments on the issue, shattering the record previously held by Janet Jackson’s 2004 Super Bowl “wardrobe malfunction.”

Then-FCC chair Tom Wheeler eventually changed tack and decided to reclassify broadband providers as Title II carriers, though with fewer obligations than landline telephone operators. The FCC passed its sweeping net neutrality order in 2015, and was again sued by telecommunications firms. The same federal court that shot down the FCC’s previous attempts at net neutrality rules finally sided with the agency, ruling that the 2015 rules were legal. An industry group appealed that decision to the Supreme Court, which has yet to hear the case.

Meanwhile, control of the FCC changed as a result of the 2016 election. In January 2017, President Trump appointed Republican FCC Commissioner Ajit Pai as the agency’s new chair. In April, he announced a plan to reverse the 2015 net neutrality order. The FCC website was once again flooded with comments. But this time, observers noticed that a huge number of comments, many of which opposed net neutrality, were filed not by people but by bots.

The December 2017 FCC vote effectively threw out the 2015 rules in their entirety. The FCC’s new rules drop the common-carrier status for broadband providers, as well as any restrictions on blocking or throttling content. In place of those restrictions, the new rules only require that internet service providers disclose information about their network-management practices. It will now be up to the Federal Trade Commission to protect consumers from alleged net neutrality violations. But the FTC is only an enforcement agency: It can’t create new rules. That means that unless a net neutrality violation is also illegal under existing fair-competition laws, there’s not much the agency can do about it. Outright blocking a competitor may well be an antitrust violation, but creating fast lanes for companies that pay extra for special treatment might not be.

The Future of Net Neutrality

The future of net neutrality is now in the hands of Congress, the courts, and the states. Twenty-one state attorneys general sued the FCC in January 2018 to block the new rules and restore the old ones; so did several consumer-advocacy groups. A federal court decided mostly in the FCC’s favor in 2019 but ruled that the agency couldn’t override state-level net neutrality laws.

Several states have already passed such laws. Washington became the first in March 2018, and Oregon followed soon after. California passed one of the most comprehensive net neutrality laws of all, but the rules are currently on hold amid a legal challenge from the federal government. Governors of Hawaii, Montana, New Jersey, New York, and Vermont have signed executive orders banning state agencies from doing business with broadband providers that don’t uphold the principles of net neutrality.

In the meantime, you can expect broadband providers to slowly take advantage of their new freedom. They probably won’t take big overt steps to slow down or block competing services, especially not while courts are still deliberating the FCC’s latest decision. But you can expect to see more of the practices that carriers already employ, like letting their own content bypass data limits. For example, AT&T lets you watch its DirecTV Now video service without having it count against your data plan, but watching Netflix or Hulu still chews through your limit.

Learn More

  • Here’s How the End of Net Neutrality Will Change the Internet
    If you want to know more about what broadband providers are most likely to do once the net neutrality rules go away, start here. We take a deeper look at the ways companies already use data caps to shape your internet experience, and what clues these practices provide about what the future holds.

  • The Covid-19 Pandemic Shows the Virtues of Net Neutrality
    It might seem quaint to worry about net neutrality during the coronavirus pandemic. But the crisis has made the internet more important than ever, highlighted why people need unfettered access to content, and illustrated why key arguments against net neutrality just don’t hold up.

  • California Net Neutrality Bill Would Go Beyond Original Protections
    Several governors have signed executive orders to protect net neutrality, and both Oregon and Washington have passed their own rules as well. But so far no state’s protections are quite as robust as the Obama-era FCC rules. California could change that. The state passed the toughest net neutrality bill yet. It’s on hold pending the resolution of a challenge from the Department of Justice, but it sets an example for other states.

  • Why Trump Supporters Should Love Net Neutrality
    Net neutrality is a partisan issue in Washington, but it shouldn’t be. Here’s why conservatives should fight big cable and embrace net neutrality.

  • The FCC Says Net Neutrality Cripples Investment. That’s Not True
    We took a hard look at earnings reports from the broadband industry, and found that one of the biggest claims made by net neutrality opponents is false. In fact, some broadband providers actually invested more on infrastructure after the 2015 net neutrality rules passed.

  • How Bots Broke the FCC’s Public Comment System
    The FCC received an unprecedented number of comments on its plan to reverse its net neutrality protections. But researchers think the vast majority of those comments came from bots. We took a look at the evidence and what it means for the future of online debate.

  • FCC Plan to Kill Net Neutrality Rules Could Hurt Students
    Broadband plays a crucial role in education, from grade school to career retraining. Here’s how the end of net neutrality could set students back.

  • This Is Ajit Pai, Nemesis of Net Neutrality
    FCC chair Ajit Pai might be the most hated man on the internet. WIRED tracks his progression from nerdy high school student to policy wonk to head of the country’s top telecom regulator.

  • Plus! Local legislation and more WIRED net neutrality coverage.


This guide was last updated on May 4, 2020.

Enjoyed this deep dive? Check out more WIRED Guides.

What Is Net Neutrality? The Complete WIRED Guide

Net neutrality is the idea that internet service providers like Comcast and Verizon should treat all content flowing through their cables and cell towers equally. That means they shouldn’t be able to slide some data into “fast lanes” while blocking or otherwise discriminating against other material. In other words, these companies shouldn’t be able to block you from accessing a service like Skype, or slow down Netflix or Hulu, in order to encourage you to keep your cable package or buy a different video-streaming service.
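
To make “throttling” and “fast lanes” concrete, here is a toy sketch of how the same link capacity can be split evenly or unevenly across services. The service names, rates, and allocation logic are invented for illustration; no real provider manages traffic with a function like this.

```python
# Toy illustration only -- not any provider's actual traffic-management system.
# It shows, mechanically, what "throttling" and a paid "fast lane" mean: the
# same link capacity gets divided differently depending on which service the
# traffic belongs to. All names and numbers are made up.

LINK_CAPACITY_MBPS = 100

def neutral_share(active_services):
    """A neutral network splits capacity without regard to the service."""
    return {s: LINK_CAPACITY_MBPS / len(active_services) for s in active_services}

def paid_priority_share(active_services, fast_lane=("partner-video",), throttle_cap=5):
    """A non-neutral network caps everyone else and hands the rest to the fast lane."""
    shares = {}
    remaining = LINK_CAPACITY_MBPS
    for s in (s for s in active_services if s not in fast_lane):
        shares[s] = min(throttle_cap, remaining / len(active_services))
        remaining -= shares[s]
    prioritized = [s for s in active_services if s in fast_lane]
    for s in prioritized:
        shares[s] = remaining / len(prioritized)
    return shares

services = ["partner-video", "rival-video", "voip"]
print(neutral_share(services))        # every service gets ~33 Mbps
print(paid_priority_share(services))  # rivals capped at 5 Mbps; the partner gets ~90 Mbps
```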

The Federal Communications Commission spent years, under both the Bush and Obama administrations, trying to enforce net neutrality protections. After a series of legal defeats at the hands of broadband providers, the FCC passed a sweeping net neutrality order in 2015. But in December 2017, the now Republican-controlled FCC voted to jettison that order, freeing broadband providers to block or throttle content as they see fit unless Congress or the courts block the agency’s decision.

Net neutrality advocates have long argued that keeping the internet an open playing field is crucial for innovation. If broadband providers pick favorites online, new companies and technologies might never have the chance to grow. For example, had internet providers blocked or severely limited video streaming in the mid-2000s, we might not have Netflix or YouTube today. Other advocates highlight the importance of net neutrality to free expression: a handful of large telecommunications companies dominate the broadband market, which puts an enormous amount of power into their hands to suppress particular views or limit online speech to those who can pay the most.

Most large broadband providers promised not to block or throttle content ahead of the ruling, but all four major mobile carriers began slowing down at least some video content before the FCC had fully repealed the rules. Net neutrality advocates worry things could get worse. A broadband provider might, for example, allow some companies to pay for priority treatment on broadband networks. The fear is that, over time, companies and organizations that either can’t afford priority treatment, or simply aren’t offered access to it, will fall by the wayside.

The History of Net Neutrality

Columbia University law professor Tim Wu coined the term “network neutrality” in a 2003 paper about online discrimination. At the time, some broadband providers, including Comcast, banned home internet users from accessing virtual private networks (VPNs), while others, like AT&T, banned users from using Wi-Fi routers. Wu worried that broadband providers’ tendency to restrict new technologies would hurt innovation in the long term, and called for anti-discrimination rules.

The WIRED Guide to Robots

Modern robots are not unlike toddlers: It’s hilarious to watch them fall over, but deep down we know that if we laugh too hard, they might develop a complex and grow up to start World War III. None of humanity’s creations inspires such a confusing mix of awe, admiration, and fear: We want robots to make our lives easier and safer, yet we can’t quite bring ourselves to trust them. We’re crafting them in our own image, yet we are terrified they’ll supplant us.

But that trepidation is no obstacle to the booming field of robotics. Robots have finally grown smart enough and physically capable enough to make their way out of factories and labs to walk and roll and even leap among us. The machines have arrived.

You may be worried a robot is going to steal your job, and we get that. This is capitalism, after all, and automation is inevitable. But you may be more likely to work alongside a robot in the near future than have one replace you. And even better news: You’re more likely to make friends with a robot than have one murder you. Hooray for the future!

The History of Robots

The definition of “robot” has been confusing from the very beginning. The word first appeared in 1921, in Karel Capek’s play R.U.R., or Rossum’s Universal Robots. “Robot” comes from the Czech for “forced labor.” These robots were robots more in spirit than form, though. They looked like humans, and instead of being made of metal, they were made of chemical batter. The robots were far more efficient than their human counterparts, and also way more murder-y—they ended up going on a killing spree.

R.U.R. would establish the trope of the Not-to-Be-Trusted Machine (e.g., Terminator, The Stepford Wives, Blade Runner, etc.) that continues to this day—which is not to say pop culture hasn’t embraced friendlier robots. Think Rosie from The Jetsons. (Ornery, sure, but certainly not homicidal.) And it doesn’t get much family-friendlier than Robin Williams as Bicentennial Man.

The real-world definition of “robot” is just as slippery as those fictional depictions. Ask 10 roboticists and you’ll get 10 answers—how autonomous does it need to be, for instance. But they do agree on some general guidelines: A robot is an intelligent, physically embodied machine. A robot can perform tasks autonomously to some degree. And a robot can sense and manipulate its environment.

Think of a simple drone that you pilot around. That’s no robot. But give a drone the power to take off and land on its own and sense objects and suddenly it’s a lot more robot-ish. It’s the intelligence and sensing and autonomy that’s key.
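
The working definition above (an intelligent, embodied machine that senses its environment and acts with some autonomy) maps naturally onto a simple sense-decide-act loop. Here is a minimal, purely hypothetical sketch of that pattern; the Robot class and its fake obstacle sensor are illustrative stand-ins, not any real robot’s software.

```python
# Minimal sense/decide/act sketch of the working definition above.
# Everything here is hypothetical: the "sensor" is a coin flip and the
# "actuator" is a print statement.

import random

class Robot:
    def sense(self):
        # Stand-in for real sensors (camera, bump sensor, lidar).
        return {"obstacle_ahead": random.random() < 0.3}

    def decide(self, observation):
        # The autonomy lives here: the machine chooses its own action.
        return "turn" if observation["obstacle_ahead"] else "move forward"

    def act(self, action):
        # Stand-in for actuators (wheels, arms, grippers).
        print(f"executing: {action}")

robot = Robot()
for _ in range(5):  # a short control loop
    robot.act(robot.decide(robot.sense()))
```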

But it wasn’t until the 1960s that a company built something that started meeting those guidelines. That’s when SRI International in Silicon Valley developed Shakey, the first truly mobile and perceptive robot. This tower on wheels was well-named—awkward, slow, twitchy. Equipped with a camera and bump sensors, Shakey could navigate a complex environment. It wasn’t a particularly confident-looking machine, but it was the beginning of the robotic revolution.

Around the time Shakey was trembling about, robot arms were beginning to transform manufacturing. The first among them was Unimate, which welded auto bodies. Today, its descendants rule car factories, performing tedious, dangerous tasks with far more precision and speed than any human could muster. Even though they’re stuck in place, they still very much fit our definition of a robot—they’re intelligent machines that sense and manipulate their environment.

Robots, though, remained largely confined to factories and labs, where they either rolled about or were stuck in place lifting objects. Then, in the mid-1980s Honda started up a humanoid robotics program. It developed P3, which could walk pretty darn good and also wave and shake hands, much to the delight of a roomful of suits. The work would culminate in Asimo, the famed biped, which once tried to take out President Obama with a well-kicked soccer ball. (OK, perhaps it was more innocent than that.)

Today, advanced robots are popping up everywhere. For that you can thank three technologies in particular: sensors, actuators, and AI.

So, sensors. Machines that roll on sidewalks to deliver falafel can only navigate our world thanks in large part to the 2004 Darpa Grand Challenge, in which teams of roboticists cobbled together self-driving cars to race through the desert. Their secret? Lidar, which shoots out lasers to build a 3-D map of the world. The ensuing private-sector race to develop self-driving cars has dramatically driven down the price of lidar, to the point that engineers can create perceptive robots on the (relative) cheap.
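
To see how “shooting out lasers” turns into a 3-D map, here is a small sketch of the geometry: each pulse reports a range plus the direction it was fired in, and converting those readings to x, y, z coordinates builds up a point cloud. The sample readings are made up for illustration.

```python
# How lidar returns become a 3-D point cloud: each pulse gives a range and a
# firing direction (azimuth and elevation); basic trigonometry converts that
# to an x, y, z point. The readings below are invented sample values.

import math

def to_cartesian(range_m, azimuth_deg, elevation_deg):
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# (range in meters, azimuth in degrees, elevation in degrees)
readings = [(12.0, 0, 0), (12.4, 10, 0), (3.1, 45, -5), (30.0, 90, 2)]
point_cloud = [to_cartesian(*r) for r in readings]
for point in point_cloud:
    print(tuple(round(coord, 2) for coord in point))
```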

Lidar is often combined with something called machine vision—2-D or 3-D cameras that allow the robot to build an even better picture of its world. You know how Facebook automatically recognizes your mug and tags you in pictures? Same principle with robots. Fancy algorithms allow them to pick out certain landmarks or objects.

Sensors are what keep robots from smashing into things. They’re why a robot mule of sorts can keep an eye on you, following you and schlepping your stuff around; machine vision also allows robots to scan cherry trees to determine where best to shake them, helping fill massive labor gaps in agriculture.

The WIRED Guide to Climate Change

One of the more visible consequences of climate change is playing out in California. In recent years, wildfires have grown measurably more intense thanks to climate change. They burn hotter and faster, leading to catastrophes like the Camp Fire north of San Francisco, which virtually obliterated the 27,000-person town of Paradise, becoming the deadliest and most destructive wildfire in state history. It’s a matter of terrible timing: Usually California gets at least a little bit of rain in the fall that rehydrates vegetation, but no longer. This dryness coincides with seasonal winds that whip in from the east, further drying out vegetation and providing a turbo boost for fires.

Then in late 2019, Australia suffered an unprecedentedly brutal fire season, giving the world perhaps the most dramatic manifestation of the climate crisis to date. In an average year, 1 percent of Australia’s famous eucalyptus forests might burn. But in the 2019–2020 fire season, 21 percent went up in flames, obliterating whole ecosystems and likely dooming many species to extinction. The conflagrations were so severe that models hadn’t predicted a fire season like it for another 80 years.

Meanwhile, at the Earth’s poles, landscapes are transforming quickly and dramatically. The Arctic and Antarctica are warming twice as fast as the rest of the planet, which is of course leading to the rapid melting of glaciers, which in turn raises sea levels. But the land itself is also in literal upheaval. As frozen soils known as permafrost rapidly thaw, massive holes are opening up in the Arctic (https://www.wired.com/story/abrupt-permafrost-thaw/). This releases CO2 and the even more potent greenhouse gas methane, kicking off a terrible feedback loop: More emissions from the Arctic landscape mean more warming, more thawing, and more emissions.

As glaciers continue to pour meltwater into the ocean, sea levels are quickly climbing. And it’s not just the volume of extra water in the oceans that people have to worry about: As water warms, it expands, pushing sea levels even higher. Miami is already seeing more severe flooding, and on the other side of the world in Indonesia, Jakarta is both drowning in rising seas and sinking because the city has pumped up too much groundwater, leaving the land to collapse like an empty water bottle.

All of this has led 97 percent of climate scientists to agree that warming trends are very likely the result of human activity. And in 1988, that bulk of research led to the founding of the United Nations’ Intergovernmental Panel on Climate Change, which has now issued five assessment reports documenting all the available scientific, technical, and economic information on climate change. The fourth report, in 2007, was the first to clearly state that the climate was unequivocally warming—and that human-created greenhouse gases were very likely to blame.

Just because the panel came to a consensus doesn’t mean everyone else did, though. In 2009, climate scientists had their own WikiLeaks scandal, when climate deniers released a trove of emails from scientists, including the one behind the famous 1999 “hockey stick” graph showing a sharp upturn in global temperature after the Industrial Revolution—one that was clearly sharper than the many global warmings and coolings the Earth has seen. Excerpts taken out of context from those emails appeared to show that researcher, Michael Mann, conspiring to statistically manipulate his data. Placed back in context, they showed no such thing.

Political controversy has continued to call into question scientists’ consensus on data supporting the concept of human-caused climate change, motivated by the financial incentives of the fossil fuel industry. But in 2015, the world’s leaders appeared to transcend those squabbles. On December 12, after two weeks of deliberations at the 21st United Nations Conference on Climate Change in Le Bourget, France, 195 countries agreed on the language in what’s known as the Paris agreement. The goal is to keep average global temperature increase to below 2 degrees Celsius above pre-Industrial levels, and as close to 1.5 degrees as possible. It does so by having each country submit a commitment to reduce emissions and collectively bear the economic burden of a shift from fossil fuels—while acknowledging that developing nations would be denied a certain amount of growth if they had to give up cheap energy.

Climate Change: The Complete WIRED Guide

The world is busted. For decades, scientists have carefully accumulated data that confirms what we hoped wasn’t true: The greenhouse gas emissions that have steadily spewed from cars and planes and factories, the technologies that powered a massive period of economic growth, came at an enormous cost to the planet’s health. Today, we know that, absent any change in our behavior, the average global temperature will rise as much as 4 degrees Celsius by the end of the century. Global sea levels will rise by up to 6 feet. Along with those shifts will come radical changes in weather patterns around the globe, leaving coastal communities and equatorial regions forever changed—and potentially uninhabitable.

Strike that. We are already seeing the effects of a dramatically changed climate, from extended wildfire seasons to worsening storm surges. Now, true, any individual weather anomaly is unlikely to be solely the result of industrial emissions, and maybe your particular part of the world has been spared so far. But that’s little solace when the historical trends are so terrifyingly real. (Oh, and while it used to take mathematicians months to calculate how the odds of specific extreme weather events were affected by humans, they’ve knocked that data-crunching time down to weeks.)

Thankfully, it seems most of the world’s nation-states are beyond quibbling over the if of climate change—they’re moving rapidly on to the what now? The 2015 Paris climate agreement marked a turning point in the conversation about planetary pragmatics. Renewable energy in the form of wind and solar is actually becoming competitive with fossil fuels. And the world’s biggest cities are driving sustainable policy choices in a way that rivals the contributions of some countries. Scientists and policymakers are also beginning to explore a whole range of last-ditch efforts—we’re talking some serious sci-fi stuff here—to deliberately, directly manipulate the environment. To keep the climate livable, we may need to prepare for a new era of geoengineering.

How this Global Climate Shift Got Started

If we want to go all the way back to the beginning, we could take you to the Industrial Revolution—the point after which climate scientists start to see a global shift in temperature and atmospheric carbon dioxide levels. In the late 1700s, as coal-fired factories started churning out steel and textiles, the United States and other developed nations began pumping out their byproducts. Coal is a carbon-rich fuel, so when it combusts with oxygen, it produces heat along with another byproduct: carbon dioxide. Other carbon-based fuels, like natural gas, do the same in different proportions.

When those emissions entered the atmosphere, they acted like an insulating blanket, preventing the sun’s heat from escaping into space. Over the course of history, atmospheric carbon dioxide levels have varied—a lot. Models of ancient climate activity, hundreds of millions of years back, put carbon dioxide levels as high as several thousand parts per million. In the past half-million years or so, they’ve fluctuated between about 180 and 300 parts per million. But they haven’t fluctuated this fast. Today, atmospheric CO2 is at 407 ppm—roughly one and a half times as high as it was just two centuries ago. And we know for certain that extra greenhouse gas is from humans; analysis of the carbon isotopes in the atmosphere shows that the majority of the extra CO2 comes from fossil fuels.
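
As a quick sanity check on the “one and a half times” figure: the guide gives today’s level as 407 ppm, and the commonly cited preindustrial baseline of roughly 280 ppm (an assumption added here, not a number from the guide) puts the ratio at about 1.45.

```python
# Back-of-the-envelope check of the ratio quoted above.
today_ppm = 407            # figure given in the guide
preindustrial_ppm = 280    # assumed baseline (~two centuries ago), not stated in the guide

print(round(today_ppm / preindustrial_ppm, 2))  # 1.45 -- "roughly one and a half times"
```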

[Figure: Radiation from the sun hits the Earth’s atmosphere. Some of it travels down to warm the Earth’s surface, while some of it bounces right back into space. Some of the energy, though, is absorbed by molecules of greenhouse gases—carbon dioxide, water, methane, and nitrous oxide—that prevent it from escaping. Over time, the trapped energy contributes to global warming.]
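
A back-of-the-envelope energy balance (not part of the guide, but standard physics) shows why trapped outgoing energy means a warmer surface: without any greenhouse absorption, balancing incoming sunlight would leave the planet at roughly 255 K, about 33 degrees colder than the observed global mean surface temperature of about 288 K.

```python
# Simple zero-dimensional energy balance: the temperature at which outgoing
# thermal radiation equals absorbed sunlight, ignoring greenhouse absorption.
SOLAR_CONSTANT = 1361   # W/m^2 reaching the top of the atmosphere
ALBEDO = 0.3            # fraction of sunlight reflected back to space
SIGMA = 5.67e-8         # Stefan-Boltzmann constant, W/m^2/K^4

absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4    # averaged over the whole sphere
t_no_greenhouse = (absorbed / SIGMA) ** 0.25    # ~255 K, about -18 C
print(round(t_no_greenhouse, 1), "K without greenhouse absorption")
print(round(288 - t_no_greenhouse), "K of warming supplied by greenhouse gases")
```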

The result: extreme weather. There’s global warming, of course; the Earth’s average temperature has increased 1.1 degrees Celsius since the late 19th century. But it goes further. As oceans absorb heat and polar ice sheets melt, hurricane seasons become more severe as warm water from the oceans kicks warm, moist air into the atmosphere. Sea levels rise—about 8 inches in the past century. Critically, the rate of these changes is increasing.
