The inside of a tokamak—the donut-shaped vessel designed to contain a nuclear fusion reaction—presents a special kind of chaos. Hydrogen atoms are smashed together at unfathomably high temperatures, creating a whirling, roiling plasma that’s hotter than the surface of the sun. Finding smart ways to control and confine that plasma will be key to unlocking the potential of nuclear fusion, which has been mooted as the clean energy source of the future for decades. At this point, the science underlying fusion seems sound, so what remains is an engineering challenge. “We need to be able to heat this matter up and hold it together for long enough for us to take energy out of it,” says Ambrogio Fasoli, director of the Swiss Plasma Center at École Polytechnique Fédérale de Lausanne.
That’s where DeepMind comes in. The artificial intelligence firm, backed by Google parent company Alphabet, has previously turned its hand to video games and protein folding, and has been working on a joint research project with the Swiss Plasma Center to develop an AI for controlling a nuclear fusion reaction.
In stars, which are also powered by fusion, the sheer gravitational mass is enough to pull hydrogen atoms together and overcome their opposing charges. On Earth, scientists instead use powerful magnetic coils to confine the nuclear fusion reaction, nudging it into the desired position and shaping it like a potter manipulating clay on a wheel. The coils have to be carefully controlled to prevent the plasma from touching the sides of the vessel: contact can damage the walls and slow down the fusion reaction. (There’s little risk of an explosion, as the fusion reaction cannot survive without magnetic confinement.)
But every time researchers want to change the configuration of the plasma and try out different shapes that may yield more power or a cleaner plasma, a huge amount of engineering and design work is needed. Conventional systems are computer-controlled and based on models and careful simulations, but they are, Fasoli says, “complex and not always necessarily optimized.”
DeepMind has developed an AI that can control the plasma autonomously. A paper published in the journal Nature describes how researchers from the two groups taught a deep reinforcement learning system to control the 19 magnetic coils inside TCV, the variable-configuration tokamak at the Swiss Plasma Center, which is used to carry out research that will inform the design of bigger fusion reactors in future. “AI, and specifically reinforcement learning, is particularly well suited to the complex problems presented by controlling plasma in a tokamak,” says Martin Riedmiller, control team lead at DeepMind.
The neural network—a type of AI setup designed to mimic the architecture of the human brain—was initially trained in a simulation. It started by observing how changing the settings on each of the 19 coils affected the shape of the plasma inside the vessel. Then it was given different shapes to try to recreate in the plasma. These included a D-shaped cross-section close to what will be used inside ITER (formerly the International Thermonuclear Experimental Reactor), the large-scale experimental tokamak under construction in France, and a snowflake configuration that could help dissipate the intense heat of the reaction more evenly around the vessel.
DeepMind’s neural network was able to manipulate the plasma inside a fusion reactor into a number of different shapes that fusion researchers have been exploring. Illustration: DeepMind & SPC/EPFL
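The structure of that training loop is easier to see in code. The paper’s controller is a deep neural network trained with reinforcement learning against a full physics simulator; none of that is reproduced here. The sketch below is a minimal stand-in, assuming a toy linear “plasma” and crude random-search policy improvement in place of deep RL, but the loop has the same shape: observe the plasma, command the 19 coils, and score the result against a target shape.

```python
import numpy as np

rng = np.random.default_rng(0)
N_COILS, OBS_DIM = 19, 32

# Toy stand-in for the tokamak simulator: a fixed linear plasma response
# plus sensor noise. The real project trained against a physics model.
RESPONSE = rng.normal(size=(OBS_DIM, N_COILS))

def plasma_shape(currents):
    """Observed plasma boundary for a given set of coil currents."""
    return RESPONSE @ currents + 0.01 * rng.normal(size=OBS_DIM)

target = plasma_shape(rng.normal(size=N_COILS))  # a reachable target shape

def episode_return(policy):
    """Run one control episode; reward is negative shape error."""
    currents = np.zeros(N_COILS)
    total = 0.0
    for _ in range(20):
        error = plasma_shape(currents) - target
        total -= np.mean(error ** 2)
        currents = currents + policy @ error  # feedback action on 19 coils
    return total

# Crude policy search: perturb the linear policy, keep improvements.
# Deep RL replaces this with gradient-based updates to a neural network.
policy = np.zeros((N_COILS, OBS_DIM))
best = episode_return(policy)
for _ in range(3000):
    candidate = policy + 0.02 * rng.normal(size=policy.shape)
    score = episode_return(candidate)
    if score > best:
        policy, best = candidate, score

print(f"best episode return: {best:.3f}")
```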
DeepMind’s AI was able to autonomously figure out how to create these shapes by manipulating the magnetic coils in the right way—both in the simulation, and when the scientists ran the same experiments for real inside the TCV tokamak to validate the simulation. It represents a “significant step,” says Fasoli, one that could influence the design of future tokamaks or even speed up the path to viable fusion reactors. “It’s a very positive result,” says Yasmin Andrew, a fusion specialist at Imperial College London, who was not involved in the research. “It will be interesting to see if they can transfer the technology to a larger tokamak.”
Fusion offered a particular challenge to DeepMind’s scientists because the process is both complex and continuous. Unlike a turn-based game like Go, which the company has famously conquered with its AlphaGo AI, the state of a plasma constantly changes. And to make things even harder, it can’t be continuously measured. It is what AI researchers call an “under-observed system.”
“Sometimes algorithms which are good at these discrete problems struggle with such continuous problems,” says Jonas Buchli, a research scientist at DeepMind. “This was a really big step forward for our algorithm because we could show that this is doable. And we think this is definitely a very, very complex problem to be solved. It is a different kind of complexity than what you have in games.”
The character of conflict between nations has fundamentally changed. Governments and militaries now fight on our behalf in the “gray zone,” where the boundaries between peace and war are blurred. They must navigate a complex web of ambiguous and deeply interconnected challenges, ranging from political destabilization and disinformation campaigns to cyberattacks, assassinations, proxy operations, election meddling, and perhaps even human-made pandemics. Add to this list the existential threat of climate change (and its geopolitical ramifications) and it is clear that the definition of what constitutes a national security issue has broadened, each crisis straining or degrading the fabric of national resilience.
Traditional analysis tools are poorly equipped to predict and respond to these blurred and intertwined threats. Instead, in 2022 governments and militaries will use sophisticated and credible real-life simulations, putting software at the heart of their decision-making and operating processes. The UK Ministry of Defence, for example, is developing what it calls a military Digital Backbone. This will incorporate cloud computing, modern networks, and a new transformative capability called a Single Synthetic Environment, or SSE.
This SSE will combine artificial intelligence, machine learning, computational modeling, and modern distributed systems with trusted data sets from multiple sources to support detailed, credible simulations of the real world. This data will be owned by critical institutions, but will also be sourced via an ecosystem of trusted partners, such as the Alan Turing Institute.
An SSE offers a multilayered simulation of a city, region, or country, including high-quality mapping and information about critical national infrastructure, such as power, water, transport networks, and telecommunications. This can then be overlaid with other information, such as smart-city data, information about military deployments, or data gleaned from social listening. From this, models can be constructed that give a rich, detailed picture of how a region or city might react to a given event: a disaster, an epidemic, or a cyberattack, or a combination of such events orchestrated by hostile states.
Defense synthetics are not a new concept. However, previous solutions have been built in a standalone way that limits reuse, longevity, choice, and—crucially—the speed of insight needed to effectively counteract gray-zone threats.
National security officials will be able to use SSEs to identify threats early, understand them better, explore their response options, and analyze the likely consequences of different actions. They will even be able to use them to train, rehearse, and implement their plans. By running thousands of simulated futures, senior leaders will be able to grapple with complex questions, refining policies and complex plans in a virtual world before implementing them in the real one.
One key question that will only grow in importance in 2022 is how countries can best secure their populations and supply chains against dramatic weather events driven by climate change. SSEs will be able to help answer this by combining data on regional infrastructure, networks, roads, and populations with meteorological models to see how and when events might unfold.
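What “running thousands of simulated futures” means in practice is easiest to show with a toy Monte Carlo sketch. Nothing below comes from the MoD’s SSE, which is not public; the scenario, parameters, and numbers are invented purely to illustrate how simulated futures let planners compare response options by the distribution of outcomes rather than a single forecast.

```python
import random

random.seed(0)

def simulate_storm(extra_crews):
    """One hypothetical future: a storm hits a region's power grid."""
    severity = max(0.0, random.gauss(1.0, 0.4))       # storm intensity
    substations_out = max(0.0, severity - 0.6) * 100  # grid damage
    repair_rate = 8 + 4 * extra_crews                 # repairs per day
    return substations_out / repair_rate              # days without power

def run_futures(extra_crews, n=10_000):
    """Summarize the outcome distribution across n simulated futures."""
    days = sorted(simulate_storm(extra_crews) for _ in range(n))
    return {"median_days_dark": round(days[n // 2], 2),
            "worst_5_percent": round(days[int(n * 0.95)], 2)}

# Compare two hypothetical plans: no pre-positioned repair crews vs. three.
for plan in (0, 3):
    print(f"extra_crews={plan}: {run_futures(plan)}")
```

The point of the exercise is the comparison: a planner reads off not just the average outcome of each option but its tail risk, before committing to a plan in the real world.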
The future has a history. The good news is that it’s one from which we can learn; the bad news is that we very rarely do. That’s because the clearest lesson from the history of the future is that knowing the future isn’t necessarily very useful. But that has yet to stop humans from trying.
Take Peter Turchin’s famed prediction for 2020. In 2010 he used cliodynamics, his quantitative analysis of history, to predict that the West would experience political chaos a decade later. Unfortunately, no one was able to act on that prophecy in order to prevent damage to US democracy. And of course, if they had, Turchin’s prediction would have been relegated to the ranks of failed futures. This situation is not an aberration.
Rulers from Mesopotamia to Manhattan have sought knowledge of the future in order to obtain strategic advantages—but time and again, they have failed to interpret it correctly, or they have failed to grasp either the political motives or the speculative limitations of those who proffer it. More often than not, they have also chosen to ignore futures that force them to face uncomfortable truths. Even the technological innovations of the 21st century have failed to change these basic problems—the results of computer programs are, after all, only as accurate as their data input.
There is an assumption that the more scientific the approach to predictions, the more accurate forecasts will be. But this belief causes more problems than it solves, not least because it often either ignores or excludes the lived diversity of human experience. Despite the promise of more accurate and intelligent technology, there is little reason to think the increased deployment of AI in forecasting will make prognostication any more useful than it has been throughout human history.
People have long tried to find out more about the shape of things to come. These efforts, while aimed at the same goal, have differed across time and space in several significant ways, with the most obvious being methodology—that is, how predictions were made and interpreted. Since the earliest civilizations, the most important distinction in this practice has been between individuals who have an intrinsic gift or ability to predict the future, and systems that provide rules for calculating futures. The predictions of oracles, shamans, and prophets, for example, depended on the capacity of these individuals to access other planes of being and receive divine inspiration. Strategies of divination such as astrology, palmistry, numerology, and Tarot, however, depend on the practitioner’s mastery of a complex theoretical rule-based (and sometimes highly mathematical) system, and their ability to interpret and apply it to particular cases. Interpreting dreams or the practice of necromancy might lie somewhere between these two extremes, depending partly on innate ability, partly on acquired expertise. And there are plenty of examples, in the past and present, that involve both strategies for predicting the future. Any internet search on “dream interpretation” or “horoscope calculation” will throw up millions of hits.
In the last century, technology legitimized the latter approach, as developments in IT (predicted, at least to some extent, by Moore’s law) provided more powerful tools and systems for forecasting. In the 1940s, the analog computer MONIAC had to use actual tanks and pipes of colored water to model the UK economy. By the 1970s, the Club of Rome could turn to the World3 computer simulation to model the flow of energy through human and natural systems via key variables such as industrialization, environmental loss, and population growth. Its report, Limits to Growth, became a best seller, despite the sustained criticism it received for the assumptions at the core of the model and the quality of the data that was fed into it.
At the same time, rather than depending on technological advances, other forecasters have turned to the strategy of crowdsourcing predictions of the future. Polling public and private opinions, for example, depends on something very simple—asking people what they intend to do or what they think will happen. It then requires careful interpretation, whether based on quantitative analysis (like polls of voter intention) or qualitative analysis (like the Rand Corporation’s Delphi technique). The latter strategy harnesses the wisdom of highly specific crowds. Assembling a panel of experts to discuss a given topic, the thinking goes, is likely to be more accurate than individual prognostication.
One of the streaming music apps I use creates customized playlists for me, and it’s scarily good at predicting songs I’m going to like. Does that make me boring?
—Playing It Safe
Dear Playing It Safe,
I once read somewhere that if you want to slowly drive someone mad, resolve, for a week or so, to occasionally mutter, “I knew you were going to say that” after they make some casual remark. The logic, as far as I can tell, is that by convincing a person that their thoughts are entirely predictable, you steadily erode their sense of agency until they can no longer conceive of themselves as an autonomous being. I have no idea whether this actually works—I’ve never been sadistic enough to try it. But if its premise is correct, we all must be slowly losing our minds. How many times a day are we reminded that our actions can be precisely anticipated? Predictive text successfully guesses how we’re going to respond to emails. Amazon suggests the very book that we’ve been meaning to read. It’s rare these days to finish typing a Google query before autocomplete finishes our thought, a reminder that our medical anxieties, our creative projects, and our relationship dilemmas are utterly unoriginal.
For those of us raised in the crucible of late-capitalist individualism, we who believe our souls to be as unique as our thumbprints and as unduplicable as a snowflake, the idea that our interests fall into easily discernible patterns is deeply, perhaps even existentially, unsettling. In fact, Playing It Safe, I’m willing to bet that your real anxiety is not that you’re boring but that you’re not truly free. If your taste can be so easily inferred from your listening history and the data streams of “users like you” (to borrow the patronizing argot of prediction engines), are you actually making a choice? Is it possible that your ineffable and seemingly spontaneous delight at hearing that Radiohead song you loved in college is merely the inflexible mathematical endpoint of the vector of probabilities that have determined your personality since birth?
While this anxiety may feel new, it stems from a much older problem about prediction and personal freedom, one that first emerged in response to the belief in divine foreknowledge. If God can see the future with perfect accuracy, then aren’t human actions necessarily predetermined? How could we act otherwise? A scientific version of the problem was posed by the 19th-century French physicist Pierre-Simon Laplace, who imagined a cosmic superintelligence that knew every detail about the universe, down to the exact position of all its atoms. If this entity (now known as Laplace’s demon) understood everything about the present world and possessed an intellect “vast enough to submit the data to analysis,” it could perfectly predict the future, revealing that all events, including our own actions, belong to a long domino chain of cause-and-effect that extends back to the birth of the universe.
The algorithm that predicts your musical preferences is less sophisticated than the cosmic intellect Laplace had in mind. But it still reveals, to a lesser degree, the extent to which your actions are constrained by your past choices and certain generalized probabilities of human behavior. And it’s not difficult to extrapolate what predictive technologies might expose about our sense of agency once they become even better at anticipating our actions and emotional states—perhaps even surpassing our own self-knowledge. Will we accept their recommendations for whom to marry, or whom to vote for, just as we now do their suggestions for what to watch and what to read? Will police departments arrest likely criminals before they commit the crime, as in Minority Report, tipped off by the oracular predictions of digital precogs? Several years ago, Amazon filed a patent for “anticipatory shipping,” banking on the hope that the company would soon be able to correctly guess our orders (and start preparing them for dispatch) before we made the purchase.
If the revelation of your own dullness is merely the first stirrings of this new reality, how should you respond? One option would be to rebel and try to prove its assumptions false. Act out of character. When you have an inclination to do something, do the precise opposite. Listen to music you hate. Make choices that will reroute your data stream. This is the solution arrived at by Dostoevsky’s narrator in Notes From the Underground, who takes up irrational and self-damaging actions simply to prove that he is not enslaved to the inflexible calculations of rational self-interest. The novel was written during the heyday of rational egoism, when certain utopian thinkers believed that human behavior could be reduced to a series of logical rules so as to maximize well-being and create the ideal society. The narrator insists that most people would find such a world intolerable because it would destroy their belief in individual freedom. We value our autonomy over all the comforts and the advantages that scientific determinism offers—so much so, he argues, that we would seek out absurdity or even self-harm in order to prove that we are free. If science ever definitively proves that humans act according to these fatalistic rules, we would destroy ourselves “for the sole purpose of sending all these logarithms to the devil and living once more according to our own stupid will!”
It’s a rousing passage, though as predictions go it’s not especially prescient. Few of us today appear to be tormented by the comforts of predictive analytics. In fact, the conveniences they offer are deemed so desirable that we often collude with them. On Spotify, we “like” the songs we enjoy, contributing one more shard to the emerging mosaic of our digital personhood. On TikTok, we quickly scroll past posts that don’t reflect our dominant interests, lest the all-seeing algorithm mistake our curiosity for invested interest. Perhaps you have paused, once or twice, before watching a Netflix film that diverges from your usual taste, or hesitated before Googling a religious question, lest it take you for a true believer and skew your future search results. If you want to optimize your recommendations, the best thing to do is to act as much like “yourself” as possible, to remain resolutely and eternally in character—which is to say, to act in a way that is entirely contrary to the real complexities of human nature.
With that said, I don’t advise embracing the irrational or acting against your own interests. It will not make you happy, nor will it prove a point. Randomness is a poor substitute for genuine freedom. Instead, perhaps you should reconsider the unstated premise of your query, which is that your identity is defined by your consumer choices. Your fear that you’ve become boring might have less to do with your supposedly vanilla taste than the fact that these platforms have conditioned us to see our souls through the lens of formulaic categories that are designed to be legible to advertisers. It’s all too easy to mistake our character for the bullet points that grace our bios: our relationship status, our professional affiliations, the posts and memes and threads that we’ve liked, the purchases we’ve made, and the playlists we’ve built.
A complication of infection known as sepsis is the number one killer in US hospitals. So it’s not surprising that more than 100 health systems use an early warning system offered by Epic Systems, the dominant provider of US electronic health records. The system throws up alerts based on a proprietary formula tirelessly watching for signs of the condition in a patient’s test results.
But a new study using data from nearly 30,000 patients in University of Michigan hospitals suggests Epic’s system performs poorly. The authors say it missed two-thirds of sepsis cases, rarely found cases medical staff did not notice, and frequently issued false alarms.
Karandeep Singh, an assistant professor at the University of Michigan who led the study, says the findings illustrate a broader problem with the proprietary algorithms increasingly used in health care. “They’re very widely used, and yet there’s very little published on these models,” Singh says. “To me that’s shocking.”
The study was published Monday in JAMA Internal Medicine. An Epic spokesperson disputed the study’s conclusions, saying the company’s system has “helped clinicians save thousands of lives.”
Epic’s is not the first widely used health algorithm to trigger concerns that technology supposed to improve health care is not delivering, or is even actively harmful. In 2019, a system used on millions of patients to prioritize access to special care for people with complex needs was found to lowball the needs of Black patients compared to white patients. That prompted some Democratic senators to ask federal regulators to investigate bias in health algorithms. A study published in April found that statistical models used to predict suicide risk in mental health patients performed well for white and Asian patients but poorly for Black patients.
The way sepsis stalks hospital wards has made it a special target of algorithmic aids for medical staff. Guidelines from the Centers for Disease Control and Prevention to health providers on sepsis encourage use of electronic medical records for surveillance and predictions. Epic has several competitors offering commercial warning systems, and some US research hospitals have built their own tools.
Automated sepsis warnings have huge potential, Singh says, because key symptoms of the condition, such as low blood pressure, can have other causes, making it difficult for staff to spot early. Starting sepsis treatment such as antibiotics just an hour sooner can make a big difference to patient survival. Hospital administrators often take special interest in sepsis response, in part because it contributes to US government hospital ratings.
Singh runs a lab at Michigan researching applications of machine learning to patient care. He got curious about Epic’s sepsis warning system after being asked to chair a committee at the university’s health system created to oversee uses of machine learning.
As Singh learned more about the tools in use at Michigan and other health systems, he became concerned that they mostly came from vendors that disclosed little about how they worked or performed. Michigan’s own health system had a license to use Epic’s sepsis prediction model, which the company told customers was highly accurate. But there had been no independent validation of its performance.
Singh and Michigan colleagues tested Epic’s prediction model on records for nearly 30,000 patients covering almost 40,000 hospitalizations in 2018 and 2019. The researchers noted how often Epic’s algorithm flagged people who developed sepsis as defined by the CDC and the Centers for Medicare and Medicaid Services. And they compared the alerts that the system would have triggered with sepsis treatments logged by staff, who did not see Epic sepsis alerts for patients included in the study.
The researchers say their results suggest Epic’s system wouldn’t make a hospital much better at catching sepsis and could burden staff with unnecessary alerts. The company’s algorithm did not identify two-thirds of the roughly 2,500 sepsis cases in the Michigan data. It would have alerted for 183 patients who developed sepsis but had not been given timely treatment by staff.
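The arithmetic behind findings like “missed two-thirds of sepsis cases” comes down to a confusion matrix over hospitalizations. Here is a minimal sketch of how such an evaluation is scored; the flags are synthetic, the 33 percent catch rate and 15 percent false-alarm rate are assumptions chosen only for illustration, and only the rough case and hospitalization counts echo the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-hospitalization flags, sized to the article's figures
# (~2,500 sepsis cases among ~40,000 hospitalizations).
n = 40_000
sepsis = rng.random(n) < 2_500 / n
# Hypothetical alert behavior: fires for ~33% of true cases and ~15% of
# everyone else. These rates are illustrative assumptions, not Epic's.
alert = np.where(sepsis, rng.random(n) < 0.33, rng.random(n) < 0.15)

tp = np.sum(alert & sepsis)    # true positives: alerted, developed sepsis
fn = np.sum(~alert & sepsis)   # false negatives: missed cases
fp = np.sum(alert & ~sepsis)   # false positives: alarms with no sepsis

print(f"sensitivity: {tp / (tp + fn):.2f}")  # fraction of cases caught
print(f"PPV:         {tp / (tp + fp):.2f}")  # fraction of alerts that were real
```

With numbers like these, most alerts point at patients who never develop sepsis, which is exactly the alarm-fatigue burden the Michigan researchers describe.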