The Power and Pitfalls of AI for US Intelligence

In one example of the IC’s successful use of AI, after exhausting all other avenues—from human spies to signals intelligence—the US was able to find an unidentified WMD research and development facility in a large Asian country by locating a bus that traveled between it and other known facilities. To do that, analysts employed algorithms to search and evaluate images of nearly every square inch of the country, according to a senior US intelligence official who spoke on background with the understanding of not being named.
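The tradecraft behind that search is classified, but the final correlation step can be sketched in miniature. The toy Python below assumes vehicle detections have already been extracted and re-identified from imagery by some upstream model (not shown); every coordinate, vehicle ID, and facility name is invented for illustration.

```python
# Toy sketch of the correlation step described above: given vehicle
# detections already extracted from satellite imagery, find vehicles
# seen near more than one known facility. All data is hypothetical.
from collections import defaultdict
from math import hypot

KNOWN_FACILITIES = {
    "site_a": (38.12, 127.45),  # invented coordinates
    "site_b": (38.90, 126.80),
}

# (vehicle_id, lat, lon) tuples, e.g. from re-identified detections
detections = [
    ("bus_17", 38.13, 127.44),
    ("bus_17", 38.89, 126.81),
    ("truck_02", 38.12, 127.46),
]

def near(p, q, radius=0.05):
    """Crude flat-earth proximity test, adequate at this toy scale."""
    return hypot(p[0] - q[0], p[1] - q[1]) < radius

sightings = defaultdict(set)
for vehicle, lat, lon in detections:
    for name, coords in KNOWN_FACILITIES.items():
        if near((lat, lon), coords):
            sightings[vehicle].add(name)

# A vehicle linking multiple facilities hints at an unlisted stop worth mapping.
for vehicle, sites in sightings.items():
    if len(sites) > 1:
        print(vehicle, "links", sorted(sites))
```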

While AI can calculate, retrieve, and employ programming that performs limited rational analyses, it lacks the capacity to properly dissect the more emotional or unconscious components of human intelligence, what psychologists describe as System 1 thinking.

AI, for example, can draft intelligence reports that are akin to newspaper articles about baseball, which follow a structured but non-logical flow and contain repetitive content elements. However, when briefs require complexity of reasoning or logical arguments that justify or demonstrate conclusions, AI has been found lacking. When the intelligence community tested the capability, the intelligence official says, the product looked like an intelligence brief but was otherwise nonsensical.

Such algorithmic processes can be made to overlap, adding layers of complexity to computational reasoning, but even then those algorithms can’t interpret context as well as humans can, especially when it comes to language such as hate speech.
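A toy example makes the context problem concrete. The sketch below shows a naive keyword filter, the kind of shallow approach a human reader easily outperforms, flagging abusive text and counter-speech identically; the blocklist term is a placeholder, not drawn from any real moderation lexicon.

```python
# Toy illustration of why context defeats shallow text filters.
# A naive keyword matcher flags both posts below identically, even
# though only the first is abusive; the second quotes and condemns
# the same phrase.
BLOCKLIST = {"vermin"}

posts = [
    "those people are vermin",                       # abusive
    'calling anyone "vermin" is hateful, report it'  # counter-speech
]

def naive_flag(text: str) -> bool:
    return any(term in text.lower() for term in BLOCKLIST)

for post in posts:
    print(naive_flag(post), "->", post)  # True for both: context is lost
```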

AI’s comprehension might be more analogous to the comprehension of a human toddler, says Eric Curwin, chief technology officer at Pyrra Technologies, which identifies virtual threats to clients from violence to disinformation. “For example, AI can understand the basics of human language, but foundational models don’t have the latent or contextual knowledge to accomplish specific tasks,” Curwin says.

“From an analytic perspective, AI has a difficult time interpreting intent,” Curwin adds. “Computer science is a valuable and important field, but it is social computational scientists that are taking the big leaps in enabling machines to interpret, understand, and predict behavior.”

In order to “build models that can begin to replace human intuition or cognition,” Curwin explains, “researchers must first understand how to interpret behavior and translate that behavior into something AI can learn.”

Although machine learning and big data analytics can provide predictive analysis about what might or will likely happen, they can’t explain to analysts how or why they arrived at those conclusions. This opaqueness in AI reasoning, together with the difficulty of vetting sources that consist of extremely large data sets, can impact the actual or perceived soundness and transparency of those conclusions.
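A minimal scikit-learn sketch illustrates the gap. The model below emits a prediction with no accompanying rationale; permutation importance, one common post hoc remedy, only ranks which inputs mattered on average and does not justify any single conclusion. The data here is synthetic.

```python
# Minimal illustration of the opacity problem using scikit-learn.
# The forest predicts, but offers no narrative for *why*; permutation
# importance gives only a coarse, after-the-fact hint.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

print(model.predict(X[:1]))  # a bare conclusion, no stated reasoning

# Best available post hoc explanation: which inputs, when scrambled,
# hurt accuracy most. This ranks features; it does not justify a call.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: {score:.3f}")
```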

Transparency in reasoning and sourcing is a requirement of the analytic tradecraft standards that govern products produced by and for the intelligence community. Analytic objectivity is also statutorily required, sparking calls within the US government to update such standards and laws in light of AI’s increasing prevalence.

Some intelligence practitioners also consider machine learning and algorithms, when employed for predictive judgments, to be more art than science. That is, they are prone to biases and noise, and they can rest on methodologies that are not sound, leading to errors similar to those found in the criminal forensic sciences and arts.

Russia’s Killer Drone in Ukraine Raises Fears About AI in Warfare

A Russian “suicide drone” that boasts the ability to identify targets using artificial intelligence has been spotted in images of the ongoing invasion of Ukraine.

Photographs showing what appears to be the KUB-BLA, a type of lethal drone known as a “loitering munition” sold by ZALA Aero, a subsidiary of the Russian arms company Kalashnikov, have appeared on Telegram and Twitter in recent days. The pictures show damaged drones that appear to have either crashed or been shot down.

With a wingspan of 1.2 meters, the sleek white drone resembles a small pilotless fighter jet. It is fired from a portable launcher, can travel up to 130 kilometers per hour for 30 minutes, and deliberately crashes into a target, detonating a 3-kilogram explosive.

ZALA Aero, which first demoed the KUB-BLA at a Russian air show in 2019, claims in promotional material that it features “intelligent detection and recognition of objects by class and type in real time.”

The drone itself may do little to alter the course of the war in Ukraine, as there is no evidence that Russia is using such drones widely so far. But its appearance has sparked concern about the potential for AI to take a greater role in making lethal decisions.

“The notion of a killer robot—where you have artificial intelligence fused with weapons—that technology is here, and it’s being used,” says Zachary Kallenborn, a research affiliate with the National Consortium for the Study of Terrorism and Responses to Terrorism (START).

Advances in AI have made it easier to incorporate autonomy into weapons systems, and have raised the prospect that more capable systems could eventually decide for themselves who to kill. A UN report published last year concluded that a lethal drone with this capability may have been used in the Libyan civil war.

It is unclear whether the drone has been operated in this way in Ukraine. One of the challenges with autonomous weapons may prove to be the difficulty of determining when full autonomy is used in a lethal context, Kallenborn says.

The KUB-BLA images have yet to be verified by official sources, but the drone is known to be a relatively new part of Russia’s military arsenal. Its use would also be consistent with Russia’s shifting strategy in the face of the unexpectedly strong Ukrainian resistance, says Samuel Bendett, an expert on Russia’s military with the defense think tank CNA.

Bendett says Russia has built up its drone capabilities in recent years, using them in Syria and acquiring more after Azerbaijani forces demonstrated their effectiveness against Armenian ground forces in the 2020 Nagorno-Karabakh war. “They are an extraordinarily cheap alternative to flying manned missions,” he says. “They are very effective both militarily and of course psychologically.”

The fact that Russia seems to have used few drones in Ukraine early on may be due to its misjudging Ukrainian resistance or to effective Ukrainian countermeasures.

But drones have also highlighted a key vulnerability in Russia’s invasion, which is now entering its third week. Ukrainian forces have used a remotely operated Turkish-made drone called the TB2 to great effect against Russian forces, firing guided missiles at Russian missile launchers and vehicles. The paraglider-sized drone, which relies on a small crew on the ground, is slow and cannot defend itself, but it has proven effective against a surprisingly weak Russian air campaign.

Simulation Tech Can Help Predict the Biggest Threats

The character of conflict between nations has fundamentally changed. Governments and militaries now fight on our behalf in the “gray zone,” where the boundaries between peace and war are blurred. They must navigate a complex web of ambiguous and deeply interconnected challenges, ranging from political destabilization and disinformation campaigns to cyberattacks, assassinations, proxy operations, election meddling, and perhaps even human-made pandemics. Add to this list the existential threat of climate change (and its geopolitical ramifications) and it is clear that the description of what now constitutes a national security issue has broadened, each crisis straining or degrading the fabric of national resilience.

Traditional analysis tools are poorly equipped to predict and respond to these blurred and intertwined threats. Instead, in 2022 governments and militaries will use sophisticated and credible real-life simulations, putting software at the heart of their decision-making and operating processes. The UK Ministry of Defence, for example, is developing what it calls a military Digital Backbone. This will incorporate cloud computing, modern networks, and a new transformative capability called a Single Synthetic Environment, or SSE.

This SSE will combine artificial intelligence, machine learning, computational modeling, and modern distributed systems with trusted data sets from multiple sources to support detailed, credible simulations of the real world. This data will be owned by critical institutions, but will also be sourced via an ecosystem of trusted partners, such as the Alan Turing Institute.

An SSE offers a multilayered simulation of a city, region, or country, including high-quality mapping and information about critical national infrastructure, such as power, water, transport networks, and telecommunications. This can then be overlaid with other information, such as smart-city data, information about military deployment, or data gleaned from social listening. From this, models can be constructed that give a rich, detailed picture of how a region or city might react to a given event: a disaster, epidemic, or cyberattack, or a combination of such events organized by state enemies.
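As a rough illustration of that layering, the toy sketch below wires a few infrastructure nodes together across layers and propagates a single failure through their dependencies. Every node name and dependency is invented, and a real SSE would model far richer dynamics.

```python
# Toy sketch of layered infrastructure dependencies (power -> telecoms
# -> transport) with a simple failure cascade. All nodes are invented.
DEPENDS_ON = {
    "substation_n":  [],
    "telecom_hub_3": ["substation_n"],
    "rail_signals":  ["telecom_hub_3"],
    "water_plant_2": ["substation_n"],
}

def cascade(initial_failures: set[str]) -> set[str]:
    """Propagate failures until no newly dependent node is affected."""
    failed = set(initial_failures)
    changed = True
    while changed:
        changed = False
        for node, deps in DEPENDS_ON.items():
            if node not in failed and any(d in failed for d in deps):
                failed.add(node)
                changed = True
    return failed

# A single substation outage knocks out three downstream systems.
print(sorted(cascade({"substation_n"})))
```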

Defense synthetics are not a new concept. However, previous solutions have been built in a standalone way that limits reuse, longevity, choice, and—crucially—the speed of insight needed to effectively counteract gray-zone threats.

National security officials will be able to use SSEs to identify threats early, understand them better, explore their response options, and analyze the likely consequences of different actions. They will even be able to use them to train, rehearse, and implement their plans. By running thousands of simulated futures, senior leaders will be able to grapple with complex questions, refining policies and complex plans in a virtual world before implementing them in the real one.
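The “thousands of simulated futures” idea reduces, at its core, to Monte Carlo sampling: draw uncertain inputs, run the model once per draw, and summarize the resulting outcome distribution. The sketch below uses a deliberately trivial stand-in model; none of its parameters come from a real SSE.

```python
# Sketch of "thousands of simulated futures": sample uncertain inputs,
# run a (here, trivial) model per sample, then summarize the outcome
# distribution for decision-makers. All dynamics are stand-ins.
import random
import statistics

random.seed(42)

def one_future() -> float:
    """Days until a disrupted supply route recovers, under toy dynamics."""
    severity = random.uniform(0.5, 2.0)   # shock size (unknown)
    response = random.gauss(1.0, 0.2)     # effectiveness of response
    return max(0.0, 30 * severity / max(response, 0.1))

outcomes = sorted(one_future() for _ in range(10_000))

print(f"median recovery: {statistics.median(outcomes):.0f} days")
print(f"90th percentile: {outcomes[int(0.9 * len(outcomes))]:.0f} days")
```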

One key question that will only grow in importance in 2022 is how countries can best secure their populations and supply chains against the dramatic weather events driven by climate change. SSEs will be able to help answer this by pulling together data on regional infrastructure, networks, roads, and populations, and combining it with meteorological models to see how and when events might unfold.

The History of Predicting the Future

The future has a history. The good news is that it’s one from which we can learn; the bad news is that we very rarely do. That’s because the clearest lesson from the history of the future is that knowing the future isn’t necessarily very useful. But that has yet to stop humans from trying.

Take Peter Turchin’s famed prediction for 2020. In 2010 he developed a quantitative analysis of history, known as cliodynamics, that allowed him to predict that the West would experience political chaos a decade later. Unfortunately, no one was able to act on that prophecy in order to prevent damage to US democracy. And of course, if they had, Turchin’s prediction would have been relegated to the ranks of failed futures. This situation is not an aberration. 

Rulers from Mesopotamia to Manhattan have sought knowledge of the future in order to obtain strategic advantages—but time and again, they have failed to interpret it correctly, or they have failed to grasp either the political motives or the speculative limitations of those who proffer it. More often than not, they have also chosen to ignore futures that force them to face uncomfortable truths. Even the technological innovations of the 21st century have failed to change these basic problems—the results of computer programs are, after all, only as accurate as their data input.

There is an assumption that the more scientific the approach to predictions, the more accurate forecasts will be. But this belief causes more problems than it solves, not least because it often either ignores or excludes the lived diversity of human experience. Despite the promise of more accurate and intelligent technology, there is little reason to think the increased deployment of AI in forecasting will make prognostication any more useful than it has been throughout human history.

People have long tried to find out more about the shape of things to come. These efforts, while aimed at the same goal, have differed across time and space in several significant ways, with the most obvious being methodology—that is, how predictions were made and interpreted. Since the earliest civilizations, the most important distinction in this practice has been between individuals who have an intrinsic gift or ability to predict the future, and systems that provide rules for calculating futures.

The predictions of oracles, shamans, and prophets, for example, depended on the capacity of these individuals to access other planes of being and receive divine inspiration. Strategies of divination such as astrology, palmistry, numerology, and Tarot, however, depend on the practitioner’s mastery of a complex theoretical rule-based (and sometimes highly mathematical) system, and their ability to interpret and apply it to particular cases. Interpreting dreams or the practice of necromancy might lie somewhere between these two extremes, depending partly on innate ability, partly on acquired expertise. And there are plenty of examples, in the past and present, that involve both strategies for predicting the future. Any internet search on “dream interpretation” or “horoscope calculation” will throw up millions of hits.

In the last century, technology legitimized the latter approach, as developments in IT (predicted, at least to some extent, by Moore’s law) provided more powerful tools and systems for forecasting. In the 1940s, the analog computer MONIAC had to use actual tanks and pipes of colored water to model the UK economy. By the 1970s, the Club of Rome could turn to the World3 computer simulation to model the flow of energy through human and natural systems via key variables such as industrialization, environmental loss, and population growth. Its report, Limits to Growth, became a best seller, despite the sustained criticism it received for the assumptions at the core of the model and the quality of the data that was fed into it.

At the same time, rather than depending on technological advances, other forecasters have turned to the strategy of crowdsourcing predictions of the future. Polling public and private opinions, for example, depends on something very simple—asking people what they intend to do or what they think will happen. It then requires careful interpretation, whether based in quantitative analysis (like polls of voter intention) or qualitative analysis (like the RAND Corporation’s Delphi technique). The latter strategy harnesses the wisdom of highly specific crowds: assembling a panel of experts to discuss a given topic, the thinking goes, is likely to be more accurate than individual prognostication.
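Mechanically, a Delphi-style exercise is simple to caricature: pool a panel’s estimates, report the median and spread, then feed that summary back for another round. The toy sketch below shows the aggregation step only, with invented numbers.

```python
# Toy version of Delphi-style aggregation: pool a panel's estimates,
# report the median and spread, and (in a real exercise) feed the
# summary back for further rounds. All figures are invented.
import statistics

# Each expert's estimate of, say, the probability of an event (in %).
round_one = {"expert_a": 20, "expert_b": 35, "expert_c": 60, "expert_d": 30}

estimates = list(round_one.values())
median = statistics.median(estimates)
spread = statistics.pstdev(estimates)

print(f"panel median: {median}%  spread: {spread:.1f}")
# A facilitator would now share this summary (with anonymized reasoning)
# and ask the panel to revise, typically narrowing the spread each round.
```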