
With the US economy just starting to recover from Covid-19 and millions still out of work, Congress authorized expanded unemployment benefits that supplement state assistance programs. While it’s laudable to support struggling Americans during an ongoing crisis, bad actors have made unemployment fraud a serious problem. Unfortunately, the many states seeking to stop fraud through surveillance are installing biased systems that may do far more harm than good. Predictably, these systems are making mistakes, and when they do, they largely punish BIPOC, trans, and gender-nonconforming Americans.

Twenty-one states have turned to high-tech biometric ID verification services that use computer vision to determine whether people are who they claim to be. This is the same technology that allows users to unlock their phone with their face—a one-to-one matching process where software infers whether your facial features match the ones stored in a single template. But while facial verification is common on consumer devices, it’s relatively rare in government services. It should stay that way.
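To make that distinction concrete, here is a minimal sketch of the one-to-one decision at the heart of facial verification, assuming the face images have already been reduced to embedding vectors by a trained model; the vectors, threshold, and function names below are illustrative stand-ins, not any vendor’s actual pipeline.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face-embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    """One-to-one check: does the live probe match the single enrolled template?"""
    return cosine_similarity(probe, enrolled) >= threshold

# Illustrative data: in a real system these vectors come from a trained
# face-embedding network, and the vendor tunes the threshold.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)                     # template captured at enrollment
probe = enrolled + rng.normal(scale=0.1, size=128)  # same person, new photo
print(verify(probe, enrolled))                      # True
```

The whole decision reduces to one tuned threshold, and moving it trades false rejections against false acceptances; demographic gaps in those error rates are exactly where bias enters.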

You might believe facial verification is harmless because the controversies that won’t go away mostly revolve around facial recognition. Police use facial recognition when they run a suspect’s image against a database of mug shots or driver’s license photos, where an algorithm attempts to find a match: a one-to-many search rather than a one-to-one check. Relying on facial recognition technology has led police to wrongfully arrest at least three Black men, and there are likely many more cases.
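That one-to-many search can be sketched under the same illustrative assumptions as the verification example above; again, the names and numbers here are hypothetical, not any agency’s actual system.

```python
from typing import Optional

import numpy as np

def identify(probe: np.ndarray, gallery: np.ndarray,
             threshold: float = 0.6) -> Optional[int]:
    """One-to-many search: index of the best-matching gallery row,
    or None if nothing clears the threshold.

    `gallery` stands in for a database of mug-shot or license-photo
    embeddings, one row per person; all values here are illustrative.
    """
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    p = probe / np.linalg.norm(probe)
    scores = g @ p                  # cosine similarity to every row
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None

rng = np.random.default_rng(1)
gallery = rng.normal(size=(500, 128))   # 500 enrolled people (illustrative)
probe = gallery[42] + rng.normal(scale=0.1, size=128)
print(identify(probe, gallery))         # 42
```

The failure modes differ accordingly: in verification, a false rejection locks the right person out of their own benefits; in recognition, a false match can point police at an innocent person.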

But facial verification can also be biased. When errors arise, and they already have in California, they have historically fallen disproportionately along racial and gender lines. For benefits fraud programs, government dependence on facial verification creates a heightened risk that people of color, trans, and nonbinary applicants will have their claims slow-walked or even denied. These outcomes can make it hard to keep the lights on or a roof over your head. Even worse, law enforcement might unduly interrogate vulnerable people because biased algorithms cast doubt on who they are. Such interactions could lead to wrongful arrest, prosecution, and government liens for people who’ve done nothing more than flunk a flawed algorithmic test.

It’s sadly predictable that government agencies are creating conditions that perpetuate algorithmic injustice. When Michigan rolled out its Integrated Data Automated System (MiDAS) in 2013, the initiative was hailed as a success for flagging five times as many unemployment fraud cases and bringing in $65 million in new fines and fees. As it turned out, the software was unreliable. The system wrongly flagged tens of thousands of Michiganders, and humans rubber-stamped the automated judgments, driving some of the accused into bankruptcy and worse.

Increasing dependence on smartphone apps like ID.me also raises the stakes of the digital divide. Many lower-income and elderly Americans risk being shut out of essential government services simply because they don’t have a phone with a camera and a web browser.

As with any expansion of biometric analysis, there’s a second, potent threat. Every use of facial verification further normalizes the expectation that our bodies should serve as a form of government ID. Whenever the government embraces facial verification and recognition, it builds momentum for further surveillance creep.

The good news is that stopping fraud doesn’t require biometrics. Acknowledging the alternatives, though, means admitting a significant problem: Americans lack a secure digital ID system. Patchwork responses mean some systems use part or all of your Social Security number, an approach that has turned Social Security numbers into a valuable target for hackers. Other systems rely on credit card transactions and credit history. These approaches are error-prone and underinclusive, especially since 7.1 million American households remain unbanked.

What’s needed is a secure identity document, something like a digital driver’s license or state ID that comes with a secure cryptographic key. This way, users can provide the authenticating information without being enrolled in automated systems marred by bias and creeping surveillance. Although this isn’t a perfect solution, we shouldn’t be looking for one. Any system that makes it possible to conclusively prove your identity at any time, such as a universal ID that everyone is required to possess, is a mass surveillance tool. By adopting incremental, privacy-preserving digital ID strategies, we can mitigate the risk of benefits fraud and other forms of identity theft while also preserving privacy, equity, and civil rights.
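For illustration, a challenge-response exchange of the kind such a credential could support might look like the following sketch, which assumes an Ed25519 keypair provisioned alongside the digital ID; this is a generic illustration of the technique, not any state’s actual design.

```python
# pip install cryptography
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Provisioning (illustrative): the private key would live in the ID holder's
# secure hardware; the agency enrolls only the public key.
holder_key = Ed25519PrivateKey.generate()
enrolled_public_key = holder_key.public_key()

# 1. The agency issues a fresh random challenge, preventing replay attacks.
challenge = os.urandom(32)

# 2. The holder signs the challenge on their device; no biometric is sent.
signature = holder_key.sign(challenge)

# 3. The agency checks the signature against the enrolled public key.
try:
    enrolled_public_key.verify(signature, challenge)
    print("identity confirmed")
except InvalidSignature:
    print("verification failed")
```

Because the proof is a signature over a one-time challenge, the agency never handles a face template, and there’s nothing biometric for a breach to leak.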



