A Black Woman Invented Home Security. Why Did It Go So Wrong?

Amazon is not the only one. The same trend can be seen in the rise of automated license plate reader systems for individual neighborhoods, in Google’s partnership with ADT, and in Google’s own launch of “smart” security cameras that can define “events” to record, recognize friendly faces, and detect noises such as glass breaking. As tech giants seek to saturate every aspect of our lives, home security has become a $50 billion business in the United States alone.

In keeping with its surveillance expansion over the years, Amazon’s Ring partnered with more than 400 police departments across the country, after a successful multiyear strategy to turn law enforcement into part-time doorbell sales agents and cement the term “porch pirate” in our lexicon. The behemoth then cynically attempted to counter the obvious racial consequences of this in its own consumer-driven way. In 2020, it debuted the Ring dash cam with a Traffic Stop mode that allows drivers to say “Alexa, I’m being pulled over,” at which point Alexa begins recording the subsequent traffic stop. The company that has made so much hay enabling surveillance, supercharging the ability to blast out racist notions about who belongs in a neighborhood, and acting as a gentrifying force now throws a bone to people who may be guilty of “driving while Black.” This is very much the same logic that drove the push for body cams. In both instances, the results in terms of protecting Black lives have not lived up to advocates’ claims.

In Dark Matters: On the Surveillance of Blackness, Simone Browne, professor of Black Studies in the Department of African and African Diaspora Studies at the University of Texas at Austin, suggests that anti-Black racism is fundamentally coded into all our systems of vision, oversight, observation, and surveillance. She argues that there is no such thing as a system of surveillance, at least when human beings are involved, that does not add to anti-Blackness. According to Browne, “The historical formation of surveillance is not outside the historical formation of slavery.”

No amount of technological advancement will change the basic truth that surveillance and carceral technology exist to serve those in control. The narratives about police response times and accountability have remained the same, even though the 50-plus years since Marie Van Brittan Brown’s patent have seen far more surveillance in both public and private spaces. This calls into question prevailing assumptions about what keeps communities safe—a point that community activists and police abolitionists have made repeatedly. Brown’s invention is not evidence of some kind of conscious complicity with repressive technologies; rather, it demonstrates that the repressive function of technologies lies in their imbrication in pervasive notions of race.

Many of these tools have become agents of gentrification. They offload the “policing” of Black folks in public spaces to individuals who become de facto cops. Early advertisements for the Ring were explicit about this, even promising bounties in the form of free products. Though the company has toned this rhetoric down in recent years, a key aspect of Ring and Neighbors is still the assertion that by owning the device, you are doing your part to “fight crime.”

Narratives about how a given surveillance technology will improve the way policing works for and in Black communities have likewise remained relatively stable over time. Claims of improved police response times, increased safety, greater accountability, and better community relations continually mark the introduction of new surveillance technologies—from police body cams to Project Green Light in Detroit, Stingrays and surveillance planes in Baltimore, neighborhood automated license plate readers, and Ring doorbells. While this may be indicative of what communities demand from policing, there is an alternative read: The promises remain both the same and undelivered because these technologies exist to further entrench the surveillance of Black and brown bodies as a practice foundational to how law enforcement operates in this country. Put another way, these technologies nibble around the edges of problems that are systemic. More and better forms of surveillance have not been, and will never be, a solution to these issues.

Remarkably, like Amazon and other private providers, US cities and states assert that more surveillance produces more safety, despite the fact that other countries have already tested this idea and found it wanting. The United Kingdom has what is reputed to be the largest network of CCTV cameras in a democracy, with between 4 million and 5.9 million cameras in use as of 2015, many of them operated not by government but by businesses and individuals. Yet even the Surveillance Camera Commissioner for England and Wales worried that the point of the cameras was to “build a surveillance society,” not to prevent crime: there is little evidence that cameras deter crime, and the crimes they do affect tend to be property crimes rather than violent ones. This is as close to incontrovertible empirical proof as one could ask for that visual and audio surveillance of the environment does not create safer communities.

Someone Snuck a Card Skimmer Into Costco to Nab Shopper Data

This week, security researchers from Google uncovered a so-called watering hole attack that indiscriminately targeted Apple devices in Hong Kong. Hackers compromised media and pro-democracy websites in the region to distribute malware to anyone visiting from an iPhone or Mac, installing a backdoor that let them steal data, download files, and more. Google didn’t attribute the campaign to any specific actor, but did note that “the activity and targeting is consistent with a government-backed actor.” The incident echoes the 2019 revelation that China had targeted thousands of iPhones in a similar manner—at the time, a wake-up call that iOS security isn’t as infallible as it’s often perceived to be.

The Justice Department also announced its most significant ransomware enforcement actions yet, arresting one alleged hacker associated with the notorious REvil group and seizing $6.1 million in cryptocurrency from another. There’s still a long way to go to rein in the broader ransomware threat, but showing that law enforcement can actually impose consequences is an important start.

If you’ve noticed that TikTok is pushing you to connect more with friends and family—rather than limiting your feed to talented and engaging strangers—you’re not alone. The platform has taken some unprecedented steps in recent months to figure out who your friends are in real life, raising concerns about both privacy and whether TikTok’s changes will undermine what makes the social network so appealing in the first place.

Lastly, at this week’s RE:WIRED conference we spoke with Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency, about the challenges she and the US government as a whole face from increasingly sophisticated adversaries. Having come up through the ranks at the NSA and the Pentagon, Easterly is used to offensive cyber operations. Her job now? Play some defense. Preferably, she says, with the help of the broader hacker community.

And there’s more! Each week we round up all the security news WIRED didn’t cover in depth. Click on the headlines to read the full stories, and stay safe out there.

You may normally associate card-skimmer attacks—which impersonate credit card readers to steal your payment info—with ATMs and gas pumps, to the extent that you think of them at all. But recently someone placed a card-skimming device in a Costco warehouse, of all places. An employee discovered the interloping equipment during a “routine check,” according to a report from BleepingComputer. The company has informed people whose credit card info may have been stolen. It’s a good reminder to double-check where you stick your plastic—or stick with NFC payments.

Earlier this week, Robinhood disclosed a “security incident” in which a hacker used social engineering to access an email list of 5 million people, the full names of 2 million people, and the names, dates of birth, and zip codes of 310 people. Motherboard went on to report that the attackers had in fact accessed internal tools that could have let them disable two-factor authentication for users, log them out of their accounts, and view their balance and trading information. Robinhood says that customer accounts weren’t tampered with, but that doesn’t help much with the fact that they apparently could have been, quite easily.

Spyware manufacturer NSO Group has been no stranger to controversy lately, and was recently placed on the US Entity List because it allegedly “developed and supplied spyware to foreign governments that used these tools to maliciously target government officials, journalists, businesspeople, activists, academics, and embassy workers.” Now, researchers at the nonprofit Front Line Defenders say they’ve found the company’s Pegasus malware on the phones of six Palestinian activists. They couldn’t definitively tie the malware to a specific country or organization, but the incident is just the latest in a long line of surveillance malware being used where it expressly shouldn’t be.


What Is Imax Enhanced, and Should You Care?

The new Imax Enhanced format reclaims a huge chunk of that screen real estate. There’s still a little bit of black bar space—TVs usually have an aspect ratio of 1.77:1, which is slightly taller—but you’re getting about a 26 percent larger picture than traditional ultra widescreen movies.
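As a rough sanity check on that 26 percent figure, here is a minimal sketch in Python. It assumes the common 2.39:1 ultra-widescreen ratio and Imax Enhanced’s roughly 1.90:1 expanded ratio—both assumptions, since the article cites only the final percentage:

```python
# Rough, illustrative math behind the "about 26 percent larger picture"
# claim. The 2.39:1 (scope) and 1.90:1 (Imax Enhanced) ratios are
# common industry values assumed here; the article cites only the result.

TV_RATIO = 16 / 9      # ~1.78:1, a standard television
SCOPE_RATIO = 2.39     # traditional ultra-widescreen film
IMAX_RATIO = 1.90      # typical Imax Enhanced expanded ratio

def area_fraction(content_ratio: float, tv_ratio: float = TV_RATIO) -> float:
    """Fraction of the TV's area a letterboxed picture occupies.

    A picture wider than the TV spans the full width, so its height
    (and therefore its area) shrinks by tv_ratio / content_ratio.
    """
    return tv_ratio / content_ratio

scope = area_fraction(SCOPE_RATIO)  # ~0.74 of the screen
imax = area_fraction(IMAX_RATIO)    # ~0.94 of the screen
print(f"Picture area gain: {imax / scope - 1:.0%}")  # -> 26%
```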

Sometimes, when streaming services try to fix the letterboxing problem, they do so in ways that negatively affect the picture. For example, when Disney scaled up The Simpsons to fill the screen all the way to the sides, it ended up cropping out some details that were essential for certain jokes to land. With this new Imax Enhanced format, that space is being filled by parts of the picture that were there when the cameras first recorded the movie. You’re gaining data instead of losing it.

Do I Need to Upgrade My TV?

The thousand-dollar question any time we talk about new video formats is whether the TV you have can use it, or if you’ll have to upgrade. When it comes to the aspect ratio benefits above, there’s good news: You can play Imax Enhanced content on most TVs and enjoy the larger picture.

However, Imax Enhanced is more than just an aspect ratio. It also includes certifications and guidelines for HDR video, and Imax teamed up with DTS to add specifications for DTS audio. You can think of these as Imax’s alternatives to Dolby Vision and Dolby Atmos. Both are standards that are designed to get the best picture and audio quality out of your system. For that, you might need a new TV.

A number of TVs from companies like Sony, Hisense, and TCL already support Imax Enhanced, so you might have one already. Then there are sound systems to think of. You can pair an Imax Enhanced–compatible TV with whatever soundbar you want, but to get the full benefits of the standard, you might need new audio hardware.

It’s possible that some hardware could get an update to support the Imax Enhanced specification. Many TVs or sound systems are technically capable of outputting the kind of color, brightness, or audio quality the standard requires, but shipped before it was introduced in 2018. There’s no guarantee your TV will ever get an update, but some devices have. On the other end, Disney+ is the only major streaming service to include Imax Enhanced movies and shows, but it’s not completely alone. There’s support in Sony’s Bravia Core, Rakuten TV, and a few other platforms.

Ultimately, not having Imax Enhanced support doesn’t mean you can’t enjoy any of these movies. But it’s a way to get the closest thing to a proper Imax screening in the comfort of your own home. Even if your screen isn’t as monstrous as what you find in theaters, you can still reclaim a lot of your TV’s real estate and get a picture that’s similar to what you saw the first time you watched a film in Imax.


How Metadata From Encrypted Messages Can Keep Everyone Safer

The future is encrypted. Real-time, encrypted chat apps like Signal and WhatsApp, and messaging apps like Telegram, WeChat, and Messenger—used by two out of five people worldwide—help safeguard privacy and facilitate our rights to organize, speak freely, and keep close contact with our communities.

They are intentionally built for convenience and speed, for person-to-person communication as well as large group connections. Yet these same qualities have fueled abusive and illegal behavior, disinformation and hate speech, and hoaxes and scams, all to the detriment of the vast majority of their users. As early as 2018, investigative reports were exploring the role these very features played in dozens of deaths in India and Indonesia, as well as in elections in Nigeria and Brazil. The ease with which users can forward messages without verifying their accuracy means disinformation can spread quickly, secretly, and at significant scale. Some apps allow extremely large groups—up to 200,000 members—or have played host to organized encrypted propaganda machinery, breaking away from the original vision of emulating a “living room.” And some platforms have proposed profit-driven policy changes that allow business users to leverage customer data in new and invasive ways, ultimately eroding privacy.

In response to the harms these apps have enabled, prominent governments have urged platforms to implement so-called backdoors or to employ client-side automated scanning of messages. But such solutions erode everyone’s basic liberties and put many users at greater risk, as many have pointed out. These invasive measures, like other traditional moderation approaches that depend on access to content, are rarely effective for combating online abuse, as recent research by Stanford University’s Riana Pfefferkorn shows.

Product design changes, not backdoors, are key to reconciling the competing uses and misuses of encrypted messaging. While the content of individual messages can be harmful, it is the scale and virality with which they spread that presents the real challenge, turning sets of harmful messages into a groundswell of debilitating societal forces. Researchers and advocates have already analyzed how changes like forwarding limits, better labeling, and smaller group sizes could dramatically reduce the spread and severity of problematic content, organized propaganda, and criminal behavior. However, such work relies on workarounds such as tiplines and public groups. Without good datasets from the platforms, audits of the real-world effectiveness of such changes are hampered.
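To make one of those levers concrete: a forwarding limit needs only a per-message counter carried in metadata, never the decrypted content. Below is a minimal, hypothetical Python sketch, loosely inspired by the limits WhatsApp has rolled out; the thresholds and names are invented for illustration, not any platform’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical client-side forwarding limit, illustrating one of the
# design changes described above. All thresholds and names are invented.
# Note that the logic reads only a forward counter carried in message
# metadata; the encrypted content stays opaque.

MAX_CHATS_PER_FORWARD = 5   # ordinary messages: forward to up to 5 chats
VIRAL_THRESHOLD = 5         # forwards before a message is "highly forwarded"
MAX_CHATS_WHEN_VIRAL = 1    # highly forwarded messages: one chat at a time

@dataclass
class Message:
    ciphertext: bytes       # never inspected by this logic
    forward_count: int = 0  # incremented on each forward

def forward(msg: Message, target_chats: list[str]) -> list[str]:
    """Return the subset of chats this forward may reach."""
    viral = msg.forward_count >= VIRAL_THRESHOLD
    limit = MAX_CHATS_WHEN_VIRAL if viral else MAX_CHATS_PER_FORWARD
    msg.forward_count += 1
    if viral:
        print("label: forwarded many times")  # surfaced to recipients
    return target_chats[:limit]

# Example: a message already forwarded 6 times reaches only 1 of 3 chats.
msg = Message(ciphertext=b"...", forward_count=6)
print(forward(msg, ["family", "work", "neighborhood"]))  # ['family']
```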

The platforms could do a lot more. In order for such important product changes to become more effective, they need to share the “metadata of the metadata” with researchers. This comprises aggregated datasets showing how many users a platform has, where accounts are created and when, how information travels, which types of messages and format-types are fastest to spread, which messages are commonly reported, and how (and when) users are booted off. To be clear, this is not information that is typically referred to as “metadata,” which normally refers to information about any specific individual and can be deeply personal to users, such as one’s name, email address, mobile number, close contacts, and even payment information. It is important to protect the privacy of this type of personal metadata, which is why the United Nations Office of the High Commissioner for Human Rights rightly considers a user’s metadata to be covered by the right to privacy when applied to the online space.
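As a concrete, hypothetical illustration of this “metadata of the metadata,” the sketch below rolls per-message forwarding depth up into coarse daily counts and suppresses small buckets, so no individual user or message is identifiable. The event schema and the suppression threshold are invented for illustration.

```python
from collections import Counter

# Hypothetical "metadata of the metadata": aggregate, anonymized counts
# a platform could share with researchers without touching message
# content or per-user metadata. The event schema and the k=100
# suppression threshold are invented for illustration.

K_ANONYMITY_FLOOR = 100  # drop buckets too small to share safely

def bucket_forward_depth(depth: int) -> str:
    """Coarsen exact forward counts into broad bands."""
    if depth == 0:
        return "original"
    if depth < 5:
        return "forwarded 1-4 times"
    return "forwarded 5+ times"

def daily_forwarding_report(events: list[dict]) -> dict[str, int]:
    """events: [{"day": "2021-11-12", "forward_depth": 7}, ...]
    Returns counts per (day, depth band), with small cells suppressed."""
    counts = Counter(
        (e["day"], bucket_forward_depth(e["forward_depth"])) for e in events
    )
    return {
        f"{day} / {band}": n
        for (day, band), n in counts.items()
        if n >= K_ANONYMITY_FLOOR
    }

# Example: 150 viral forwards on one day clear the floor; 3 originals do not.
events = [{"day": "2021-11-12", "forward_depth": 7}] * 150 \
       + [{"day": "2021-11-12", "forward_depth": 0}] * 3
print(daily_forwarding_report(events))
# -> {'2021-11-12 / forwarded 5+ times': 150}
```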

Luckily, we do not need this level or type of data to start seriously addressing harms. Instead, companies must first be forthcoming with researchers and regulators about the nature and extent of the metadata they do collect, with whom they share such data, and how they analyze it to shape product design and revenue-model choices. We know for certain that many private messaging platforms collect troves of information yielding insights they already use to design and trial new product features and to entice investors and advertisers.

The aggregated, anonymized data they collect can, without compromising encryption and privacy, be used by platforms and researchers alike to shed light on important patterns. Such aggregated metadata could lead to game-changing trust and safety improvements through better features and design choices.

The Man, the Myth, and the Metaverse

Despite Mark Zuckerberg bloviating about the world-changing virtues of the metaverse for 87 minutes last month, his Connect 2021 keynote’s most truthful and telling moment came in a disclaimer that appeared before he even began speaking. “Actual results may differ materially from those expressed or implied in our forward-looking statements,” it read. “We undertake no obligation to revise or publicly release the results of any revision to these forward-looking statements.”

The fine print wasn’t just a legalese caveat shielding the company from liability to anyone unable to distinguish between design fictions and product launches (sorry to everyone who was dusting off their chess board, preparing to play against a holographic opponent). It was also a caveat for the professed intentions of Facebook, now Meta, that Zuckerberg extolled throughout his presentation. He suggested Meta was going to be a team player, leaning into the language of openness and interoperability; that his company would be a metaverse company, joining those that predate Facebook. But actual results, the disclaimer reminds us, may differ.

Likewise, while Zuckerberg described the metaverse as “the next platform” in a tidy lineage from desktop to networked to mobile computing, we should be concerned that his intended metaverse is “the final platform.” Zuckerberg’s narrative of the metaverse as information technology’s culmination has power because it reinforces a grander myth of progress, a myth that stretches back to the 19th century and shapes Silicon Valley’s self-understanding. It is also a myth of domination, erasure, and violence. Ironically, conceptualizing the metaverse as the final platform abruptly closes out the myth of progress, a myth so potent precisely because of its open-endedness. Unintentionally, Zuckerberg has given critics and enthusiasts alike the opportunity to create new narratives.

VR, and the metaverse it now enables, has long been figured as the ultimate or final destination in the evolution of computing. This was first anticipated in 1965 in a short but memorable paper by Ivan Sutherland, a scientist at the vanguard of computer graphics, that imagined what he called “The Ultimate Display.” This was “a looking-glass into the mathematical wonderland” that engaged all bodily senses. Users stepping through this looking-glass would be immersed in “a room within which the computer can control the existence of matter. A chair displayed in such a room would be good enough to sit in … a bullet displayed in such a room would be fatal.” By 1968, Sutherland had built the Sword of Damocles, a behemoth head-mounted display that many recognize as the first VR prototype.

Decades later, in a 2015 TED talk, Chris Milk, founder of the VR company Within, echoed VR’s “ultimate” mythos when he described VR as “the ultimate empathy machine,” capable of making the wealthy West feel more deeply for those less advantaged. In a blog post a year later, Milk dubbed VR “the last medium” because it eliminates the external frame (a limited screen) and moves the mediated experience within us—“the embodied internet,” as Zuckerberg put it in his keynote. VR is a platform, Milk wrote, “for sharing our inner self—our very humanity.” In October 2021, Meta announced that it had purchased Within, not for its humanitarian VR experiences but for its pandemic-popular Supernatural fitness app.

Within is only the most recent of Facebook’s conquests to believe in VR’s “ultimate” stature. Facebook acquired Oculus for $2 billion in 2014. A 2015 Time cover story described Oculus founder Palmer Luckey’s love of Neal Stephenson’s Snow Crash, the novel in which “the metaverse” is coined. According to Luckey, however, a book is inherently limited in “the stimulus it supplies.” VR, on the other hand, “is the final platform” in that its sensorial experiences will one day be limitless.