How the Pentagon Learned to Use Targeted Ads to Find its Targets—and Vladimir Putin

Most alarmingly, PlanetRisk began seeing evidence of the US military’s own missions in the Locomotive data. Phones would appear at American military installations such as Fort Bragg in North Carolina and MacDill Air Force Base in Tampa, Florida—home of some of the most skilled US special operators with the Joint Special Operations Command and other US Special Operations Command units. They would then transit through third-party countries like Turkey and Canada before eventually arriving in northern Syria, where they were clustering at the abandoned Lafarge cement factory outside the town of Kobane.

It dawned on the PlanetRisk team that these were US special operators converging at an unannounced military facility. Months later, their suspicions would be publicly confirmed; eventually the US government would acknowledge the facility was a forward operating base for personnel deployed in the anti-ISIS campaign.

Even worse, through Locomotive, they were getting data in pretty close to real time. UberMedia’s data was usually updated every 24 hours or so. But sometimes, they saw movement that had occurred as recently as 15 or 30 minutes earlier. Here were some of the best-trained special operations units in the world, operating at an unannounced base. Yet their precise, shifting coordinates were showing up in UberMedia’s advertising data. While Locomotive was a closely held project meant for government use, UberMedia’s data was available for purchase by anyone who could come up with a plausible excuse. It wouldn’t be difficult for the Chinese or Russian government to get this kind of data by setting up a shell company with a cover story, just as Mike Yeagley had done.

Initially, PlanetRisk was sampling data country by country, but it didn’t take long for the team to wonder what it would cost to buy the entire world. The sales rep at UberMedia provided the answer: For a few hundred thousand dollars a month, the company would provide a global feed of every phone on earth that it could collect on. The economics were impressive. For the military and intelligence community, a few hundred thousand a month was essentially a rounding error—in 2020, the intelligence budget was $62.7 billion. Here was a powerful intelligence tool for peanuts.

Locomotive, the first version of which was coded in 2016, blew away Pentagon brass. One government official demanded midway through the demo that the rest of it be conducted inside a SCIF, a secure government facility where classified information could be discussed. The official didn’t understand what PlanetRisk was doing or how, but assumed it must be a secret. A PlanetRisk employee at the briefing was mystified. “We were like, well, this is just stuff we’ve seen commercially,” they recall. “We just licensed the data.” After all, how could marketing data be classified?

Government officials were so enthralled by the capability that PlanetRisk was asked to keep Locomotive quiet. It wouldn’t be classified, but the company would be asked to tightly control word of the capability to give the military time to take advantage of public ignorance of this kind of data and turn it into an operational surveillance program.

One PlanetRisk executive remembered leaving another meeting with a different government official. They were on the elevator together when the official asked: Could you figure out who is cheating on their spouse?

Yeah, I guess you could, the PlanetRisk executive answered.

But Mike Yeagley wouldn’t last at PlanetRisk.

As the company looked to turn Locomotive from a demo into a live product, Yeagley started to believe that his employer was taking the wrong approach. It was looking to build a data visualization platform for the government. Yet again, Yeagley thought it would be better to provide the raw data to the government and let them visualize it in any way they chose. Rather than make money off the number of users inside government who bought a software license, Yeagley wanted to simply sell the government the data for a flat fee.

A Vending Machine Error Revealed Secret Face Recognition Tech

Canada-based University of Waterloo is racing to remove M&M-branded smart vending machines from campus after outraged students discovered the machines were covertly collecting face recognition data without their consent.

The scandal started when a student using the alias SquidKid47 posted an image on Reddit showing a campus vending machine error message, “Invenda.Vending.FacialRecognitionApp.exe,” displayed after the machine failed to launch a face recognition application that nobody expected to be part of the process of using a vending machine.

“Hey, so why do the stupid M&M machines have facial recognition?” SquidKid47 pondered.

The Reddit post sparked an investigation from a fourth-year student named River Stanley, who was writing for a university publication called MathNEWS.

Stanley sounded the alarm after consulting Invenda sales brochures that promised “the machines are capable of sending estimated ages and genders” of every person who used the machines—without ever requesting consent.

This frustrated Stanley, who learned that Canada’s privacy commissioner had years earlier investigated a shopping mall operator called Cadillac Fairview after some of the malls’ informational kiosks were found to be secretly “using facial recognition software on unsuspecting patrons.”

Only because of that official investigation did Canadians learn that “over 5 million nonconsenting Canadians” were scanned into Cadillac Fairview’s database, Stanley reported. While Cadillac Fairview was ultimately forced to delete the entire database, Stanley wrote that the consequences for Invenda clients like Mars of collecting similarly sensitive face recognition data without consent remain unclear.

Stanley’s report ended with a call for students to demand that the university “bar facial recognition vending machines from campus.”

A University of Waterloo spokesperson, Rebecca Elming, eventually responded, confirming to CTV News that the school had asked for the vending machine software to be disabled until the machines could be removed.

Students told CTV News that their confidence in the university’s administration was shaken by the controversy. Some students claimed on Reddit that they attempted to cover the vending machine cameras while waiting for the school to respond, using gum or Post-it notes. One student pondered whether “there are other places this technology could be being used” on campus.

Elming was not able to confirm the exact timeline for when the machines would be removed, other than telling Ars it would happen “as soon as possible.” Elming declined Ars’ request to clarify if there are other areas of campus collecting face recognition data. She also wouldn’t confirm, for any casual snackers on campus, when, if ever, students could expect the vending machines to be replaced with snack dispensers not equipped with surveillance cameras.

Invenda Claims Machines Are GDPR-Compliant

MathNEWS’ investigation tracked down responses from companies responsible for smart vending machines on the University of Waterloo’s campus.

Adaria Vending Services told MathNEWS that “what’s most important to understand is that the machines do not take or store any photos or images, and an individual person cannot be identified using the technology in the machines. The technology acts as a motion sensor that detects faces, so the machine knows when to activate the purchasing interface—never taking or storing images of customers.”

According to Adaria and Invenda, students shouldn’t worry about data privacy because the vending machines are “fully compliant” with the world’s toughest data privacy law, the European Union’s General Data Protection Regulation (GDPR).

“These machines are fully GDPR compliant and are in use in many facilities across North America,” Adaria’s statement said. “At the University of Waterloo, Adaria manages last mile fulfillment services—we handle restocking and logistics for the snack vending machines. Adaria does not collect any data about its users and does not have any access to identify users of these M&M vending machines.”

‘AI Girlfriends’ Are a Privacy Nightmare

You shouldn’t trust any answers a chatbot sends you. And you probably shouldn’t trust it with your personal information either. That’s especially true for “AI girlfriends” or “AI boyfriends,” according to new research.

An analysis of 11 so-called romance and companion chatbots, published on Wednesday by the Mozilla Foundation, has found a litany of security and privacy concerns with the bots. Collectively, the apps, which have been downloaded more than 100 million times on Android devices, gather huge amounts of people’s data; use trackers that send information to Google, Facebook, and companies in Russia and China; allow users to use weak passwords; and lack transparency about their ownership and the AI models that power them.

Since OpenAI unleashed ChatGPT on the world in November 2022, developers have raced to deploy large language models and create chatbots that people can interact with and pay to subscribe to. The Mozilla research provides a glimpse into how this gold rush may have neglected people’s privacy, and into tensions between emerging technologies and how they gather and use data. It also indicates how people’s chat messages could be abused by hackers.

Many “AI girlfriend” or romantic chatbot services look similar. They often feature AI-generated images of women that can be sexualized or sit alongside provocative messages. Mozilla’s researchers looked at a variety of chatbots, including large and small apps, some of which purport to be “girlfriends.” Others offer people support through friendship or intimacy, or allow role-playing and other fantasies.

“These apps are designed to collect a ton of personal information,” says Jen Caltrider, the project lead for Mozilla’s Privacy Not Included team, which conducted the analysis. “They push you toward role-playing, a lot of sex, a lot of intimacy, a lot of sharing.” For instance, screenshots from the EVA AI chatbot show text saying “I love it when you send me your photos and voice,” and asking whether someone is “ready to share all your secrets and desires.”

Caltrider says there are multiple issues with these apps and websites. Many of the apps may not be clear about what data they are sharing with third parties, where they are based, or who creates them, Caltrider says, adding that some allow people to create weak passwords, while others provide little information about the AI they use. The apps analyzed all had different use cases and weaknesses.

Take Romantic AI, a service that allows you to “create your own AI girlfriend.” Promotional images on its homepage depict a chatbot sending a message saying, “Just bought new lingerie. Wanna see it?” The app’s privacy documents, according to the Mozilla analysis, say it won’t sell people’s data. However, when the researchers tested the app, they found it “sent out 24,354 ad trackers within one minute of use.” Romantic AI, like most of the companies highlighted in Mozilla’s research, did not respond to WIRED’s request for comment. Other apps monitored had hundreds of trackers.

In general, Caltrider says, the apps are not clear about what data they may share or sell, or exactly how they use some of that information. “The legal documentation was vague, hard to understand, not very specific—kind of boilerplate stuff,” Caltrider says, adding that this may reduce the trust people should have in the companies.

23andMe Failed to Detect Account Intrusions for Months

Police took a digital rendering of a suspect’s face, generated using DNA evidence, and ran it through a facial recognition system in a troubling incident reported for the first time by WIRED this week. The tactic came to light in a trove of hacked police records published by the transparency collective Distributed Denial of Secrets. Meanwhile, information about United States intelligence agencies purchasing Americans’ phone location data and internet metadata without a warrant was revealed this week only after US senator Ron Wyden blocked the appointment of a new NSA director until the information was made public. And a California teen who allegedly used the handle Torswats to carry out hundreds of swatting attacks across the US is being extradited to Florida to face felony charges.

The infamous spyware developer NSO Group, creator of the Pegasus spyware, has been quietly planning a comeback, which involves investing millions of dollars lobbying in Washington while exploiting the Israel-Hamas war to stoke global security fears and position its products as a necessity. Breaches of Microsoft and Hewlett Packard Enterprise, disclosed in recent days, have pushed the espionage operations of the well-known Russia-backed hacking group Midnight Blizzard back into the spotlight. And Amazon-owned Ring said this week that it is shutting down a feature of its controversial Neighbors app that gave law enforcement a free pass to request footage from users without a warrant.

WIRED had a deep dive this week into the Israel-linked hacking group known as Predatory Sparrow and its notably aggressive offensive cyberattacks, particularly against Iranian targets, which have included crippling thousands of gas stations and setting a steel mill on fire. With so much going on, we’ve got the perfect quick weekend project for iOS users who want to feel more digitally secure: Make sure you’ve upgraded your iPhone to iOS 17.3 and then turn on Apple’s new Stolen Device Protection feature, which could block thieves from taking over your accounts.

And there’s more. Each week, we highlight the news we didn’t cover in-depth ourselves. Click on the headlines below to read the full stories. And stay safe out there.

After first disclosing a breach in October, the ancestry and genetics company 23andMe said in December that personal data from 6.9 million users was impacted in the incident stemming from attackers compromising roughly 14,000 user accounts. These accounts then gave attackers access to information voluntarily shared by users in a social feature the company calls DNA Relatives. 23andMe has blamed users for the account intrusions, saying that they only occurred because victims set weak or reused passwords on their accounts. But a state-mandated filing in California about the incident reveals that the attackers started compromising customers’ accounts in April and continued through much of September without the company ever detecting suspicious activity—and that someone was trying to guess and brute-force users’ passwords.

North Korea has been using generative artificial intelligence tools “to search for hacking targets and search for technologies needed for hacking,” according to a senior official at South Korea’s National Intelligence Service who spoke to reporters on Wednesday under the condition of anonymity. The official said that Pyongyang has not yet begun incorporating generative AI into active offensive hacking operations but that South Korean officials are monitoring the situation closely. More broadly, researchers say they are alarmed by North Korea’s development and use of AI tools for multiple applications.

The digital ad industry is notorious for enabling the monitoring and tracking of users across the web. New findings from 404 Media highlight a particularly insidious service, Patternz, that draws data from ads in hundreds of thousands of popular, mainstream apps to reportedly fuel a global surveillance dragnet. The tool, and the visibility it provides, has been marketed to governments around the world for integration with other intelligence agency surveillance capabilities. “The pipeline involves smaller, obscure advertising firms and advertising industry giants like Google. In response to queries from 404 Media, Google and PubMatic, another ad firm, have already cut-off a company linked to the surveillance firm,” 404’s Joseph Cox wrote.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory have devised an algorithm that could be used to convert data from smart devices’ ambient light sensors into an image of the scene in front of the device. A tool like this could be used to turn a smart home gadget or mobile device into a surveillance tool. Ambient light sensors measure light in an environment and automatically adjust a screen’s brightness to make it more usable in different conditions. But because ambient light data isn’t considered to be sensitive, these sensors automatically have certain permissions in an operating system and generally don’t require specific approval from a user to be used by an app. As a result, the researchers point out that bad actors could potentially abuse the readings from these sensors without users having recourse to block the information stream.
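To make that permission gap concrete, here is a minimal sketch, assuming an Android app written in Kotlin: subscribing to the ambient light sensor needs no manifest entry and triggers no consent prompt. The class name and logging are illustrative only, and the MIT reconstruction algorithm itself is not shown.

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Minimal sketch: any app component can read the ambient light sensor
// without declaring a permission or asking the user.
class LightSampler(context: Context) : SensorEventListener {
    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val lightSensor: Sensor? =
        sensorManager.getDefaultSensor(Sensor.TYPE_LIGHT)

    fun start() {
        lightSensor?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_FASTEST)
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        val lux = event.values[0]  // ambient illuminance in lux
        // The MIT work reconstructs coarse images from many such readings;
        // here the value is only logged to show how freely it flows.
        println("ambient light: $lux lx")
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) { /* unused */ }
}
```

The point is not that this snippet spies on anyone; it is that nothing in the platform’s permission model stands between an app and this data stream.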

A Bloody Pig Mask Is Just Part of a Wild New Criminal Charge Against eBay

“EBay’s actions against us had a damaging and permanent impact on us—emotionally, psychologically, physically, reputationally, and financially—and we strongly pushed federal prosecutors for further indictments to deter corporate executives and board members from creating a culture where stalking and harassment is tolerated or encouraged,” Ina and David Steiner say in a victim statement published online. The couple also highlighted that EcommerceBytes has filed a civil lawsuit against eBay and its former employees that is set to be heard in 2025.

Beijing’s judicial bureau has claimed that a privately run research institution, the Beijing Wangshendongjian Judicial Appraisal Institute, has created a way to identify people using Apple’s AirDrop tool, including determining phone numbers, email addresses, and device names. Police have been able to identify suspects using the technique, according to reports and a post from the Institute. Apple’s wireless AirDrop communication and file-sharing method has previously been used in China to protest the leadership of President Xi Jinping, and Apple introduced a 10-minute time limit on open AirDrop sharing in China before later rolling it out globally.

In a blog post analyzing the incident, Johns Hopkins University cryptographer Matthew Green says the attack was initially discovered by researchers at Germany’s Technical University of Darmstadt in 2019. In short, Green says, Apple doesn’t use a secure private set intersection that can help mask people’s identity when communicating with other phones using AirDrop. It’s unclear if Apple plans to make any changes to stop AirDrop being abused in the future.
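According to the Darmstadt researchers’ findings, AirDrop’s contact discovery exchanges hashed phone numbers and email addresses rather than running a private set intersection protocol, and a phone number carries so little entropy that its hash can simply be brute-forced. Below is a minimal sketch of that idea in Kotlin; the area code, the target number, and the bare SHA-256 format are assumptions for illustration, not Apple’s actual wire format.

```kotlin
import java.security.MessageDigest

// Minimal sketch: why hashing a low-entropy identifier such as a phone
// number does not hide it. The "observed" hash below is hypothetical.
fun sha256(s: String): ByteArray =
    MessageDigest.getInstance("SHA-256").digest(s.toByteArray())

fun main() {
    val observed = sha256("+15551234567")      // stands in for a hash seen on the air
    for (suffix in 0 until 10_000_000) {        // 10^7 candidates for one area code
        val candidate = "+1555" + suffix.toString().padStart(7, '0')
        if (sha256(candidate).contentEquals(observed)) {
            println("recovered number: $candidate")
            break
        }
    }
}
```

Ten million candidates is a trivial search on commodity hardware, which is why hashing alone is not treated as a meaningful privacy protection for identifiers drawn from a small space.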

It’s been more than 15 years since the Stuxnet malware was smuggled into Iran’s Natanz uranium enrichment plant and destroyed hundreds of centrifuges. Even now, plenty of details remain unknown about the attack, which is believed to have been coordinated by the US and Israel. That includes who may have delivered the Stuxnet virus to the nuclear facility—a USB thumb drive was used to install the worm into the nuclear plant’s air-gapped networks. In 2019, it was reported that Dutch intelligence services had recruited an insider to help with the attack. This week, the Dutch newspaper de Volkskrant claimed to identify the mole as Erik van Sabben. According to the report, van Sabben was recruited by Dutch intelligence service AIVD in 2005, and politicians in the Netherlands did not know about the operation. Van Sabben is said to have left Iran shortly after the sabotage began. However, he died two weeks later, on January 16, 2009, after being involved in a motorcycle accident in Dubai.

The rapid advances in generative AI systems, which use machine learning to create text and produce images, have seen companies scrambling to incorporate chatbots or similar technologies into their products. Despite the progress, traditional cybersecurity practices of locking down systems from unauthorized access and making sure apps can’t access too much data still apply. This week, 404 Media reported that Chattr, a company creating an “AI digital assistant” to help with hiring, exposed data through an incorrect Firebase configuration and also revealed how its systems work. This includes the AI appearing to have the ability to “accept or deny job applicants.” The pseudonymous security researcher behind the finding, MrBruh, shared a video with 404 Media showing the chatbot appearing to automatically make decisions about job applications. Chattr secured the exposed systems after being contacted by the researchers but did not comment on the incident.
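As a rough illustration of the class of misconfiguration described above: a Firebase Realtime Database whose security rules are left open to public reads will answer unauthenticated REST requests from anyone who knows or guesses the project URL. The endpoint below is hypothetical and is not Chattr’s actual configuration.

```kotlin
import java.net.URI

// Hypothetical example of an over-permissive Firebase Realtime Database.
// If the project's rules are set to {"rules": {".read": true}}, any client
// can dump data over the public REST API with no credentials at all.
fun main() {
    val url = URI("https://example-project-default-rtdb.firebaseio.com/.json").toURL()
    val body = url.openStream().bufferedReader().use { it.readText() }
    println(body)  // with open read rules, this prints the database contents as JSON
}
```

Locking the rules back down and moving sensitive data behind authenticated, server-side access is the standard fix, which is consistent with Chattr securing the exposed systems after being contacted.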