The Mystery of ‘Jia Tan,’ the XZ Backdoor Mastermind

Ultimately, Scott argues that those three years of code changes and polite emails were likely not spent sabotaging multiple software projects, but rather building up a history of credibility in preparation for the sabotage of XZ Utils specifically—and potentially other projects in the future. “He just never got to that step because we got lucky and found his stuff,” says Scott. “So that’s burned now, and he’s gonna have to go back to square one.”

Technical Ticks and Time Zones

Despite Jia Tan’s persona as a single individual, their yearslong preparation is a hallmark of a well-organized state-sponsored hacker group, argues Raiu, the former Kaspersky lead researcher. So too are the technical hallmarks of the XZ Utils malicious code that Jia Tan added. Raiu notes that, at a glance, the code truly looks like a compression tool. “It’s written in a very subversive manner,” he says. It’s also a “passive” backdoor, Raiu says, so it wouldn’t reach out to a command-and-control server that might help identify the backdoor’s operator. Instead, it waits for the operator to connect to the target machine via SSH and authenticate with a private key—one generated with a particularly strong cryptographic function known as Ed448.
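The "passive" pattern Raiu describes can be sketched in a few lines. This is a hypothetical illustration only: the real implant verified an Ed448 signature against a public key baked into the binary, but Ed448 is not in Python's standard library, so an HMAC check stands in for the authentication step. The key and payload names are invented for the example.

```python
import hashlib
import hmac

# Hypothetical stand-in for the key material the operator controls.
# In the real backdoor this was an Ed448 public key compiled into xz;
# here a shared HMAC key illustrates the same gating idea.
OPERATOR_KEY = b"secret-known-only-to-operator"

def gate(payload: bytes, tag: bytes) -> bool:
    """Accept a payload only if it authenticates against the baked-in key.

    Anyone else probing the machine sees perfectly normal behavior:
    an unauthenticated payload is silently ignored, and the implant
    never "phones home" to a command-and-control server.
    """
    expected = hmac.new(OPERATOR_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# The operator, holding the key, can produce a valid tag...
valid_tag = hmac.new(OPERATOR_KEY, b"run command", hashlib.sha256).digest()
print(gate(b"run command", valid_tag))   # True

# ...while any other connection attempt fails quietly.
print(gate(b"run command", b"\x00" * 32))  # False
```

The design choice Raiu highlights is visible here: because verification happens locally and failures are silent, there is no outbound traffic for defenders to spot.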

The backdoor’s careful design could be the work of US hackers, Raiu notes, but he suggests that’s unlikely, since the US wouldn’t typically sabotage open source projects—and if it did, the National Security Agency would probably use a quantum-resistant cryptographic function, which Ed448 is not. That leaves non-US groups with a history of supply chain attacks, Raiu suggests, like China’s APT41, North Korea’s Lazarus Group, and Russia’s APT29.

At a glance, Jia Tan certainly looks East Asian—or is meant to. The time zone of Jia Tan’s commits is UTC+8: That’s China’s time zone, and only an hour off from North Korea’s. However, an analysis by two researchers, Rhea Karty and Simon Henniger, suggests that Jia Tan may have simply changed the time zone of their computer to UTC+8 before every commit. In fact, several commits were made with a computer set to an Eastern European time zone instead, perhaps when Jia Tan forgot to make the change.

“Another indication that they are not from China is the fact that they worked on notable Chinese holidays,” say Karty and Henniger, students at Dartmouth College and the Technical University of Munich, respectively. Boehs, the developer, adds that much of the work starts at 9 am and ends at 5 pm for Eastern European time zones. “The time range of commits suggests this was not some project that they did outside of work,” Boehs says.
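The kind of analysis Karty and Henniger describe is possible because git records each author's UTC offset alongside every commit timestamp, so a contributor who claims UTC+8 but occasionally commits from UTC+2 or UTC+3 stands out. A minimal sketch of that check, using invented offsets rather than real Jia Tan data:

```python
from datetime import timedelta

# Illustrative git-style author offsets, NOT real commit data: a
# contributor presenting as UTC+8 with a few Eastern European slips.
commit_offsets = ["+0800", "+0800", "+0200", "+0800", "+0300"]

def parse_offset(raw: str) -> timedelta:
    """Turn a git offset string like '+0800' or '-0430' into a timedelta."""
    sign = 1 if raw[0] == "+" else -1
    hours, minutes = int(raw[1:3]), int(raw[3:5])
    return sign * timedelta(hours=hours, minutes=minutes)

# Flag commits whose offset departs from the claimed UTC+8 baseline.
outliers = [o for o in commit_offsets if parse_offset(o) != timedelta(hours=8)]
print(outliers)  # ['+0200', '+0300']
```

In practice one would run this over `git log --format="%ad"` output and also look at working hours, as Boehs did, but the core signal is the same: the offset field travels with the commit and is only as honest as the committer's clock settings.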

All of those clues lead back to Russia, and specifically Russia’s APT29 hacking group, argues Dave Aitel, a former NSA hacker and founder of the cybersecurity firm Immunity. Aitel points out that APT29—widely believed to work for Russia’s foreign intelligence agency, known as the SVR—has a reputation for technical care of a kind that few other hacker groups show. APT29 also carried out the SolarWinds compromise, perhaps the most deftly coordinated and effective software supply chain attack in history. That operation matches the style of the XZ Utils backdoor far more closely than the cruder supply chain attacks of APT41 or Lazarus.

“It could very well be someone else,” says Aitel. “But I mean, if you’re looking for the most sophisticated supply chain attacks on the planet, that’s going to be our dear friends at the SVR.”

Security researchers agree, at least, that it’s unlikely that Jia Tan is a real person, or even one person working alone. Instead, it seems clear that the persona was the online embodiment of a new tactic from a well-organized group—a tactic that nearly worked. That means we should expect to see Jia Tan return by other names: seemingly polite and enthusiastic contributors to open source projects, hiding a government’s secret intentions in their code commits.

A Vending Machine Error Revealed Secret Face Recognition Tech

The Canada-based University of Waterloo is racing to remove M&M-branded smart vending machines from campus after outraged students discovered the machines were covertly collecting face recognition data without their consent.

The scandal started when a student using the alias SquidKid47 posted an image on Reddit showing a campus vending machine error message, “Invenda.Vending.FacialRecognitionApp.exe,” displayed after the machine failed to launch a face recognition application that nobody expected to be part of the process of using a vending machine.

“Hey, so why do the stupid M&M machines have facial recognition?” SquidKid47 pondered.

The Reddit post sparked an investigation from a fourth-year student named River Stanley, who was writing for a university publication called MathNEWS.

Stanley sounded the alarm after consulting Invenda sales brochures that promised “the machines are capable of sending estimated ages and genders” of every person who used the machines—without ever requesting consent.

This frustrated Stanley, who discovered that Canada’s privacy commissioner had years ago investigated a shopping mall operator called Cadillac Fairview after discovering some of the malls’ informational kiosks were secretly “using facial recognition software on unsuspecting patrons.”

Only because of that official investigation did Canadians learn that “over 5 million nonconsenting Canadians” were scanned into Cadillac Fairview’s database, Stanley reported. Where Cadillac Fairview was ultimately forced to delete the entire database, Stanley wrote that consequences for collecting similarly sensitive face recognition data without consent for Invenda clients like Mars remain unclear.

Stanley’s report ended with a call for students to demand that the university “bar facial recognition vending machines from campus.”

A University of Waterloo spokesperson, Rebecca Elming, eventually responded, confirming to CTV News that the school had asked for the vending machine software to be disabled until the machines could be removed.

Students told CTV News that their confidence in the university’s administration was shaken by the controversy. Some students claimed on Reddit that they attempted to cover the vending machine cameras while waiting for the school to respond, using gum or Post-it notes. One student pondered whether “there are other places this technology could be being used” on campus.

Elming was not able to confirm the exact timeline for when the machines would be removed, other than telling Ars it would happen “as soon as possible.” Elming declined Ars’ request to clarify if there are other areas of campus collecting face recognition data. She also wouldn’t confirm, for any casual snackers on campus, when, if ever, students could expect the vending machines to be replaced with snack dispensers not equipped with surveillance cameras.

Invenda Claims Machines Are GDPR-Compliant

MathNEWS’ investigation tracked down responses from companies responsible for smart vending machines on the University of Waterloo’s campus.

Adaria Vending Services told MathNEWS that “what’s most important to understand is that the machines do not take or store any photos or images, and an individual person cannot be identified using the technology in the machines. The technology acts as a motion sensor that detects faces, so the machine knows when to activate the purchasing interface—never taking or storing images of customers.”

According to Adaria and Invenda, students shouldn’t worry about data privacy because the vending machines are “fully compliant” with the world’s toughest data privacy law, the European Union’s General Data Protection Regulation (GDPR).

“These machines are fully GDPR compliant and are in use in many facilities across North America,” Adaria’s statement said. “At the University of Waterloo, Adaria manages last mile fulfillment services—we handle restocking and logistics for the snack vending machines. Adaria does not collect any data about its users and does not have any access to identify users of these M&M vending machines.”

SpaceX Launched Military Satellites Designed to Track Hypersonic Missiles

Two prototype satellites for the Missile Defense Agency and four missile-tracking satellites for the US Space Force rode a SpaceX Falcon 9 rocket into orbit Wednesday from Florida’s Space Coast.

These satellites are part of a new generation of spacecraft designed to track hypersonic missiles launched by China or Russia and perhaps emerging missile threats from Iran or North Korea, which are developing their own hypersonic weapons.

Hypersonic missiles are smaller and more maneuverable than conventional ballistic missiles, which the US military’s legacy missile defense satellites can detect when they launch. Infrared sensors on the military’s older-generation missile tracking satellites are tuned to pick out bright thermal signatures from missile exhaust.

The New Threat Paradigm

Hypersonic missiles represent a new challenge for the Space Force and the Missile Defense Agency (MDA). For one thing, ballistic missiles follow a predictable parabolic trajectory that takes them into space. Hypersonic missiles are smaller and comparatively dim, and they spend more time flying in Earth’s atmosphere. Their maneuverability makes them difficult to track.

A nearly five-year-old military organization called the Space Development Agency (SDA) has launched 27 prototype satellites over the last year to prove the Pentagon’s concept for a constellation of hundreds of small, relatively low-cost spacecraft in low-Earth orbit. This new fleet of satellites, which the SDA calls the Proliferated Warfighter Space Architecture, will eventually number hundreds of spacecraft to track missiles and relay data about their flight paths down to the ground. The tracking data will provide an early warning to those targeted by hypersonic missiles and help generate a firing solution for interceptors to shoot them down.

The SDA constellation combines conventional tactical radio links, laser inter-satellite communications, and wide-view infrared sensors. The agency, now part of the Space Force, plans to launch successive generations, or tranches, of small satellites, each introducing new technology. The SDA’s approach relies on commercially available spacecraft and sensor technology and will be more resilient to attack from an adversary than the military’s conventional space assets. Those legacy military satellites often cost hundreds of millions or billions of dollars apiece, with architectures that rely on small numbers of large satellites that might appear like a sitting duck to an adversary determined to inflict damage.

Four of the small SDA satellites and two larger spacecraft for the Missile Defense Agency were aboard a SpaceX Falcon 9 rocket when it lifted off from Cape Canaveral Space Force Station at 5:30 pm EST (2230 UTC) Wednesday.

The rocket headed northeast from Cape Canaveral to place the six payloads into low-Earth orbit. Officials from the Space Force declared the launch a success later Wednesday evening.

The SDA’s four tracking satellites, built by L3Harris, are the last spacecraft the agency will launch in its prototype constellation, called Tranche 0. Beginning later this year, the SDA plans to kick off a rapid-fire launch campaign with SpaceX and United Launch Alliance to quickly build out its operational Tranche 1 constellation, with launches set to occur at one-month intervals to deploy approximately 150 satellites. Then, there will be a Tranche 2 constellation with more advanced sensor technologies.

The primary payloads aboard Wednesday’s launch were for the Missile Defense Agency. These two Hypersonic and Ballistic Tracking Space Sensor (HBTSS) satellites, one supplied by L3Harris and the other by Northrop Grumman, will demonstrate medium field-of-view sensors. Those sensors can’t cover as much territory as the SDA satellites but will provide more sensitive and detailed missile tracking data.

Elon Musk’s X Gave Check Marks to Terrorist Group Leaders, Report Says

A watchdog group’s investigation found that terrorist group Hezbollah and other US-sanctioned entities have accounts with paid check marks on X, the Elon Musk–owned social network that still resides at the Twitter.com domain.

The Tech Transparency Project (TTP), a nonprofit that is critical of Big Tech companies, said in a report on Wednesday that “X, the platform formerly known as Twitter, is providing premium, paid services to accounts for two leaders of a US-designated terrorist group and several other organizations sanctioned by the US government.”

After buying Twitter for $44 billion, Musk started charging users for check marks that were previously intended to verify that an account was notable and authentic. “Along with the check marks, which are intended to confer legitimacy, X promises various perks for premium accounts, including the ability to post longer text and videos and greater visibility for some posts,” the Tech Transparency Project report noted.

The Tech Transparency Project suggests that X may be violating US sanctions. “The accounts identified by TTP include two that apparently belong to the top leaders of Lebanon-based Hezbollah and others belonging to Iranian and Russian state-run media,” the report said. “The fact that X requires users to pay a monthly or annual fee for premium service suggests that X is engaging in financial transactions with these accounts, a potential violation of US sanctions.”

Some of the accounts were verified before Musk bought Twitter, but verification was a free service at the time. Musk’s decision to charge for check marks means that X is “providing a premium, paid service to sanctioned entities,” which may raise “new legal issues,” the Tech Transparency Project said.

Report Details 28 Check-Marked Accounts

Musk’s X charges $1,000 a month for a Verified Organizations subscription and last month added a basic tier for $200 a month. For individuals, the X Premium tiers that come with check marks cost $8 or $16 a month.

It’s possible for US companies to receive a license from the government to engage in certain transactions with sanctioned entities, but it doesn’t seem likely that X has such a license. X’s rules explicitly prohibit users from purchasing X Premium “if you are a person with whom X is not permitted to have dealings under US and any other applicable economic sanctions and trade compliance law.”

In all, the Tech Transparency Project said it found 28 “verified” accounts tied to sanctioned individuals or entities. These include individuals and groups listed by the US Treasury Department’s Office of Foreign Assets Control (OFAC) as Specially Designated Nationals.

“Of the 28 X accounts identified by TTP, 18 show they got verified after April 1, 2023, when X began requiring accounts to subscribe to paid plans to get a check mark. The other 10 were legacy verified accounts, which are required to pay for a subscription to retain their check marks,” the group wrote, adding that it “found advertising in the replies to posts in 19 of the 28 accounts.”

X issued the following statement on Wednesday: “X has a robust and secure approach in place for our monetization features, adhering to legal obligations, along with independent screening by our payments providers. Several of the accounts listed in the Tech Transparency Report are not directly named on sanction lists, while some others may have visible account check marks without receiving any services that would be subject to sanctions. Our teams have reviewed the report and will take action if necessary. We’re always committed to ensuring that we maintain a safe, secure and compliant platform.”

X Removes Some Check Marks

An account with the handle @SH_NasrallahEng appears to be tied to Hezbollah leader Hassan Nasrallah, the TTP report said. The account had a check mark when we first checked it earlier Wednesday, but it has since been removed.

“The account, which has 93,600 followers, posts English-language Hezbollah messages and memes disparaging Israel and the US. It was created in October 2021 and verified in November 2023, the same month that Nasrallah threatened further escalation of Israel’s war with Hamas,” the report said.

‘AI Girlfriends’ Are a Privacy Nightmare

You shouldn’t trust any answers a chatbot sends you. And you probably shouldn’t trust it with your personal information either. That’s especially true for “AI girlfriends” or “AI boyfriends,” according to new research.

An analysis into 11 so-called romance and companion chatbots, published on Wednesday by the Mozilla Foundation, has found a litany of security and privacy concerns with the bots. Collectively, the apps, which have been downloaded more than 100 million times on Android devices, gather huge amounts of people’s data; use trackers that send information to Google, Facebook, and companies in Russia and China; allow users to use weak passwords; and lack transparency about their ownership and the AI models that power them.

Since OpenAI unleashed ChatGPT on the world in November 2022, developers have raced to deploy large language models and create chatbots that people can interact with and pay to subscribe to. The Mozilla research provides a glimpse into how this gold rush may have neglected people’s privacy, and into tensions between emerging technologies and how they gather and use data. It also indicates how people’s chat messages could be abused by hackers.

Many “AI girlfriend” or romantic chatbot services look similar. They often feature AI-generated images of women that can be sexualized or that sit alongside provocative messages. Mozilla’s researchers looked at a variety of chatbots including large and small apps, some of which purport to be “girlfriends.” Others offer people support through friendship or intimacy, or allow role-playing and other fantasies.

“These apps are designed to collect a ton of personal information,” says Jen Caltrider, the project lead for Mozilla’s Privacy Not Included team, which conducted the analysis. “They push you toward role-playing, a lot of sex, a lot of intimacy, a lot of sharing.” For instance, screenshots from the EVA AI chatbot show text saying “I love it when you send me your photos and voice,” and asking whether someone is “ready to share all your secrets and desires.”

Caltrider says there are multiple issues with these apps and websites. Many of the apps may not be clear about what data they are sharing with third parties, where they are based, or who creates them, Caltrider says, adding that some allow people to create weak passwords, while others provide little information about the AI they use. The apps analyzed all had different use cases and weaknesses.

Take Romantic AI, a service that allows you to “create your own AI girlfriend.” Promotional images on its homepage depict a chatbot sending a message saying, “Just bought new lingerie. Wanna see it?” The app’s privacy documents, according to the Mozilla analysis, say it won’t sell people’s data. However, when the researchers tested the app, they found it “sent out 24,354 ad trackers within one minute of use.” Romantic AI, like most of the companies highlighted in Mozilla’s research, did not respond to WIRED’s request for comment. Other apps monitored had hundreds of trackers.

In general, Caltrider says, the apps are not clear about what data they may share or sell, or exactly how they use some of that information. “The legal documentation was vague, hard to understand, not very specific—kind of boilerplate stuff,” Caltrider says, adding that this may reduce the trust people should have in the companies.