For years, AlphaBay ruled the dark web. If you were in the market to buy drugs or stolen credit cards, the digital bazaar was the place to turn. At its peak, more than 350,000 products were listed for sale—an estimated 10 times the size of the notorious Silk Road market—and the site drew the ire of law enforcement around the world. That was, until cops took AlphaBay offline in 2017.
This week, WIRED published the first in a six-part series detailing the hunt for Alpha02, the mastermind believed to be behind AlphaBay, and the huge international takedown operation that wiped the marketplace from the web. Each week, we’ll publish a new part of the series, excerpted from WIRED reporter Andy Greenberg’s new book, Tracers in the Dark.
Schools across the US have faced dozens of hoax calls about mass shootings in recent months. After a call is made, police scramble to the scene fearing the worst, only to find out there is no shooter. Now hoax phone call recordings obtained by WIRED and conversations with law enforcement officials reveal how the calls have been made and show that law enforcement officials are closing in on the alleged hoaxer. Police are looking for a male “with a heavy accent described as Middle Eastern or African” and have linked the phone calls to Ethiopia.
Elsewhere, a bug in Apple’s new macOS 13 Ventura operating system is causing problems for malware scanners and security monitoring tools. With the new software update, Apple accidentally crippled third-party security products in a way users may not notice. The company is planning to fix the bug in an upcoming software release.
We also looked at a newly discovered Chinese influence operation that is targeting US elections—although it is not having much success. And now that Elon Musk owns Twitter, here’s how you should think about your privacy and security on the bird website.
But wait, there’s more! Each week, we highlight the news we didn’t cover in-depth ourselves. Click on the headlines below to read the full stories. And stay safe out there.
Officials in Canada and the Netherlands are investigating allegations that Chinese police forces have operated a network of illegal police stations within their countries. According to reports that emerged this week, Chinese police forces have been operating out of clandestine bases and using their presence to track and threaten dissidents. The Dutch government has called such sites “illegal” and said it is “investigating exactly what they are doing here,” while officials in Canada said they are investigating “so-called ‘police’ stations.”
However, these cases may be just the tip of the iceberg. Spanish civil rights group Safeguard Defenders first claimed that Chinese police forces from the cities of Fuzhou and Qingtian were running “overseas police service stations” across the West in a report published in September. Since 2018, the group claims, more than 38 such stations have appeared in “dozens of countries” spread across five continents. “Such overseas police ‘service stations’ have been used by police back in China to carry out such ‘persuasion to return’ operations on foreign soil, including in Europe,” the report states. Lawmakers in England and Scotland are also planning to investigate the stations, reports say.
Elon Musk is buying Twitter for $44 billion after the least sexy will-they-won’t-they saga of all time. And while Musk attempted to reassure advertisers yesterday that “Twitter obviously cannot become a free-for-all hellscape, where anything can be said with no consequences,” the acquisition raises practical questions about what the social network’s nearly 240 million active users can expect from the platform in the future.
Chief among these concerns are questions about how Twitter’s stances on user security and privacy may change in the Musk era. A number of top Twitter executives were fired last night, including CEO Parag Agrawal, the company’s general counsel Sean Edgett, and Vijaya Gadde, the company’s head of legal policy, trust, and safety who was known for working to protect user data from law enforcement requests and court orders. Gadde ran the committee that ousted Donald Trump from Twitter in January 2021 following the Capitol riots. Musk, meanwhile, said in May that he would want to reinstate Trump on the platform and called the former US president’s removal “morally bad.”
This afternoon, Musk wrote that “Twitter will be forming a content moderation council with widely diverse viewpoints. No major content decisions or account reinstatements will happen before that council convenes.”
Content moderation has real implications for user security on any platform, particularly when it involves hate speech and violent misinformation. But other topics, including the privacy of Twitter direct messages, protection from unlawful government data requests, and the overall quality of Twitter’s security protections, will loom large in the coming weeks. This is particularly true in light of recent accusations from former Twitter chief security officer Peiter “Mudge” Zatko, who described Twitter as having grossly inadequate digital security defenses in an August whistleblower report.
“Personally, I don’t know what to do, especially when you take Mudge’s whistleblower complaint into consideration,” says Whitney Merrill, a privacy and data protection lawyer and former Federal Trade Commission attorney. “I’m just not putting any sensitive data or data I’d like to stay confidential into DMs.”
Twitter offers a tool for downloading all the data it holds in your account, and reviewing your own trove is a good first step in understanding what information the company has linked to you. It’s unclear, though, exactly how much control you currently have over deleting this data, and the policies could continue to evolve under the Musk administration. Twitter DMs, for example, only offer a “Delete for You” option, which removes messages from your own account but leaves them visible to the other participants.
More broadly, Twitter’s current policy on account deactivation simply says, “If you do not log back into your account for the 30 days following the deactivation, your account will be permanently deactivated. Once permanently deactivated, all information associated with your account is no longer available in our Production Tools.” It is unclear what exactly this means in terms of long-term data retention and, again, policies may change in the future.
“It’s really not about pornography,” says Brit, a former user of Accountable2You who asked to only be identified by her first name, due to privacy concerns. “It’s about making you conform to what your pastor wants.” Brit says she was asked to install the app by her parents after she was caught looking at pornography and that her mother and her pastor were both her designated accountability partners. “I remember I had to sit down and have a conversation with him [her pastor] after I Wikipedia’d an article about atheism,” she says. “I was a kid, but that doesn’t mean I don’t have some kind of right to read what I want to read.”
While accountability apps are largely marketed to parents and families, some also advertise their services to churches. Accountable2You, for example, advertises group rates for churches or small groups and has set up several landing pages for specific churches where members can sign up. Covenant Eyes, meanwhile, employs a director of Church and Ministry Outreach to help onboard religious organizations.
Accountable2You did not respond to WIRED’s requests for comment.
Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation, a digital rights nonprofit, and cofounder of the Coalition Against Stalkerware, says consent to such surveillance is a major concern. “One of the key elements of consent is that a person can feel comfortable saying no,” she says. “You could argue that any app installed in a church setting is done in a coercive manner.” While WIRED did not speak to anyone who was unaware that the app was on their phone—which is often the case with spyware—Hao-Wei Lin says he didn’t feel he was in a position to say no when his church leader asked him to install Covenant Eyes. Gracepoint had secured him a $400-a-month apartment in Berkeley, where he was attending college. Without the church’s support, he might have had nowhere to live.
But this is not the experience of everyone we spoke to. James Nagy is a former Gracepoint church member who, as a one-time congregation leader, was on both sides of Covenant Eyes reports. Nagy, who is gay, was taught from a young age that homosexuality was a sin. So when Gracepoint offered him a software solution that claimed to be able to help with what he then considered a moral dilemma, he jumped at the opportunity. He says that while he believes many people at Gracepoint were pressured to install the app, in his case the pressure came from himself. “Gracepoint didn’t try to change me,” Nagy says. “I tried to change me.” Nagy is now an elder at the Presbyterian Church (USA) and until 2021 was a facilitator with the Reformation Project, a nonprofit whose mission is to advance LGBTQ inclusion in the church.
Many of Silicon Valley’s fiercest watchdogs on Capitol Hill are now snarling. Yesterday’s arresting testimony by Twitter’s former security chief, Peiter “Mudge” Zatko, has lawmakers in both parties redoubling their efforts to rein in the tech titans.
Zatko’s testimony before the Senate Judiciary Committee follows a detailed report he submitted to the US Department of Justice, the Securities and Exchange Commission, and the Federal Trade Commission late last month. His allegations, which were the central subject of yesterday’s hearing, range from claims of lax security protocols to negligent leadership—all of which Twitter denies.
Even as senators were left seething—guess they aren’t fans of Twitter’s 4,000 or so employees having easy access to their accounts and those of millions of others, as Zatko alleges—there’s also a sense of renewal in the air at the Capitol.
“That was a fun one,” Republican senator Mike Lee told WIRED after the hearing.
The anger cloaked in elation is, in part, because many senators feel they have now found the proverbial smoking gun.
“My guess is that this testimony today will trigger a lot of class actions,” Senator John Kennedy of Louisiana said after questioning the witness on Tuesday. “And it should.”
The Republican is referring to Zatko’s allegation that the social media platform lacks basic security measures, such as tracking which of the company’s hundreds of engineers are inside the platform making changes. This includes, according to Zatko, the potential mining of a United States senator’s own account.
“I’m assuming they have,” Kennedy said.
Hence the snarling. Like the rest of us, US senators are protective of their private data. And there is a growing consensus in Washington that the FTC is ill-suited to take on social media giants that, according to Zatko, laugh off $150 million fines and the other demands the agency places on bad tech actors.
“Maybe the thing to do is put it in the hands of private litigants,” Senator Josh Hawley of Missouri said. “Lawsuits are powerful things, so maybe it’s, we let the folks who are getting doxed and the folks who are getting hacked and whatever—we give them the power to go into court. Then you get discovery.”
While senators plan to ask Twitter officials to testify—likely with an assist from subpoenas—in response to the accusations from their former executive, they also don’t seem to be waiting. Senator Hawley is now trying to breathe new life into his out-of-the-box proposal to move the FTC’s tech portfolio to the Department of Justice, though he’s open to many reform ideas floating around Washington.
Hawley and outspoken senator Lindsey Graham of South Carolina are renewing their calls to eradicate Section 230—the law, passed by Congress in the internet’s infancy, that protects online companies from certain kinds of litigation over content users publish on their platforms.
“You’ve got to license the people. Apparently, money doesn’t matter to them. Losing your ability to operate would matter,” Graham said. “So if you were licensed, then you have something you could lose.”
Graham has teamed up with Senator Elizabeth Warren of Massachusetts in calling for the creation of a new federal regulatory body focused on tech companies. While the two agree that the FTC is currently incapable of overseeing Silicon Valley, they disagree on Section 230, which Graham has long wanted to reform.
“My guess is that Meta is going to have to look at some form of geo-siloing if they want to continue to operate in the EU,” says Calli Schroeder, global privacy counsel at the Electronic Privacy Information Center, a nonprofit digital rights research organization. Schroeder, who previously worked with companies on international data transfers, says this approach could mean Meta would have to create its own servers and data centers in the EU that aren’t connected to its broader databases.
Harshvardhan Pandit, a computer science research fellow at Trinity College Dublin who researches the GDPR, notes that because data authorities are still considering Meta’s case and a final decision hasn’t been published, the ruling could include caveats or steps Meta could take to fall in line. For instance, one recent data protection decision in Europe gave a company a six-month period to make changes to its business.
“I think the most pragmatic solution would be for them to create the European infrastructure, like Google or Amazon, which have quite a few data centers here,” Pandit says, adding that Meta could also introduce more encryption to how it stores data and maximize how much it keeps in the EU. All these measures would be costly, though. Jack Gilbert, director and associate general counsel at Meta, says that the issue “is in the process of being resolved.” Facebook did not respond specifically to questions about its plan to respond to the Irish decision.
European courts have twice ruled that the systems put in place to share data between the EU and US don’t properly protect people’s data—complaints that date back to the early 2010s. International data-sharing agreements were first struck down in 2015, and again in July 2020, when the Privacy Shield agreement was ruled illegal.
“All that the EU is asking for when organizations transfer data to other countries is to protect that data in line with the GDPR,” says Nader Henein, a research vice president specializing in privacy and data protection at Gartner. “The issue is that laws in the US that protect the data of ‘nonresident aliens’ are woefully insufficient and make it very difficult for organizations like Facebook to comply with local law and the GDPR.”
While Meta is the focus of the most high-profile complaint, it isn’t the only company impacted by a lack of clarity on how companies in Europe can send data to the US. “The data transfer issue is not Meta-specific,” David Wehner, Meta’s chief strategy officer, said in a July earnings call. “It relates to how in general data is transferred for all US and EU companies back and forth to the US.”
The impacts of the July 2020 decision to get rid of Privacy Shield are now being felt. Since January of this year, multiple European data regulators have ruled that using Google Analytics, the company’s traffic-monitoring service for websites, falls foul of the GDPR. Danish authorities went even further: Schools can’t use Chromebooks without restrictions being put in place. “There is a ton of legal uncertainty, and there is a significant compliance risk,” says Gabriela Zanfir-Fortuna, vice president of global privacy at Future of Privacy Forum, a nonprofit think tank.