How to Install Threads on Your Windows Desktop


After all the press about Mark Zuckerberg and Elon Musk potentially taking it to each other in the octagon, the only analog we’re likely to see is Twitter versus Meta’s new darling, Threads. The platform picked up 70 million sign-ups in its first couple of days, and it shows no sign of slowing down. The only trouble is that, right now, it’s mobile only. You can view individual posts in a browser, but you can’t post or read your whole feed.

Personally, my relationship with the blue bird has been in sharp decline over the last few months, so I decided to give Threads a try. The best way I can describe it is, it’s like rerolling a new character class in a game after having already been through the endgame content. You know, kind of refreshing.

Threads still falls short because it’s locked to Android and iOS devices, so I can’t use it on anything other than my phone and tablet. But if you’re running Windows 11, there’s a quick path around that restriction using the Windows Subsystem for Android.

It all hinges on the Amazon Appstore and activating the ability to sideload Android APKs with the flick of a few switches. So this guide isn’t just for Threads, but more of a … meta guide for most Android apps that have available APKs. (I hope you see what I did there.)

Before We Begin

  • Have Windows 11 installed.
  • Have the latest Windows updates installed.
  • Make sure Microsoft supports Amazon Appstore in your country or region. (Check here.)

Install Amazon Appstore/Windows Subsystem for Android

  • Open the Microsoft Store and search for Amazon Appstore. Click Install or Get to begin the download.
  • This will start you through a three-step setup process—just follow it through. It will ask for permission to make changes to a couple of utilities—allow them, and you’ll soon be prompted to restart your computer.
  • When it comes back from restart, your PC will automatically begin installing the Windows Subsystem for Android. When that’s finished, you’ll be prompted with an Amazon Appstore login screen. (You don’t have to log in.)
  • You’ll now be able to find Windows Subsystem for Android in your Start menu. Open it and select Advanced settings on the left, then toggle the Developer mode slider to the right.

Download the Threads APK

There are multiple options for downloading Android app APKs, but if you don’t know where you’re going, you can end up in some unsavory corners of the web. One of the safest in my experience is APKMirror.

  • Get the Threads APK using the APKMirror link here.
  • By default this will go into your Downloads folder.
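If you want a quick sanity check before installing, APKMirror publishes a SHA-256 checksum for each file on its download page. Here’s a minimal sketch for comparing it against your local copy, assuming a POSIX shell and the default Downloads location (the file name threads.apk is a placeholder; use whatever name APKMirror actually saved):

```shell
# Compute the SHA-256 of the downloaded APK and compare it by eye with the
# checksum shown on the APKMirror download page. The path below is an
# assumption; adjust it to match the actual file name in your Downloads folder.
APK="$HOME/Downloads/threads.apk"
if [ -f "$APK" ]; then
  sha256sum "$APK"
fi
# On plain Windows (no POSIX shell), the equivalent is:
#   certutil -hashfile "%USERPROFILE%\Downloads\threads.apk" SHA256
```

If the printed hash doesn’t match the one on the download page, delete the file and grab it again.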

Install WSATools

While there are several options for apps that allow installation of Android APKs once the Windows Subsystem for Android is installed, WSATools is one of the simplest and most straightforward, and it’s another pickup from the Microsoft Store.

  • Open the Microsoft Store and search for WSATools. Install it.

Install Threads

All the pieces are in place, so let’s go!

  • WSATools will now be available on your Start menu. Open it up.
  • Click Install an APK. The first time you run it, WSATools will tell you that ADB is missing. Click Install and select or create a folder to install it into. Personally, I just made C:\ADB to keep it simple. (You’ll never have to do this again.)
  • Once that’s done, you’ll get another prompt to find your file. Go to your Downloads folder and select your freshly downloaded Threads APK.
  • Click Install when it shows the Threads icon and information.
  • It may ask for permission for ADB debugging. If so, click Yes.
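If WSATools gives you trouble, the same sideload can be done by hand with the adb command line it installs. This is a sketch under a couple of assumptions: Developer mode is on, adb is on your PATH, and WSA’s Advanced settings panel reports the default address 127.0.0.1:58526 (read the actual address from the panel):

```shell
# Manual sideload via adb, as an alternative to the WSATools GUI. Both the
# address and the APK path are assumptions; read the real address from WSA's
# Developer mode panel and point APK at your actual download.
WSA_ADDR="127.0.0.1:58526"
APK="$HOME/Downloads/threads.apk"

if command -v adb >/dev/null 2>&1; then
  adb connect "$WSA_ADDR"   # pair adb with the running WSA instance
  adb install "$APK"        # push and install the APK inside WSA
fi
```

Note that WSA has to be running (open any Android app, or the WSA settings app) before adb can connect.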

If this doesn’t work, or it says it can’t access the WSA, restarting your machine and trying again should do the trick.

And that’s it! Threads should now be available in your Start menu, so when you’re at your desk at work or on your gaming rig taking a break between runs, you can open Threads on your Windows 11 PC with the same ease as Microsoft’s Facebook, Twitter, and Instagram Windows apps. Happy spooling! (Is that what we’re calling it?)

Google Made Millions From Ads for Fake Abortion Clinics


Researchers at the CCDH also found several marketing firms catering to crisis pregnancy centers and offering services, including help accessing the Google ad grants, along with strategies to ensure that their content appears next to legitimate reproductive health information by hijacking keywords used by people seeking abortions.

“There’s a set of keywords which are clearly abortion search keywords, and those keywords tend to be the names of abortion providers,” says Callum Hood, head of research at CCDH. “Amongst the top keywords that fake clinics target, ‘planned parenthood’ is in the top five.” Planned Parenthood is a genuine reproductive health organization.

This is not the first time Google’s free advertising perks have gone to anti-abortion groups. In 2019, a group of anti-choice clinics run by a Catholic group were found to have received tens of thousands of dollars worth of free advertising on Google. In response, the company changed its policies to require such organizations to note whether they actually offer abortion services. 

But the CCDH report found that sometimes these labels were still not applied to ads from crisis pregnancy centers. And even then, Shakouri says the label can be confusing to users who don’t know the difference between a crisis pregnancy center and a legitimate health clinic that may simply not provide abortion care. “There’s a lot of ways people could interpret that labeling, and that labeling has been applied to organizations like abortion funds or services that act as referral services,” she says.

This confusion extends beyond ads and search to Google Maps, where crisis pregnancy centers often show up alongside legitimate clinics.

“It’s very hard for people that are less digitally literate to find out who is a legitimate provider,” says Sanne Thijssen, the creator of #HeyGoogle, which maps crisis pregnancy centers throughout Europe to help women better identify fake clinics. “A lot of times if they see something on Google Maps … they aren’t able to really distinguish as well.”

Martha Dimitratou, media manager for PlanC, a nonprofit that provides information about access to the abortion pill, says that the organization’s Google Ads account was banned over a year ago for advertising “unauthorized pharmacies.” 

“We have tried to appeal this very many times, but Google does not want to change the system,” she says.

Meanwhile, Google continues to allow ads from crisis pregnancy centers directing users to sites that promote “abortion reversal,” an unscientific method of administering progesterone to a woman who has taken abortion medication in order to stop its effects.

Angela Vasquez-Giroux, vice president of communications and research at abortion advocacy group Naral, notes that a past study on “abortion reversal” had to be halted because the regimen posed a threat to the health of the women involved. “Imagine if there were a vaccine study that found the vaccines were harmful to people,” she says. “Google probably wouldn’t promote that as a legitimate regimen, but they allow these organizations to continue to promote abortion pill reversal and other fake science, despite the fact that it is physically dangerous.”

Ron DeSantis Pushed Elon Musk’s Twitter to Its Breaking Point


Ron DeSantis, the Republican governor of Florida, surely hoped to trend on Twitter after announcing his run for president in an audio stream on the platform today. He likely did not want to see the top hashtag be #DeSaster.

Just minutes after DeSantis joined the platform’s owner, Elon Musk, on Twitter Spaces, and before the politician could even speak, Musk could be heard saying, “The servers are straining somewhat.” Then the stream abruptly ended, apparently overwhelmed by some 667,000 listeners, a paltry number compared to the streams on other platforms routinely watched by millions. 

DeSantis’ appearance was a gamble on a novel presidential campaign tactic and a platform not known for its mass appeal to US voters. The move ended up pushing Twitter to its breaking point both technically and philosophically.

The company, which has a fifth of the staff it had when Musk acquired it last year, eventually restarted the audio stream almost 30 minutes after the scheduled start time. But the event went on to demonstrate the ideological blinders on Musk’s social media project—and its tendency to insulate powerful people, especially those with right-wing views, from the “free speech” the CEO has claimed to champion.

The #DeSaster does not bode well for Musk’s ambitions to expand and stabilize the platform, which he has said will one day attract 1 billion users a month. The entrepreneur has repeatedly talked of turning Twitter into an “everything app” similar to the multifunctional Chinese app WeChat. Twitter is set to host a new show by right-wing commentator Tucker Carlson following his ouster from Fox News, where he regularly drew more than 3 million viewers.

Today’s glitches showed that Twitter does not appear ready to host such crowds. It doesn’t show great potential as a place to reach a broad swath of US voters either. Just 20 percent of US adults report that they use Twitter, according to a recent Pew Research survey, while 81 percent say they use YouTube and 69 percent Facebook. And although Musk has spoken of turning Twitter into a global “digital town square,” he has overseen a weakening of content moderation and invited back accounts banned for offensive content, including the rapper Ye, formerly Kanye West, and former US president Donald Trump.

The DeSantis stream further undermined Musk’s claims to be making Twitter a place for authentic exchanges of diverse opinions. The platform is “not just canned speeches and teleprompters,” he boasted, just minutes after DeSantis had finished his, well, canned speech, which echoed lines from his campaign video.

Musk also repeated his wish that Twitter be a place where people with different political views could mingle. “Perhaps some minds will be changed one way or the other,” he said. But when the digital floor was opened to questions, they came from a who’s who of right-wing thought leadership and Muskian allies, including entrepreneur turned podcaster David Sacks, who cohosted the event.

Buffalo Mass Shooting Victims’ Families Sue Meta, Reddit, Amazon


Elmore says the goal is to force reform.

“We can’t bring the victims of this lawsuit back, but we can make sure that no other families have to file this kind of lawsuit,” he says. No families deserve to be members of this unenviable club, Elmore says.

The lawsuit essentially takes aim at the full journey that brought Gendron from being a regular American teen to becoming a violent white supremacist—one equipped with the means and intention of massacring as many Black people as possible. The plaintiffs point to platforms like Facebook and Snapchat as the first part of that process.

“Gendron’s radicalization on social media was neither a coincidence nor an accident,” the complaint alleges. “It was the foreseeable consequence of the defendant social media companies’ conscious decision to design, program, and operate platforms and tools that maximize user engagement (and corresponding advertising revenue) at the expense of public safety.”

The lawsuit claims that the white supremacist ideology that captured Gendron, particularly the “great replacement theory”—which imagines an international plot to weaken the political power of white people—is a “product of social media.” While it may have been conjured up by a French author and promoted by hardened neo-Nazis, the lawsuit claims that “replacement theory proponents rely heavily on social media—and the tools and features the Social Media Defendants utilize to increase their own engagement—to promote racist ideology to young and impressionable adherents.”

Exposure to this kind of hate propaganda as a teenager, mixed with the addictive nature of social media, fundamentally altered Gendron’s brain chemistry, Elmore argues in his filings.

Social media platforms maximized user engagement “not by showing them content they request or want to see, but rather, by showing them and otherwise recommending content from which they cannot look away,” the complaint continues. “Taking full advantage of the incomplete development of Gendron’s frontal lobe, Instagram, YouTube, and Snapchat maintained his product engagement by targeting him with increasingly extreme and violent content and connections which, upon information and belief, promoted racism, antisemitism, and gun violence.”

This is not a bug, Elmore argues. “These products were functioning as designed and intended.”

These platforms pointed Gendron to the next step in his radicalization: 4chan.

While there is no algorithm on the notorious image board, there was a waiting “community of fellow racists urging him to move forward,” the lawsuit alleges. What’s more, Gendron was a frequent user of /k/, the weapons board. That community, and similar ones on Discord, helped him prepare for the attack and increase his chances of succeeding.

The lawsuit singles out 4chan financial backer Good Smile, a major Japanese toy company that in 2015 invested $2.4 million for a 30 percent share in the site, according to documents WIRED obtained. Pointing to reporting from WIRED and a lawsuit filed by former employees of the company, the families allege that Good Smile’s role in 4chan “is not that of a passive investor but is actively involved in the management of the social media site.”

In a statement from April, Good Smile denied WIRED’s reporting, insisting, “We do not have a partnership with 4chan, never had influence over the management and/or control of 4chan.” In the same statement, however, Good Smile also says, “We severed any limited relationship we previously had with 4chan in June of 2022. Since then, we have not had any relationship with 4chan.” The company has cited “confidentiality obligations” preventing it from commenting on the matter and has ignored multiple requests for comment.

The Comedian Taking on India’s New Censorship Law


But he adds that his legal challenge isn’t about him. “This is bigger than any one profession. It will affect everyone,” he says.

He points to wide discrepancies between the official account of Covid’s impact on the country and the assessment of international agencies. “The WHO has said that Covid deaths in India were about 10 times more than the official count. Anybody even referring to that could be labeled a fake news peddler, and it would have to be taken down.”

In April 2021, India’s most populous state, Uttar Pradesh, was ravaged by a second wave of Covid-19 and a severe shortage of oxygen in hospitals. The state government denied there was a problem. Amidst this unfolding crisis, one man tweeted an SOS call for oxygen to save his dying grandfather. The authorities in the state charged him with rumor-mongering and causing panic.

Experts believe the amendments to India’s IT rules would enable more of this kind of repression, under a government that has already extended its powers over the internet, forcing social media platforms to remove critical voices and using emergency powers to censor a BBC documentary critical of Modi.

Prateek Waghre, policy director at the Internet Freedom Foundation (IFF), a digital liberties organization, says the social media team of Modi’s Bharatiya Janata Party (BJP) has itself freely spread misinformation about political opponents and critics, while “reporters going to the ground and bringing out the inconvenient truth have faced consequences.”

Waghre says the lack of clarity on what constitutes fake news makes matters even worse. “Looking at the same data set, it is possible that two people can arrive at different conclusions,” he adds. “Just because your interpretation of that data set is different to that of the government’s doesn’t make it fake news. If the government is putting itself in a position to fact-check information about itself, the first likely misuse of it would be against information that is inconvenient to the government.”

This is not a hypothetical scenario. In September 2019, a journalist was booked by police for allegedly trying to defame the government after recording schoolchildren who were supposed to be receiving full meals from the state eating just salt and roti.

In November 2021, two journalists, Samriddhi Sakunia and Swarna Jha, were arrested for reporting on anti-Muslim violence that had erupted in the northeastern state of Tripura. They were accused of reporting “fake news.”

Nonbinding, state-backed fact-checks already happen through the government’s Press Information Bureau, despite that organization’s checkered record on objectivity.

A media watch website compiled a number of PIB’s “fact-checks” and found that the Bureau simply labels inconvenient reports as “false” or “baseless” without providing any concrete proof.

In June 2022, Tapasya, a reporter for investigative journalism organization The Reporters’ Collective, wrote that the Indian government required children aged six and under to get an Aadhaar biometric identification card in order to access food at government-run centers—in defiance of an Indian Supreme Court ruling.

The PIB Fact Check quickly labeled the story fake. When Tapasya inquired under the Right To Information Act (a freedom of information law) about the procedure behind the labeling, PIB simply attached a tweet from the Woman and Child Development ministry, which claimed the story was fake—in other words, the PIB Fact Check had not done any independent research.

“Parroting the government line isn’t fact-checking,” Tapasya says. “The government could have gotten my story taken down on the internet if the new IT rules were in play in June 2022.”

Social media companies have sometimes pushed back against the Indian government’s attempts to impose controls over what can be published online. But the IFF’s Waghre doesn’t expect them to put up much of a fight this time. “Nobody wants litigation, nobody wants to risk their safe harbor,” he says, referring to the “safe harbor” rules that protect platforms from being held liable for content posted by their users. “There is likely to be mechanical compliance, and possibly even proactive censorship of views that they know are likely to be flagged.”

Kamra didn’t want to comment on his prospects in challenging the new rules. But he says a democracy’s health is in question when the government wants to control the sources of information. “This isn’t what democracy looks like,” he says. “There are several problems with social media. It has been harmful in the past. But more government control isn’t the solution to it.”