TikTok and Meta’s Moderators Form a United Front in Germany

Screening social media content to remove abuse or other banned material is one of the toughest jobs in tech, but also one of the most undervalued. Content moderators for TikTok and Meta in Germany have banded together to demand more recognition for workers who are employed to keep some of the worst content off social platforms, in a rare moment of coordinated pushback by tech workers across companies.

The combined group met in Berlin last week to demand higher pay, more psychological support, and the right to unionize and organize from the two platforms. The workers say the job's low pay and lack of prestige unfairly mark moderators as low-skilled workers in the eyes of German employment rules. One moderator who spoke to WIRED says this forced them to endure more than a year of immigration red tape in order to stay in the country.

“We want to see recognition of moderation not as an easy job, but an extremely difficult, highly skilled job that actually requires a large amount of cultural and language expertise,” says Franziska Kuhles, who has worked as a content moderator for TikTok for four years. She is one of 11 elected members chosen to represent workers at the company’s Berlin office as part of an employee-elected works council. “It should be recognized as a real career, where people are given the respect that comes with that.”

Last week’s meeting marked the first time that moderators from different companies have formally met with each other in Germany to exchange experiences and collaborate on unified demands for workplace changes.

TikTok, Meta, and other platforms rely on moderators like Kuhles to ensure that violent, sexual, and illegal content is removed. Although algorithms can help filter some content, more sensitive and nuanced tasks fall to human moderators. Much of this work is outsourced to third-party companies around the world, and moderators have often complained of low wages and poor working conditions.

Germany, which is a hub for moderating content across Europe and the Middle East, has relatively progressive labor laws that allow the creation of elected works councils, or Betriebsrat, inside companies: legally recognized structures similar to but distinct from trade unions. Works councils must be consulted by employers over major company decisions and can have their members elected to company boards. TikTok workers in Germany formed a works council in 2022.

Hikmat El-Hammouri, regional organizer at Ver.di, a Berlin-based union that helped facilitate the meeting, calls the summit “the culmination of work by union organizers in the workplaces of social media companies to help these key online safety workers—content moderators—fight for the justice they deserve.” He hopes that TikTok and Meta workers teaming up can help bring new accountability to technology companies with workers in Germany.

TikTok, Meta, and Meta’s local moderation contractor did not respond to a request for comment.

Moderators from Kenya to India to the United States have often complained that their work is grueling, with demanding quotas and little time to make decisions on the content; many have reported suffering from post-traumatic stress disorder (PTSD) and psychological damage. In recognition of that, many companies offer some form of psychological counseling to moderation staff, but some workers say it is inadequate.

WhatsApp Has Started a Fight With the UK About Encryption

“Nobody’s defending CSAM,” says Barbora Bukovská, senior director for law and policy at Article 19, a digital rights group. “But the bill has the chance to violate privacy and legislate wild surveillance of private communication. How can that be conducive to democracy?”

The UK Home Office, the government department that is overseeing the bill’s development, did not supply an attributable response to a request for comment. 

Children’s charities in the UK say that it’s disingenuous to portray the debate around the bill’s CSAM provisions as a black-and-white choice between privacy and safety. The technical challenges posed by the bill are not insurmountable, they say, and forcing the world’s biggest tech companies to invest in solutions makes it more likely the problems will be solved.

“Experts have demonstrated that it’s possible to tackle child abuse material and grooming in end-to-end encrypted environments,” says Richard Collard, associate head of child safety online policy at the British children’s charity NSPCC, pointing to a July paper published by two senior technical directors at GCHQ, the UK’s cyber intelligence agency, as an example.  

Companies have started selling off-the-shelf products that claim the same. In February, London-based SafeToNet launched its SafeToWatch product that, it says, can identify and block child abuse material from ever being uploaded to messengers like WhatsApp. “It sits at device level, so it’s not affected by encryption,” says the company’s chief operating officer, Tom Farrell, who compares it to the autofocus feature in a phone camera. “Autofocus doesn’t allow you to take your image until it’s in focus. This wouldn’t allow you to take it before it proved that it was safe.” 

WhatsApp’s Cathcart called for private messaging to be excluded entirely from the Online Safety Bill. He says that his platform is already reporting more CSAM to the National Center for Missing and Exploited Children (NCMEC) than Apple, Google, Microsoft, Twitter and TikTok combined. 

Supporters of the bill disagree. “There’s a problem with child abuse in end-to-end encrypted environments,” says Michael Tunks, head of policy and public affairs at the British nonprofit Internet Watch Foundation, which has license to search the internet for CSAM. 

WhatsApp might be doing better than some other platforms at reporting CSAM, but it doesn’t compare favorably with other Meta services that are not encrypted. Although Instagram and WhatsApp have the same number of users worldwide according to data platform Statista, Instagram made 3 million reports versus WhatsApp’s 1.3 million, the NCMEC says.

“The bill does not seek to undermine end-to-end encryption in any way,” says Tunks, who supports the bill in its current form, believing it puts the onus on companies to tackle the internet’s child abuse problem. “The online safety bill is very clear that scanning is specifically about CSAM and also terrorism,” he adds. “The government has been pretty clear they are not seeking to repurpose this for anything else.” 

The US Supreme Court Doesn’t Understand the Internet

Recent laws in both Texas and Florida have sought to impose greater restrictions on the way platforms can and cannot police content.

Gonzalez v. Google takes a different track, focusing on platforms’ failure to deal with extremist content. Social media platforms have been accused of facilitating hate speech and calls to violence that have resulted in real-world harm, from a genocide in Myanmar to killings in Ethiopia and a coup attempt in Brazil.

“The content at issue is obviously horrible and objectionable,” says G. S. Hans, an associate law professor at Cornell University in New York. “But that’s part of what online speech is. And I fear that the sort of extremity of the content will lead to some conclusions or legal implications that I don’t think are really reflective of the larger dynamic of the internet.”

The Internet Society’s Sullivan says that the arguments around Section 230 conflate Big Tech companies—which, as private companies, can decide what content is allowed on their platforms—with the internet as a whole. 

“People have forgotten the way the internet works,” says Sullivan. “Because we’ve had an economic reality that has meant that certain platforms have become overwhelming successes, we have started to confuse social issues that have to do with the overwhelming dominance by an individual player or a small handful of players with problems to do with the internet.” 

Sullivan worries that the only companies able to survive such regulations would be larger platforms, further calcifying the hold that Big Tech platforms already have.

Decisions made in the US on internet regulation are also likely to reverberate around the world. Prateek Waghre, policy director at the Internet Freedom Foundation in India, says a ruling on Section 230 could set a precedent for other countries.

“It’s less about the specifics of the case,” says Waghre. “It’s more about [how] once you have a prescriptive regulation or precedent coming out of the United States, that is when other countries, especially those that are authoritarian-leaning, are going to use it to justify their own interventions.”

India’s government is already making moves to take more control over content within the country, including establishing a government-appointed committee on content moderation and greater enforcement of the country’s IT rules.

Waghre suspects that if platforms have to implement policies and tools to comply with an amended, or entirely obliterated, Section 230, then they will likely apply those same methods and standards to other markets as well. In many countries around the world, big platforms, particularly Facebook, are so ubiquitous as to essentially function as the internet for millions of people.

“Once you start doing something in one country, then that’s used as precedent or reasoning to do the same thing in another country,” he says.

Meta Verified Shows a Company Running Out of Ideas

Meta’s new subscription service looks pretty familiar. For between $11.99 and $14.99 a month, Instagram and Facebook users will get a blue “verified” mark, access to better security features, and more visibility in search. Their comments will also be prioritized.

The package has strong echoes of Twitter’s Blue subscription service, launched under new owner Elon Musk, who has been aggressively trying to find ways to monetize his platform—most recently, by telling users they won’t be able to use text-based two-factor authentication unless they subscribe.

Meta CEO Mark Zuckerberg announced Meta Verified in a post to his Instagram channel on February 19, saying that the service, which will be rolled out first in Australia and New Zealand, “is about increasing authenticity and security across our services.” 

Analysts say that while the move isn’t entirely out of character for Meta, it hints at a lack of innovation at the social media giant, which has laid off more than 11,000 workers since late last year and spent billions on its push into the metaverse, a technology with no clear business model.

“Meta has always had copying in their DNA—Instagram’s Reels is but one of a long list of prominent examples—so it’s no surprise that, seeing Twitter get away with offering basic functionality as a premium service, Zuckerberg is trying to do the same,” says Tama Leaver, professor of internet studies at Curtin University in Australia. “Meta’s move to copy Twitter’s subscription model shows a distinct lack of new ideas … Meta has shed staff and is hemorrhaging money in building a metaverse that no one seems all that interested in right now.”

While Meta has emphasized the security aspects of its subscription product, the fact that subscribers will get greater visibility on the company’s platforms marks a significant change for users.

Twitter’s attempts to make users pay for features, including more promotion by its algorithms, have been met with widespread criticism, and many have threatened to quit the platform, although there is no reliable data on how many people have followed through.

However, Snapchat and Discord have also both introduced paid subscription tiers to users without a similar level of outrage, suggesting that the dislike of Twitter Blue could be linked to Musk himself and broader concerns about the platform. 

“Meta has seen Snapchat, Discord, and Twitter launch their own subscription plans, which gives power-users additional features or perks,” says social media analyst Matt Navarra, who first broke the news about the Meta change. The idea of paying for features that used to be free has started to become normalized, he says. “The risk there is reduced for them in terms of whether it will be a success.”

Regardless, Navarra admits he won’t be buying verified status from Meta. “I don’t think it’s worth it,” he says.

How much money Meta can raise through verification is unclear. Twitter has struggled to sell subscriptions to its Blue service, with The Information reporting that the platform has fewer than 300,000 subscribers worldwide—which would bring in less than 1 percent of the $3 billion Musk wants the company to make. The Meta family of apps, including Instagram, Facebook, and WhatsApp, has nearly 10 times the number of monthly users that Twitter does.

Mastodon Features That Twitter Should Steal (but Won’t)

Any platform that supports free speech should have a content warning system pretty much like the one Mastodon offers. I bet Musk won’t implement it, though, because his snowflake fans would find this kind of free speech upsetting (and he’s afraid of them). 

Mute People For a Little While

Sometimes a person you enjoy following gets in a mood. You don’t want to unfollow them, but you also don’t want to deal with whatever thing they’re currently yelling about. Maybe they’re endlessly discussing a movie you will never watch. Maybe they’re live tweeting a sporting event, or maybe they’re worked up about something political. On Twitter you don’t have many options—you can unfollow them, mute them, or block them. All of those changes are permanent, though. 

Mastodon allows you to mute people for a set amount of time—anywhere from five minutes to seven days—enough time for the person to work through whatever has them posting so much at the moment. It’s a great compromise, and Twitter should add it. 

A Simpler Verification Process

The purpose of Twitter’s verification system, at least in the early days, was to confirm that a given account was actually run by a given politician, celebrity, journalist, or organization. The system for getting the checkmark was opaque, though, which led to the checkmark becoming something of a status symbol. Having said that, Musk’s early attempts at “reform” mostly just created a spammer’s paradise.

Mastodon, meanwhile, has a system that allows for quick verification without any overhead. Basically, if you link from your website to your Mastodon account with the attribute rel="me", Mastodon will highlight on your profile that you control the site. This gives people a quick way to confirm your identity without creating a lot of work for moderators. Twitter could do worse than copying this strategy for “official” accounts. Elon Musk won’t implement this, though, possibly because he wants to make you pay for verification while calling it democratic.
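The core of that check is simple enough to sketch in a few lines of Python. This is not Mastodon’s actual implementation (the real system also requires a link on the Mastodon profile pointing back to the site, and fetches the page server-side); the function names here are illustrative only. The sketch just shows the essential idea: scan a page’s HTML for a rel="me" link pointing at the claimed profile.

```python
from html.parser import HTMLParser

class RelMeFinder(HTMLParser):
    """Collects the href of every <a> or <link> tag carrying rel="me"."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # rel can hold several space-separated tokens, e.g. rel="me nofollow"
        if tag in ("a", "link") and "me" in (a.get("rel") or "").split():
            self.links.append(a.get("href"))

def site_claims_profile(page_html: str, profile_url: str) -> bool:
    """True if the page links to the given profile URL with rel="me"."""
    finder = RelMeFinder()
    finder.feed(page_html)
    return profile_url in finder.links

page = '<a rel="me" href="https://mastodon.social/@alice">My Mastodon</a>'
print(site_claims_profile(page, "https://mastodon.social/@alice"))  # True
```

Because the proof lives on a site you already control, no moderator ever has to review paperwork: the back-link itself is the credential.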

A (Free) Edit Button

Twitter users want an edit button. They can get one if they’re willing to pay $8 a month. Mastodon users get an edit button for free. Elon won’t offer this, though—probably because he likes money more than he likes you. 

Actual Support For Third-Party Clients

The best way to use Twitter used to be third-party clients, which generally offered a much smoother and more customizable experience than the official Twitter app and website. TweetBot, for example, is a much nicer way to use Twitter on a Mac than anything built by Twitter. The problem: Twitter severely restricted its API a few years ago, which limited the kinds of things third-party clients could do. You can’t get notifications for likes or retweets. Polls are just broken. I could go on.

Mastodon doesn’t have this problem. Third-party clients can do everything that the official website and applications can do, and in some cases more. It’s refreshing, and something that Twitter should do to reward its power users. It won’t, though. Because …

Following Hashtags

On Twitter, you can follow accounts and search for hashtags. Mastodon allows users to follow an entire hashtag, so that all related posts show up on your home screen. I don’t know if Twitter should add this, but a lot of people like it, and it’s a really great way to find people who regularly post about the subjects you’re interested in. 

No Ads or Subscriptions

Town squares are open to everyone. They don’t charge admission, and they’re not covered with ads. Sure, there may be a business or two adjacent to the town square, and there might be a few walls covered with flyers for punk concerts, but for the most part a town square is primarily a noncommercial space. Twitter, if it were truly a town square, would be like that. Mastodon already is. There’s no company involved with Mastodon—it’s an open-source program owned by a nonprofit. The network is run by volunteers who set up servers for their friends and communities. Anyone can set up a server and connect to all of the other ones, and moderation is done by volunteers.

Now, I don’t think Elon Musk is going to make Twitter free and noncommercial. It’s a business, and he’s a businessman—not an engineer, not a free speech advocate, and not someone who actually cares about community at the end of the day, regardless of his public statements. He’s a money person who likes money and would like to have more of it (even though the money he currently has is clearly not doing much for his mental and emotional health). 

And that’s the problem: A town square, by definition, can’t be a business. It needs to be a space owned by the people. That’s what an Elon Musk Twitter can never be, and what Mastodon already is. I wrote about how to get started with Mastodon, so check that out if you’re curious.