Senators Warn the Next US Bank Run Could Be Rigged

Idaho senator Jim Risch, the top Republican on the Foreign Relations Committee—who also serves on the Intelligence Committee—says he’d be surprised if foreign adversaries didn’t mimic the digital pressure campaign that experts say caused the bank runs. “We see all kinds of input from foreign actors trying to do harm to the country, so it’s really an obvious avenue for somebody to try to do that,” Risch says.

Some experts think the threat is real. “The fear is not overblown,” Peter Warren Singer, strategist and senior fellow at New America, a Washington-based think tank, told WIRED via email. “Most cyber threat actors, whether criminals or states, don’t create new vulnerabilities, but notice and then take advantage of existing ones. And it is clear that both stock markets and social media are manipulatable. Add them together and you multiply the manipulation potential.” 

In the aftermath of the GameStop meme-driven rally—which was partly fueled by a desire to wipe out hedge funds shorting the stock—experts warned the same techniques could be used to target banks. In a paper for the Carnegie Endowment, published in November 2021, Claudia Biancotti, a director at the Bank of Italy, and Paolo Ciocca, an Italian finance regulator, warned that financial institutions were vulnerable to similar market manipulation.

“Finance-focused virtual communities are growing in size and potential economic and social impact, as demonstrated by the role played by online groups of retail traders in the GameStop case,” they wrote. “Such communities are highly exposed to manipulation, and may represent a prime target for state and nonstate actors conducting malicious information operations.”

The government’s response to the Silicon Valley Bank collapse—depositors’ money was quickly protected—shows banks can be hardened against this kind of event, says Cristián Bravo Roman, an expert on AI, banking, and contagion risk at the University of Western Ontario. “All the measures that were taken to restore trust in the banking system limit the ability of a hostile attacker,” he says.

Roman says federal officials now see, or at least should see, the real cyberthreat of mass digital hysteria clearly, and may strengthen provisions designed to protect smaller banks against runs. “It completely depends on what happens after this,” Roman says. “The truth is, the banking system is just as political as it is economic.”

Preventing the swell of online panic, whether real or fabricated, is far more complicated. Social media sites in the US can’t be easily compelled to remove content, and they are protected by Section 230 of the Communications Decency Act of 1996, which shields tech companies from liability for what others write on their platforms. While that provision is currently being challenged in the US Supreme Court, it’s unlikely lawmakers would want to limit what many see as free speech. 

“I don’t think that social media can be regulated to censor talk about a bank’s financial condition unless there is deliberate manipulation or misinformation, just as that might be in any other means of communicating,” says Senator Richard Blumenthal, a Connecticut Democrat.

“I don’t think we should offer a systemic response to a localized problem,” says North Dakota Republican senator Kevin Cramer—although he adds that he wants to hear “all the arguments.” 

“We need to be very cautious to not get in the way of speech,” Cramer says. “But when speech becomes designed specifically to short a market, for example, or to lead to an unnecessary run on the bank, we have to be reasonable about it.”

While some members of Congress are using the run on Silicon Valley Bank to revive conversations about the regulation of social media platforms, other lawmakers are, once again, looking to tech companies themselves for solutions. “We need to be better at discovering and exposing bots. We need to understand the source,” says Senator Angus King, a Maine Independent.

King, a member of the Senate Intelligence Committee, says Washington can’t solve all of Silicon Valley’s problems, especially when it comes to cleaning up bots. “That has to be them,” he says. “We can’t do that.”

TikTok and Meta’s Moderators Form a United Front in Germany

Screening social media content to remove abuse or other banned material is one of the toughest jobs in tech, but also one of the most undervalued. Content moderators for TikTok and Meta in Germany have banded together to demand more recognition for workers who are employed to keep some of the worst content off social platforms, in a rare moment of coordinated pushback by tech workers across companies.

The combined group met in Berlin last week to present the two platforms with demands for higher pay, more psychological support, and the ability to unionize and organize. The workers say low pay and a lack of prestige mean moderators are unfairly classed as low-skilled workers under German employment rules. One moderator who spoke to WIRED says that classification forced them to endure more than a year of immigration red tape to be able to stay in the country.

“We want to see recognition of moderation not as an easy job, but an extremely difficult, highly skilled job that actually requires a large amount of cultural and language expertise,” says Franziska Kuhles, who has worked as a content moderator for TikTok for four years. She is one of 11 elected members chosen to represent workers at the company’s Berlin office as part of an employee-elected works council. “It should be recognized as a real career, where people are given the respect that comes with that.”

Last week’s meeting marked the first time that moderators from different companies have formally met with each other in Germany to exchange experiences and collaborate on unified demands for workplace changes.

TikTok, Meta, and other platforms rely on moderators like Kuhles to ensure that violent, sexual, and illegal content is removed. Although algorithms can help filter some content, more sensitive and nuanced tasks fall to human moderators. Much of this work is outsourced to third-party companies around the world, and moderators have often complained of low wages and poor working conditions.

Germany, which is a hub for moderating content across Europe and the Middle East, has relatively progressive labor laws that allow the creation of elected works councils, or Betriebsrat, inside companies—legally recognized structures similar to, but distinct from, trade unions. Works councils must be consulted by employers on major company decisions and can have their members elected to company boards. TikTok workers in Germany formed a works council in 2022.

Hikmat El-Hammouri, regional organizer at Ver.di, a Berlin-based union that helped facilitate the meeting, calls the summit “the culmination of work by union organizers in the workplaces of social media companies to help these key online safety workers—content moderators—fight for the justice they deserve.” He hopes that TikTok and Meta workers teaming up can help bring new accountability to technology companies with workers in Germany.

TikTok, Meta, and Meta’s local moderation contractor did not respond to a request for comment.

Moderators from Kenya to India to the United States have often complained that their work is grueling, with demanding quotas and little time to make decisions on the content; many have reported suffering from post-traumatic stress disorder (PTSD) and psychological damage. In recognition of that, many companies offer some form of psychological counseling to moderation staff, but some workers say it is inadequate.

WhatsApp Has Started a Fight With the UK About Encryption

“Nobody’s defending CSAM,” says Barbora Bukovská, senior director for law and policy at Article 19, a digital rights group. “But the bill has the chance to violate privacy and legislate wild surveillance of private communication. How can that be conducive to democracy?”

The UK Home Office, the government department that is overseeing the bill’s development, did not supply an attributable response to a request for comment. 

Children’s charities in the UK say that it’s disingenuous to portray the debate around the bill’s CSAM provisions as a black-and-white choice between privacy and safety. The technical challenges posed by the bill are not insurmountable, they say, and forcing the world’s biggest tech companies to invest in solutions makes it more likely the problems will be solved.

“Experts have demonstrated that it’s possible to tackle child abuse material and grooming in end-to-end encrypted environments,” says Richard Collard, associate head of child safety online policy at the British children’s charity NSPCC, pointing to a July paper published by two senior technical directors at GCHQ, the UK’s cyber intelligence agency, as an example.  

Companies have started selling off-the-shelf products that claim the same. In February, London-based SafeToNet launched its SafeToWatch product that, it says, can identify and block child abuse material from ever being uploaded to messengers like WhatsApp. “It sits at device level, so it’s not affected by encryption,” says the company’s chief operating officer, Tom Farrell, who compares it to the autofocus feature in a phone camera. “Autofocus doesn’t allow you to take your image until it’s in focus. This wouldn’t allow you to take it before it proved that it was safe.” 

WhatsApp head Will Cathcart called for private messaging to be excluded entirely from the Online Safety Bill. He says that his platform already reports more CSAM to the National Center for Missing and Exploited Children (NCMEC) than Apple, Google, Microsoft, Twitter, and TikTok combined.

Supporters of the bill disagree. “There’s a problem with child abuse in end-to-end encrypted environments,” says Michael Tunks, head of policy and public affairs at the British nonprofit Internet Watch Foundation, which has license to search the internet for CSAM. 

WhatsApp might be doing better than some other platforms at reporting CSAM, but it doesn’t compare favorably with Meta services that are not encrypted. Although Instagram and WhatsApp have roughly the same number of users worldwide, according to data platform Statista, Instagram made 3 million reports versus WhatsApp’s 1.3 million, the NCMEC says.

“The bill does not seek to undermine end-to-end encryption in any way,” says Tunks, who supports the bill in its current form, believing it puts the onus on companies to tackle the internet’s child abuse problem. “The online safety bill is very clear that scanning is specifically about CSAM and also terrorism,” he adds. “The government has been pretty clear they are not seeking to repurpose this for anything else.” 

The US Supreme Court Doesn’t Understand the Internet

Recent laws in both Texas and Florida have sought to impose greater restrictions on the way platforms can and cannot police content.

Gonzalez v. Google takes a different track, focusing on platforms’ failure to deal with extremist content. Social media platforms have been accused of facilitating hate speech and calls to violence that have resulted in real-world harm, from a genocide in Myanmar to killings in Ethiopia and a coup attempt in Brazil.

“The content at issue is obviously horrible and objectionable,” says G. S. Hans, an associate law professor at Cornell University in New York. “But that’s part of what online speech is. And I fear that the sort of extremity of the content will lead to some conclusions or legal implications that I don’t think are really reflective of the larger dynamic of the internet.”

The Internet Society’s Sullivan says that the arguments around Section 230 conflate Big Tech companies—which, as private companies, can decide what content is allowed on their platforms—with the internet as a whole. 

“People have forgotten the way the internet works,” says Sullivan. “Because we’ve had an economic reality that has meant that certain platforms have become overwhelming successes, we have started to confuse social issues that have to do with the overwhelming dominance by an individual player or a small handful of players with problems to do with the internet.” 

Sullivan worries that the only companies able to survive such regulations would be larger platforms, further calcifying the hold that Big Tech platforms already have.

Decisions made in the US on internet regulation are also likely to reverberate around the world. Prateek Waghre, policy director at the Internet Freedom Foundation in India, says a ruling on Section 230 could set a precedent for other countries.

“It’s less about the specifics of the case,” says Waghre. “It’s more about [how] once you have a prescriptive regulation or precedent coming out of the United States, that is when other countries, especially those that are authoritarian-leaning, are going to use it to justify their own interventions.”

India’s government is already making moves to take more control over content within the country, including establishing a government-appointed committee on content moderation and greater enforcement of the country’s IT rules.

Waghre suspects that if platforms have to implement policies and tools to comply with an amended, or entirely obliterated, Section 230, then they will likely apply those same methods and standards to other markets as well. In many countries around the world, big platforms, particularly Facebook, are so ubiquitous as to essentially function as the internet for millions of people.

“Once you start doing something in one country, then that’s used as precedent or reasoning to do the same thing in another country,” he says.

Meta Verified Shows a Company Running Out of Ideas

Meta’s new subscription service looks pretty familiar. For between $11.99 and $14.99 a month, Instagram and Facebook users will get a blue “verified” mark, access to better security features, and more visibility in search. Their comments will also be prioritized.

The package has strong echoes of Twitter’s Blue subscription service, launched under new owner Elon Musk, who has been aggressively trying to find ways to monetize his platform—most recently, by telling users they won’t be able to use text-based two-factor authentication unless they subscribe.

Meta CEO Mark Zuckerberg announced Meta Verified in a post to his Instagram channel on February 19, saying that the service, which will be rolled out first in Australia and New Zealand, “is about increasing authenticity and security across our services.” 

Analysts say that while the move isn’t entirely out of character for Meta, it hints at a lack of innovation at the social media giant, which has laid off more than 11,000 workers since late last year and spent billions on its push into the metaverse, a technology with no clear business model.

“Meta has always had copying in their DNA—Instagram’s Reels is but one of a long list of prominent examples—so it’s no surprise that, seeing Twitter get away with offering basic functionality as a premium service, Zuckerberg is trying to do the same,” says Tama Leaver, professor of internet studies at Curtin University in Australia. “Meta’s move to copy Twitter’s subscription model shows a distinct lack of new ideas … Meta has shed staff and is hemorrhaging money in building a metaverse that no one seems all that interested in right now.”

While Meta has emphasized the security aspects of its subscription product, the fact that subscribers will get greater visibility on the company’s platforms marks a significant change for users.

Twitter’s attempts to make users pay for features, including more promotion by its algorithms, have been met with widespread criticism, and many have threatened to quit the platform, although there is no reliable data on how many people have followed through.

However, Snapchat and Discord have also both introduced paid subscription tiers to users without a similar level of outrage, suggesting that the dislike of Twitter Blue could be linked to Musk himself and broader concerns about the platform. 

“Meta has seen Snapchat, Discord, and Twitter launch their own subscription plans, which gives power-users additional features or perks,” says social media analyst Matt Navarra, who first broke the news about the Meta change. The idea of paying for features that used to be free has started to become normalized, he says. “The risk there is reduced for them in terms of whether it will be a success.”

Regardless, Navarra admits he won’t be buying verified status from Meta. “I don’t think it’s worth it,” he says.

How much money Meta can raise through verification is unclear. Twitter has struggled to sell subscriptions to its Blue service, with The Information reporting that the platform has fewer than 300,000 subscribers worldwide—which would bring in less than 1 percent of the $3 billion Musk wants the company to make. The Meta family of apps, including Instagram, Facebook, and WhatsApp, has nearly 10 times the number of monthly users that Twitter does.