Elon Musk reactivated Donald Trump’s Twitter account last weekend, reversing a ban imposed in January 2021 after his posts were deemed to have incited violence at the US Capitol. Trump has not started using his account again, but social media researchers have warned for months that his return could bring a wave of division and disinformation on the platform. Even without his controversial presence, a new analysis of millions of tweets shows that hate speech has become more visible on Twitter under Musk’s leadership.
Researchers at Tufts University’s Digital Planet group tracked hate speech on Twitter before and after Musk took ownership of the company in late October. To do this, they used a data stream the platform provides that’s known as the firehose—a feed of every public tweet, like, retweet, and reply shared across the platform. The group has used the same approach in previous studies, including one looking at toxicity on Twitter around the US midterm elections.
To study how Musk’s ownership changed Twitter, the researchers searched through tweets posted between March 1 and November 13 of this year, collecting the 20 most popular—as determined by a combination of followers, likes, and retweets—with keywords that could indicate anti-LGBTQ+, racist, or antisemitic intent. They then reviewed the language of those tweets in each of the three categories and attempted to judge their true intent.
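The ranking step described above can be sketched in a few lines. This is a hypothetical illustration only: the field names, keyword matching, and weighting of followers, likes, and retweets are assumptions, not the researchers' actual formula.

```python
# Hypothetical sketch of the selection step: score each keyword-matched
# tweet by a combination of author followers, likes, and retweets, then
# keep the 20 highest scorers per category. Weights are illustrative.

def popularity(tweet):
    """Combine engagement signals into one score (assumed weighting)."""
    return tweet["followers"] + 10 * tweet["likes"] + 20 * tweet["retweets"]

def top_20(tweets, keywords):
    """Return the 20 highest-scoring tweets containing any flagged keyword."""
    matched = [t for t in tweets if any(k in t["text"].lower() for k in keywords)]
    return sorted(matched, key=popularity, reverse=True)[:20]

tweets = [
    {"text": "example tweet A", "followers": 500, "likes": 40, "retweets": 5},
    {"text": "example tweet B", "followers": 90, "likes": 300, "retweets": 80},
]
print(top_20(tweets, ["example"])[0]["text"])  # prints "example tweet B"
```

The final judgment of intent, as the article notes, was still done by hand: keyword matching only narrows the pool.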
For the months prior to Musk’s takeover, the researchers deemed just one tweet out of the three top 20 lists to be actually hateful, in this case against Jewish people. The others were either quoting another person’s hateful remarks or using the relevant keywords in a non-hateful way.
In the weeks after Musk took over Twitter, the same analysis found that hateful tweets became much more prominent among the most popular tweets with potentially toxic language. For tweets using words associated with anti-LGBTQ+ or antisemitic posts, seven of the top 20 posts in each category were now hateful. For popular tweets using potentially racist language, one of the top 20 was judged to be hate speech.
“The toxicity of Twitter has severely increased post-Musk’s walking into that building,” says Bhaskar Chakravorti, dean of global business at the Fletcher Business School at Tufts University and chair of Digital Planet, which carried out the analysis.
This data could add to the challenges Musk faces as he attempts a turnaround for the company, which he has loaded with debt. Advertisers provide the majority of Twitter’s revenue, but some have said in recent weeks that they will reduce or pause spending until they learn more about any changes to the platform’s content policies. “Advertisers cannot invest their dollars on platforms where comprehensive policies on hate speech and misinformation are not in place and consistently enforced,” says Lou Paskalis, a long-time ad executive who previously served as president of MMA Global, a marketing trade group.
The Tufts analysis does not indicate whether the increase in hate speech stems from specific changes made by Musk after he acquired Twitter for $44 billion last month. Although he initially claimed that the company’s policies would not change, he also laid off thousands of staff and contractors, reducing the resources Twitter could bring to bear on policing content. In some countries where the platform is popular, such as Brazil, activists and researchers who track disinformation say there is no longer anyone at Twitter to respond to their warnings and requests.
“Twitter has seemingly neglected security for a very long time, and with all the changes, there is risk for sure,” says David Kennedy, CEO of the incident response firm TrustedSec, who formerly worked at the NSA and with the United States Marine Corps signal intelligence unit. “There’s a lot of work to be done to stabilize and secure the platform, and there is definitely an elevated risk from a malicious insider perspective due to all the changes occurring. As time passes, the probability of an incident lowers, but the security risks and technology debt are still there.”
A breach of Twitter could expose the company or users in myriad ways. Of particular concern would be an incident that endangers users who are activists, dissidents, or journalists under a repressive regime. With more than 230 million users, a Twitter breach would also have far-reaching potential consequences for identity theft, harassment, and other harm to users around the world. And from a government intelligence perspective, the data has already proved valuable enough over the years to motivate government spies to infiltrate the company, a threat the whistleblower and former Twitter security chief Peiter Zatko said the company was not prepared to counter.
The company was already under scrutiny from the US Federal Trade Commission for past practices, and on Thursday, seven Democratic senators called on the FTC to investigate whether “reported changes to internal reviews and data security practices” at Twitter violated the terms of a 2011 settlement between Twitter and the FTC over past data mishandling.
Were a breach to happen, the details would, of course, dictate the consequences for users, Twitter, and Musk. But the outspoken billionaire may want to note that, at the end of October, the FTC issued an order against the online delivery service Drizly along with personal sanctions against its CEO, James Cory Rellas, after the company exposed the data of roughly 2.5 million users. The order requires the company to have stricter policies on deleting information and to minimize data collection and retention, while also requiring the same from Cory Rellas at any future companies he works for.
Speaking broadly about the current digital security threat landscape at the Aspen Cyber Summit in New York City on Wednesday, Rob Silvers, undersecretary for policy at the Department of Homeland Security, urged vigilance from companies and other organizations. “I wouldn’t get too complacent. We see enough attempted intrusions and successful intrusions every day that we are not letting our guard down even a little bit,” he said. “Defense matters, resilience matters in this space.”
Dan Tentler, a founder of the attack simulation and remediation firm Phobos Group who worked in Twitter security from 2011 to 2012, points out that while the current chaos and understaffing within the company create pressing risks, the same disarray could also hinder attackers, who may struggle right now to map the organization and identify employees with strategic access or control. He adds, though, that the stakes are high because of Twitter’s scale and reach around the world.
“If there are insiders left within Twitter or someone breaches Twitter, there’s probably not a lot standing in their way from doing whatever they want—you have an environment where there may not be a lot of defenders left,” he says.
“I’m a white person, and despite there being a range of skin tones available for emoji these days, I still just choose the original Simpsons-esque yellow. Is this insensitive to people of color?”
I don’t think it’s possible to determine what any group of people, categorically, might find insensitive—and I won’t venture to speak, as a white person myself, on behalf of people of color. But your trepidation about which emoji skin tone to use has evidently weighed on many white people’s minds since 2015, when the Unicode Consortium—the mysterious organization that sets standards for character encoding in software systems around the world—introduced the modifiers. A 2018 University of Edinburgh study of Twitter data confirmed that the palest skin tones are used least often, and most white people opt, as you do, for the original yellow.
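The mechanism the Unicode Consortium introduced is worth a brief look: the five skin tones are standalone modifier characters (U+1F3FB through U+1F3FF) that attach to a preceding emoji, and an unmodified emoji renders in the default yellow. A minimal demonstration:

```python
# The five Fitzpatrick skin tone modifiers, U+1F3FB (lightest) through
# U+1F3FF (darkest), attach to a preceding base emoji. An unmodified
# emoji renders in the default yellow.
THUMBS_UP = "\U0001F44D"  # 👍 default yellow
MODIFIERS = [chr(cp) for cp in range(0x1F3FB, 0x1F400)]  # light → dark

for mod in MODIFIERS:
    print(THUMBS_UP + mod)  # same base emoji, five skin tones

# Each modified emoji is two codepoints: one base plus one modifier.
assert len(THUMBS_UP + MODIFIERS[0]) == 2
```

Whether a platform renders the pair as a single toned glyph or as two separate characters is up to its fonts, which is why older systems sometimes show an emoji followed by a lone colored swatch.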
It’s not hard to see why. While it might seem intuitive to choose the skin tone that most resembles your own, some white users worry that calling attention to their race by texting a pale high five (or worse, a raised fist) might be construed as celebrating or flaunting it. The writer Andrew McGill noted in a 2016 Atlantic article that many white people he spoke to feared that the white emoji “felt uncomfortably close to displaying ‘white pride,’ with all the baggage of intolerance that carries.” Darker skin tones are a more obviously egregious choice for white users and are generally interpreted as grossly appropriative or, at best, misguided attempts at allyship.
That leaves yellow, the Esperanto of emoji skin tones, which seems to offer an all-purpose or neutral form of pictographic expression, one that does not require an acknowledgment of race—or, for that matter, embodiment. (Unicode calls it a “nonhuman” skin tone.) While this logic may strike you as sound enough, sufficient to put the question out of mind while you dash off a yellow thumbs-up, I can sense you’re aware on some level that it doesn’t really hold up to scrutiny.
The existence of a default skin tone unavoidably calls to mind the thorny notion of race neutrality that crops up in so many objections to affirmative action or, to cite a more relevant example, in the long-standing use of “flesh-colored” and “nude” as synonyms for pinkish skin tones. The yellow emoji feels almost like claiming, “I don’t see race,” that dubious shibboleth of post-racial politics, in which the ostensible desire to transcend racism often conceals a more insidious desire to avoid having to contend with its burdens. Complicating all this is the fact that the default yellow is indelibly linked to The Simpsons, which used that tone solely for Caucasian characters (those of other races, like Apu and Dr. Hibbert, were shades of brown). The writer Zara Rahman has argued that the notion of a neutral emoji skin tone strikes her as evidence of an all-too-familiar bad faith: “To me, those yellow images have always meant one thing: white.”
At the risk of making too much of emoji (there are, undeniably, more urgent forms of racial injustice that deserve attention), I’d argue that the dilemma encapsulates a much larger tension around digital self-expression. The web emerged amid the heady spirit of 1990s multiculturalism and color-blind politics, an ethos that recalls, for example, the United Colors of Benetton ad that featured three identical human hearts labeled “white,” “black,” and “yellow.” The promise of disembodiment was central to the cyberpunk ideal, which envisioned the internet as a new frontier where users would shed their real-life identities, take on virtual bodies (or no bodies at all), and be judged by their ideas—or their souls—rather than by their race. This vision was, unsurprisingly, propagated by the largely middle- and upper-class white men who were the earliest shapers of internet culture. The scholar Lisa Nakamura has argued that the digital divide gave cyberspace a “whitewashed” perspective and that the dream of universalism became, in many early chat rooms, an opportunity for white people to engage in identity tourism, adopting avatars of other races that were rife with stereotypes—a problem that lives on in the prevalence of digital blackface on TikTok and other platforms.
It’s telling that skin tone modifiers were introduced in 2015, when social platforms teemed with posts about the police killings of Walter Scott and Freddie Gray, among others, and when the tech press began to take stock of algorithmic bias in the justice system, acknowledging that technologies once hailed as objective and color-blind were merely compounding historical injustices. That year, Ta-Nehisi Coates observed (at the close of the Obama presidency) that the term post-racial “is almost never used in earnest,” and Anna Holmes noted that it “has mostly disappeared from the conversation, except as sarcastic shorthand.”
Following two weeks of extreme chaos at Twitter, users are joining and fleeing the site in droves. More quietly, many are likely scrutinizing their accounts, checking their security settings, and downloading their data. But some users are reporting problems when they attempt to generate two-factor authentication codes over SMS: Either the texts don’t come or they’re delayed by hours.
The glitchy SMS two-factor codes mean that users could get locked out of their accounts and lose control of them. They could also find themselves unable to make changes to their security settings or download their data using Twitter’s access feature. The situation also provides an early hint that troubles within Twitter’s infrastructure are bubbling to the surface.
Not all users are having problems receiving SMS authentication codes, and those who rely on an authenticator app or physical authentication token to secure their Twitter account may not have reason to test the mechanism. But users have been self-reporting issues on Twitter since the weekend, and WIRED confirmed that on at least some accounts, authentication texts are hours delayed or not coming at all. The meltdown comes less than two weeks after Twitter laid off about half of its workers, roughly 3,700 people. Since then, engineers, operations specialists, IT staff, and security teams have been stretched thin attempting to adapt Twitter’s offerings and build new features per new owner Elon Musk’s agenda.
Reports indicate that the company may have laid off too many employees too quickly and that it has been attempting to hire back some workers. Meanwhile, Musk has said publicly that he is directing staff to disable some portions of the platform. “Part of today will be turning off the ‘microservices’ bloatware,” he tweeted this morning. “Less than 20 percent are actually needed for Twitter to work!”
Twitter’s communications department, which reportedly no longer exists, did not return WIRED’s request for comment about problems with SMS two-factor authentication codes. Musk did not reply to a tweet requesting comment.
“Temporary outage of multifactor authentication could have the effect of locking people out of their accounts. But the even more concerning worry is that it will encourage users to just disable multifactor authentication altogether, which makes them less safe,” says Kenneth White, codirector of the Open Crypto Audit Project and a longtime security engineer. “It’s hard to say exactly what caused the issue that so many people are reporting, but it certainly could result from large-scale changes to the web services that have been announced.”
SMS texts are not the most secure way to receive authentication codes, but many people rely on the mechanism, and security researchers agree that it’s better than nothing. As a result, even intermittent or sporadic outages are problematic for users and could put them at risk.
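Part of why authenticator apps avoid this failure mode is that their codes never travel over a network: a time-based one-time password (TOTP, RFC 6238) is computed locally from a shared secret and the current clock. A minimal standard-library sketch, checked against the RFC's own test vector (the secret below is the RFC's example, not a real credential):

```python
# Minimal TOTP (RFC 6238) sketch: codes are derived locally from a shared
# secret and the Unix time, so no SMS delivery is involved.
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Return the TOTP code for a Base32 secret at Unix time t (SHA-1)."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time() if t is None else t) // step
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF)
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890"
# (Base32-encoded below); at t=59 the 8-digit SHA-1 code is 94287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8))  # prints "94287082"
```

Because both sides only need a synchronized clock, an outage in a carrier or an SMS gateway never blocks login, which is one reason security engineers steer users toward apps or hardware tokens when possible.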
Twitter’s SMS authentication code delivery system has repeatedly had stability issues over the years. In August 2020, for example, Twitter Support tweeted, “We’re looking into account verification codes not being delivered via SMS text or phone call. Sorry for the inconvenience, and we’ll keep you updated as we continue our work to fix this.” Three days later, the company added, “We have more work to do with fixing verification code delivery, but we’re making progress. We’re sorry for the frustration this has caused and appreciate your patience while we keep working on this. We hope to have it sorted soon for those of you who aren’t receiving a code.”
Musk’s chief concern will be whether he can align his philosophical affection for decentralization with the need to turn Twitter into a profitable business. He has previously expressed a desire to open source the Twitter algorithm in the name of transparency, but ceding control of the algorithm (the mechanism by which people are kept on the platform) would be another step entirely, and surely a disaster for advertising revenue.
Musk might use Bluesky technology to partly realize his ambition to turn Twitter into “X, the everything app”—a type of super-app that blends social media with payments and other utilities, similar to WeChat. Although the AT Protocol does not use blockchain, it is able to “integrate with cryptocurrencies,” Bluesky CEO Jay Graber has previously said, which means Bluesky could help support the payments aspect of the vision. But again, this is all dealing in the hypothetical.
Although plenty of questions hang over the implementation, Bluesky isn’t alone in thinking that society would benefit from a more decentralized social media ecosystem, with less power pooled in the hands of a cash-motivated minority.
Evan Henshaw-Plath, the first employee of Odeo (which made Twitter), runs a “peer-to-peer social network” called Planetary that shares plenty of common ground with Bluesky; both are attempting to increase transparency around algorithms and give people control of their personal data.
Henshaw-Plath predicts that Twitter will experiment heavily with Web3 and crypto-related projects under Musk, irrespective of whether Bluesky ends up playing a starring role. “I’m not sure that’s good,” he says, “but it’s definitely where most of the big changes will be.”
Henshaw-Plath also says the acquisition might increase the chances of Bluesky securing additional funding since Twitter is no longer “constrained by Wall Street,” and suspects that Dorsey might return to Twitter in some capacity under Musk.
Once the AT Protocol is up and running, the aim is to enable a level of interaction between Planetary and Bluesky networks, says Henshaw-Plath, creating a sort of coalition motivated by the shared desire to tip the balance of power in favor of users.
This is also the ambition of Stani Kulechov, the creator of Lens Protocol, a similar project that relies on users self-hosting their profiles to create decentralization—an alternative to Bluesky’s cloud-based model. He says this approach “enables people to own their social capital” in terms of both their content and audience, and ensures social profiles are “always in your custody and control.”
But while efforts to minimize companies’ control over the way people communicate should be celebrated, there are short-term dangers that need to be taken into account, says Brewster Kahle, founder of the Internet Archive and an inductee of the Internet Hall of Fame. “If decentralization brought local control to more people in how they build their communities, that would be a good thing,” says Kahle. But the concern is that a lack of clarity over the mechanics of moderation under this new model might lead to the kind of “free-for-all hellscape” Musk says he is determined to avoid. “In the short term, decentralization could mean there is no content moderation or spam controls at all, giving a louder megaphone to a few,” Kahle adds.
For this reason, Kahle says getting the technology right is all-important, but there are “warning signs of simplistic, absolutist thinking” among those attempting to innovate in the social media space that could jeopardize the whole endeavor.
It’s up to Musk, the “free speech absolutist” and world’s richest individual, to carry forward the vision for a more equitable, more private, less antagonistic social media experience. If he decides not to, Bluesky will have to fly the nest in search of backing elsewhere.