Why the Emoji Skin Tone You Choose Matters

“I’m a white person, and despite there being a range of skin tones available for emoji these days, I still just choose the original Simpsons-esque yellow. Is this insensitive to people of color?”

—True Colors


Dear True,

I don’t think it’s possible to determine what any group of people, categorically, might find insensitive—and I won’t venture to speak, as a white person myself, on behalf of people of color. But your trepidation about which emoji skin tone to use has evidently weighed on many white people’s minds since 2015, when the Unicode Consortium—the mysterious organization that sets standards for character encoding in software systems around the world—introduced the modifiers. A 2018 University of Edinburgh study of Twitter data confirmed that the palest skin tones are used least often, and most white people opt, as you do, for the original yellow.

It’s not hard to see why. While it might seem intuitive to choose the skin tone that most resembles your own, some white users worry that calling attention to their race by texting a pale high five (or worse, a raised fist) might be construed as celebrating or flaunting it. The writer Andrew McGill noted in a 2016 Atlantic article that many white people he spoke to feared that the white emoji “felt uncomfortably close to displaying ‘white pride,’ with all the baggage of intolerance that carries.” Darker skin tones are a more obviously egregious choice for white users and are generally interpreted as grossly appropriative or, at best, misguided attempts at allyship.

That leaves yellow, the Esperanto of emoji skin tones, which seems to offer an all-purpose or neutral form of pictographic expression, one that does not require an acknowledgment of race—or, for that matter, embodiment. (Unicode calls it a “nonhuman” skin tone.) While this logic may strike you as sound enough, sufficient to put the question out of mind while you dash off a yellow thumbs-up, I can sense you’re aware on some level that it doesn’t really hold up to scrutiny.

The existence of a default skin tone unavoidably calls to mind the thorny notion of race neutrality that crops up in so many objections to affirmative action or, to cite a more relevant example, in the long-standing use of “flesh-colored” and “nude” as synonyms for pinkish skin tones. The yellow emoji feels almost like claiming, “I don’t see race,” that dubious shibboleth of post-racial politics, in which the ostensible desire to transcend racism often conceals a more insidious desire to avoid having to contend with its burdens. Complicating all this is the fact that the default yellow is indelibly linked to The Simpsons, which used that tone solely for Caucasian characters (those of other races, like Apu and Dr. Hibbert, were shades of brown). The writer Zara Rahman has argued that the notion of a neutral emoji skin tone strikes her as evidence of an all-too-familiar bad faith: “To me, those yellow images have always meant one thing: white.”

At the risk of making too much of emoji (there are, undeniably, more urgent forms of racial injustice that deserve attention), I’d argue that the dilemma encapsulates a much larger tension around digital self-expression. The web emerged amid the heady spirit of 1990s multiculturalism and color-blind politics, an ethos that recalls, for example, the United Colors of Benetton ad that featured three identical human hearts labeled “white,” “black,” and “yellow.” The promise of disembodiment was central to the cyberpunk ideal, which envisioned the internet as a new frontier where users would shirk their real-life identities, take on virtual bodies (or no bodies at all), and be judged by their ideas—or their souls—rather than by their race. This vision was, unsurprisingly, propagated by the largely middle- and upper-class white men who were the earliest shapers of internet culture. The scholar Lisa Nakamura has argued that the digital divide gave cyberspace a “whitewashed” perspective and that the dream of universalism became, in many early chat rooms, an opportunity for white people to engage in identity tourism, adopting avatars of other races that were rife with stereotypes—a problem that lives on in the prevalence of digital blackface on TikTok and other platforms.

It’s telling that skin tone modifiers were introduced in 2015, when social platforms teemed with posts about the police killings of Walter Scott and Freddie Gray, among others, and when the tech press began to take stock of algorithmic bias in the justice system, acknowledging that technologies once hailed as objective and color-blind were merely compounding historical injustices. That year, Ta-Nehisi Coates observed (at the close of the Obama presidency) that the term post-racial “is almost never used in earnest,” and Anna Holmes noted that it “has mostly disappeared from the conversation, except as sarcastic shorthand.”

The Bruce Willis Deepfake Is Everyone’s Problem

For some experts, this transferability could lead to people losing control of their “personality” as firms take full ownership of their identity rather than merely licensing it for a particular purpose. In fact, the original calls for this kind of transferability were made in the 1950s by studio lawyers who wanted to control the movies that actors appeared in and the products they endorsed. “One might (potentially) garner more money for such a total transfer, but the cost seems inconceivably great to the person and society,” Rothman says.

Student athletes, for instance, risk agents, managers, companies, or even the NCAA hoovering up their identities in the hope of extracting any future profit if they find big-league success. Actors, athletes, and average citizens, Rothman argues, are in danger of losing control of their “own names, likenesses, and voices to creditors, ex-spouses, record producers, managers, and even Facebook.”

Many actors won’t be affected, simply because their identities won’t be valuable. But it is also true that celebrities like Kim Kardashian and Tom Cruise have bargaining power that others don’t: They can bullishly negotiate that the use of their image not extend beyond any particular show or film. Smaller actors, meanwhile, face the possibility of contracts that extract rights wholesale. “There is a real risk that new actors (i.e., just starting out and desperate for breakthrough work) would be especially vulnerable to signing away their publicity rights as a condition of their first contracts,” says Johanna Gibson, a professor of intellectual property law at Queen Mary, University of London. “This power imbalance could be exploited by studios keen both to commercialize image and character and indeed to avoid libel (depending upon the nature of that commercialization), as the performer would no longer have rights to control how their image is used.”

This could leave actors in a position of either missing out on work, or signing a contract that would later allow them to be deepfaked into content they find demeaning without legal recourse. In the film franchise model, Gibson argues, the risk is even greater.

SAG-AFTRA disagrees, explaining that reasonable minds will always differ, even when working toward the same stated goal. “While some prominent commentators have expressed fear that a transferable right of publicity could lead to involuntary transfers or forced commercialization, there is little basis to believe this fear would come to fruition,” says Van Lier. “There are no instances, to our knowledge, of the right being involuntarily transferred during anyone’s lifetime, or of anyone being forced to exploit it. The most notable attempt involved O.J. Simpson, and the court expressly refused to transfer it to his victim’s family.”

Eventually, AIs trained on Bruce Willis’ likeness won’t need Bruce Willis at all. “If a company can train its AI algorithms to replicate the specific mannerisms, timing, tonality, etc. of a particular actor, it makes the AI-generated content more and more life-like,” says Van Lier. “This can have long-term implications.” In other words, actors—and everyone else—must learn how to protect their digital rights, or they could find themselves performing a role they did not expect.

The Psychological Impact of Consuming True Crime

While Coccio eventually left the subreddit, many others stayed. Dawn Cecil, a criminology professor at the University of South Florida and author of Fear, Justice & Modern True Crime, says that many who engage with true crime forums have “good intentions of wanting to help solve a crime or find a missing person”; some also want to draw attention to miscarriages of justice and question the effectiveness of the criminal justice system.

Still, Cecil warns that true crime forums can become echo chambers that feed fear or buttress preexisting beliefs. Consuming true crime, as she details in her book, can also skew people’s perception of crime and reinforce stereotypes.

It can also lead people to things they regret. Marcus is a 42-year-old from Seattle who joined Reddit purely so he could post on r/serialpodcast. At first he found it “fun,” but in his time there he has been verbally attacked as well as doxed—a stranger from the subreddit once called him at work. (He asked that WIRED not use his real name for privacy purposes.) He says he’s seen “some of the most gruesome things I’ve ever seen on the internet” thanks to his interest in the Serial case.

Meghan, a 30-year-old nurse from Washington who asked that WIRED not use her last name, has spent seven years on the sub out of “habit.” She enjoyed the early “exciting” days when people regularly posted new discoveries and says chatting with strangers over the years has been beneficial. “At this point some of the other long-term posters feel a bit like old friends, even the ones that I fight with the most,” she says. But personal attacks on the sub also heighten Meghan’s anxiety, and she has also come to reevaluate her attitude toward true crime.

“I am embarrassed and ashamed of how gleefully I came back to this sub to look at lividity documents, et cetera, without fully considering that the victim was a real person,” she says. “A teenager died; multiple other teenagers’ lives were completely upended … It’s just all sad. And I think that does affect my mental health.”

Two years ago, Marcus took a step back from r/serialpodcast. “It became really bad for my mental health, arguing the same arguments,” he says. When Syed was released from prison last month, Marcus returned to r/serialpodcast—but he imagines it won’t be for long. Meghan says she will stop consuming Serial commentary if Syed is not tried again. For others, true crime forums remain tantalizing spaces—where community has been forged and answers appear to be just around the corner.

As of this writing, Dahmer is the top English-language show on Netflix, which reports that some 56 million households have seen the series. The streaming service is set to premiere Conversations With a Killer: The Jeffrey Dahmer Tapes on Friday.

The Tricky Ethics of Being a Teacher on TikTok

“I don’t want any students in my videos now, absolutely not,” she says. “Whether you have 10 followers or 100,000 followers, a weird person is a weird person who could find you.” Miss P’s students beg to feature in her videos, but she refuses to film their faces for safety reasons.

Yet Miss P does occasionally record students’ voices. She conducts a “roses and thorns” activity with her classes once a month, in which they each share something good and bad about their lives anonymously on a piece of paper; she sometimes TikToks herself reading these notes to the class. If a student’s voice is audible in the background, Miss P asks them if they would like it to be cut out of the video; she also asks a class’s permission before recording.

While individual students cannot be identified in “roses and thorns” videos, I felt odd when I first stumbled across one. Should the world know that one student is self-harming and another is addicted to porn? Shouldn’t this information be kept within the confines of the classroom? Miss P understands this criticism but says her classroom is a safe space: “You see a little tiny piece, but the heart-wrenching stuff and the conversations we have, I don’t post that.”

Miss P says it’s often the students themselves who want her to record the activity. “They have so much pride that it’s their roses and thorns on the TikToks,” she says. Roses and thorns is also not a mandatory activity—Miss P has some classes who have never once participated, and individual members of the class do not have to write anything down. Her videos are flooded with supportive comments, such as, “You are definitely that teacher that will make a difference” (14,000 likes) and “I need you at my school” (2,000 likes).

There are some teachers within Miss P’s school who do not approve of her TikTok account, but her principal and the superintendent of her district are supportive. Like Miss A, Miss P believes schools need to start having more explicit conversations with teachers about social media, establishing firm rules about TikTok use.

“There should be lines; you can’t post everything,” Miss P says. She wishes, for example, that someone had shown her how to filter comments and warned her to check for identifying details in the background of videos. “But I do think it has the potential to be good,” she adds, arguing that TikTok humanizes teachers. “Some students think when my day’s over, I go under my desk and lay out a blanket and sleep in my classroom,” she says. “I think it’s cool to see teachers are people; they have lives and personalities.”

While browsing teacher TikTok, I’ve seen a small child in a polka-dot coat clap along to a rhyme in class and another group of young students do a choreographed dance to a Disney song. I’ve seen a teacher list out the reasons their kindergartners had meltdowns that week, and I’ve read poetry written by eighth-grade students. There is room for debate about the benefits and pitfalls of all of these videos, though no one yet knows how the students featured in them will feel as they age.

In April, TikTok surpassed Instagram as the most downloaded app of the year; it’s the fifth app ever to reach 3.5 billion downloads. As the service continues to grow in popularity, it is up to individual institutions to create clear guidance for their educators. Meanwhile, a new school year has begun—and with it comes a fresh round of TikToks.

Prediction Engines Are Like Karma: You Get What You Stream

“Streaming services often allow account holders to create multiple, separate profiles, which I appreciate. I want the recommendations I get to reflect my taste and not my partner’s. Is this selfish? Is there any virtue in sharing a profile with others?”

—Island in the Stream


Dear Island,

Sharing, at least as it’s often understood, is virtuous only in cases of finite resources. It’s generous for a child to share her lunch with a classmate who has none or for the wealthy to give money to the less fortunate. But I find it hard to believe that forfeiting an individual profile would be laudable when there are enough to go around. What’s bothering you isn’t the fear of selfishness but the realization that you see other people’s inclinations and preferences as a form of contamination, a threat to the purity of your personal algorithm. To insist on your own digital fiefdom suggests you believe your taste to be so unique and precise that any disruption to its pattern will compromise its underlying integrity.

At a basic level, prediction engines are like karma, invisible mechanisms that register each of your actions and return to you something of equal value. If you watch a lot of true-crime docs, you will eventually find yourself in a catalog dominated by gruesome titles. If you tend to stream sitcoms from the early 2000s, your recommendations will turn into an all-you-can-eat buffet of millennial nostalgia. The notion that one reaps what one sows, that every action begets an equal reaction, is not merely spiritual pablum, but a law encoded in the underlying architecture of our digital universe. Few users really know how these predictive technologies work. (On TikTok, speculations about how the algorithm functions have become as dense as scholastic debates about the metaphysical constitution of angels.) Still, we like to believe that there are certain cosmic principles at play, that each of our actions is being faithfully logged, that we are, in each moment, shaping our future entertainment by what we choose to linger on, engage with, and purchase.

Perhaps it would be worthwhile to probe a little at that sense of control. You noted that you want your recommendations to align with your taste, but what is taste, exactly, and where does it come from? It’s common to think of one’s preferences as sui generis, but our proclivities have been shaped by all sorts of external factors, including where we live, how we were raised, our ages, and other relevant data. These variables fall into discernible trends that hold true across populations. Demographic profiling has proved how easy it is to discover patterns in large samples. Given a big enough data set, political views can be predicted based on fashion preferences (L.L. Bean buyers tilt conservative; Kenzo appeals to liberals), and personality traits can be deduced by what kind of music a user likes (fans of Nicki Minaj tend to be extroverted). Nobody knows what causes these correlations, but their consistency suggests that none of us is exactly the master of our own fate, or the creator of a bespoke persona. Our behavior falls into predictable patterns that are subject to social forces operating beyond the level of our awareness.

And, well, prediction engines couldn’t work if this wasn’t the case. It’s nice to think that the recommendations on your private profile are as unique as your thumbprint. But those suggestions have been informed by the behavioral data of millions of other users, and the more successful the platform is at guessing what you’ll watch, the more likely it is that your behavior falls in line with that of other people. The term “user similarity” describes how automated recommendations analogize the behavior of customers with kindred habits, which means, essentially, that you have thousands of shadow-selves out there who are streaming, viewing, and purchasing many of the same products you are, like quantum-entangled particles that mirror one another from opposite sides of the universe. Their choices inform the options you’re shown, just as your choices will inflect the content promoted for future users.
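The “user similarity” idea can be sketched as a toy collaborative filter. This is a minimal illustration, not any platform’s actual system; the viewing data, names, and four-title catalog are all invented for the example. Each person is a vector of what they’ve streamed, similarity between vectors is measured by the cosine of the angle between them, and unseen titles are scored by what your closest “shadow-selves” watched:

```python
import math

# Invented watch histories: 1 = streamed the title, 0 = didn't.
# Columns: [true-crime doc, 2000s sitcom, nature doc, horror film]
histories = {
    "you":      [1, 1, 0, 0],
    "shadow_a": [1, 1, 0, 1],  # kindred habits: overlaps with "you"
    "shadow_b": [0, 0, 1, 1],  # divergent habits: little overlap
}

def cosine_similarity(a, b):
    """Similarity of two behavior vectors; 1.0 means identical taste."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recommend(user, histories):
    """Score each unseen title by similar users' choices, weighted by similarity."""
    seen = histories[user]
    scores = [0.0] * len(seen)
    for other, history in histories.items():
        if other == user:
            continue
        sim = cosine_similarity(seen, history)
        for i, watched in enumerate(history):
            if watched and not seen[i]:
                scores[i] += sim
    return scores

print(recommend("you", histories))
```

Because “shadow_a” overlaps heavily with “you,” the horror film they watched scores highest; the nature doc, backed only by the dissimilar “shadow_b,” scores near zero. Scale the same arithmetic to millions of users and you have the basic logic by which one person’s choices inflect another’s recommendations.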

Karma, at least in popular culture, is often regarded as a simplistic form of cosmic comeuppance, but it’s more properly understood as a principle of interdependence. Everything in the world is connected to everything else, creating a vast web of interrelation wherein the consequences of every action reverberate through the entire system. For those of us who have been steeped in the dualities of Western philosophy and American individualism, it can be difficult to comprehend just how intertwined our lives are with the lives of others. In fact, it’s only recently that information technologies—and the large data sets they create—have revealed to us what some of the oldest spiritual traditions have been teaching for millennia: that we live in a world that is chaotic and radically interdependent, one in which the distance between any two people (or the space between any two vectors) is often smaller than we might think.

With that in mind, Island, sharing a profile might be less an act of generosity than a recognition of that interdependence. The person you’re living with has already changed you in countless ways, subtly altering what you believe, what you buy, the way you speak. If your taste in movies currently diverges from theirs, that doesn’t mean it always will. In fact, it’s almost certain that your preferences will inch closer together the longer you share a home. This is arguably a good thing. Most of us have experienced at some point the self-perpetuating hell of karmic cycles, the way one cigarette leads to an addiction or a single lie begets a string of further deceptions. Automated recommendations can similarly foster narrowly recursive habits, breeding more and more of the same until we’re stuck in a one-dimensional reflection of our past choices. Deliberately opening up your profile to others could be a way to let some air into that dank cave of individual preferences where the past continually reverberates, isolating you from the vast world of possibilities that lies outside.