Be Your Own Tab Manager

“I’ve read for years about why people keep so many tabs open on their browsers—digital distraction, FOMO, boredom—and I’ve tried to pare down my own overpopulated browsers, but nothing sticks. Why can’t I become a closer?” 

—Open Tab

Dear Open,

Before reading your question, I was actually not aware that there is a corpus of commentary about browser tab clutter. I have not perused the literature myself, though I imagine it’s like any content niche—a blend of prescriptive common sense and insular self-reference.

Beneath the broad digital highways of news, shopping, and social media, there exist endless grottoes of discourse, accessible via search queries, where cloisters of experts have already discussed any question or problem that has ever occurred to you to the point of Talmudic exhaustion. Sorry for the convoluted metaphor—it’s very difficult to visualize our experiences online.

In fact, a decade and a half ago, Kevin Kelly, a cofounder of this magazine, asked hundreds of people to draw a picture of the internet. It was an attempt to crowdsource the “unconscious layout” of the virtual world we spend so much of our lives navigating, to concretize the ephemeral flow of data in spatial terms. Most of the drawings were crude and idiosyncratic, and revealed, if anything, the impossibility of arriving at any shared vision of a realm that is basically empyrean. “The internet is intangible, like spirits and angels,” Kelly wrote. “The web is an immense ghost land of disembodied places. Who knows if you are even there, there.”

I could ask you, Open, by way of turning to your question, where precisely you are reading this column—which is to say, where these words exist in relation to the other content you have encountered or will encounter over the course of your day. If you are reading this in print, the answer is simple: The words exist in a magazine, an object that has precise and measurable spatial relationships to other physical things that are visible when you look up from the page. If you are reading this online, the question becomes more difficult to answer, though I imagine you have a sense—implicit and largely subliminal—that the article is located somewhere specific, one point on a map made up of all the other sites you have recently visited or hope to visit later. Most likely, that map resembles the tabs you have open on your browser.

Like most graphical widgets, tabs are metaphors whose referent has been largely forgotten. They grew out of the more expansive “desktop” trope that has dominated personal computing (which imagines incorporeal data organized into “files” and “folders”) and are modeled after the card tabs inserted into drawers of paper files. They are, in other words, “markers,” a term borrowed from cartography: objects used to indicate a position, place, or route.

Just as maps are fictional interfaces designed to spatially orient the traveler, tabs are imaginary objects that allow users to navigate the contourless chaos of the dataplasm. It’s worth noting that the earliest known maps, like those painted in the caves of Lascaux, were not of the earth but of the heavens—the original spiritual realm—and were, essentially, attempts to visualize individual data points (stars) constellated into familiar objects (bulls, antelopes, warriors). Incidentally, some of the oldest sky maps in the Library of Congress look remarkably like visual representations of the internet.

Although I haven’t read the articles about tab overuse (and don’t plan to), I assume they point out its irrationality—having too many open slows down your browser—and recommend organizational strategies, like tab managers, that allow you to more easily access information. But to my mind, tab accumulation has, like most compulsive habits, a subliminal purpose that eludes our crude attempts to rationalize it out of existence. Your open tabs are essentially your personalized map of the internet, a method of visualizing where you have been and where you hope to go next. Taken together, they form a perimeter that annexes a galaxy of idiosyncratic content within the seemingly infinite cosmos of information.

It’s unclear from your question just how many tabs you have open on a given day. The information available on the maximum limits of popular browsers is mixed and possibly apocryphal—a rumored 500 in Safari for iPhone (though there are ways to hack this limit) and 9,000 tabs in Chrome. In any case, most browsers allow for practically limitless tab use, which can become problematic for users inclined to hoarding. It seems to me that once there are enough to warrant a tab manager (which allows you to group and search your open tabs the way Google helps you search the internet), the situation has grown perilously close to the absurd scenarios imagined by Borges or Lewis Carroll, who wrote of maps that are the same scale as the landscape they represent. Despite the farcical nature of those stories, they aptly dramatize the human tendency to confuse abstraction with the thing itself, which ultimately stems from a desire for control.

Noma Is Closing. Welcome to the End of Fine Dining

Ten years ago, I went to a therapist for the first time. I was writing a cookbook with an esteemed chef and needed help figuring out how to work with him. The chef in question was an alumnus of Copenhagen’s vaunted restaurant Noma, and I needed to push through a chapter alteration that he didn’t like.

“First, say what you need and he will ignore it,” the therapist advised. “Second, say it again and he will ignore it again. The third time … ”

WHAM! The therapist slammed the palm of his hand down on his desk.

“The third time, you hit the table between the two of you, then calmly restate what you need.”

I’d never negotiated with anyone like that, but we’d signed onto the project as partners, and I was struggling to maintain the balance of power.

I’ve been thinking about this episode since hearing the surprise announcement in early January that Noma would close its doors for good at the end of 2024. The chef I worked with was Blaine Wetzel, the direct progeny, in restaurant genealogical terms, of Noma’s chef and co-owner Rene Redzepi. In a 2015 article, Redzepi confessed to sometimes being a “bully” and a “terrible boss” to his staff, flying into fits of rage in his kitchen. This was part of the reason why, when I heard the news of Noma’s closing, I couldn’t help but think that it was a good thing.

Way back in 2006, when I was a food writer in Europe and before Noma was an intergalactic thing, I lucked into a dinner there. In my photos from that night, Redzepi still seems to have baby fat in his face, yet the restaurant’s trajectory was clear. He could use food to poke at your emotions, turning an onion dish into the most incredible onions you’ve ever tasted, or making a beet sauce so good you wanted to use it as body paint.

Noma has been the most influential restaurant in the world for close to 15 years. In that period, it won the top spot on the World’s 50 Best Restaurants list five times and has expanded palates—jellyfish, moss, or ants, anyone? Noma has also been a pioneer in the global fermentation movement and inspired legions of chefs and copycats.

Despite wild success, Redzepi is pulling the plug on Noma because, financially and emotionally, “it’s unsustainable,” he said. For years, high-end kitchens like Noma have relied on unpaid or incredibly underpaid internships where stagiaires worked grueling, life-sucking hours as they learned the trade. This is often illegal and is slowly petering out. Yet for interns and staff who grind it out at a place like Noma, the experience can write the ticket for the rest of their careers.

In 2010, Wetzel did just that, going straight from his role as Noma’s chef de partie to taking over the kitchen at the Willows Inn in the Pacific Northwest. In June 2013, my wife, Elisabeth, and I moved from New York City to Washington state’s Lummi Island, population 813, so I could work with Wetzel. Soon he picked up a pair of prestigious James Beard awards. Yet in the near-decade since Elisabeth and I left the island, layers of the inn’s management flaked away, finally revealing behavior that sounded more and more like Redzepi at his worst.

Mental Health Apps Won’t Get You Off the Couch

“Everyone’s so gung ho about therapy these days. I’ve been curious myself, but I’m not ready to commit to paying for it. A mental health app seems like it could be a decent stepping stone. But are they actually helpful?”

—Mindful Skeptic

Dear Mindful,

The first time you open Headspace, one of the most popular mental wellness apps, you are greeted with the image of a blue sky—a metaphor for the unperturbed mind—and encouraged to take several deep breaths. The instructions that appear across the firmament tell you precisely when to inhale, when to hold, and when to exhale, rhythms that are measured by a white progress bar, as though you’re waiting for a download to complete. Some people may find this relaxing, although I’d bet that for every user whose mind floats serenely into the pixelated blue, another is glancing at the clock, eyeing their inbox, or worrying about the future—wondering, perhaps, about the ultimate fate of a species that must be instructed to carry out the most basic and automatic of biological functions.

Dyspnea, or shortness of breath, is a common side effect of anxiety, which rose, along with depression, by a whopping 25 percent globally between 2020 and 2021, according to a report from the World Health Organization. It’s not coincidental that this mental health crisis has dovetailed with the explosion of behavioral health apps. (In 2020, they garnered more than $2.4 billion in venture capital investment.) And you’re certainly not alone, Mindful, in doubting the effectiveness of these products. Given the inequality and inadequacy of access to affordable mental health services, many have questioned whether these digital tools are “evidence-based,” and whether they serve as effective substitutes for professional help.

I’d argue, however, that such apps are not intended as alternatives to therapy but as a digital update to the self-help genre. Like the paperbacks found in the Personal Growth sections of bookstores, such apps promise that mental health can be improved through “self-awareness” and “self-knowledge”—virtues that, like so many of their cognates (self-care, self-empowerment, self-checkout), are foisted on individuals in the twilight of public institutions and social safety nets.

Helping oneself is, of course, an awkward idea, philosophically speaking. It’s one that involves splitting the self into two entities, the helper and the beneficiary. The analytic tools offered by these apps (exercise, mood, and sleep tracking) invite users to become both scientist and subject, taking note of their own behavioral data and looking for patterns and connections—that anxiety is linked to a poor night’s sleep, for example, or that regular workouts improve contentedness. Mood check-ins ask users to identify their feelings and come with messages stressing the importance of emotional awareness. (“Acknowledging how we’re feeling helps to strengthen our resilience.”) These insights may seem like no-brainers—the kind of intuitive knowledge people can come to without the help of automated prompts—but if the breathing exercises are any indication, these apps are designed for people who are profoundly alienated from their nervous systems.

Of course, for all the focus on self-knowledge and personalized data, what these apps don’t help you understand is why you’re anxious or depressed in the first place. This is the question that most people seek to answer through therapy, and it’s worth posing about our society’s mental health crisis as a whole. That quandary is obviously beyond my expertise as an advice columnist, but I’ll leave you with a few things to consider.

Linda Stone, a researcher and former Apple and Microsoft executive, coined the term “screen apnea” to describe the tendency to hold one’s breath or breathe more shallowly while using screens. The phenomenon occurs across many digital activities (see “email apnea” and “Zoom apnea”) and can lead to sleep disruption, lower energy levels, or increased depression and anxiety. There are many theories about why extended device use puts the body into a state of stress—psychological stimulation, light exposure, the looming threat of work emails and doomsday headlines—but the bottom line seems to be that digital technologies trigger a biological state that mirrors the fight-or-flight response.

It’s true that many mental health apps recommend activities or “missions” that involve getting off one’s phone. But these tend to be tasks performed in isolation (pushups, walks, guided meditations), and because they are completed so as to be checked off, tracked, and subsumed into one’s overall mental health stats, the apps end up ascribing a utility value to activities that should be pleasurable for their own sake. This makes it more difficult to practice those mindfulness techniques—living in the moment, abandoning vigilant self-monitoring—that are supposed to relieve stress. By attempting to instill more self-awareness, in other words, these apps end up intensifying the disunity that so many of us already feel on virtual platforms.

Why the Emoji Skin Tone You Choose Matters

“I’m a white person, and despite there being a range of skin tones available for emoji these days, I still just choose the original Simpsons-esque yellow. Is this insensitive to people of color?”

—True Colors

Dear True,

I don’t think it’s possible to determine what any group of people, categorically, might find insensitive—and I won’t venture to speak, as a white person myself, on behalf of people of color. But your trepidation about which emoji skin tone to use has evidently weighed on many white people’s minds since 2015, when the Unicode Consortium—the mysterious organization that sets standards for character encoding in software systems around the world—introduced the modifiers. A 2018 University of Edinburgh study of Twitter data confirmed that the palest skin tones are used least often, and most white people opt, as you do, for the original yellow.

It’s not hard to see why. While it might seem intuitive to choose the skin tone that most resembles your own, some white users worry that calling attention to their race by texting a pale high five (or worse, a raised fist) might be construed as celebrating or flaunting it. The writer Andrew McGill noted in a 2016 Atlantic article that many white people he spoke to feared that the white emoji “felt uncomfortably close to displaying ‘white pride,’ with all the baggage of intolerance that carries.” Darker skin tones are a more obviously egregious choice for white users and are generally interpreted as grossly appropriative or, at best, misguided attempts at allyship.

That leaves yellow, the Esperanto of emoji skin tones, which seems to offer an all-purpose or neutral form of pictographic expression, one that does not require an acknowledgment of race—or, for that matter, embodiment. (Unicode calls it a “nonhuman” skin tone.) While this logic may strike you as sound enough, sufficient to put the question out of mind while you dash off a yellow thumbs-up, I can sense you’re aware on some level that it doesn’t really hold up to scrutiny.

The existence of a default skin tone unavoidably calls to mind the thorny notion of race neutrality that crops up in so many objections to affirmative action or, to cite a more relevant example, in the long-standing use of “flesh-colored” and “nude” as synonyms for pinkish skin tones. The yellow emoji feels almost like claiming, “I don’t see race,” that dubious shibboleth of post-racial politics, in which the ostensible desire to transcend racism often conceals a more insidious desire to avoid having to contend with its burdens. Complicating all this is the fact that the default yellow is indelibly linked to The Simpsons, which used that tone solely for Caucasian characters (those of other races, like Apu and Dr. Hibbert, were shades of brown). The writer Zara Rahman has argued that the notion of a neutral emoji skin tone strikes her as evidence of an all-too-familiar bad faith: “To me, those yellow images have always meant one thing: white.”

At the risk of making too much of emoji (there are, undeniably, more urgent forms of racial injustice that deserve attention), I’d argue that the dilemma encapsulates a much larger tension around digital self-expression. The web emerged amid the heady spirit of 1990s multiculturalism and color-blind politics, an ethos that recalls, for example, the United Colors of Benetton ad that featured three identical human hearts labeled “white,” “black,” and “yellow.” The promise of disembodiment was central to the cyberpunk ideal, which envisioned the internet as a new frontier where users would shirk their real-life identities, take on virtual bodies (or no bodies at all), and be judged by their ideas—or their souls—rather than by their race. This vision was, unsurprisingly, propagated by the largely middle- and upper-class white men who were the earliest shapers of internet culture. The scholar Lisa Nakamura has argued that the digital divide gave cyberspace a “whitewashed” perspective and that the dream of universalism became, in many early chat rooms, an opportunity for white people to engage in identity tourism, adopting avatars of other races that were rife with stereotypes—a problem that lives on in the prevalence of digital blackface on TikTok and other platforms.

It’s telling that skin tone modifiers were introduced in 2015, when social platforms teemed with posts about the police killings of Walter Scott and Freddie Gray, among others, and when the tech press began to take stock of algorithmic bias in the justice system, acknowledging that technologies once hailed as objective and color-blind were merely compounding historical injustices. That year, Ta-Nehisi Coates observed (at the close of the Obama presidency) that the term post-racial “is almost never used in earnest,” and Anna Holmes noted that it “has mostly disappeared from the conversation, except as sarcastic shorthand.”

The Bruce Willis Deepfake Is Everyone’s Problem

For some experts, this transferability could lead to people losing control of their “personality” as firms take full ownership of their identity rather than merely licensing its use for a particular purpose. In fact, the original calls for these kinds of transferability were made in the 1950s by studio lawyers who wanted to control the movies that actors appeared in and the products they endorsed. “One might (potentially) garner more money for such a total transfer, but the cost seems inconceivably great to the person and society,” Rothman says.

Student athletes, for instance, risk agents, managers, companies, or even the NCAA hoovering up their identities in the hope of extracting any future profit if they find big-league success. Actors, athletes, and average citizens, Rothman argues, are in danger of losing control of their “own names, likenesses, and voices to creditors, ex-spouses, record producers, managers, and even Facebook.”

Many actors won’t be affected, simply because their identities won’t be valuable. But it is also true that celebrities like Kim Kardashian and Tom Cruise have bargaining power that others don’t: They can bullishly negotiate that the use of their image not extend beyond any particular show or film. Smaller actors, meanwhile, face the possibility of contracts that extract rights wholesale. “There is a real risk that new actors (i.e., just starting out and desperate for breakthrough work) would be especially vulnerable to signing away their publicity rights as a condition of their first contracts,” says Johanna Gibson, a professor of intellectual property law at Queen Mary, University of London. “This power imbalance could be exploited by studios keen both to commercialize image and character and indeed to avoid libel (depending upon the nature of that commercialization), as the performer would no longer have rights to control how their image is used.”

This could leave actors in a position of either missing out on work, or signing a contract that would later allow them to be deepfaked into content they find demeaning without legal recourse. In the film franchise model, Gibson argues, the risk is even greater.

SAG-AFTRA disagrees, explaining that reasonable minds will always differ, even when working toward the same stated goal. “While some prominent commentators have expressed fear that a transferable right of publicity could lead to involuntary transfers or forced commercialization, there is little basis to believe this fear would come to fruition,” says Van Lier. “There are no instances, to our knowledge, of the right being involuntarily transferred during anyone’s lifetime or anyone being forced to exploit it. The most notable attempt involved OJ Simpson and the court expressly refused to transfer it to his victim’s family.”

Eventually, AIs trained on Bruce Willis’ likeness won’t need Bruce Willis at all. “If a company can train its AI algorithms to replicate the specific mannerisms, timing, tonality, etc. of a particular actor, it makes the AI-generated content more and more life-like,” says Van Lier. “This can have long-term implications.” In other words, actors—and everyone else—must learn how to protect their digital rights, or they could find themselves performing a role they did not expect.