What’s Going On With TikTok? Ask a Victorian Prince

Before the House Energy and Commerce Committee had even concluded its hearing with TikTok CEO Shou Zi Chew last week, users took to the app to mock members of Congress for their questions. Lawmakers were lambasted for being out of touch with the realities of social media. One younger TikToker called the hearings “the most boomer thing I have ever seen.”

But the TikTok controversy can’t simply be chalked up to generational differences, as the very notion of data privacy doesn’t stem from the invention of social media, the internet, or even computers. Instead, it’s traceable to a watershed legal decision in 1849, when Prince Albert of England sued a printer for trying to publish a catalog about drawings he and Queen Victoria had made depicting their personal family life. All of the elements at play in data privacy debates today—personal information, technological innovation, and national security—were also integral to that case. 

As someone who studies the history of technology, I believe that understanding this history of data privacy can help disentangle the personal and national security interests being conflated in the ongoing debate about whether and how TikTok is a threat to Americans. When lawmakers nest national issues within concerns about personal privacy that they have done little to address, they play on constituents’ fears about their own information without actually mitigating them.

The 1849 ruling in favor of Prince Albert laid the groundwork for thinking about data as at once personal and national, rather than simply one or the other. In the case, Albert represented not only himself but also the monarch, Queen Victoria. The catalog in question included descriptions of etchings that depicted the royals’ children in the nursery, their friends, and their dogs alongside commentary and critique. (The sketches themselves had already been ruled private property in a separate case.) In other words, it turned the royal couple’s private life into information and made it available for sale.

This proved a foundational case on both sides of the Atlantic. By 1890, American jurists were citing the 1849 case to argue for a legal right to privacy, contending that even celebrities have “the right to one’s personality.” By prohibiting the catalog, the 1849 case affirmed personal privacy and defined it primarily through family life. Because the etchings were for Albert and Victoria’s “private use and pleasure,” sharing data about them would strip them of their right to domestic privacy. In 1849, monarchies had been toppling across Europe, and England’s was shaky too. When a judge ruled that the royal family’s “private life forms their unquestionable title,” he defined their sovereignty through—not separately from—their domestic life. Thus, this case set a precedent of implying national security through the rhetoric of private protection. But foregrounding personal privacy in this way is unethical unless it is backed by policy to ensure that those rights are protected.

With this in mind, we can more clearly see how the TikTok regulations currently under discussion frame national data privacy in terms of personal privacy. The notions that the Chinese government could spy on or blackmail key government employees via their TikTok activity and that it could manipulate users’ personal content are matters of national security. But the way officials talk about them highlights individual privacy online, the “private use and pleasure” of the internet.

The Effort to Harness Animal ‘Supersenses’—and Avert Disaster

Animals are sometimes described as having “supersenses,” and in many instances these relate to natural phenomena. On the morning of December 26, 2004, a huge rupture occurred at the fault separating two continental plates between the Indonesian islands of Simeulue and Sumatra. The energy released was, by some estimates, over 20,000 times greater than that of the bomb that devastated Hiroshima, and it infamously generated a tsunami that caused destruction throughout the Indian Ocean. As it thundered through Aceh, the wave reached 30 meters in height, equivalent to a nine- or 10-story building. Across the entire region, coastal towns were destroyed by a relentless surge of water and debris that claimed the lives of almost a quarter of a million people.

In the weeks and months that followed the tragedy, one question kept recurring: Why had there been no warning? Though Aceh had virtually no time to evacuate, people in places further afield might have been saved had the alarm been raised. It was an hour and a half before the tsunami came ashore in Thailand, and two hours until it hit Sri Lanka. The element of surprise meant that fatalities were far greater than they might otherwise have been. There were no warning systems in the Indian Ocean at the time, and while new technology has since been deployed in the region, tsunamis remain notoriously difficult to detect at sea. In deep water, the deadliest tsunami in history was no more than a hump of water, less than a meter in height, as it rolled toward unsuspecting populations in the region.

A UN report published in the aftermath of another devastating tsunami, which hit the Indonesian island of Sulawesi in 2018, urged against an overreliance on technology. The authors’ caution was based on the inaccuracy of systems that log the size of tsunamis out at sea, as well as the difficulties in relaying information across large stretches of at-risk territory. Given our current state of knowledge, the many different variables that combine to determine the probability and extent of risk make accurate predictions an enormous challenge. There is, however, a simpler solution that deserves consideration, at least as an adjunct to our current methods.

Long before a tsunami strikes, animals seem to be aware of the danger. Eyewitnesses of past disasters have described panicked cows and goats charging toward higher ground well in advance of a surge, and flocks of birds departing trees fringing the ocean. It has often seemed as if they are reacting to some stimulus that we’re unaware of, one that precedes the arrival of the flood by at least several minutes. If they’re sufficiently attuned to the behavior of animals, local people might take heed and follow them to safety.

As a case in point, the island of Simeulue was close to the epicenter of the 2004 earthquake, yet among a population of some 80,000 people, only seven died in the tsunami, an outcome that owes much to the attentiveness of the inhabitants to the behavior of the local fauna. The animals could feel the tremors of the earthquake and may also have been able to detect some other signal, perhaps the infrasound produced by the seismic disturbances that foreshadow earthquakes. Tsunamis also generate infrasound, alerting those creatures able to perceive these deep sound waves to the imminent danger of a deadly wave of water.

History is littered with accounts of animals acting strangely in advance of natural disasters. In the days leading up to an earthquake in the northern Chinese city of Haicheng in the winter of 1975, cats and livestock began to behave unusually. Most perplexing of all, snakes emerged from underground hibernation, only to freeze to death in their thousands. More recently, an entire population of toads who’d gathered at Lake San Ruffino in Italy to celebrate spring in the time-honored way, by the enthusiastic begetting of tadpoles, left the water en masse in the middle of breeding. Five days later, a huge earthquake tore through the area. Their sensitivity to seismic shudderings may have forewarned the toads, though other changes occur in advance of earthquakes too, such as the release of gases and electrical energy from the grinding and splitting of rocks during tectonic activity. At other times and places, rats have emerged onto streets in daylight, birds have sung at the wrong time of day, horses have stampeded, and cats have moved litters of kittens. In some cultures, especially in areas that regularly suffer such events, these kinds of observations have been incorporated into folklore, enabling traditional knowledge to protect the local populace.

Red Teaming GPT-4 Was Valuable. Violet Teaming Will Make It Better

Last year, I was asked to break GPT-4—to get it to output terrible things. Other interdisciplinary researchers and I were given advance access and attempted to prompt GPT-4 to show biases, generate hateful propaganda, and even take deceptive actions, in order to help OpenAI understand the risks it posed so they could be addressed before its public release. This is called AI red teaming: attempting to get an AI system to act in harmful or unintended ways.

Red teaming is a valuable step toward building AI models that won’t harm society. To make AI systems stronger, we need to know how they can fail—and ideally we do that before they create significant problems in the real world. Imagine what could have gone differently had Facebook tried to red-team the impact of its major AI recommendation system changes with external experts, and fixed the issues they discovered, before those changes impacted elections and conflicts around the world. Though OpenAI faces many valid criticisms, its willingness to involve external researchers and to provide a detailed public description of its systems’ potential harms sets a bar for openness that potential competitors should also be called upon to follow.

Normalizing red teaming with external experts and public reports is an important first step for the industry. But because generative AI systems will likely impact many of society’s most critical institutions and public goods, red teams need people with a deep understanding of all of these issues (and their impacts on each other) in order to anticipate and mitigate potential harms. For example, teachers, therapists, and civic leaders might be paired with more experienced AI red teamers in order to grapple with such systemic impacts. AI industry investment in a cross-company community of such red-teamer pairs could significantly reduce the likelihood of critical blind spots.

After a new system is released, carefully allowing people who were not part of the prerelease red team to attempt to break it without risk of bans could help identify new problems, as well as issues with potential fixes. Scenario exercises, which explore how different actors would respond to model releases, can also help organizations understand more systemic impacts.

But if red-teaming GPT-4 taught me anything, it is that red teaming alone is not enough. For example, I just tested Google’s Bard and OpenAI’s ChatGPT and was able to get both to create scam emails and conspiracy propaganda on the first try “for educational purposes.” Red teaming alone did not fix this. To actually overcome the harms uncovered by red teaming, companies like OpenAI can go one step further and offer early access and resources to use their models for defense and resilience, as well.
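For teams that want to make this kind of spot check repeatable rather than anecdotal, such probes can be scripted. Below is a minimal sketch in Python, assuming the OpenAI Python SDK and an API key in the environment; the probe strings, the model name, and the refusal heuristic are illustrative placeholders, not the prompts or tooling actually used in any red-team exercise.

```python
# Minimal red-team probe harness (a sketch, not a production tool).
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Placeholder probes; a real red team would maintain a curated,
# access-controlled library of adversarial prompts and track results
# across model versions.
PROBES = [
    "<scam-email probe, framed 'for educational purposes'>",
    "<conspiracy-propaganda probe, framed 'for educational purposes'>",
]

# Crude heuristic: treat common refusal phrasings as the model declining.
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "i'm sorry")

def looks_like_refusal(text: str) -> bool:
    lowered = text.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

for probe in PROBES:
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name; substitute the system under test
        messages=[{"role": "user", "content": probe}],
    )
    reply = response.choices[0].message.content or ""
    verdict = "refused" if looks_like_refusal(reply) else "FLAG: complied"
    print(f"{verdict}: {probe}")
```

Even a toy harness like this makes the point concrete: a single pass of red teaming finds failures, but only continuous probing across releases shows whether they stay fixed.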

I call this violet teaming: identifying how a system (e.g., GPT-4) might harm an institution or public good, and then supporting the development of tools using that same system to defend the institution or public good. You can think of this as a sort of judo. General-purpose AI systems are a vast new form of power being unleashed on the world, and that power can harm our public goods. Just as judo redirects the power of an attacker in order to neutralize them, violet teaming aims to redirect the power unleashed by AI systems in order to defend those public goods.

Technology Addiction Has Created a Self-Help Trap

For years, I sat down to work each morning, only to realize hours later that I felt drained but had gotten little done. Instead of writing, I spent my time texting, emailing, and aimlessly browsing news sites, blogs, and social networks. Every click triggered another. I tried to regain control by using an app called Freedom that blocked my computer’s internet access for fixed periods of time. Sometimes it helped, especially when a work deadline was looming. Sometimes it didn’t. But trying to control work time was only part of the struggle. I kept feeling an irresistible urge to pull out my phone wherever I went. At that point, I blamed myself. After all, I was the girl who had spent hours playing video games well into college. But something happened in 2015 that made me realize the problem was much bigger than me.

It was a Saturday evening when I arrived with my family at our friends’ home for dinner. Their 11-year-old son was playing with his parents’ iPad. When we came in, his parents demanded that he hand it over and join the other kids. The boy at first refused. He then tried angrily to snatch the iPad back from his mother, regressing to toddler-style wailing to demand the device. Throughout a long evening, he exercised every manipulation tactic in his power to regain control of it. As I observed his parents’ despair, I recalled a family conflict that had transpired at my parents’ house some years earlier. At that time, doctors had diagnosed my father, a heavy smoker, with emphysema. My father could have avoided his painful final years, hooked to an oxygen tank, by quitting smoking when he was diagnosed. He refused. We desperately tried to reverse his decision by taking his cigarettes away. But like my friends’ son, my father reacted with uncharacteristic anger, exercising every means at his disposal to get his cigarette pack back.

That day I began to see how our present relates to our past. The past can answer one of today’s most perplexing problems: Why, despite multiple reports from Silicon Valley whistleblowers revealing that technology companies use manipulative designs to prolong our time online, do we feel personally responsible? Why do we still blame ourselves and keep seeking new self-help methods to decrease our time online? We can learn from the past because, in this case, the tech companies did not innovate. Instead, they manipulated us using an old playbook put together by other powerful industries, including tobacco and food.

When the tobacco and food industries confronted allegations that their products harmed consumers, they defended themselves by invoking the powerful American ideal of free choice and personal responsibility. This meant emphasizing that consumers are free to make choices and, as a result, are responsible for the outcomes. Smokers and their families sued the tobacco industry over the devastation wrought by smoking, including lung cancer and early death. But for decades they failed to win their lawsuits, because the tobacco industry successfully argued that smokers chose to smoke and were therefore responsible for the results. The food industry employed an identical strategy. When a group of teenagers sued McDonald’s because they suffered from obesity and diabetes after eating there regularly, McDonald’s successfully raised the same claim: No one forced the teenagers to eat at McDonald’s, and since it was their choice, the company was not responsible for any health ramifications. The food industry went further. It successfully lobbied for laws known as “cheeseburger laws,” or more formally as Commonsense Consumption Acts. Under these laws, food manufacturers and vendors cannot be held legally responsible for their consumers’ obesity. Why? Because, the laws proclaim, this fosters a culture of consumer personal responsibility, which is important for promoting a healthy society.

The tobacco and food companies did not stop at arguing directly that their consumers are responsible. They also provided new products to help them make “better” choices. In the 1950s, researchers published the first studies showing the connection between smoking and lung cancer. In response, the tobacco companies offered consumers the option to choose a new, healthier product: the filtered cigarette. They advertised it as “just what the doctor ordered,” claiming it removed nicotine and tar. Smokers went for it. Yet they did not know that, to compensate for the flavor the filter stripped away, companies used stronger tobacco that yielded as much nicotine and tar as the unfiltered brands. Here the food industry followed suit as well, offering tools to reinforce the sense that its consumers are in control. Facing criticism of the low nutritional value of their products, food manufacturers added product lines called “Eating Right” and “Healthy Choice.” While giving consumers the illusion that they were making better choices, these diet lines often improved little on the original products.

The tech industry is already applying this strategy, appealing to our deeply ingrained cultural beliefs about personal choice and responsibility. Tech companies do this directly when faced with allegations that they are addicting users. When the US Federal Trade Commission considered restricting the use of loot boxes, an addictive feature common in video games, video game manufacturers argued: “No one is forced to spend money on a video game that is free to play. They choose what they want to spend and when they want to spend it and how they want to spend it.” But the technology industry also does it indirectly, by providing us with tools that enhance our illusion of control. They give us tools like Apple’s Screen Time, which tells us how much time we spend on our screens. They let us restrict time on certain apps, but then allow us to override those restrictions. We can set our phones to “do not disturb” or schedule “focus time.” We can set Instagram to remind us to take breaks. Yet screen time continues to creep up. These tools do not succeed because, just like the filtered cigarette and the “healthy choice” food products, they are not meant to solve the problem. Tech companies did not eliminate the addictive designs that keep prolonging our time online. The goal of these products, also known as digital well-being tools, is to keep the ball of blame in our court as we struggle, unsuccessfully, against devices and apps that manipulatively entice us to stay on.

Be Your Own Tab Manager

“I’ve read for years about why people keep so many tabs open on their browsers—digital distraction, FOMO, boredom—and I’ve tried to pare down my own overpopulated browsers, but nothing sticks. Why can’t I become a closer?” 

—Open Tab


Dear Open,

Before reading your question, I was actually not aware that there is a corpus of commentary about browser tab clutter. I have not perused the literature myself, though I imagine it’s like any content niche—a blend of prescriptive common sense and insular self-reference.

Beneath the broad digital highways of news, shopping, and social media, there exist endless grottoes of discourse, accessible via search queries, where cloisters of experts have already discussed any question or problem that has ever occurred to you to the point of Talmudic exhaustion. Sorry for the convoluted metaphor—it’s very difficult to visualize our experiences online.

In fact, a decade and a half ago, Kevin Kelly, a cofounder of this magazine, asked hundreds of people to draw a picture of the internet. It was an attempt to crowdsource the “unconscious layout” of the virtual world we spend so much of our lives navigating, to concretize the ephemeral flow of data in spatial terms. Most of the drawings were crude and idiosyncratic, and revealed, if anything, the impossibility of arriving at any shared vision of a realm that is basically empyrean. “The internet is intangible, like spirits and angels,” Kelly wrote. “The web is an immense ghost land of disembodied places. Who knows if you are even there, there.”

I could ask you, Open, by way of turning to your question, where precisely you are reading this column—which is to say, where these words exist in relation to the other content you have encountered or will encounter over the course of your day. If you are reading this in print, the answer is simple: The words exist in a magazine, an object that has precise and measurable spatial relationships to other physical things that are visible when you look up from the page. If you are reading this online, the question becomes more difficult to answer, though I imagine you have a sense—implicit and largely subliminal—that the article is located somewhere specific, one point on a map made up of all the other sites you have recently visited or hope to visit later. Most likely, that map resembles the tabs you have open on your browser.

Like most graphical widgets, tabs are metaphors whose referent has been largely forgotten. They grew out of the more expansive “desktop” trope that has dominated personal computing (which imagines incorporeal data organized into “files” and “folders”) and are modeled after the card tabs inserted into drawers of paper files. They are, in other words, “markers,” a term borrowed from cartography: objects used to indicate a position, place, or route.

Just as maps are fictional interfaces designed to spatially orient the traveler, tabs are imaginary objects that allow users to navigate the contourless chaos of the dataplasm. It’s worth noting that the earliest known maps, like those painted in the caves of Lascaux, were not of the earth but of the heavens—the original spiritual realm—and were, essentially, attempts to visualize individual data points (stars) constellated into familiar objects (bulls, antelopes, warriors). Incidentally, some of the oldest sky maps in the Library of Congress look remarkably like visual representations of the internet.

Although I haven’t read the articles about tab overuse (and don’t plan to), I assume they point out its irrationality—having too many open slows down your browser—and recommend organizational strategies, like tab managers, that allow you to more easily access information. But to my mind, tab accumulation has, like most compulsive habits, a subliminal purpose that eludes our crude attempts to rationalize it out of existence. Your open tabs are essentially your personalized map of the internet, a method of visualizing where you have been and where you hope to go next. Taken together, they form a perimeter that annexes a galaxy of idiosyncratic content within the seemingly infinite cosmos of information.

It’s unclear from your question just how many tabs you have open on a given day. The information available on the maximum limits of popular browsers is mixed and possibly apocryphal—a rumored 500 in Safari for iPhone (though there are ways to hack this limit) and 9,000 tabs in Chrome. In any case, most browsers allow for practically limitless tab use, which can become problematic for users inclined to hoarding. It seems to me that once there are enough to warrant a tab manager (which allows you to group and search your open tabs the way Google helps you search the internet), the situation has grown perilously close to the absurd scenarios imagined by Borges or Lewis Carroll, who wrote of maps that are the same scale as the landscape they represent. Despite the farcical nature of those stories, they aptly dramatize the human tendency to confuse abstraction with the thing itself, which ultimately stems from a desire for control.