Vietnam was known as the first televised war. The Iran Green Movement and the Arab Spring were called the first Twitter Revolutions. And now the Russian invasion of Ukraine is being dubbed the first TikTok War. As The Atlantic and others have pointed out, it’s not, either literally or figuratively: TikTok is merely the latest social media platform to see its profitable expansion turn into a starring role in a crisis.
But as its #ukraine and #украина posts near a combined 60 billion views, TikTok should learn from the failings of other platforms over the past decade, failings that have exacerbated the horrors of war, facilitated misinformation, and impeded access to justice for human rights crimes. TikTok should take steps now to better support the creators sharing evidence and experiences, the viewers encountering them, and the people and institutions who rely on these videos for reliable information and human rights accountability.
First, TikTok can help people on the ground in Ukraine who want to galvanize action and be trusted as frontline witnesses. The company should provide targeted guidance directly to these vulnerable creators. This could include notifications or videos on their For You page that demonstrate (1) how to film in a way that is more verifiable and trustworthy to outside sources, (2) how to protect themselves and others in case a video shot in crisis becomes a tool of surveillance and outright targeting, and (3) how to share their footage without it getting taken down or made less visible as graphic content. TikTok should begin the process of incorporating emerging approaches (such as the C2PA standards) that allow creators to choose to show a video’s provenance. And it should offer easy ways, prominently available when recording, to protectively and not just aesthetically blur the faces of vulnerable people.
TikTok should also be investing in robust, localized, contextual content moderation and appeals routing for this conflict and the next crisis. Social media creators are at the mercy of capricious algorithms that cannot navigate the difference between harmful violent content and victims of war sharing their experiences. If a clip or account is taken down or suspended—often because it breaches a rule the user never knew about—it’s unlikely they’ll be able to access a rapid or transparent appeals process. This is particularly true if they live outside North America and Western Europe. The company should bolster its content moderation in Ukraine immediately.
The platform is poorly designed for accurate information but brilliantly designed for quick human engagement. The instant fame that the For You page can grant has brought the everyday life and dark humor of young Ukrainians like Valeria Shashenok (@valerissh) from the city of Chernihiv into people’s feeds globally. Human rights activists know that one of the best ways to engage people in meaningful witnessing, and to counter the natural impulse to look away, is to let them experience others’ realities in a personal, human way. Undoubtedly some of this insight into real people’s lives in Ukraine is moving people to a place of greater solidarity. Yet the more decontextualized the suffering of others is—and the For You page also encourages flitting between disparate stories—the more the suffering is experienced as spectacle. This risks a turn toward narcissistic self-validation or worse: trolling of people at their most vulnerable.
And that’s assuming that the content we’re viewing is shared in good faith. The ability to remix audio, along with TikTok’s intuitive ease in editing, combining, and reusing existing footage, among other factors, make the platform vulnerable to misinformation and disinformation. Unless spotted by an automated match-up with a known fake, labeled as state-affiliated media, or identified by a fact-checker as incorrect or by TikTok teams as being part of a coordinated influence campaign, many deceptive videos circulate without any guidance or tools to help viewers exercise basic media literacy.
TikTok should do more to ensure that it promptly identifies, reviews, and labels these fakes for viewers, and takes them down or removes them from recommendations. It should ramp up its capacity to fact-check on the platform and address how its business model and the resulting algorithm continue to promote deceptive videos with high engagement. We, the people viewing the content, also need better direct support. One of the first steps professional fact-checkers take to verify footage is a reverse image search, to see whether a photo or video existed before the date it claims to have been made, or comes from a different location or event than it claims. As the TikTok misinfo expert Abbie Richards has pointed out, TikTok doesn’t even indicate the date a video was posted when it appears in the For You feed. Like other platforms, TikTok also doesn’t offer an easy in-platform reverse image or video search, or in-feed indications that a video duplicates earlier footage. It’s past time to make it simpler to check whether a video in your feed comes from a different time and place than it claims, for example with an intuitive reverse image/video search or a simple one-click provenance trail for videos created in-platform.
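To make that concrete, here is a minimal sketch of the kind of duplicate check a reverse image/video search relies on. It is not TikTok’s tooling; the imagehash and Pillow libraries, the known_frames.json index, and the file names are assumptions for illustration. The idea is to compare a perceptual hash of a suspect frame against hashes of previously indexed footage, to see whether the clip likely existed before the date and place it claims.

```python
# Illustrative sketch only (not TikTok's tooling): flag a frame as a likely
# re-upload by comparing its perceptual hash against previously indexed footage.
# Assumes the Pillow and imagehash packages are installed, and that a
# hypothetical known_frames.json maps hash strings to where/when each frame
# first appeared.
import json

import imagehash
from PIL import Image


def load_known_hashes(path="known_frames.json"):
    """Load previously indexed frames: {hash_hex: {"date": ..., "source": ...}}."""
    with open(path) as f:
        return {imagehash.hex_to_hash(k): v for k, v in json.load(f).items()}


def find_earlier_appearance(frame_path, known, max_distance=6):
    """Return metadata for the closest indexed frame if it falls within the
    Hamming-distance threshold, i.e. the footage probably predates the claim."""
    candidate = imagehash.phash(Image.open(frame_path))
    best = min(known.items(), key=lambda kv: candidate - kv[0], default=None)
    if best is not None and (candidate - best[0]) <= max_distance:
        return best[1]
    return None


if __name__ == "__main__":
    known = load_known_hashes()
    earlier = find_earlier_appearance("suspicious_frame.jpg", known)
    if earlier:
        print(f"Likely first seen {earlier['date']} ({earlier['source']})")
    else:
        print("No earlier match found in the index.")
```

A platform-scale version would presumably index frames from every upload and surface the earliest match directly in the feed, which is the kind of in-feed “previous dupe” signal described above.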
No one visits the “Help Center.” Tools need to be accompanied by guidance in videos that appear on people’s For You page. Viewers need to build the media literacy muscles to make good judgments about the footage they are being exposed to. This includes sharing principles like SIFT as well as tips specific to the ways TikTok works, such as what to look for on TikTok’s extremely popular livestreams: Check the comments and look at the creator’s previous content, and on any video, always check whether the audio is original (as both Richards and Marcus Bösch, another TikTok misinfo expert, have suggested). Reliable news sources also need to be part of the feed, something TikTok appears to have increasingly begun doing.
TikTok also demonstrates a problem that arises when content recommender algorithms intersect with the good media literacy practice of “lateral reading.” Perversely, the more attention you pay to a suspicious video, and the more you return to it after looking for other sources, the more the TikTok algorithm feeds you similar content and prioritizes sharing that potentially false video with other people.
In the past decade, autonomous driving has gone from “maybe possible” to “definitely possible” to “inevitable” to “how did anyone ever think this wasn’t inevitable?” to “now commercially available.” In December 2018, Waymo, the company that emerged from Google’s self-driving-car project, officially started its commercial self-driving-car service in the suburbs of Phoenix. At first, the program was underwhelming: It was available only to a few hundred vetted riders, and human safety operators remained behind the wheel. But in the past four years, Waymo has slowly opened the program to members of the public and has begun to run robotaxis without drivers inside. The company has since brought its act to San Francisco. People are now paying for robot rides.
And it’s just a start. Waymo says it will expand the service’s capability and availability over time. Meanwhile, its onetime monopoly has evaporated. Every significant automaker is pursuing the tech, eager to rebrand and rebuild itself as a “mobility provider.” Amazon bought a self-driving-vehicle developer, Zoox. Autonomous trucking companies are raking in investor money. Tech giants like Apple, IBM, and Intel are looking to carve off their slice of the pie. Countless hungry startups have materialized to fill niches in a burgeoning ecosystem, focusing on laser sensors, mapping-data compression, service centers, and more.
This 21st-century gold rush is motivated by the intertwined forces of opportunity and survival instinct. By one account, driverless tech will add $7 trillion to the global economy and save hundreds of thousands of lives in the next few decades. Simultaneously, it could devastate the auto industry and its associated gas stations, drive-thrus, taxi drivers, and truckers. Some people will prosper. Most will benefit. Some will be left behind.
It’s worth remembering that when automobiles first started rumbling down manure-clogged streets, people called them horseless carriages. The moniker made sense: Here were vehicles that did what carriages did, minus the hooves. By the time “car” caught on as a term, the invention had become something entirely new. Over a century, it reshaped how humanity moves and thus how (and where and with whom) humanity lives. This cycle has restarted, and the term “driverless car” may soon seem as anachronistic as “horseless carriage.” We don’t know how cars that don’t need human chauffeurs will mold society, but we can be sure a similar gear shift is on the way.
The First Self-Driving Cars
Just over a decade ago, the idea of being chauffeured around by a string of zeros and ones was ludicrous to pretty much everybody who wasn’t at an abandoned Air Force base outside Los Angeles, watching a dozen driverless cars glide through real traffic. That event was the Urban Challenge, the third and final competition for autonomous vehicles put on by Darpa, the Pentagon’s skunkworks arm.
At the time, America’s military-industrial complex had already poured vast sums and years of research into making unmanned trucks. It had laid a foundation for this technology, but stalled when it came to making a vehicle that could drive at practical speeds, through all the hazards of the real world. So, Darpa figured, maybe someone else—someone outside the DOD’s standard roster of contractors, someone not tied to a list of detailed requirements but striving for a slightly crazy goal—could put it all together. It invited the whole world to build a vehicle that could drive across California’s Mojave Desert, and whoever’s robot did it the fastest would get a million-dollar prize.
The 2004 Grand Challenge was something of a mess. Each team grabbed some combination of the sensors and computers available at the time, wrote their own code, and welded their own hardware, looking for the right recipe that would take their vehicle across 142 miles of sand and dirt of the Mojave. The most successful vehicle went just seven miles. Most crashed, flipped, or rolled over within sight of the starting gate. But the race created a community of people—geeks, dreamers, and lots of students not yet jaded by commercial enterprise—who believed the robot drivers people had been craving for nearly forever were possible, and who were suddenly driven to make them real.
They came back for a follow-up race in 2005 and proved that making a car drive itself was indeed possible: Five vehicles finished the course. By the 2007 Urban Challenge, the vehicles were not just avoiding obstacles and sticking to trails but following traffic laws, merging, parking, even making safe, legal U-turns.
When Google launched its self-driving car project in 2009, it started by hiring a team of Darpa Challenge veterans. Within 18 months, they had built a system that could handle some of California’s toughest roads (including the famously winding block of San Francisco’s Lombard Street) with minimal human involvement. A few years later, Elon Musk announced Tesla would build a self-driving system into its cars. And the proliferation of ride-hailing services like Uber and Lyft weakened the link between being in a car and owning that car, helping set the stage for a day when actually driving that car falls away too. In 2015, Uber poached dozens of scientists from Carnegie Mellon University—a robotics and artificial intelligence powerhouse—to get its effort going.
A new video from human rights organization Amnesty International maps the locations of more than 15,000 cameras used by the New York Police Department, both for routine surveillance and in facial-recognition searches. A 3D model shows the 200-meter range of a camera, part of a sweeping dragnet capturing the unwitting movements of nearly half of the city’s residents, putting them at risk for misidentification. The group says it is the first to map the locations of that many cameras in the city.
Amnesty International and a team of volunteer researchers mapped cameras that can feed NYPD’s much criticized facial-recognition systems in three of the city’s five boroughs—Manhattan, Brooklyn, and the Bronx—finding 15,280 in total. Brooklyn is the most surveilled, with over 8,000 cameras.
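As a rough illustration of what that nominal 200-meter range implies on the ground, here is a small sketch, not Amnesty International’s methodology, that checks whether a given location falls within range of any mapped camera using the haversine formula. The coordinates below are hypothetical placeholders, not points from the actual dataset.

```python
# Rough illustration (not Amnesty International's methodology): check whether a
# point falls within the nominal 200-meter range of any mapped camera.
# The camera coordinates below are hypothetical placeholders.
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000
CAMERA_RANGE_M = 200


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))


def cameras_in_range(point, cameras, range_m=CAMERA_RANGE_M):
    """Return every camera whose nominal range covers the given (lat, lon) point."""
    return [c for c in cameras if haversine_m(*point, *c) <= range_m]


if __name__ == "__main__":
    cameras = [(40.6782, -73.9442), (40.6795, -73.9460), (40.6750, -73.9400)]
    here = (40.6785, -73.9450)
    print(f"{len(cameras_in_range(here, cameras))} camera(s) could capture this spot")
```

Run over thousands of mapped cameras rather than three, a check like this shows how quickly overlapping 200-meter circles blanket whole neighborhoods.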
A video by Amnesty International shows how New York City surveillance cameras work.
“You are never anonymous,” says Matt Mahmoudi, the AI researcher leading the project. The NYPD has used the cameras in almost 22,000 facial-recognition searches since 2017, according to NYPD documents obtained by the Surveillance Technology Oversight Project, a New York privacy group.
“Whether you’re attending a protest, walking to a particular neighborhood, or even just grocery shopping, your face can be tracked by facial-recognition technology using imagery from thousands of camera points across New York,” Mahmoudi says.
The cameras are often placed on top of buildings, on street lights, and at intersections. The city itself owns thousands of cameras; in addition, private businesses and homeowners often grant access to police.
Police can compare faces captured by these cameras to criminal databases to search for potential suspects. Earlier this year, the NYPD was required to disclose the details of its facial-recognition systems for public comment. But those disclosures didn’t include the number or location of cameras, or any details of how long data is retained or with whom data is shared.
The Amnesty International team found that the cameras are often clustered in majority nonwhite neighborhoods. NYC’s most surveilled neighborhood is East New York, Brooklyn, where the group found 577 cameras in less than 2 square miles. More than 90 percent of East New York’s residents are nonwhite, according to city data.
Facial-recognition systems often perform less accurately on darker-skinned people than lighter-skinned people. In 2016, Georgetown University researchers found that police departments across the country used facial recognition to identify nonwhite potential suspects more than their white counterparts.
In a statement, an NYPD spokesperson said the department never arrests anyone “solely on the basis of a facial-recognition match,” and only uses the tool to investigate “a suspect or suspects related to the investigation of a particular crime.”
“Where images are captured at or near a specific crime, comparison of the image of a suspect can be made against a database that includes only mug shots legally held in law enforcement records based on prior arrests,” the statement reads.
Amnesty International is releasing the map and accompanying videos as part of its #BantheScan campaign urging city officials to ban police use of the tool ahead of the city’s mayoral primary later this month. In May, Vice asked mayoral candidates if they’d support a ban on facial recognition. While most didn’t respond to the inquiry, candidate Dianne Morales told the publication she supported a ban, while candidates Shaun Donovan and Andrew Yang suggested auditing for disparate impact before deciding on any regulation.
When the European Union Commission released its regulatory proposal on artificial intelligence last month, much of the US policy community celebrated. Their praise was at least partly grounded in truth: The world’s most powerful democratic states haven’t sufficiently regulated AI and other emerging tech, and the document marked something of a step forward. Mostly, though, the proposal and responses to it underscore democracies’ confusing rhetoric on AI.
Over the past decade, high-level stated goals about regulating AI have often conflicted with the specifics of regulatory proposals, and what end-states should look like aren’t well-articulated in either case. Coherent and meaningful progress on developing internationally attractive democratic AI regulation, even as that may vary from country to country, begins with resolving the discourse’s many contradictions and unsubtle characterizations.
The EU Commission has touted its proposal as an AI regulation landmark. Executive vice president Margrethe Vestager said upon its release, “We think that this is urgent. We are the first on this planet to suggest this legal framework.” Thierry Breton, another commissioner, said the proposals “aim to strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”
This is certainly better than many national governments, especially the US, stagnating on rules of the road for the companies, government agencies, and other institutions that develop and deploy AI. AI is already widely used in the EU despite minimal oversight and accountability, whether for surveillance in Athens or operating buses in Málaga, Spain.
But to cast the EU’s regulation as “leading” simply because it’s first only masks the proposal’s many issues. This kind of rhetorical leap is one of the first challenges at hand with democratic AI strategy.
Of the many “specifics” in the 108-page proposal, its approach to regulating facial recognition is especially consequential. “The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement,” it reads, “is considered particularly intrusive in the rights and freedoms of the concerned persons,” as it can affect private life, “evoke a feeling of constant surveillance,” and “indirectly dissuade the exercise of the freedom of assembly and other fundamental rights.” At first glance, these words may signal alignment with the concerns of many activists and technology ethicists on the harms facial recognition can inflict on marginalized communities and grave mass-surveillance risks.
The commission then states, “The use of those systems for the purpose of law enforcement should therefore be prohibited.” However, it would allow exceptions in “three exhaustively listed and narrowly defined situations.” This is where the loopholes come into play.
The exceptions include situations that “involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localization, identification or prosecution of perpetrators or suspects of the criminal offenses.” This language, for all that the scenarios are described as “narrowly defined,” offers myriad justifications for law enforcement to deploy facial recognition as it wishes. Permitting its use in the “identification” of “perpetrators or suspects” of criminal offenses, for example, would allow precisely the kind of discriminatory uses of often racist and sexist facial-recognition algorithms that activists have long warned about.
The EU’s privacy watchdog, the European Data Protection Supervisor, quickly pounced on this. “A stricter approach is necessary given that remote biometric identification, where AI may contribute to unprecedented developments, presents extremely high risks of deep and non-democratic intrusion into individuals’ private lives,” the EDPS statement read. Sarah Chander from the nonprofit organization European Digital Rights described the proposal to The Verge as “a veneer of fundamental rights protection.” Others have noted how these exceptions mirror legislation in the US that on the surface appears to restrict facial recognition use but in fact has many broad carve-outs.
The greatest failure of the digital age is how far removed it is from nature. The microchip has no circadian rhythm, nor has the computer breath. The network is incorporeal. This may represent an existential risk for life on Earth. I believe we have to make a decision: Succumb to pushing more of our brain time and economy into unnatural online constructs, or build the digital anew in a way that is rooted in nature.
Nature is excessive, baroque. Its song is not ours alone. We share this planet with 8 million nonhuman species, yet we scarcely think of how they move through the world. There is no way for wild animals, trees, or other species to make themselves known to us online or to express their preferences to us. The only value most of them have is the sum value of their processed body parts. Those that are not eaten are forgotten, or perhaps never remembered: Only 2 million of them are recorded by science.
This decade will be the most destructive for nonhuman life in recorded history. It could also be the most regenerative. Nonhuman life-forms may soon gain some agency in the world. I propose the invention of an Interspecies Money. I’m not talking about Dogecoin, the meme of a Shiba Inu dog that’s become a $64 billion cryptocurrency (as of today). I’m talking about a digital currency that could allow several hundred billion dollars to be held by other beings simply on account of being themselves and no other and being alive in the world. It is possible they will be able to spend and invest this digital currency to improve their lives. And because the services they ask for—recognition, security, room to grow, nutrition, even veterinary care—will often be provided by poor communities in the tropics, human lives will also be improved.
Money needs to cross the species divide. Whoa, I know. King Julien with a credit card. Flower grenades into the meaning of life. Bear with me. If money, as some economic theorists suggest, is a form of memory, it is obvious that nonhuman species are unseen by the market economy because no money has ever been assigned to them. To ensure the survival of some species, it is necessary in some situations, usually when they are in direct competition with humans, to give them an economic advantage. An orchid, a baobab tree, a dugong, an orangutan, even at some future point the trace lines of a mycelial network—all of these should hold money.
We have the technology to start building Interspecies Money now. Indeed, it sometimes seems to me that the living system (Gaia or otherwise) is in fact producing the tools needed to protect complex life at precisely the moment they are most needed: fintech solutions in mobile money, digital wallets, and cryptocurrencies, which have shown that it is possible to handle micropayments accurately and cheaply; cloud computing firms, which have demonstrated that large amounts of data can be stored and processed, even in countries that favor data sovereignty; and hardware, which has become smarter and cheaper. Single-board computers (Raspberry Pis), camera traps, microphones, and other cheap sensors, energy solutions in solar arrays and batteries, internet connectivity, flying and ground robots, low-orbit satellite systems, and the pervasiveness of smartphones make it plausible to build a verification system in the wild that is trusted by the markets.
The first requirement of Interspecies Money is to provide a digital identity of an individual animal, or a herd, or a type (depending on size, population dynamics, and other characteristics of the organisms). This can be done through many methods. Birds may be identified by sound, insects by genetics, trees by probability. For most wild animals it will be done by sight. Some may be observed constantly, others only glimpsed. For instance, the digital identity of rare Hirola antelopes in Kenya and Somalia, of which there are only 500 in existence, will be minted from images gathered on mobile phones, camera traps, and drones by community rangers. The identity serves as a digital twin, which in legal and practical terms holds the money and releases it based on the services the life-form requires.
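To make the idea of a digital twin that “holds the money and releases it” concrete, here is a purely illustrative sketch of one way such an identity could be modeled. The class, field names, species, and amounts are my own placeholders rather than anything specified by the proposal.

```python
# Purely illustrative sketch of an Interspecies Money "digital twin": an
# identity that holds a balance and releases funds only for services the
# life-form requires. All names and amounts here are hypothetical placeholders.
from dataclasses import dataclass, field


@dataclass
class DigitalTwin:
    species: str
    identity_id: str              # e.g. minted from camera-trap or drone imagery
    balance: float                # funds held on behalf of the life-form
    approved_services: set = field(default_factory=set)
    ledger: list = field(default_factory=list)

    def release_payment(self, service: str, amount: float, provider: str) -> bool:
        """Pay a local provider, but only for a service this twin may purchase."""
        if service not in self.approved_services or amount > self.balance:
            return False
        self.balance -= amount
        self.ledger.append({"service": service, "amount": amount, "provider": provider})
        return True


if __name__ == "__main__":
    hirola = DigitalTwin(
        species="Hirola antelope",
        identity_id="hirola-0042",
        balance=1500.0,
        approved_services={"ranger_patrol", "veterinary_care", "habitat_restoration"},
    )
    if hirola.release_payment("ranger_patrol", 120.0, provider="community_rangers"):
        print(f"Paid rangers; remaining balance: {hirola.balance:.2f}")
```

In practice the balance would presumably live on whatever payment rail is chosen (mobile money, a cryptocurrency, or an ordinary account), and releases would be triggered by verified observations of the services rendered rather than a hard-coded list.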