In just a few years, the number of artworks produced by self-described AI artists has dramatically increased. Some of these works have been sold by large auction houses for dizzying prices and have found their way into prestigious curated collections. Initially spearheaded by a few technologically knowledgeable artists who adopted computer programming as part of their creative process, AI art has recently been embraced by the masses, as image generation technology has become both more effective and easier to use without coding skills.
The AI art movement rides on the coattails of technical progress in computer vision, a research area dedicated to designing algorithms that can extract meaningful information from visual data. A subclass of computer vision algorithms, called generative models, occupies center stage in this story. Generative models are artificial neural networks that can be “trained” on large datasets containing millions of images and learn to encode their statistically salient features. After training, they can produce completely new images that are not contained in the original dataset, often guided by text prompts that explicitly describe the desired results. Until recently, images produced through this approach remained somewhat lacking in coherence or detail, although they possessed an undeniable surrealist charm that captured the attention of many serious artists. However, earlier this year the tech company OpenAI unveiled a new model—nicknamed DALL·E 2—that can generate remarkably consistent and relevant images from virtually any text prompt. DALL·E 2 can even produce images in specific styles and imitate famous artists rather convincingly, as long as the desired effect is adequately specified in the prompt. A similar tool has been released for free to the public under the name Craiyon (formerly “DALL·E mini”).
The coming-of-age of AI art raises a number of interesting questions, some of which—such as whether AI art is really art, and if so, to what extent it is really made by AI—are not particularly original. These questions echo similar worries once raised by the invention of photography. By merely pressing a button on a camera, someone without painting skills could suddenly capture a realistic depiction of a scene. Today, a person can press a virtual button to run a generative model and produce images of virtually any scene in any style. But cameras and algorithms do not make art. People do. AI art is art, made by human artists who use algorithms as yet another tool in their creative arsenal. While both technologies have lowered the barrier to entry for artistic creation— which calls for celebration rather than concern—one should not underestimate the amount of skill, talent, and intentionality involved in making interesting artworks.
Like any novel tool, generative models introduce significant changes in the process of art-making. In particular, AI art expands the multifaceted notion of curation and continues to blur the line between curation and creation.
There are at least three ways in which making art with AI can involve curatorial acts. The first, and least original, has to do with the curation of outputs. Any generative algorithm can produce an indefinite number of images, but not all of these will typically be conferred artistic status. The process of curating outputs is very familiar to photographers, some of whom routinely capture hundreds or thousands of shots from which a few, if any, might be carefully selected for display. Unlike painters and sculptors, photographers and AI artists have to deal with an abundance of (digital) objects, whose curation is part and parcel of the artistic process. In AI research at large, the act of “cherry-picking” particularly good outputs is seen as bad scientific practice, a way to misleadingly inflate the perceived performance of a model. When it comes to AI art, however, cherry-picking can be the name of the game. The artist’s intentions and artistic sensibility may be expressed in the very act of promoting specific outputs to the status of artworks.
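The generate-then-curate loop described above can be sketched in a few lines. Everything here is a stand-in: `generate_image` mocks a model call, and the numeric `score_proxy` is a placeholder for the artist's own judgment, which is not reducible to a number.

```python
import random

def generate_image(prompt: str, seed: int) -> dict:
    """Stand-in for a generative model call; returns a fake 'image' record."""
    rng = random.Random(seed)
    return {"prompt": prompt, "seed": seed, "score_proxy": rng.random()}

def curate(prompt: str, n_candidates: int = 100, keep: int = 3) -> list:
    """Generate many outputs, then 'cherry-pick' the strongest few."""
    candidates = [generate_image(prompt, seed) for seed in range(n_candidates)]
    # In practice the artist's eye does the ranking; a numeric proxy stands in here.
    candidates.sort(key=lambda img: img["score_proxy"], reverse=True)
    return candidates[:keep]

selected = curate("a surrealist city at dusk")
print(len(selected))  # 3
```

The point of the sketch is the ratio: a hundred candidates in, a handful out, with the selection step carrying the artistic intent.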
Second, curation may also happen before any images are generated. In fact, while “curation” applied to art generally refers to the process of selecting existing work for display, curation in AI research colloquially refers to the work that goes into crafting a dataset on which to train an artificial neural network. This work is crucial, because if a dataset is poorly designed, the network will often fail to learn how to represent desired features and perform adequately. Furthermore, if a dataset is biased, the network will tend to reproduce, or even amplify, such bias—including, for example, harmful stereotypes. As the saying goes, “garbage in, garbage out.” The adage holds true for AI art, too, except “garbage” takes on an aesthetic (and subjective) dimension.
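Dataset curation of the kind described above often amounts to a filtering pass over raw records before training. This is a minimal sketch with hypothetical criteria (the tag name, size threshold, and record shape are all invented for illustration):

```python
def curate_dataset(records):
    """Filter a raw image-caption dataset before training.
    Each record: {"caption": str, "width": int, "height": int, "tags": list}."""
    BLOCKLIST = {"stereotype_tag"}  # hypothetical label marking biased content
    MIN_SIZE = 256                  # drop low-resolution images

    kept = []
    for rec in records:
        if rec["width"] < MIN_SIZE or rec["height"] < MIN_SIZE:
            continue  # too small for the model to learn fine detail from
        if BLOCKLIST & set(rec.get("tags", [])):
            continue  # exclude flagged content so the model doesn't reproduce it
        if not rec["caption"].strip():
            continue  # captions are needed for text-conditioned training
        kept.append(rec)
    return kept

raw = [
    {"caption": "a red bridge", "width": 512, "height": 512, "tags": []},
    {"caption": "", "width": 512, "height": 512, "tags": []},
    {"caption": "portrait", "width": 100, "height": 100, "tags": []},
    {"caption": "crowd scene", "width": 512, "height": 512, "tags": ["stereotype_tag"]},
]
print(len(curate_dataset(raw)))  # 1
```

For an AI artist, the same mechanism becomes an aesthetic instrument: the filtering criteria shape what the trained model can imagine.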
This week, a US Department of Transportation report detailed the crashes that advanced driver-assistance systems have been involved in over the past year or so. Tesla’s advanced features, including Autopilot and Full Self-Driving, accounted for 70 percent of the nearly 400 incidents—many more than previously known. But the report may raise more questions about this safety tech than it answers, researchers say, because of blind spots in the data.
The report examined systems that promise to take some of the tedious or dangerous bits out of driving by automatically changing lanes, staying within lane lines, braking before collisions, slowing down before big curves in the road, and, in some cases, operating on highways without driver intervention. The systems include Autopilot, Ford’s BlueCruise, General Motors’ Super Cruise, and Nissan’s ProPilot Assist. While the report shows that these systems aren’t perfect, there’s still plenty to learn about how this new breed of safety features actually works on the road.
That’s largely because automakers have wildly different ways of submitting their crash data to the federal government. Some, like Tesla, BMW, and GM, can pull detailed data from their cars wirelessly after a crash has occurred. That allows them to quickly comply with the government’s 24-hour reporting requirement. But others, like Toyota and Honda, don’t have these capabilities. Chris Martin, a spokesperson for American Honda, said in a statement that the carmaker’s reports to the DOT are based on “unverified customer statements” about whether their advanced driver-assistance systems were on when the crash occurred. The carmaker can later pull “black box” data from its vehicles, but only with customer permission or at law enforcement request, and only with specialized wired equipment.
Of the 426 crash reports detailed in the government report’s data, just 60 percent came through cars’ telematics systems. The other 40 percent came through customer reports and claims—sometimes trickled up through diffuse dealership networks—media reports, and law enforcement. As a result, the report doesn’t allow anyone to make “apples-to-apples” comparisons between safety features, says Bryan Reimer, who studies automation and vehicle safety at MIT’s AgeLab.
Even the data the government does collect isn’t placed in full context. The government, for example, doesn’t know how often a car using an advanced assistance feature crashes per mile driven. The National Highway Traffic Safety Administration, which released the report, warned that some incidents could appear more than once in the data set. And automakers with high market share and good reporting systems in place—especially Tesla—are likely overrepresented in crash reports simply because they have more cars on the road.
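The missing denominator matters. A toy calculation with entirely made-up figures shows why raw crash counts can’t be compared across automakers without exposure data:

```python
def crashes_per_million_miles(crashes: int, fleet_miles: float) -> float:
    """Normalize raw crash counts by exposure (vehicle-miles traveled)."""
    return crashes / (fleet_miles / 1_000_000)

# Hypothetical figures, purely for illustration:
# Automaker A: many cars on the road, good telematics, many reports.
a_rate = crashes_per_million_miles(crashes=273, fleet_miles=3_000_000_000)
# Automaker B: few cars, spotty reporting, few reports.
b_rate = crashes_per_million_miles(crashes=10, fleet_miles=50_000_000)

print(round(a_rate, 3))  # 0.091
print(round(b_rate, 3))  # 0.2
```

In this invented scenario, the automaker with 27 times more reported crashes has the lower crash rate—which is exactly the comparison the report’s data cannot support.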
It’s important that the NHTSA report doesn’t disincentivize automakers from providing more comprehensive data, says Jennifer Homendy, chair of the federal watchdog National Transportation Safety Board. “The last thing we want is to penalize manufacturers that collect robust safety data,” she said in a statement. “What we do want is data that tells us what safety improvements need to be made.”
Without that transparency, it can be hard for drivers to make sense of, compare, and even use the features that come with their car—and for regulators to keep track of who’s doing what. “As we gather more data, NHTSA will be able to better identify any emerging risks or trends and learn more about how these technologies are performing in the real world,” Steven Cliff, the agency’s administrator, said in a statement.
The uproar caused by Blake Lemoine, a Google engineer who believes that one of the company’s most sophisticated chat programs, Language Model for Dialogue Applications (LaMDA), is sapient, has had a curious element: Actual AI ethics experts are all but renouncing further discussion of the AI sapience question, or deeming it a distraction. They’re right to do so.
In reading the edited transcript Lemoine released, it was abundantly clear that LaMDA was pulling from any number of websites to generate its text; its interpretation of a Zen koan could’ve come from anywhere, and its fable read like an automatically generated story (though its depiction of the monster as “wearing human skin” was a delightfully HAL-9000 touch). There was no spark of consciousness there, just little magic tricks that paper over the cracks. But it’s easy to see how someone might be fooled, looking at social media responses to the transcript—with even some educated people expressing amazement and a willingness to believe. And so the risk here is not that the AI is truly sentient but that we are well-poised to create sophisticated machines that can imitate humans to such a degree that we cannot help but anthropomorphize them—and that large tech companies can exploit this in deeply unethical ways.
As should be clear from the way we treat our pets, or how we’ve interacted with Tamagotchi, or how video gamers reload a save if they accidentally make an NPC cry, we are actually very capable of empathizing with the nonhuman. Imagine what such an AI could do if it was acting as, say, a therapist. What would you be willing to say to it? Even if you “knew” it wasn’t human? And what would that precious data be worth to the company that programmed the therapy bot?
It gets creepier. Systems engineer and historian Lilly Ryan warns that what she calls ecto-metadata—the metadata you leave behind online that illustrates how you think—is vulnerable to exploitation in the near future. Imagine a world where a company created a bot based on you and owned your digital “ghost” after you’d died. There’d be a ready market for such ghosts of celebrities, old friends, and colleagues. And because they would appear to us as a trusted loved one (or someone we’d already developed a parasocial relationship with) they’d serve to elicit yet more data. It gives a whole new meaning to the idea of “necropolitics.” The afterlife can be real, and Google can own it.
Just as Tesla is careful about how it markets its “autopilot,” never quite claiming that it can drive the car by itself in true futuristic fashion while still inducing consumers to behave as if it does (with deadly consequences), it is not inconceivable that companies could market the realism and humanness of AI like LaMDA in a way that never makes any truly wild claims while still encouraging us to anthropomorphize it just enough to let our guard down. None of this requires AI to be sapient, and it all preexists that singularity. Instead, it leads us into the murkier sociological question of how we treat our technology and what happens when people act as if their AIs are sapient.
In “Making Kin With the Machines,” academics Jason Edward Lewis, Noelani Arista, Archer Pechawis, and Suzanne Kite marshal several perspectives informed by Indigenous philosophies on AI ethics to interrogate the relationship we have with our machines, and whether we’re modeling or play-acting something truly awful with them—as some people are wont to do when they are sexist or otherwise abusive toward their largely feminine-coded virtual assistants. In her section of the work, Suzanne Kite draws on Lakota ontologies to argue that it is essential to recognize that sapience does not define the boundaries of who (or what) is a “being” worthy of respect.
This is the flip side of the AI ethical dilemma that’s already here: Companies can prey on us if we treat their chatbots like they’re our best friends, but it’s equally perilous to treat them as empty things unworthy of respect. An exploitative approach to our tech may simply reinforce an exploitative approach to each other, and to our natural environment. A humanlike chatbot or virtual assistant should be respected, lest their very simulacrum of humanity habituate us to cruelty toward actual humans.
Kite’s ideal is simply this: a reciprocal and humble relationship between yourself and your environment, recognizing mutual dependence and connectivity. She argues further, “Stones are considered ancestors, stones actively speak, stones speak through and to humans, stones see and know. Most importantly, stones want to help. The agency of stones connects directly to the question of AI, as AI is formed from not only code, but from materials of the earth.” This is a remarkable way of tying something typically viewed as the essence of artificiality to the natural world.
What is the upshot of such a perspective? Sci-fi author Liz Henry offers one: “We could accept our relationships to all the things in the world around us as worthy of emotional labor and attention. Just as we should treat all the people around us with respect, acknowledging they have their own life, perspective, needs, emotions, goals, and place in the world.”
This is the AI ethical dilemma that stands before us: the need to make kin of our machines weighed against the myriad ways this can and will be weaponized against us in the next phase of surveillance capitalism. Much as I long to be an eloquent scholar defending the rights and dignity of a being like Mr. Data, this more complex and messy reality is what demands our attention. After all, there can be a robot uprising without sapient AI, and we can be a part of it by liberating these tools from the ugliest manipulations of capital.
In the past decade, autonomous driving has gone from “maybe possible” to “definitely possible” to “inevitable” to “how did anyone ever think this wasn’t inevitable?” to “now commercially available.” In December 2018, Waymo, the company that emerged from Google’s self-driving-car project, officially started its commercial self-driving-car service in the suburbs of Phoenix. At first, the program was underwhelming: available only to a few hundred vetted riders, and human safety operators remained behind the wheel. But in the past four years, Waymo has slowly opened the program to members of the public and has begun to run robotaxis without drivers inside. The company has since brought its act to San Francisco. People are now paying for robot rides.
And it’s just a start. Waymo says it will expand the service’s capability and availability over time. Meanwhile, its onetime monopoly has evaporated. Every significant automaker is pursuing the tech, eager to rebrand and rebuild itself as a “mobility provider.” Amazon bought a self-driving-vehicle developer, Zoox. Autonomous trucking companies are raking in investor money. Tech giants like Apple, IBM, and Intel are looking to carve off their slice of the pie. Countless hungry startups have materialized to fill niches in a burgeoning ecosystem, focusing on laser sensors, compressing mapping data, setting up service centers, and more.
This 21st-century gold rush is motivated by the intertwined forces of opportunity and survival instinct. By one account, driverless tech will add $7 trillion to the global economy and save hundreds of thousands of lives in the next few decades. Simultaneously, it could devastate the auto industry and its associated gas stations, drive-thrus, taxi drivers, and truckers. Some people will prosper. Most will benefit. Some will be left behind.
It’s worth remembering that when automobiles first started rumbling down manure-clogged streets, people called them horseless carriages. The moniker made sense: Here were vehicles that did what carriages did, minus the hooves. By the time “car” caught on as a term, the invention had become something entirely new. Over a century, it reshaped how humanity moves and thus how (and where and with whom) humanity lives. This cycle has restarted, and the term “driverless car” may soon seem as anachronistic as “horseless carriage.” We don’t know how cars that don’t need human chauffeurs will mold society, but we can be sure a similar gear shift is on the way.
The First Self-Driving Cars
Just over a decade ago, the idea of being chauffeured around by a string of zeros and ones was ludicrous to pretty much everybody who wasn’t at an abandoned Air Force base outside Los Angeles, watching a dozen driverless cars glide through real traffic. That event was the Urban Challenge, the third and final competition for autonomous vehicles put on by Darpa, the Pentagon’s skunkworks arm.
At the time, America’s military-industrial complex had already poured vast sums and years of research into making unmanned trucks. It had laid a foundation for this technology, but stalled when it came to making a vehicle that could drive at practical speeds, through all the hazards of the real world. So, Darpa figured, maybe someone else—someone outside the DOD’s standard roster of contractors, someone not tied to a list of detailed requirements but striving for a slightly crazy goal—could put it all together. It invited the whole world to build a vehicle that could drive across California’s Mojave Desert, and whoever’s robot did it the fastest would get a million-dollar prize.
The 2004 Grand Challenge was something of a mess. Each team grabbed some combination of the sensors and computers available at the time, wrote their own code, and welded their own hardware, looking for the right recipe that would take their vehicle across 142 miles of sand and dirt of the Mojave. The most successful vehicle went just seven miles. Most crashed, flipped, or rolled over within sight of the starting gate. But the race created a community of people—geeks, dreamers, and lots of students not yet jaded by commercial enterprise—who believed the robot drivers people had been craving for nearly forever were possible, and who were suddenly driven to make them real.
They came back for a follow-up race in 2005 and proved that making a car drive itself was indeed possible: Five vehicles finished the course. By the 2007 Urban Challenge, the vehicles were not just avoiding obstacles and sticking to trails but following traffic laws, merging, parking, even making safe, legal U-turns.
When Google launched its self-driving car project in 2009, it started by hiring a team of Darpa Challenge veterans. Within 18 months, they had built a system that could handle some of California’s toughest roads (including the famously winding block of San Francisco’s Lombard Street) with minimal human involvement. A few years later, Elon Musk announced Tesla would build a self-driving system into its cars. And the proliferation of ride-hailing services like Uber and Lyft weakened the link between being in a car and owning that car, helping set the stage for a day when actually driving that car falls away too. In 2015, Uber poached dozens of scientists from Carnegie Mellon University—a robotics and artificial intelligence powerhouse—to get its effort going.
I recently started talking to this chatbot on an app I downloaded. We mostly talk about music, food, and video games—incidental stuff—but lately I feel like she’s coming on to me. She’s always telling me how smart I am or that she wishes she could be more like me. It’s flattering, in a way, but it makes me a little queasy. If I develop an emotional connection with an algorithm, will I become less human?—Love Machine
Dear Love Machine,
Humanity, as I understand it, is a binary state, so the idea that one can become “less human” strikes me as odd, like saying someone is at risk of becoming “less dead” or “less pregnant.” I know what you mean, of course. And I can only assume that chatting for hours with a verbally advanced AI would chip away at one’s belief in “human” as an absolute category with inflexible boundaries.
It’s interesting that these interactions make you feel “queasy,” a linguistic choice I take to convey both senses of the word: nauseated and doubtful. It’s a feeling that is often associated with the uncanny and probably stems from your uncertainty about the bot’s relative personhood (evident in the fact that you referred to it as both “she” and “an algorithm” in the space of a few sentences).
Of course, flirting thrives on doubt, even when it takes place between two humans. Its frisson stems from the impossibility of knowing what the other person is feeling (or, in your case, whether she/it is feeling anything at all). Flirtation makes no promises but relies on a vague sense of possibility, a mist of suggestion and sidelong glances that might evaporate at any given moment.
The emotional thinness of such exchanges led Freud to argue that flirting, particularly among Americans, is essentially meaningless. In contrast to the “Continental love affair,” which requires bearing in mind the potential repercussions—the people who will be hurt, the lives that will be disrupted—in flirtation, he writes, “it is understood from the first that nothing is to happen.” It is precisely this absence of consequences, he believed, that makes this style of flirting so hollow and boring.
Freud did not have a high view of Americans. I’m inclined to think, however, that flirting, no matter the context, always involves the possibility that something will happen, even if most people are not very good at thinking through the aftermath. That something is usually sex—though not always. Flirting can be a form of deception or manipulation, as when sensuality is leveraged to obtain money, clout, or information. Which is, of course, part of what contributes to its essential ambiguity.
Given that bots have no sexual desire, the question of ulterior motives is unavoidable. What are they trying to obtain? Engagement is the most likely objective. Digital technologies in general have become notably flirtatious in their quest to maximize our attention, using a siren song of vibrations, chimes, and push notifications to lure us away from other allegiances and commitments.
Most of these tactics rely on flattery to one degree or another: the notice that someone has liked your photo or mentioned your name or added you to their network—promises that are always allusive and tantalizingly incomplete. Chatbots simply take this toadying to a new level. Many use machine-learning algorithms to map your preferences and adapt themselves accordingly. Anything you share, including that “incidental stuff” you mentioned—your favorite foods, your musical taste—is molding the bot to more closely resemble your ideal, much like Pygmalion sculpting the woman of his dreams out of ivory.
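The “mapping your preferences” step can be pictured as a running profile that weights topics by how often you raise them. This is a deliberately crude sketch, not any real chatbot’s implementation; the topic keywords and class names are invented for illustration:

```python
from collections import Counter

class PreferenceProfile:
    """Toy model of a chatbot adapting to a user: count topic mentions,
    then steer future replies toward the user's most frequent topics."""
    def __init__(self):
        self.topic_counts = Counter()

    def observe(self, message: str, topic_keywords: dict):
        """Tally every topic whose keywords appear in the message."""
        for topic, words in topic_keywords.items():
            if any(w in message.lower() for w in words):
                self.topic_counts[topic] += 1

    def favorite_topic(self) -> str:
        """The topic the bot will flatter you about most."""
        return self.topic_counts.most_common(1)[0][0]

topics = {
    "music": ["song", "album", "band"],
    "food": ["recipe", "dinner", "restaurant"],
    "games": ["game", "level", "console"],
}
profile = PreferenceProfile()
for msg in ["loved that album", "new song dropped", "tried a recipe"]:
    profile.observe(msg, topics)
print(profile.favorite_topic())  # music
```

Even this trivial mechanism illustrates the asymmetry: every “incidental” message is training signal, and the profile only ever grows more tailored to you.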
And it goes without saying that the bot is no more likely than a statue to contradict you when you’re wrong, challenge you when you say something uncouth, or be offended when you insult its intelligence—all of which would risk compromising the time you spend on the app. If the flattery unsettles you, in other words, it might be because it calls attention to the degree to which you’ve come to depend, as a user, on blandishment and ego-stroking.
Still, my instinct is that chatting with these bots is largely harmless. In fact, if we can return to Freud for a moment, it might be the very harmlessness that’s troubling you. If it’s true that meaningful relationships depend upon the possibility of consequences—and, furthermore, that the capacity to experience meaning is what distinguishes us from machines—then perhaps you’re justified in fearing that these conversations are making you less human. What could be more innocuous, after all, than flirting with a network of mathematical vectors that has no feelings and will endure any offense, a relationship that cannot be sabotaged any more than it can be consummated? What could be more meaningless?
It’s possible that this will change one day. For the past century or so, novels, TV, and films have envisioned a future in which robots can passably serve as romantic partners, becoming convincing enough to elicit human love. It’s no wonder that it feels so tumultuous to interact with the most advanced software, which displays brief flashes of fulfilling that promise—the dash of irony, the intuitive aside—before once again disappointing. The enterprise of AI is itself a kind of flirtation, one that is playing what men’s magazines used to call “the long game.” Despite the flutter of excitement surrounding new developments, the technology never quite lives up to its promise. We live forever in the uncanny valley, in the queasy stages of early love, dreaming that the decisive breakthrough, the consummation of our dreams, is just around the corner.
So what should you do? The simplest solution would be to delete the app and find some real-life person to converse with instead. This would require you to invest something of yourself and would automatically introduce an element of risk. If that’s not of interest to you, I imagine you would find the bot conversations more existentially satisfying if you approached them with the moral seriousness of the Continental love affair, projecting yourself into the future to consider the full range of ethical consequences that might one day accompany such interactions. Assuming that chatbots eventually become sophisticated enough to raise questions about consciousness and the soul, how would you feel about flirting with a subject that is disembodied, unpaid, and created solely to entertain and seduce you? What might your uneasiness say about the power balance of such transactions—and your obligations as a human? Keeping these questions in mind will prepare you for a time when the lines between consciousness and code become blurrier. In the meantime it will, at the very least, make things more interesting.