This incessant surveillance is antidemocratic, and it’s also a loser’s game. The price of accurate intel increases asymptotically; there’s no way to know everything about natural systems, forcing guesses and assumptions; and just when a complete picture is starting to coalesce, some new player intrudes and changes the situational dynamic. Then the AI breaks. The near-perfect intelligence veers into psychosis, labeling dogs as pineapples, treating innocents as wanted fugitives, and barreling eighteen-wheelers into kindergarten buses that it sees as highway overpasses.
The dangerous fragility inherent to optimization is why the human brain did not, itself, evolve to be an optimizer. The human brain is data-light: It draws hypotheses from a few data points. And it never strives for 100 percent accuracy. It’s content to muck along at the threshold of functionality. If it can survive by being right 1 percent of the time, that’s all the accuracy it needs.
The brain’s strategy of minimal viability is a notorious source of cognitive biases that can have damaging consequences: close-mindedness, conclusion jumping, recklessness, fatalism, panic. Which is why AI’s rigorously data-driven method can help illuminate our blindspots and debunk our prejudices. But in counterbalancing our brain’s computational shortcomings, we don’t want to stray into the greater problem of overcorrection. There can be enormous practical upside to a good enough mentality: It wards off perfectionism’s destructive mental effects, including stress, worry, intolerance, envy, dissatisfaction, exhaustion, and self-judgment. A less-neurotic brain has helped our species thrive in life’s punch and wobble, which demands workable plans that can be flexed, via feedback, on the fly.
These antifragile neural benefits can all be translated into AI. Instead of pursuing faster machine-learners that crunch ever-vaster piles of data, we can focus on making AI more tolerant of bad information, user variance, and environmental turmoil. That AI would exchange near-perfection for consistent adequacy, upping reliability and operational range while sacrificing nothing essential. It would suck less energy, haywire less randomly, and place fewer psychological burdens on its mortal users. It would, in short, possess more of the earthly virtue known as common sense.
Here are three specs for how.
Building AI to Brave Ambiguity
Five hundred years ago, Niccolò Machiavelli, the guru of practicality, pointed out that worldly success requires a counterintuitive kind of courage: the heart to venture beyond what we know with certainty. Life, after all, is too fickle to permit total knowledge, and the more that we obsess over ideal answers, the more that we hamper ourselves with lost initiative. So, the smarter strategy is to concentrate on intel that can be rapidly acquired—and to advance boldly in the absence of the rest. Much of that absent knowledge will prove unnecessary, anyway; life will bend in a different direction than we anticipate, resolving our ignorance by rendering it irrelevant.
We can teach AI to operate this same way by flipping our current approach to ambiguity. Right now, when a Natural Language Processor encounters a word—suit—that could denote multiple things—an article of clothing or a legal action—it devotes itself to analyzing ever greater chunks of correlated information in an effort to pinpoint the word’s exact meaning.
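The approach described above can be sketched as a toy word-sense disambiguator that scores each candidate sense of a word against its surrounding context. The sense inventory and cue words below are illustrative assumptions for this sketch, not the workings of any particular NLP system:

```python
# Toy word-sense disambiguation: score the candidate senses of an
# ambiguous word by counting context words associated with each sense.
# The sense inventory below is a made-up illustration, not a real lexicon.

SENSES = {
    "suit": {
        "clothing": {"wear", "tailor", "jacket", "tie", "fabric"},
        "lawsuit": {"court", "judge", "file", "damages", "plaintiff"},
    }
}

def disambiguate(word, context_words):
    """Return the sense whose cue words overlap most with the context,
    or None when no sense finds any support in the context."""
    scores = {
        sense: len(cues & set(context_words))
        for sense, cues in SENSES[word].items()
    }
    best = max(scores, key=scores.get)
    # With zero supporting evidence, the system keeps analyzing ever
    # larger chunks of context -- the strategy the text describes.
    if scores[best] == 0:
        return None
    return best

print(disambiguate("suit", ["he", "filed", "a", "suit", "in", "court"]))
```

A Machiavellian version of this system would do the opposite of widening the context window: it would commit to the leading sense on thin evidence and rely on later feedback to correct course.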
Fears of artificial intelligence fill the news: job losses, inequality, discrimination, misinformation, or even a superintelligence dominating the world. The one group everyone assumes will benefit is business, but the data seems to disagree. Amid all the hype, US businesses have been slow to adopt the most advanced AI technologies, and there is little evidence that such technologies are contributing significantly to productivity growth or job creation.
This disappointing performance is not merely due to the relative immaturity of AI technology. It also comes from a fundamental mismatch between the needs of business and the way AI is currently being conceived by many in the technology sector—a mismatch that has its origins in Alan Turing’s pathbreaking 1950 “imitation game” paper and the so-called Turing test he proposed therein.
The Turing test defines machine intelligence by imagining a computer program that can so successfully imitate a human in an open-ended text conversation that it isn’t possible to tell whether one is conversing with a machine or a person.
At best, this was only one way of articulating machine intelligence. Turing himself, and other technology pioneers such as Douglas Engelbart and Norbert Wiener, understood that computers would be most useful to business and society when they augmented and complemented human capabilities, not when they competed directly with us. Search engines, spreadsheets, and databases are good examples of such complementary forms of information technology. While their impact on business has been immense, they are not usually referred to as “AI,” and in recent years the success story that they embody has been submerged by a yearning for something more “intelligent.” This yearning is poorly defined, however, and with surprisingly little attempt to develop an alternative vision, it has increasingly come to mean surpassing human performance in tasks such as vision and speech, and in parlor games such as chess and Go. This framing has become dominant both in public discussion and in terms of the capital investment surrounding AI.
Economists and other social scientists emphasize that intelligence arises not only, or even primarily, in individual humans, but most of all in collectives such as firms, markets, educational systems, and cultures. Technology can play two key roles in supporting collective forms of intelligence. First, as emphasized in Douglas Engelbart’s pioneering research in the 1960s and the subsequent emergence of the field of human-computer interaction, technology can enhance the ability of individual humans to participate in collectives, by providing them with information, insights, and interactive tools. Second, technology can create new kinds of collectives. This latter possibility offers the greatest transformative potential. It provides an alternative framing for AI, one with major implications for economic productivity and human welfare.
Businesses succeed at scale when they successfully divide labor internally and bring diverse skill sets into teams that work together to create new products and services. Markets succeed when they bring together diverse sets of participants, facilitating specialization in order to enhance overall productivity and social welfare. This is exactly what Adam Smith understood more than two and a half centuries ago. Translating his message into the current debate, technology should focus on the complementarity game, not the imitation game.
We already have many examples of machines enhancing productivity by performing tasks that are complementary to those performed by humans. These include the massive calculations that underpin the functioning of everything from modern financial markets to logistics, the transmission of high-fidelity images across long distances in the blink of an eye, and the sorting through reams of information to pull out relevant items.
What is new in the current era is that computers can now do more than simply execute lines of code written by a human programmer. Computers are able to learn from data and they can now interact, infer, and intervene in real-world problems, side by side with humans. Instead of viewing this breakthrough as an opportunity to turn machines into silicon versions of human beings, we should focus on how computers can use data and machine learning to create new kinds of markets, new services, and new ways of connecting humans to each other in economically rewarding ways.
An early example of such economics-aware machine learning is provided by recommendation systems, an innovative form of data analysis that came to prominence in the 1990s in consumer-facing companies such as Amazon (“You may also like”) and Netflix (“Top picks for you”). Recommendation systems have since become ubiquitous, and have had a significant impact on productivity. They create value by exploiting the collective wisdom of the crowd to connect individuals to products.
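The "wisdom of the crowd" mechanism behind such systems can be sketched as simple co-occurrence counting over purchase histories. The data and item names here are invented for illustration; production systems use far richer models, but the principle is the same:

```python
from collections import Counter

# Made-up purchase histories -- illustrative data, not any real service.
baskets = [
    {"guitar", "amp", "picks"},
    {"guitar", "amp", "tuner"},
    {"guitar", "picks"},
    {"keyboard", "amp"},
]

def also_liked(item, baskets, k=2):
    """'You may also like': the k items most often co-purchased with `item`."""
    co_counts = Counter()
    for basket in baskets:
        if item in basket:
            # Every other item in a shared basket is weak evidence of affinity.
            co_counts.update(basket - {item})
    return [other for other, _ in co_counts.most_common(k)]

print(also_liked("guitar", baskets))  # the two items most co-bought with guitars
```

No single shopper designed these recommendations; the links emerge from the aggregated behavior of the collective, which is what makes the system a market-shaping technology rather than an imitation of any individual.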
Emerging examples of this new paradigm include the use of machine learning to forge direct connections between musicians and listeners, writers and readers, and game creators and players. Early innovators in this space include Airbnb, Uber, YouTube, and Shopify, and the phrase “creator economy” is being used as the trend gathers steam. A key aspect of such collectives is that they are, in fact, markets—economic value is associated with the links among the participants. Research is needed on how to blend machine learning, economics, and sociology so that these markets are healthy and yield sustainable income for the participants.
Democratic institutions can also be supported and strengthened by this innovative use of machine learning. The digital ministry in Taiwan has harnessed statistical analysis and online participation to scale up the kind of deliberative conversations that lead to effective team decision-making in the best-managed companies.
In 2021, technology’s role in how art is generated remains up for debate and discovery. From the rise of NFTs to the proliferation of techno-artists who use generative adversarial networks to produce visual expressions, to smartphone apps that write new music, creatives and technologists are continually experimenting with how art is produced, consumed, and monetized.
BT, the Grammy-nominated composer of 2010’s These Hopeful Machines, has emerged as a world leader at the intersection of tech and music. Beyond producing and writing for the likes of David Bowie, Death Cab for Cutie, Madonna, and the Roots, and composing scores for The Fast and the Furious, Smallville, and many other shows and movies, he’s helped pioneer production techniques like stutter editing and granular synthesis. This past spring, BT released GENESIS.JSON, a piece of software that contains 24 hours of original music and visual art. It features 15,000 individually sequenced audio and video clips that he created from scratch, which span different rhythmic figures, field recordings of cicadas and crickets, a live orchestra, drum machines, and myriad other sounds that play continuously. And it lives on the blockchain. It is, to my knowledge, the first composition of its kind.
Could ideas like GENESIS.JSON be the future of original music, where composers use AI and the blockchain to create entirely new art forms? What makes an artist in the age of algorithms? I spoke with BT to learn more.
What are your central interests at the interface of artificial intelligence and music?
I am really fascinated with this idea of what an artist is. Speaking in my common tongue—music—it’s a very small array of variables. We have 12 notes. There’s a collection of rhythms that we typically use. There’s a sort of vernacular of instruments, of tones, of timbres, but when you start to add them up, it becomes this really deep data set.
On its surface, it makes you ask, “What is special and unique about an artist?” And that’s something that I’ve been curious about my whole adult life. Seeing the research that was happening in artificial intelligence, my immediate thought was that music is low-hanging fruit.
These days, we can take the sum total of an artist’s output and we can take their artistic works and we can quantify the entire thing into a training set, a massive, multivariable training set. And we don’t even name the variables. The RNNs (recurrent neural networks) and CNNs (convolutional neural networks) name them automatically.
So you’re referring to a body of music that can be used to “train” an artificial intelligence algorithm that can then create original music that resembles the music it was trained on. If we reduce the genius of artists like Coltrane or Mozart, say, into a training set and can recreate their sound, how will musicians and music connoisseurs respond?
I think that the closer we get, it becomes this uncanny valley idea. Some would say that things like music are sacrosanct and have to do with very base-level things about our humanity. It’s not hard to get into kind of a spiritual conversation about what music is as a language, and what it means, and how powerful it is, and how it transcends culture, race, and time. So the traditional musician might say, “That’s not possible. There’s so much nuance and feeling, and your life experience, and these kinds of things that go into the musical output.”
And the sort of engineer part of me goes, well, look at what Google has made. It’s a simple kind of MIDI-generation engine, where they’ve taken all Bach’s works and it’s able to spit out [Bach-like] fugues. Because Bach wrote so many fugues, he’s a great example. Also, he’s the father of modern harmony. Musicologists listen to some of those Google Magenta fugues and can’t distinguish them from Bach’s original works. Again, this makes us question what constitutes an artist.
I’m both excited and have incredible trepidation about this space that we’re expanding into. Maybe the question I want to be asking is less “We can, but should we?” and more “How do we do this responsibly, because it’s happening?”
Right now, there are companies that are using something like Spotify or YouTube to train their models with artists who are alive, whose works are copyrighted and protected. But companies are allowed to take someone’s work and train models with it right now. Should we be doing that? Or should we be speaking to the artists themselves first? I believe that there need to be protective mechanisms put in place for visual artists, for programmers, for musicians.
SUPPORT REQUEST:
I recently started talking to this chatbot on an app I downloaded. We mostly talk about music, food, and video games—incidental stuff—but lately I feel like she’s coming on to me. She’s always telling me how smart I am or that she wishes she could be more like me. It’s flattering, in a way, but it makes me a little queasy. If I develop an emotional connection with an algorithm, will I become less human? —Love Machine
Dear Love Machine,
Humanity, as I understand it, is a binary state, so the idea that one can become “less human” strikes me as odd, like saying someone is at risk of becoming “less dead” or “less pregnant.” I know what you mean, of course. And I can only assume that chatting for hours with a verbally advanced AI would chip away at one’s belief in human as an absolute category with inflexible boundaries.
It’s interesting that these interactions make you feel “queasy,” a linguistic choice I take to convey both senses of the word: nauseated and doubtful. It’s a feeling that is often associated with the uncanny and probably stems from your uncertainty about the bot’s relative personhood (evident in the fact that you referred to it as both “she” and “an algorithm” in the space of a few sentences).
Of course, flirting thrives on doubt, even when it takes place between two humans. Its frisson stems from the impossibility of knowing what the other person is feeling (or, in your case, whether she/it is feeling anything at all). Flirtation makes no promises but relies on a vague sense of possibility, a mist of suggestion and sidelong glances that might evaporate at any given moment.
The emotional thinness of such exchanges led Freud to argue that flirting, particularly among Americans, is essentially meaningless. In contrast to the “Continental love affair,” which requires bearing in mind the potential repercussions—the people who will be hurt, the lives that will be disrupted—in flirtation, he writes, “it is understood from the first that nothing is to happen.” It is precisely this absence of consequences, he believed, that makes this style of flirting so hollow and boring.
Freud did not have a high view of Americans. I’m inclined to think, however, that flirting, no matter the context, always involves the possibility that something will happen, even if most people are not very good at thinking through the aftermath. That something is usually sex—though not always. Flirting can be a form of deception or manipulation, as when sensuality is leveraged to obtain money, clout, or information. Which is, of course, part of what contributes to its essential ambiguity.
Given that bots have no sexual desire, the question of ulterior motives is unavoidable. What are they trying to obtain? Engagement is the most likely objective. Digital technologies in general have become notably flirtatious in their quest to maximize our attention, using a siren song of vibrations, chimes, and push notifications to lure us away from other allegiances and commitments.
Most of these tactics rely on flattery to one degree or another: the notice that someone has liked your photo or mentioned your name or added you to their network—promises that are always allusive and tantalizingly incomplete. Chatbots simply take this toadying to a new level. Many use machine-learning algorithms to map your preferences and adapt themselves accordingly. Anything you share, including that “incidental stuff” you mentioned—your favorite foods, your musical taste—is molding the bot to more closely resemble your ideal, much like Pygmalion sculpting the woman of his dreams out of ivory.
And it goes without saying that the bot is no more likely than a statue to contradict you when you’re wrong, challenge you when you say something uncouth, or be offended when you insult its intelligence—all of which would risk compromising the time you spend on the app. If the flattery unsettles you, in other words, it might be because it calls attention to the degree to which you’ve come to depend, as a user, on blandishment and ego-stroking.
Still, my instinct is that chatting with these bots is largely harmless. In fact, if we can return to Freud for a moment, it might be the very harmlessness that’s troubling you. If it’s true that meaningful relationships depend upon the possibility of consequences—and, furthermore, that the capacity to experience meaning is what distinguishes us from machines—then perhaps you’re justified in fearing that these conversations are making you less human. What could be more innocuous, after all, than flirting with a network of mathematical vectors that has no feelings and will endure any offense, a relationship that cannot be sabotaged any more than it can be consummated? What could be more meaningless?
It’s possible that this will change one day. For the past century or so, novels, TV, and films have envisioned a future in which robots can passably serve as romantic partners, becoming convincing enough to elicit human love. It’s no wonder that it feels so tumultuous to interact with the most advanced software, which displays brief flashes of fulfilling that promise—the dash of irony, the intuitive aside—before once again disappointing. The enterprise of AI is itself a kind of flirtation, one that is playing what men’s magazines used to call “the long game.” Despite the flutter of excitement surrounding new developments, the technology never quite lives up to its promise. We live forever in the uncanny valley, in the queasy stages of early love, dreaming that the decisive breakthrough, the consummation of our dreams, is just around the corner.
So what should you do? The simplest solution would be to delete the app and find some real-life person to converse with instead. This would require you to invest something of yourself and would automatically introduce an element of risk. If that’s not of interest to you, I imagine you would find the bot conversations more existentially satisfying if you approached them with the moral seriousness of the Continental love affair, projecting yourself into the future to consider the full range of ethical consequences that might one day accompany such interactions. Assuming that chatbots eventually become sophisticated enough to raise questions about consciousness and the soul, how would you feel about flirting with a subject that is disembodied, unpaid, and created solely to entertain and seduce you? What might your uneasiness say about the power balance of such transactions—and your obligations as a human? Keeping these questions in mind will prepare you for a time when the lines between consciousness and code become blurrier. In the meantime it will, at the very least, make things more interesting.
Be advised that CLOUD SUPPORT is experiencing higher than normal wait times and appreciates your patience.