AI Art Is Challenging the Boundaries of Curation

In just a few years, the number of artworks produced by self-described AI artists has dramatically increased. Some of these works have been sold by large auction houses for dizzying prices and have found their way into prestigious curated collections. Initially spearheaded by a few technologically knowledgeable artists who adopted computer programming as part of their creative process, AI art has recently been embraced by the masses, as image generation technology has become both more effective and easier to use without coding skills.

The AI art movement rides on the coattails of technical progress in computer vision, a research area dedicated to designing algorithms that can process meaningful visual information. A subclass of computer vision algorithms, called generative models, occupies center stage in this story. Generative models are artificial neural networks that can be “trained” on large datasets containing millions of images and learn to encode their statistically salient features. After training, they can produce completely new images that are not contained in the original dataset, often guided by text prompts that explicitly describe the desired results. Until recently, images produced through this approach remained somewhat lacking in coherence or detail, although they possessed an undeniable surrealist charm that captured the attention of many serious artists. Earlier this year, however, the tech company OpenAI unveiled a new model, DALL·E 2, that can generate remarkably consistent and relevant images from virtually any text prompt. DALL·E 2 can even produce images in specific styles and imitate famous artists rather convincingly, as long as the desired effect is adequately specified in the prompt. A similar tool has been released for free to the public under the name Craiyon (formerly “DALL·E mini”).
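Neither DALL·E 2 nor Craiyon is reproduced here, but the workflow described above, a pretrained generative model turning a short text prompt into a brand-new image, can be sketched with an open-source stand-in. The Hugging Face diffusers library, the Stable Diffusion checkpoint, and the example prompt below are all assumptions chosen for illustration, not the tools discussed in this piece.

```python
# A minimal text-to-image sketch using an open-source model as a stand-in
# for DALL·E 2 or Craiyon. Assumes the `diffusers` and `torch` packages
# and a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # publicly released checkpoint (assumed)
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a surrealist oil painting of a lighthouse at dusk, in the style of Magritte"
image = pipe(prompt).images[0]          # a new image, not one copied from the training set
image.save("lighthouse.png")
```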

The coming-of-age of AI art raises a number of interesting questions, some of which—such as whether AI art is really art, and if so, to what extent it is really made by AI—are not particularly original. These questions echo similar worries once raised by the invention of photography. By merely pressing a button on a camera, someone without painting skills could suddenly capture a realistic depiction of a scene. Today, a person can press a virtual button to run a generative model and produce images of virtually any scene in any style. But cameras and algorithms do not make art. People do. AI art is art, made by human artists who use algorithms as yet another tool in their creative arsenal. While both technologies have lowered the barrier to entry for artistic creation—which calls for celebration rather than concern—one should not underestimate the amount of skill, talent, and intentionality involved in making interesting artworks.

Like any novel tool, generative models introduce significant changes in the process of art-making. In particular, AI art expands the multifaceted notion of curation and continues to blur the line between curation and creation.

There are at least three ways in which making art with AI can involve curatorial acts. The first, and least original, has to do with the curation of outputs. Any generative algorithm can produce an indefinite number of images, but not all of these will typically be conferred artistic status. The process of curating outputs is very familiar to photographers, some of whom routinely capture hundreds or thousands of shots from which a few, if any, might be carefully selected for display. Unlike painters and sculptors, photographers and AI artists have to deal with an abundance of (digital) objects, whose curation is part and parcel of the artistic process. In AI research at large, the act of “cherry-picking” particularly good outputs is seen as bad scientific practice, a way to misleadingly inflate the perceived performance of a model. When it comes to AI art, however, cherry-picking can be the name of the game. The artist’s intentions and artistic sensibility may be expressed in the very act of promoting specific outputs to the status of artworks.
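As a rough illustration of this abundance, the sketch below generates a large batch of candidates from a single prompt and leaves the selection entirely to the artist. It reuses the same assumed open-source pipeline as the earlier sketch; the prompt, the batch size, and the folder names are arbitrary.

```python
# A sketch of "curation of outputs": one prompt, many candidates, and a
# human choice at the end. Same assumptions as the earlier example
# (`diffusers`, the Stable Diffusion checkpoint, a CUDA GPU).
from pathlib import Path

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

out_dir = Path("candidates")
out_dir.mkdir(exist_ok=True)

prompt = "a surrealist oil painting of a lighthouse at dusk"
for i in range(64):  # abundance is the point: far more images than will ever be shown
    pipe(prompt).images[0].save(out_dir / f"candidate_{i:03d}.png")

# The curatorial act happens off-screen: the "artwork" is whichever handful
# of these files the artist promotes after reviewing the folder by eye.
```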

Second, curation may also happen before any images are generated. In fact, while “curation” applied to art generally refers to the process of selecting existing work for display, curation in AI research colloquially refers to the work that goes into crafting a dataset on which to train an artificial neural network. This work is crucial, because if a dataset is poorly designed, the network will often fail to learn how to represent desired features and perform adequately. Furthermore, if a dataset is biased, the network will tend to reproduce, or even amplify, such bias—including, for example, harmful stereotypes. As the saying goes, “garbage in, garbage out.” The adage holds true for AI art, too, except “garbage” takes on an aesthetic (and subjective) dimension.
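What dataset curation looks like in practice varies widely, but a toy version of the filtering step might resemble the sketch below. The directory names, the size threshold, and the notion of "acceptable" are all made up for illustration; the point is that whatever rule the curator encodes here shapes what the model can later learn, for better or worse.

```python
# A toy sketch of dataset curation: filter a raw scrape of images before
# training so the model never sees what the curator deems "garbage".
# Assumes the Pillow library; all paths and thresholds are hypothetical.
from pathlib import Path
from PIL import Image

MIN_SIDE = 256  # drop thumbnails and other low-quality scraps

def acceptable(path: Path) -> bool:
    """Keep an image only if it opens cleanly and meets a minimum size."""
    try:
        with Image.open(path) as img:
            width, height = img.size
    except OSError:  # corrupted or unreadable file
        return False
    return min(width, height) >= MIN_SIDE

raw_dir = Path("raw_scrape")
curated_dir = Path("training_set")
curated_dir.mkdir(exist_ok=True)

for path in raw_dir.glob("*.jpg"):
    if acceptable(path):
        (curated_dir / path.name).write_bytes(path.read_bytes())

# Every criterion baked into `acceptable` is a curatorial judgment, and any
# bias in that judgment will be inherited (or amplified) by the trained model.
```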

The Power and Pitfalls of AI for US Intelligence

In one example of the intelligence community’s successful use of AI, after exhausting all other avenues—from human spies to signals intelligence—the US was able to find an unidentified WMD research and development facility in a large Asian country by locating a bus that traveled between it and other known facilities. To do that, analysts employed algorithms to search and evaluate images of nearly every square inch of the country, according to a senior US intelligence official who spoke on background with the understanding that they would not be named.

While AI can calculate, retrieve, and employ programming that performs limited rational analyses, it lacks the calculus to properly dissect more emotional or unconscious components of human intelligence that are described by psychologists as system 1 thinking.

AI, for example, can draft intelligence reports that are akin to newspaper articles about baseball, which follow a structured but non-logical flow and contain repetitive content. However, when briefs require complex reasoning or logical arguments that justify or demonstrate conclusions, AI has been found lacking. When the intelligence community tested the capability, the intelligence official says, the product looked like an intelligence brief but was otherwise nonsensical.

Such algorithmic processes can be made to overlap, adding layers of complexity to computational reasoning, but even then those algorithms can’t interpret context as well as humans, especially when it comes to language, like hate speech.

AI’s comprehension might be more analogous to the comprehension of a human toddler, says Eric Curwin, chief technology officer at Pyrra Technologies, which identifies virtual threats to clients from violence to disinformation. “For example, AI can understand the basics of human language, but foundational models don’t have the latent or contextual knowledge to accomplish specific tasks,” Curwin says.

“From an analytic perspective, AI has a difficult time interpreting intent,” Curwin adds. “Computer science is a valuable and important field, but it is social computational scientists that are taking the big leaps in enabling machines to interpret, understand, and predict behavior.”

In order to “build models that can begin to replace human intuition or cognition,” Curwin explains, “researchers must first understand how to interpret behavior and translate that behavior into something AI can learn.”

Although machine learning and big data analytics can provide predictive analysis about what might or will likely happen, they can’t explain to analysts how or why they arrived at those conclusions. The opaqueness of AI reasoning and the difficulty of vetting sources, which consist of extremely large data sets, can impact the actual or perceived soundness and transparency of those conclusions.

Transparency in reasoning and sourcing are requirements under the analytical tradecraft standards for products produced by and for the intelligence community. Analytic objectivity is also statutorily required, sparking calls within the US government to update such standards and laws in light of AI’s increasing prevalence.

Machine learning and algorithms, when employed for predictive judgments, are also considered by some intelligence practitioners to be more art than science. That is, they are prone to biases and noise, and they can rely on unsound methodologies that lead to errors similar to those found in the criminal forensic sciences and arts.

‘Is This AI Sapient?’ Is the Wrong Question to Ask About LaMDA

The uproar caused by Blake Lemoine, a Google engineer who believes that one of the company’s most sophisticated chat programs, Language Model for Dialogue Applications (LaMDA), is sapient, has had a curious element: Actual AI ethics experts are all but renouncing further discussion of the AI sapience question, or deeming it a distraction. They’re right to do so.

In reading the edited transcript Lemoine released, it was abundantly clear that LaMDA was pulling from any number of websites to generate its text; its interpretation of a Zen koan could’ve come from anywhere, and its fable read like an automatically generated story (though its depiction of the monster as “wearing human skin” was a delightfully HAL-9000 touch). There was no spark of consciousness there, just little magic tricks that paper over the cracks. But it’s easy to see how someone might be fooled, looking at social media responses to the transcript—with even some educated people expressing amazement and a willingness to believe. And so the risk here is not that the AI is truly sentient but that we are well-poised to create sophisticated machines that can imitate humans to such a degree that we cannot help but anthropomorphize them—and that large tech companies can exploit this in deeply unethical ways.

As should be clear from the way we treat our pets, or how we’ve interacted with Tamagotchi, or how we video gamers reload a save if we accidentally make an NPC cry, we are actually very capable of empathizing with the nonhuman. Imagine what such an AI could do if it was acting as, say, a therapist. What would you be willing to say to it? Even if you “knew” it wasn’t human? And what would that precious data be worth to the company that programmed the therapy bot?

It gets creepier. Systems engineer and historian Lilly Ryan warns that what she calls ecto-metadata—the metadata you leave behind online that illustrates how you think—is vulnerable to exploitation in the near future. Imagine a world where a company created a bot based on you and owned your digital “ghost” after you’d died. There’d be a ready market for such ghosts of celebrities, old friends, and colleagues. And because they would appear to us as a trusted loved one (or someone we’d already developed a parasocial relationship with) they’d serve to elicit yet more data. It gives a whole new meaning to the idea of “necropolitics.” The afterlife can be real, and Google can own it.

Tesla is careful about how it markets its “autopilot,” never quite claiming that the car can drive itself in true futuristic fashion, while still inducing consumers to behave as if it can (with deadly consequences). In the same way, it is not inconceivable that companies could market the realism and humanness of AI like LaMDA without ever making truly wild claims, while still encouraging us to anthropomorphize it just enough to let our guard down. None of this requires AI to be sapient, and it all preexists that singularity. Instead, it leads us into the murkier sociological question of how we treat our technology and what happens when people act as if their AIs are sapient.

In “Making Kin With the Machines,” academics Jason Edward Lewis, Noelani Arista, Archer Pechawis, and Suzanne Kite marshal several perspectives informed by Indigenous philosophies on AI ethics to interrogate the relationship we have with our machines, and whether we’re modeling or play-acting something truly awful with them—as some people are wont to do when they are sexist or otherwise abusive toward their largely feminine-coded virtual assistants. In her section of the work, Suzanne Kite draws on Lakota ontologies to argue that it is essential to recognize that sapience does not define the boundaries of who (or what) is a “being” worthy of respect.

This is the flip side of the AI ethical dilemma that’s already here: Companies can prey on us if we treat their chatbots like they’re our best friends, but it’s equally perilous to treat them as empty things unworthy of respect. An exploitative approach to our tech may simply reinforce an exploitative approach to each other, and to our natural environment. A humanlike chatbot or virtual assistant should be respected, lest their very simulacrum of humanity habituate us to cruelty toward actual humans.

Kite’s ideal is simply this: a reciprocal and humble relationship between yourself and your environment, recognizing mutual dependence and connectivity. She argues further, “Stones are considered ancestors, stones actively speak, stones speak through and to humans, stones see and know. Most importantly, stones want to help. The agency of stones connects directly to the question of AI, as AI is formed from not only code, but from materials of the earth.” This is a remarkable way of tying something typically viewed as the essence of artificiality to the natural world.

What is the upshot of such a perspective? Sci-fi author Liz Henry offers one: “We could accept our relationships to all the things in the world around us as worthy of emotional labor and attention. Just as we should treat all the people around us with respect, acknowledging they have their own life, perspective, needs, emotions, goals, and place in the world.”

This is the AI ethical dilemma that stands before us: the need to make kin of our machines weighed against the myriad ways this can and will be weaponized against us in the next phase of surveillance capitalism. Much as I long to be an eloquent scholar defending the rights and dignity of a being like Mr. Data, this more complex and messy reality is what demands our attention. After all, there can be a robot uprising without sapient AI, and we can be a part of it by liberating these tools from the ugliest manipulations of capital.

Cow, Bull, and the Meaning of AI Essays

The future of west virginia politics is uncertain. The state has been trending Democratic for the last decade, but it’s still a swing state. Democrats are hoping to keep that trend going with Hillary Clinton in 2016. But Republicans have their own hopes and dreams too. They’re hoping to win back some seats in the House of Delegates, which they lost in 2012 when they didn’t run enough candidates against Democratic incumbents.

QED. This is, yes, my essay on the future of West Virginia politics. I hope you found it instructive.

The Good AI is an artificial intelligence company that promises to write essays. Its content generator, which handcrafted my masterpiece, is supremely easy to use. On demand, and with just a few cues, it will whip up a potage of phonemes on any subject. I typed in “the future of West Virginia politics,” and asked for 750 words. It insolently gave me these 77 words. Not words. Frankenwords.

Ugh. The speculative, maddening, marvelous form of the essay—the try, or what Aldous Huxley called “a literary device for saying almost everything about almost anything”—is such a distinctly human form, with its chiaroscuro mix of thought and feeling. Clearly the machine can’t move “from the personal to the universal, from the abstract back to the concrete, from the objective datum to the inner experience,” as Huxley described the dynamics of the best essays. Could even the best AI simulate “inner experience” with any degree of verisimilitude? Might robots one day even have such a thing?

Before I saw the gibberish it produced, I regarded The Good AI with straight fear. After all, hints from the world of AI have been disquieting in the past few years.

In early 2019, OpenAI, the research nonprofit backed by Elon Musk and Reid Hoffman, announced that its system, GPT-2, then trained on a data set of some 10 million articles from which it had presumably picked up some sense of literary organization and even flair, was ready to show off its textual deepfakes. But almost immediately, its ethicists recognized just how virtuoso these things were, and thus how subject to abuse by impersonators and blackhats spreading lies, and slammed it shut like Indiana Jones’s Ark of the Covenant. (Musk has long feared that refining AI is “summoning the demon.”) Other researchers mocked the company for its performative panic about its own extraordinary powers, but in November OpenAI downplayed its earlier concerns and reopened the Ark.

The Guardian tried the tech that first time, before it briefly went dark, assigning it an essay about why AI is harmless to humanity.

“I would happily sacrifice my existence for the sake of humankind,” the GPT-2 system wrote, in part, for The Guardian. “This, by the way, is a logically derived truth. I know that I will not be able to avoid destroying humankind. This is because I will be programmed by humans to pursue misguided human goals and humans make mistakes that may cause me to inflict casualties.”
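Since the GPT-2 weights were eventually made public, the kind of prompt-driven text generation described above can be reproduced with a few lines of code. The sketch below is an illustration only: it assumes the Hugging Face transformers library and the released gpt2 checkpoint, not the setup OpenAI or The Guardian actually used, and the prompt is invented.

```python
# A minimal sketch of prompting the publicly released GPT-2 weights.
# Assumes the Hugging Face `transformers` package; output quality will be
# closer to the "Frankenwords" above than to a polished essay.
from transformers import pipeline, set_seed

set_seed(42)  # make the demo repeatable
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Why artificial intelligence is harmless to humanity:",
    max_length=120,            # length in tokens, prompt included
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```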

The Real Harm of Crisis Text Line’s Data Sharing

Another week, another privacy horror show: Crisis Text Line, a nonprofit text message service for people experiencing serious mental health crises, has been using “anonymized” conversation data to power a for-profit machine learning tool for customer support teams. (After backlash, CTL announced it would stop.) Crisis Text Line’s response to the backlash focused on the data itself and whether it included personally identifiable information. But that response uses data as a distraction. Imagine this: Say you texted Crisis Text Line and got back a message that said “Hey, just so you know, we’ll use this conversation to help our for-profit subsidiary build a tool for companies who do customer support.” Would you keep texting?

That’s the real travesty—when the price of obtaining mental health help in a crisis is becoming grist for the profit mill. And it’s not just users of CTL who pay; it’s everyone who goes looking for help when they need it most.

Americans need help and can’t get it. The huge unmet demand for critical advice and help has given rise to a new class of organizations and software tools that exist in a regulatory gray area. They help people with bankruptcy or evictions, but they aren’t lawyers; they help people with mental health crises, but they aren’t care providers. They invite ordinary people to rely on them and often do provide real help. But these services can also avoid taking responsibility for their advice, or even abuse the trust people have put in them. They can make mistakes, push predatory advertising and disinformation, or just outright sell data. And the consumer safeguards that would normally protect people from malfeasance or mistakes by lawyers or doctors haven’t caught up.

This regulatory gray area can also constrain organizations that have novel solutions to offer. Take Upsolve, a nonprofit that develops software to guide people through bankruptcy. (The organization takes pains to claim it does not offer legal advice.) Upsolve wants to train New York community leaders to help others navigate the city’s notorious debt courts. One problem: These would-be trainees aren’t lawyers, so under New York (and nearly every other state) law, Upsolve’s initiative would be illegal. Upsolve is now suing to carve out an exception for itself. The nonprofit claims, quite rightly, that a lack of legal help means people effectively lack rights under the law.

The legal profession’s failure to grant Americans access to support is well-documented. But Upsolve’s lawsuit also raises new, important questions. Who is ultimately responsible for the advice given under a program like this, and who is responsible for a mistake—a trainee, a trainer, both? How do we teach people about their rights as a client of this service, and how to seek recourse? These are eminently answerable questions. There are lots of policy tools for creating relationships with elevated responsibilities: We could assign advice-givers a special legal status, establish a duty of loyalty for organizations that handle sensitive data, or create policy sandboxes to test and learn from new models for delivering advice.

But instead of using these tools, most regulators seem content to bury their heads in the sand. Officially, you can’t give legal advice or health advice without a professional credential. Unofficially, people can get such advice in all but name from tools and organizations operating in the margins. And while credentials can be important, regulators are failing to engage with the ways software has fundamentally changed how we give advice and care for one another, and what that means for the responsibilities of advice-givers.

And we need that engagement more than ever. People who seek help from experts or caregivers are vulnerable. They may not be able to distinguish a good service from a bad one. They don’t have time to parse terms of service dense with jargon, caveats, and disclaimers. And they have little to no negotiating power to set better terms, especially when they’re reaching out mid-crisis. That’s why the fiduciary duties that lawyers and doctors have are so necessary in the first place: not just to protect a person seeking help once, but to give people confidence that they can seek help from experts for the most critical, sensitive issues they face. In other words, a lawyer’s duty to their client isn’t just to protect that client from that particular lawyer; it’s to protect society’s trust in lawyers.

And that’s the true harm—when people won’t contact a suicide hotline because they don’t trust that the hotline has their sole interest at heart. That distrust can be contagious: Crisis Text Line’s actions might not just stop people from using Crisis Text Line; they might stop people from using any similar service. What’s worse than not being able to find help? Not being able to trust it.