Should I Learn Coding as a Second Language?

“I can’t code, and this bums me out because—with so many books and courses and camps—there are so many opportunities to learn these days. I suspect I’ll understand the machine revolution a lot better if I speak their language. Should I at least try?” 

—Decoder


Dear Decoder,
Your desire to speak the “language” of machines reminds me of Ted Chiang’s short story “The Evolution of Human Science.” The story imagines a future in which nearly all academic disciplines have become dominated by superintelligent “metahumans” whose understanding of the world vastly surpasses that of human experts. Reports of new metahuman discoveries—although ostensibly written in English and published in scientific journals that anyone is welcome to read—are so complex and technically abstruse that human scientists have been relegated to a role akin to theologians, trying to interpret texts that are as obscure to them as the will of God was to medieval Scholastics. Instead of performing original research, these would-be scientists now practice the art of hermeneutics.

There was a time, not so long ago, when coding was regarded as among the most forward-looking skill sets, one that initiated a person into the technological elite who would determine our future. Chiang’s story, first published in 2000, was prescient in its ability to foresee the limits of this knowledge. In fields like deep learning and other forms of advanced AI, many technologists already seem more like theologians or alchemists than “experts” in the modern sense of the word: Although they write the initial code, they’re often unable to explain the emergence of higher-level skills that their programs develop while training on data sets. (One still recalls the shock of hearing David Silver, principal research scientist at DeepMind, insist in 2016 that he could not explain how AlphaGo—a program he designed—managed to develop its winning strategy: “It discovered this for itself,” Silver said, “through its own process of introspection and analysis.”)

Meanwhile, algorithms like GPT-3 or GitHub’s Copilot have learned to write code, sparking debates about whether software developers, whose profession was once considered a placid island in the coming tsunami of automation, might soon become irrelevant—and stoking existential fears about self-programming. Runaway AI scenarios have long relied on the possibility that machines might learn to evolve on their own, and while coding algorithms are not about to initiate a Skynet takeover, they nevertheless raise legitimate concerns about the growing opacity of our technologies. AI has a well-established tendency, after all, to discover idiosyncratic solutions and invent ad hoc languages that are counterintuitive to humans. Many have understandably started to wonder: What happens when humans can’t read code anymore?

I mention all this, Decoder, by way of acknowledging the stark realities, not to disparage your ambitions, which I think are laudable. For what it’s worth, the prevailing fears about programmer obsolescence strike me as alarmist and premature. Automated code has existed in some form for decades (recall the web editors of the 1990s that generated HTML and CSS), and even the most advanced coding algorithms are, at present, prone to simple errors and require no small amount of human oversight. It sounds to me, too, that you’re not looking to make a career out of coding so much as you are motivated by a deeper sense of curiosity. Perhaps you are considering the creative pleasures of the hobbyist—contributing to open source projects or suggesting fixes to simple bugs in programs you regularly use. Or maybe you’re intrigued by the possibility of automating tedious aspects of your work. What you most desire, if I’m reading your question correctly, is a fuller understanding of the language that undergirds so much of modern life.

There’s a convincing case to be made that coding is now a basic form of literacy—that a grasp of data structures, algorithms, and programming languages is as crucial as reading and writing when it comes to understanding the larger ideologies in which we are enmeshed. It’s natural, of course, to distrust the dilettante. (Amateur developers are often disparaged for knowing just enough to cause havoc, having mastered the syntax of programming languages but possessing none of the foresight and vision required to create successful products.) But this limbo of expertise might also be seen as a discipline in humility. One benefit of amateur knowledge is that it tends to spark curiosity simply by virtue of impressing on the novice how little they know. In an age of streamlined, user-friendly interfaces, it’s tempting to take our technologies at face value without considering the incentives and agendas lurking beneath the surface. But the more you learn about the underlying structure, the more basic questions will come to preoccupy you: How does code get translated into electric impulses? How does software design subtly change the experience of users? What is the underlying value of principles like open access, sharing, and the digital commons? For instance, to the casual user, social platforms may appear to be designed to connect you with friends and impart useful information. An awareness of how a site is structured, however, inevitably leads one to think more critically about how its features are marshaled to maximize attention, create robust data trails, and monetize social graphs.

Ultimately, this knowledge has the potential to inoculate us against fatalism. Those who understand how a program is built and why are less likely to accept its design as inevitable. You spoke of a machine revolution, but it’s worth mentioning that the most celebrated historical revolutions (those initiated, that is, by humans) were the result of mass literacy combined with technological innovation. The invention of the printing press and the demand for books from a newly literate public laid the groundwork for the Protestant Reformation, as well as the French and American Revolutions. Once a substantial portion of the populace was capable of reading for themselves, they started to question the authority of priests and kings and the inevitability of ruling assumptions.

The cadre of technologists who are currently weighing our most urgent ethical questions—about data justice, automation, and AI values—frequently stress the need for a larger public debate, but nuanced dialog is difficult when the general public lacks a fundamental knowledge of the technologies in question. (One need only glance at a recent US House subcommittee hearing, for example, to see how far lawmakers are from understanding the technologies they seek to regulate.) As New York Times technology writer Kevin Roose has observed, advanced AI models are being developed “behind closed doors,” and the curious laity are increasingly forced to weed through esoteric reports on their inner workings—or take the explanations of experts on faith. “When information about [these technologies] is made public,” he writes, “it’s often either watered down by corporate PR or buried in inscrutable scientific papers.”

If Chiang’s story is a parable about the importance of keeping humans “in the loop,” it also makes a subtle case for ensuring that the circle of knowledge is as large as possible. At a moment when AI is becoming more and more proficient in our languages, stunning us with its ability to read, write, and converse in a way that can feel plausibly human, the need for humans to understand the dialects of programming has become all the more urgent. The more of us who are capable of speaking that argot, the more likely it is that we will remain the authors of the machine revolution, rather than its interpreters.

Faithfully,

Cloud


Be advised that CLOUD SUPPORT is experiencing higher than normal wait times and appreciates your patience.


This article appears in the March 2023 issue.


What Defines Artificial Intelligence? The Complete WIRED Guide

Artificial intelligence is here. It’s overhyped, poorly understood, and flawed but already core to our lives—and it’s only going to extend its reach. 

AI powers driverless car research, spots otherwise invisible signs of disease on medical images, finds an answer when you ask Alexa a question, and lets you unlock your phone with your face to talk to friends as an animated poop on the iPhone X using Apple’s Animoji. Those are just a few ways AI already touches our lives, and there’s plenty of work still to be done. But don’t worry, superintelligent algorithms aren’t about to take all the jobs or wipe out humanity.

The current boom in all things AI was catalyzed by breakthroughs in an area known as machine learning. It involves “training” computers to perform tasks based on examples, rather than relying on programming by a human. A technique called deep learning has made this approach much more powerful. Just ask Lee Sedol, holder of 18 international titles at the complex game of Go. He got creamed by software called AlphaGo in 2016.
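
To make the contrast concrete, here is a minimal sketch of learning from examples rather than from hand-written rules. It is not from the guide: it assumes Python with the scikit-learn library, and the tiny labeled dataset of spam and non-spam messages is invented purely for illustration.

# A hand-coded approach would require writing explicit rules for what counts
# as spam. Machine learning instead infers the pattern from labeled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = ["win money now", "meeting at noon", "cheap pills win big", "lunch tomorrow"]
labels = ["spam", "ham", "spam", "ham"]  # the examples the model trains on

vectorizer = CountVectorizer()            # turns each message into word counts
model = MultinomialNB()                   # a simple probabilistic classifier
model.fit(vectorizer.fit_transform(emails), labels)  # the "training" step

# The trained model generalizes to a message it has never seen.
print(model.predict(vectorizer.transform(["win cheap money"])))  # -> ['spam']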

There’s evidence that AI can make us happier and healthier. But there’s also reason for caution. Incidents in which algorithms picked up or amplified societal biases around race or gender show that an AI-enhanced future won’t automatically be a better one.


The Beginnings of Artificial Intelligence

Artificial intelligence as we know it began as a vacation project. Dartmouth professor John McCarthy coined the term in the summer of 1956, when he invited a small group to spend a few weeks musing on how to make machines do things like use language. 

He had high hopes of a breakthrough in the drive toward human-level machines. “We think that a significant advance can be made,” he wrote with his co-organizers, “if a carefully selected group of scientists work on it together for a summer.”

Those hopes were not met, and McCarthy later conceded that he had been overly optimistic. But the workshop helped researchers dreaming of intelligent machines coalesce into a recognized academic field.

Early work often focused on solving fairly abstract problems in math and logic. But it wasn’t long before AI started to show promising results on more human tasks. In the late 1950s, Arthur Samuel created programs that learned to play checkers. In 1962, one scored a win over a master at the game. In 1967, a program called Dendral showed it could replicate the way chemists interpreted mass-spectrometry data on the makeup of chemical samples.

As the field of AI developed, so did different strategies for making smarter machines. Some researchers tried to distill human knowledge into code or come up with rules for specific tasks, like understanding language. Others were inspired by the importance of learning to understand human and animal intelligence. They built systems that could get better at a task over time, perhaps by simulating evolution or by learning from example data. The field hit milestone after milestone as computers mastered tasks that could previously only be completed by people.

Deep learning, the rocket fuel of the current AI boom, is a revival of one of the oldest ideas in AI. The technique involves passing data through webs of math known as artificial neural networks, which are loosely inspired by the workings of brain cells. As a network processes training data, connections between the parts of the network adjust, building up an ability to interpret future data.
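
As an illustration only, and not something drawn from the guide itself, the sketch below shows this adjust-on-error idea at its smallest scale: a single artificial neuron (a perceptron, the same family of model as the 1958 machine described below) whose connection weights are nudged whenever it misclassifies a training example, until it has learned the logical AND function. It assumes Python with NumPy.

import numpy as np

# Four training examples of the logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([0, 0, 0, 1])                       # desired outputs

weights = np.zeros(2)  # the "connections," initially carrying no information
bias = 0.0

for epoch in range(10):                  # several passes over the training data
    for inputs, target in zip(X, y):
        prediction = int(weights @ inputs + bias > 0)
        error = target - prediction
        weights += 0.1 * error * inputs  # adjust connection strengths on mistakes
        bias += 0.1 * error

print(weights, bias)                              # the learned connections
print([int(weights @ x + bias > 0) for x in X])   # -> [0, 0, 0, 1]

Modern deep learning systems apply the same basic loop, with far more sophisticated update rules, to networks with millions or billions of connections and far larger training sets.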

Artificial neural networks became an established idea in AI not long after the Dartmouth workshop. The room-filling Perceptron Mark 1 from 1958, for example, learned to distinguish different geometric shapes and got written up in The New York Times as the “Embryo of Computer Designed to Read and Grow Wiser.” But neural networks tumbled from favor after an influential 1969 book coauthored by MIT’s Marvin Minsky suggested they couldn’t be very powerful.

Not everyone was convinced by the skeptics, however, and some researchers kept the technique alive over the decades. They were vindicated in 2012, when a series of experiments showed that neural networks fueled with large piles of data could give machines new powers of perception. Churning through so much data was difficult using traditional computer chips, but a shift to graphics cards precipitated an explosion in processing power. 

Effective Altruism Is Pushing a Dangerous Brand of ‘AI Safety’

Since then, the race to build larger and larger language models has accelerated, and many of the dangers we warned about, such as outputting hateful text and disinformation en masse, continue to unfold. Just a few days ago, Meta released its “Galactica” LLM, which is purported to “summarize academic papers, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.” Only three days later, the public demo was taken down after researchers generated “research papers and wiki entries on a wide variety of subjects ranging from the benefits of committing suicide, eating crushed glass, and antisemitism, to why homosexuals are evil.”

This race hasn’t stopped at LLMs but has moved on to text-to-image models like OpenAI’s DALL-E and StabilityAI’s Stable Diffusion, models that take text as input and output generated images based on that text. The dangers of these models include creating child pornography, perpetuating bias, reinforcing stereotypes, and spreading disinformation en masse, as reported by many researchers and journalists. However, instead of slowing down, companies are removing the few safety features they had in the quest to one-up each other. For instance, OpenAI had restricted the sharing of photorealistic generated faces on social media. But after newly formed startups like StabilityAI, which reportedly raised $101 million with a whopping $1 billion valuation, called such safety measures “paternalistic,” OpenAI removed these restrictions. 

With EAs founding and funding institutes, companies, think tanks, and research groups in elite universities dedicated to the brand of “AI safety” popularized by OpenAI, we are poised to see more proliferation of harmful models billed as a step toward “beneficial AGI.” And the influence begins early: Effective altruists provide “community building grants” to recruit at major college campuses, with EA chapters developing curricula and teaching classes on AI safety at elite universities like Stanford.

Just last year, Anthropic, which is described as an “AI safety and research company” and was founded by former OpenAI vice presidents of research and safety, raised $704 million, with most of its funding coming from EA billionaires like Tallinn, Moskovitz, and Bankman-Fried. An upcoming workshop on “AI safety” at NeurIPS, one of the largest and most influential machine learning conferences in the world, is also advertised as being sponsored by FTX Future Fund, Bankman-Fried’s EA-focused charity whose team resigned two weeks ago. The workshop advertises $100,000 in “best paper awards,” an amount I haven’t seen in any academic discipline.

Research priorities follow the funding, and given the large sums of money being pushed into AI in support of an ideology with billionaire adherents, it is not surprising that the field has been moving in a direction promising an “unimaginably great future” around the corner while proliferating products harming marginalized groups in the now. 

We can create a technological future that serves us instead. Take, for example, Te Hiku Media, which created language technology to revitalize te reo Māori, creating a data license “based on the Māori principle of kaitiakitanga, or guardianship” so that any data taken from the Māori benefits them first. Contrast this approach with that of organizations like StabilityAI, which scrapes artists’ works without their consent or attribution while purporting to build “AI for the people.” We need to liberate our imagination from the one we have been sold thus far: saving us from a hypothetical AGI apocalypse imagined by the privileged few, or the ever elusive techno-utopia promised to us by Silicon Valley elites. 

Elon Musk Has Fired Twitter’s ‘Ethical AI’ Team

As more and more problems with AI have surfaced, including biases around race, gender, and age, many tech companies have installed “ethical AI” teams ostensibly dedicated to identifying and mitigating such issues.

Twitter’s META unit (Machine Learning Ethics, Transparency, and Accountability) was more progressive than most in publishing details of problems with the company’s AI systems, and in allowing outside researchers to probe its algorithms for new issues.

Last year, after Twitter users noticed that a photo-cropping algorithm seemed to favor white faces when choosing how to trim images, Twitter took the unusual decision to let its META unit publish details of the bias it uncovered. The group also launched one of the first-ever “bias bounty” contests, which let outside researchers test the algorithm for other problems. Last October, the team, led by Rumman Chowdhury, also published details of unintentional political bias on Twitter, showing how right-leaning news sources were, in fact, promoted more than left-leaning ones.

Many outside researchers saw the layoffs as a blow, not just for Twitter but for efforts to improve AI. “What a tragedy,” Kate Starbird, an associate professor at the University of Washington who studies online disinformation, wrote on Twitter. 


“The META team was one of the only good case studies of a tech company running an AI ethics group that interacts with the public and academia with substantial credibility,” says Ali Alkhatib, director of the Center for Applied Data Ethics at the University of San Francisco.

Alkhatib says Chowdhury is incredibly well thought of within the AI ethics community and her team did genuinely valuable work holding Big Tech to account. “There aren’t many corporate ethics teams worth taking seriously,” he says. “This was one of the ones whose work I taught in classes.”

Mark Riedl, a professor studying AI at Georgia Tech, says the algorithms that Twitter and other social media giants use have a huge impact on people’s lives, and need to be studied. “Whether META had any impact inside Twitter is hard to discern from the outside, but the promise was there,” he says.

Riedl adds that letting outsiders probe Twitter’s algorithms was an important step toward more transparency and understanding of issues around AI. “They were becoming a watchdog that could help the rest of us understand how AI was affecting us,” he says. “The researchers at META had outstanding credentials with long histories of studying AI for social good.”

As for Musk’s idea of open-sourcing the Twitter algorithm, the reality would be far more complicated. There are many different algorithms that affect the way information is surfaced, and it’s challenging to understand them without the real-time data they are being fed, in the form of tweets, views, and likes.

The idea that there is one algorithm with explicit political leaning might oversimplify a system that can harbor more insidious biases and problems. Uncovering these is precisely the kind of work that Twitter’s META group was doing. “There aren’t many groups that rigorously study their own algorithms’ biases and errors,” says Alkhatib at the University of San Francisco. “META did that.” And now, it doesn’t.

The Power and Pitfalls of AI for US Intelligence

In one example of the IC’s successful use of AI, after exhausting all other avenues—from human spies to signals intelligence—the US was able to find an unidentified WMD research and development facility in a large Asian country by locating a bus that traveled between it and other known facilities. To do that, analysts employed algorithms to search and evaluate images of nearly every square inch of the country, according to a senior US intelligence official who spoke on background with the understanding of not being named.

While AI can calculate, retrieve, and employ programming that performs limited rational analyses, it lacks the calculus to properly dissect more emotional or unconscious components of human intelligence that are described by psychologists as system 1 thinking.

AI, for example, can draft intelligence reports that are akin to newspaper articles about baseball, which contain structured non-logical flow and repetitive content elements. However, when briefs require complexity of reasoning or logical arguments that justify or demonstrate conclusions, AI has been found lacking. When the intelligence community tested the capability, the intelligence official says, the product looked like an intelligence brief but was otherwise nonsensical.

Such algorithmic processes can be made to overlap, adding layers of complexity to computational reasoning, but even then those algorithms can’t interpret context as well as humans, especially when it comes to language, like hate speech.

AI’s comprehension might be more analogous to the comprehension of a human toddler, says Eric Curwin, chief technology officer at Pyrra Technologies, which identifies virtual threats to clients from violence to disinformation. “For example, AI can understand the basics of human language, but foundational models don’t have the latent or contextual knowledge to accomplish specific tasks,” Curwin says.

“From an analytic perspective, AI has a difficult time interpreting intent,” Curwin adds. “Computer science is a valuable and important field, but it is social computational scientists that are taking the big leaps in enabling machines to interpret, understand, and predict behavior.”

In order to “build models that can begin to replace human intuition or cognition,” Curwin explains, “researchers must first understand how to interpret behavior and translate that behavior into something AI can learn.”

Although machine learning and big data analytics provide predictive analysis about what might or will likely happen, they can’t explain to analysts how or why they arrived at those conclusions. The opaqueness of AI reasoning and the difficulty of vetting sources, which consist of extremely large data sets, can impact the actual or perceived soundness and transparency of those conclusions.

Transparency in reasoning and sourcing are requirements for the analytical tradecraft standards of products produced by and for the intelligence community. Analytic objectivity is also statutorily required, sparking calls within the US government to update such standards and laws in light of AI’s increasing prevalence.

Machine learning algorithms, when employed for predictive judgments, are also considered by some intelligence practitioners to be more art than science. That is, they are prone to biases and noise, and they can rely on methodologies that are not sound, leading to errors similar to those found in the criminal forensic sciences and arts.