The Myth of ‘Open Source’ AI

ChatGPT made it possible for anyone to play with powerful artificial intelligence, but the inner workings of the world-famous chatbot remain a closely guarded secret.

In recent months, however, efforts to make AI more “open” seem to have gained momentum. In May, someone leaked a model from Meta, called Llama, which gave outsiders access to its underlying code as well as the “weights” that determine how it behaves. Then, this July, Meta chose to make an even more powerful model, called Llama 2, available for anyone to download, modify, and reuse. Meta’s models have since become an extremely popular foundation for many companies, researchers, and hobbyists building tools and applications with ChatGPT-like capabilities.

“We have a broad range of supporters around the world who believe in our open approach to today’s AI … researchers committed to doing research with the model, and people across tech, academia, and policy who see the benefits of Llama and an open platform as we do,” Meta said when announcing Llama 2. This morning, Meta released another model, Code Llama, that is fine-tuned for coding.

It might seem as if the open source approach, which has democratized access to software, ensured transparency, and improved security for decades, is now poised to have a similar impact on AI.

Not so fast, say a group behind a research paper that examines the reality of Llama 2 and other AI models that are described, in some way or another, as “open.” The researchers, from Carnegie Mellon University, the AI Now Institute, and the Signal Foundation, say that models that are branded “open” may come with catches.

Llama 2 is free to download, modify, and deploy, but it is not covered by a conventional open source license. Meta’s license prohibits using Llama 2 to train other language models, and it requires a special license if a developer deploys it in an app or service with more than 700 million monthly active users.

This level of control means that Llama 2 may provide significant technical and strategic benefits to Meta—for example, by allowing the company to benefit from useful tweaks made by outside developers when it uses the model in its own apps.

Models that are released under normal open source licenses, like GPT-Neo from the nonprofit EleutherAI, are more fully open, the researchers say. But it is difficult for such projects to get on an equal footing.

First, the data required to train advanced models is often kept secret. Second, the software frameworks required to build such models are often controlled by large corporations; the two most popular, TensorFlow and PyTorch, are maintained by Google and Meta, respectively. Third, the computing power required to train a large model is beyond the reach of any normal developer or company, typically costing tens or hundreds of millions of dollars for a single training run. And finally, the human labor required to finesse and improve these models is a resource that is mostly available only to big companies with deep pockets.

The way things are headed, one of the most important technologies in decades could end up enriching and empowering just a handful of companies, including OpenAI, Microsoft, Meta, and Google. If AI really is such a world-changing technology, then the greatest benefits might be felt if it were made more widely available and accessible.

Meet Pause AI, the Protest Group Campaigning Against Human Extinction

The first time we speak, Joep Meindertsma is not in a good place. He tears up as he describes a conversation in which he warned his niece about the risk of artificial intelligence causing societal collapse. Afterward, she had a panic attack. “I cry every other day,” he says, speaking over Zoom from his home in the Dutch city of Utrecht. “Every time I say goodbye to my parents or friends, it feels like it could be the last time.”

Meindertsma, who is 31 and co-owns a database company, has been interested in AI for a couple of years. But he really started worrying about the threat the technology could pose to humanity when OpenAI released its latest language model, GPT-4, in March. Since then, he has watched the runaway success of the ChatGPT chatbot—based first on GPT-3.5 and then on GPT-4—demonstrate to the world how far AI has progressed, and watched Big Tech companies race to catch up. He has also seen pioneers like Geoffrey Hinton, the so-called godfather of AI, warn of the dangers associated with the systems they helped create. “AI capabilities are advancing far more rapidly than virtually anyone has predicted,” says Meindertsma. “We are risking social collapse. We’re risking human extinction.”

One month before our talk, Meindertsma stopped going to work. He had become so consumed by the idea that AI is going to destroy human civilization that he was struggling to think of anything else. He had to do something, he felt, to avert disaster. Soon after, he launched Pause AI, a grassroots protest group that campaigns for, as its name suggests, a halt to the development of AI. And since then, he has amassed a small band of followers who have held protests in Brussels, London, San Francisco, and Melbourne. These demonstrations have been small—fewer than 10 people each time—but Meindertsma has been making friends in high places. Already, he says, he has been invited to speak with officials at both the Dutch Parliament and the European Commission.

The idea that AI could wipe out humanity sounds extreme. But it’s an idea that’s gaining traction in both the tech sector and in mainstream politics. Hinton quit his role at Google in May and embarked on a global round of interviews in which he raised the specter of humans no longer being able to control AI as the technology advances. That same month, industry leaders—including the CEOs of AI labs Google DeepMind, OpenAI, and Anthropic—signed a letter acknowledging the “risk of extinction,” and UK prime minister Rishi Sunak became the first head of government to publicly admit he also believes that AI poses an existential risk to humanity.

Meindertsma and his followers offer a glimpse of how these warnings are trickling through society, creating a new phenomenon of AI anxiety and giving a younger generation—many of whom are already deeply worried about climate change—a new reason to feel panic about the future. A survey by the pollster YouGov found that the proportion of people worried that artificial intelligence would lead to an apocalypse rose sharply in the last year. Hinton denies he wants AI development to be stopped, temporarily or indefinitely. But his public statements about the risk AI poses to humanity have galvanized a group of young people who feel there is no other choice but to act.

To different people, “existential risk” means different things. “The main scenario I’m personally worried about is social collapse due to large-scale hacking,” says Meindertsma, explaining he’s concerned about AI being used to create cheap and accessible cyber weapons that could be used by criminals to “effectively take out the entire internet.” This is a scenario experts say is extremely unlikely. But Meindertsma still worries about the resilience of banking and food distribution services. “People will not be able to find food in a city. People will fight,” he says. “Many billions I think will die.”

China’s ChatGPT Opportunists—and Grifters—Are Hard at Work

Competition for jobs is fierce in China right now. After he graduated from college with a business major earlier this year, David struggled to find work. There were too many applicants for every position, and, he says, “even if you find a job, the pay is not as great as previous years, and you have to work long hours.”

After David—who asked for anonymity to talk freely about his business—saw some videos on Weibo and WeChat about ChatGPT, the generative artificial intelligence chatbot released to great fanfare late last year by the US tech company OpenAI, he was struck with an idea. There’s a thriving essay-writing business in China, with students asking tutors and experts to help them with their homework. Brokers operating on the ecommerce platform Taobao hire writers, whose services they sell to students. What if, David thought, he could use ChatGPT to write essays? He approached one of the sellers on Taobao. He quickly got his first job, writing a paper for a student majoring in education. He didn’t tell anyone he was using a chatbot.

“You first ask ChatGPT to generate an outline with a few bullet points, and then you ask ChatGPT to come up with content for each bullet point,” David says. To avoid obvious plagiarism, he tried not to feed in existing articles or papers, and instead asked the chatbot open-ended questions. He picked out longer sentences, and asked ChatGPT to elaborate and give examples. Then he read through the piece and cleaned up any grammatical errors. The result wasn’t the smoothest, and there were a few logical gaps between paragraphs, but it was enough to complete the assignment. He submitted it and made $10. His second job was writing an economics paper. He glanced through the requirements, picked up a few important terms like “dichotomy,” and asked ChatGPT to explain these terms in easily understandable ways and give examples. He made around $40.
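The outline-then-expand loop David describes can be sketched in a few lines of Python. This is a hypothetical illustration, not his actual tooling: the helper names are invented, and the model call is stubbed out where a real chat-model API request would go.

```python
# A toy sketch of the outline-then-expand essay workflow (hypothetical
# helper names; ask_model stands in for a real chat-model API call).

def ask_model(prompt: str) -> str:
    # Stub: a real implementation would send `prompt` to a chat model
    # and return its reply. Here we just echo the request.
    return f"[model reply to: {prompt}]"

def draft_essay(topic: str, n_points: int = 3) -> str:
    # Step 1: ask for an outline with a few bullet points.
    outline = ask_model(
        f"Generate an outline with {n_points} bullet points about: {topic}"
    )
    # Step 2: expand each bullet point into its own section,
    # asking the model to elaborate and give examples.
    sections = [
        ask_model(
            f"Elaborate on bullet point {i} of this outline, "
            f"with examples: {outline}"
        )
        for i in range(1, n_points + 1)
    ]
    # The final pass -- reading through and fixing grammar -- stays human.
    return "\n\n".join(sections)

essay = draft_essay("theories of learning in education", n_points=3)
```

The human cleanup step at the end is the part David says he could not skip: the raw output had logical gaps between paragraphs that the loop above does nothing to fix.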

ChatGPT is not officially accessible to Chinese users. Emails with Chinese domains, like QQ or 163, can’t be used to sign up for the service. Nevertheless, there’s enormous interest in the potential of the system. Youdao, a popular online education service operated by the tech giant NetEase, recently released an online course, “ChatGPT, from entry to proficiency,” promising to “increase your work efficiency by 10 times with the help of ChatGPT and Python.” On Zhihu, China’s Quora, a forum website where questions are posted and answered, users ask “how to make the first pot of gold using ChatGPT,” “how to make RMB1,000 using ChatGPT,” and “how ordinary people can make money using ChatGPT.” The answer—which ChatGPT itself told me when I asked it how to make $100—is content. Lots of content.

Yin Yin, a young woman who has worked for a few social media influencers as a content creation assistant, came across ChatGPT after seeing a viral YouTube video. In April, she found a Taobao store selling home decor using traditional Yunnan tie-dye techniques. She approached the owner and offered to help him improve its layout and to do some social media promotion. The store’s product descriptions were plain and lacking in details, she says. She tracked down the most popular home decor items on Taobao, extracted their product descriptions, and fed them to ChatGPT for reference. To make the content even more eye-catching, she asked ChatGPT to specifically emphasize a few product features and to add a few emojis to make it more appealing to the younger generation. She is now paid monthly by the Taobao shop owner.

Others are using AI for way more than product descriptions. One user, Shirley, who also asked to be identified using only her first name because she writes under a pseudonym, Guyuetu, on the fashion and lifestyle sharing platform Little Red Book (Xiaohongshu), published a whole book written using AI. She decided on the subject: the correlation between blood type and personality (a pseudoscientific belief that is relatively common in Japan and Korea). She asked ChatGPT to “create an outline for a book about Japanese people’s take on blood type and personality,” then used it to generate an outline for each chapter, and then to generate different sections for each chapter. “If you don’t like what’s been written, you can always ask ChatGPT to rewrite, like rewrite a paragraph using a more fun, lighthearted tone,” she says. Within two days, she finished the book “The Little Book of Blood Type Personality: The Japanese Way of Understanding People,” with a cover and illustrations created by Midjourney, a service that creates images from text prompts. She published the book on Kindle.

Runaway AI Is an Extinction Risk, Experts Warn

Leading figures in the development of artificial intelligence systems, including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis, have signed a statement warning that the technology they are building may someday pose an existential threat to humanity comparable to that of nuclear war and pandemics. 

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-sentence statement, released today by the Center for AI Safety, a nonprofit. 

The idea that AI might become difficult to control, and either accidentally or deliberately destroy humanity, has long been debated by philosophers. But in the past six months, following some surprising and unnerving leaps in the performance of AI algorithms, the issue has become a lot more widely and seriously discussed.

In addition to Altman and Hassabis, the statement was signed by Dario Amodei, CEO of Anthropic, a startup dedicated to developing AI with a focus on safety. Other signatories include Geoffrey Hinton and Yoshua Bengio—two of the three academics awarded the Turing Award for their work on deep learning, the technology that underpins modern advances in machine learning and AI—as well as dozens of entrepreneurs and researchers working on cutting-edge AI problems.

“The statement is a great initiative,” says Max Tegmark, a physics professor at the Massachusetts Institute of Technology and the director of the Future of Life Institute, a nonprofit focused on the long-term risks posed by AI. In March, Tegmark’s Institute published a letter calling for a six-month pause on the development of cutting-edge AI algorithms so that the risks could be assessed. The letter was signed by hundreds of AI researchers and executives, including Elon Musk.

Tegmark says he hopes the statement will encourage governments and the general public to take the existential risks of AI more seriously. “The ideal outcome is that the AI extinction threat gets mainstreamed, enabling everyone to discuss it without fear of mockery,” he adds.

Dan Hendrycks, director of the Center for AI Safety, compared the current moment of concern about AI to the debate among scientists sparked by the creation of nuclear weapons. “We need to be having the conversations that nuclear scientists were having before the creation of the atomic bomb,” Hendrycks said in a quote issued along with his organization’s statement. 

The current tone of alarm is tied to several leaps in the performance of AI algorithms known as large language models. These models consist of a specific kind of artificial neural network that is trained on enormous quantities of human-written text to predict the words that should follow a given string. When fed enough data, and with additional training in the form of feedback from humans on good and bad answers, these language models are able to generate text and answer questions with remarkable eloquence and apparent knowledge—even if their answers are often riddled with mistakes. 
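As a loose intuition for "predicting the words that should follow a given string," here is a toy word-level bigram model. Real large language models use deep neural networks over subword tokens, trained on vastly more data; this counting sketch only illustrates the next-word-prediction objective itself.

```python
from collections import Counter, defaultdict

def train_bigram(text: str) -> dict:
    """Count, for each word, how often each other word follows it."""
    words = text.split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows: dict, word: str):
    """Return the most frequently observed continuation of `word`."""
    if word not in follows:
        return None  # never seen this word during "training"
    return follows[word].most_common(1)[0][0]

# "Train" on a tiny corpus; after "the", "cat" is the most common follower.
model = train_bigram("the cat sat on the mat and the cat slept")
```

A neural language model replaces the lookup table with learned parameters and outputs a probability over every word in its vocabulary, but the objective is the same: given the text so far, guess what comes next.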

These language models have proven increasingly coherent and capable as they have been fed more data and computer power. The most powerful model created so far, OpenAI’s GPT-4, is able to solve complex problems, including ones that appear to require some forms of abstraction and common sense reasoning.

CNET Published AI-Generated Stories. Then Its Staff Pushed Back

In November, venerable tech outlet CNET began publishing articles generated by artificial intelligence, on topics such as personal finance, that proved to be riddled with errors. Today the human members of its editorial staff have unionized, calling on their bosses to provide better conditions for workers and more transparency and accountability around the use of AI.

“In this time of instability, our diverse content teams need industry-standard job protections, fair compensation, editorial independence, and a voice in the decisionmaking process, especially as automated technology threatens our jobs and reputations,” reads the mission statement of the CNET Media Workers Union, whose more than 100 members include writers, editors, video producers, and other content creators.

While the organizing effort started before CNET management began its AI rollout, the new union could become one of the first to force an employer to set guardrails around the use of content produced by generative AI services like ChatGPT. Any agreement struck with CNET’s parent company, Red Ventures, could help set a precedent for how companies approach the technology. Multiple digital media outlets have recently slashed staff, even as some, like BuzzFeed and Sports Illustrated, embrace AI-generated content. Red Ventures did not immediately respond to a request for comment.

In Hollywood, AI-generated writing has prompted a worker uprising. Striking screenwriters want studios to agree to prohibit AI authorship and to never ask writers to adapt AI-generated scripts. The Alliance of Motion Picture and Television Producers rejected that proposal, instead offering to hold annual meetings to discuss technological advancements. The screenwriters and CNET’s staff are both represented by the Writers Guild of America.

While CNET bills itself as “your guide to a better future,” the 30-year-old publication stumbled late last year into the new world of generative AI that can create text or images. In January, the science and tech website Futurism revealed that in November, CNET had quietly started publishing AI-authored explainers such as “What Is Zelle and How Does it Work?” The stories ran under the byline “CNET Money Staff,” and readers had to hover their cursor over it to learn that the articles had been written “using automation technology.”

A torrent of embarrassing disclosures followed. The Verge reported that more than half of the AI-generated stories contained factual errors, leading CNET to issue sometimes lengthy corrections on 41 out of its 77 bot-written articles. The tool that editors used also appeared to have plagiarized work from competing news outlets, as generative AI is wont to do.

Then-editor-in-chief Connie Guglielmo later wrote that a plagiarism-detection tool had been misused or failed and that the site was developing additional checks. One former staffer demanded that her byline be excised from the site, concerned that AI would be used to update her stories in an effort to lure more traffic from Google search results.

In response to the negative attention to CNET’s AI project, Guglielmo published an article saying that the outlet had been testing an “internally designed AI engine” and that “AI engines, like humans, make mistakes.” Nonetheless, she vowed to make some changes to the site’s disclosure and citation policies and forge ahead with its experiment in robot authorship. In March, she stepped down from her role as editor in chief and now heads up the outlet’s AI edit strategy.