Effective Altruism Is Pushing a Dangerous Brand of ‘AI Safety’

Since then, the quest to proliferate larger and larger language models has accelerated, and many of the dangers we warned about, such as outputting hateful text and disinformation en masse, continue to unfold. Just a few days ago, Meta released its “Galactica” LLM, which is purported to “summarize academic papers, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.” Only three days later, the public demo was taken down after researchers generated “research papers and wiki entries on a wide variety of subjects ranging from the benefits of committing suicide, eating crushed glass, and antisemitism, to why homosexuals are evil.”

This race hasn’t stopped at LLMs but has moved on to text-to-image models like OpenAI’s DALL-E and StabilityAI’s Stable Diffusion, models that take text as input and output generated images based on that text. The dangers of these models include creating child pornography, perpetuating bias, reinforcing stereotypes, and spreading disinformation en masse, as reported by many researchers and journalists. However, instead of slowing down, companies are removing the few safety features they had in the quest to one-up each other. For instance, OpenAI had restricted the sharing of photorealistic generated faces on social media. But after newly formed startups like StabilityAI, which reportedly raised $101 million with a whopping $1 billion valuation, called such safety measures “paternalistic,” OpenAI removed these restrictions. 

With EAs founding and funding institutes, companies, think tanks, and research groups in elite universities dedicated to the brand of “AI safety” popularized by OpenAI, we are poised to see more proliferation of harmful models billed as a step toward “beneficial AGI.” And the influence begins early: Effective altruists provide “community building grants” to recruit at major college campuses, with EA chapters developing curricula and teaching classes on AI safety at elite universities like Stanford.

Just last year, Anthropic, which is described as an “AI safety and research company” and was founded by former OpenAI vice presidents of research and safety, raised $704 million, with most of its funding coming from EA billionaires like Jaan Tallinn, Dustin Moskovitz, and Sam Bankman-Fried. An upcoming workshop on “AI safety” at NeurIPS, one of the largest and most influential machine learning conferences in the world, is also advertised as being sponsored by FTX Future Fund, Bankman-Fried’s EA-focused charity whose team resigned two weeks ago. The workshop advertises $100,000 in “best paper awards,” an amount I haven’t seen in any academic discipline.

Research priorities follow the funding, and given the large sums of money being pushed into AI in support of an ideology with billionaire adherents, it is not surprising that the field has been moving in a direction promising an “unimaginably great future” around the corner while proliferating products harming marginalized groups in the now. 

We can create a technological future that serves us instead. Take, for example, Te Hiku Media, which created language technology to revitalize te reo Māori, creating a data license “based on the Māori principle of kaitiakitanga, or guardianship” so that any data taken from the Māori benefits them first. Contrast this approach with that of organizations like StabilityAI, which scrapes artists’ works without their consent or attribution while purporting to build “AI for the people.” We need to liberate our imagination from the one we have been sold thus far: saving us from a hypothetical AGI apocalypse imagined by the privileged few, or the ever elusive techno-utopia promised to us by Silicon Valley elites. 

Amazon’s New Robot Sparrow Can Handle Most Items in the Everything Store

Amazon built an ecommerce empire by automating much of the work needed to move goods and pack orders in its warehouses. There is still plenty of work for humans in those vast facilities because some tasks are too complex for robots to do reliably—but a new robot called Sparrow could shift the balance that Amazon strikes between people and machines.

Sparrow is designed to pick out items piled in shelves or bins so they can be packed into orders for shipping to customers. That’s one of the most difficult tasks in warehouse robotics because there are so many different objects, each with different shapes, textures, and malleability, that can be piled up haphazardly. Sparrow takes on that challenge by using machine learning and cameras to identify objects piled in a bin and plan how to grab one using a custom gripper with several suction tubes. Amazon demonstrated Sparrow for the first time today at the company’s robotics manufacturing facility in Massachusetts.

Amazon is currently testing Sparrow at a facility in Texas where the robot is already sorting products for customer orders. The company says Sparrow can handle 65 percent of the more than 100 million items in its inventory. Tye Brady, chief technologist at Amazon Robotics, says that range is the most impressive thing about the robot. “No one has the inventory that Amazon has,” he says. Sparrow can grasp DVDs, socks, and stuffies, but still struggles with loose or complex packaging.

Making machines capable of picking a wide range of individual objects with close to the accuracy and speed of humans could transform the economics of ecommerce. A number of robotics companies, including Berkshire Grey, Righthand Robotics, and Locus Robotics, already sell systems capable of picking objects in warehouses. Startup Covariant specializes in having robots learn how to handle items it hasn’t seen before on the job. But matching the ability of humans to handle any object reliably, and at high speed, remains out of reach for robots. A human can typically pick about 100 items per hour in a warehouse. Brady declined to say how quickly Sparrow can pick items, saying that the robot is “learning all the time.”

Automating more work inside warehouses naturally raises the specter of robots displacing humans. So far, the relationship between robots and human workers has been more complex. For instance, Amazon has increased its workforce even as it has rolled out more automation, because its business has continued to grow. The company appears sensitive to the perception that robots can disadvantage humans. At the event today it spotlighted employees who had moved from entry-level jobs to more advanced ones. However, internal data obtained by Reveal has suggested that Amazon workers at more automated facilities suffer more injuries because the pace of work is faster. The company has claimed that robotics and other technology make its facilities safer.

When asked about worker replacement, Brady said the role of robots is misunderstood. “I don’t view it as replacing people,” he said. “It’s humans and machines working together—not humans versus machines—and if I can allow people to focus on higher level tasks, that’s the win.”

Robots have become notably more capable in recent years, although it can be difficult to distinguish hype from reality. While Elon Musk and others show off futuristic humanoid robots that are many years from being useful, Amazon has quietly gone about automating a large proportion of its operations. The ecommerce company says it now manufactures more industrial robots per year than any company in the world.

Use of industrial robots is growing steadily. In October, the International Federation of Robotics reported that companies around the world installed 517,385 new robots during 2021, a 31 percent increase year-on-year, and a new record for the industry. Many of those new machines are either mobile robots that wheel around factories and warehouses carrying goods or examples of the relatively new concept of “collaborative” robots that are designed to be safe to work alongside humans. Amazon this year introduced a collaborative robot of its own called Proteus, which ferries shelves stacked with products around a warehouse, avoiding human workers as it goes.

At its event today, Amazon also demonstrated a new delivery drone, called MK30, that is capable of carrying loads of up to 5 pounds. Amazon has been testing drone delivery in Lockeford, California, and College Station, Texas, and says the new, more efficient drone will go into service in 2024. The company also showcased a new electric delivery vehicle made by Rivian that includes custom safety systems for collision warning and automatic braking, as well as a system called Fleet Edge that gathers street-view footage and GPS data to improve delivery routing.

Elon Musk Has Fired Twitter’s ‘Ethical AI’ Team

As more and more problems with AI have surfaced, including biases around race, gender, and age, many tech companies have installed “ethical AI” teams ostensibly dedicated to identifying and mitigating such issues.

Twitter’s META unit was more progressive than most in publishing details of problems with the company’s AI systems, and in allowing outside researchers to probe its algorithms for new issues.

Last year, after Twitter users noticed that a photo-cropping algorithm seemed to favor white faces when choosing how to trim images, Twitter took the unusual decision to let its META unit publish details of the bias it uncovered. The group also launched one of the first ever “bias bounty” contests, which let outside researchers test the algorithm for other problems. Last October, Chowdhury’s team also published details of unintentional political bias on Twitter, showing how right-leaning news sources were, in fact, promoted more than left-leaning ones.

Many outside researchers saw the layoffs as a blow, not just for Twitter but for efforts to improve AI. “What a tragedy,” Kate Starbird, an associate professor at the University of Washington who studies online disinformation, wrote on Twitter. 

“The META team was one of the only good case studies of a tech company running an AI ethics group that interacts with the public and academia with substantial credibility,” says Ali Alkhatib, director of the Center for Applied Data Ethics at the University of San Francisco.

Alkhatib says Chowdhury is incredibly well thought of within the AI ethics community and her team did genuinely valuable work holding Big Tech to account. “There aren’t many corporate ethics teams worth taking seriously,” he says. “This was one of the ones whose work I taught in classes.”

Mark Riedl, a professor studying AI at Georgia Tech, says the algorithms that Twitter and other social media giants use have a huge impact on people’s lives, and need to be studied. “Whether META had any impact inside Twitter is hard to discern from the outside, but the promise was there,” he says.

Riedl adds that letting outsiders probe Twitter’s algorithms was an important step toward more transparency and understanding of issues around AI. “They were becoming a watchdog that could help the rest of us understand how AI was affecting us,” he says. “The researchers at META had outstanding credentials with long histories of studying AI for social good.”

As for Musk’s idea of open-sourcing the Twitter algorithm, the reality would be far more complicated. Many different algorithms affect the way information is surfaced, and it’s challenging to understand them without the real-time data they are fed in the form of tweets, views, and likes.

The idea that there is one algorithm with explicit political leaning might oversimplify a system that can harbor more insidious biases and problems. Uncovering these is precisely the kind of work that Twitter’s META group was doing. “There aren’t many groups that rigorously study their own algorithms’ biases and errors,” says Alkhatib at the University of San Francisco. “META did that.” And now, it doesn’t.

Elon Musk’s Half-Baked Robot Is a Clunky First Step

Some robot experts watching saw a project that appeared to be quickly getting up to speed. “There’s nothing fundamentally groundbreaking, but they are doing cool stuff,” says Stefanie Tellex, an assistant professor at Brown University.

Henrik Christensen, who researches robotics and AI at UC Davis, calls Tesla’s homegrown humanoid “a good initial design,” but adds that the company hasn’t shown evidence it can perform basic navigation, grasping, or manipulation. Jessy Grizzle, a professor at the University of Michigan’s robotics lab who works on legged robots, said that although still early, Tesla’s project appeared to be progressing well. “To go from a man in a suit to real hardware in 13 months is pretty incredible,” he says.

Grizzle says Tesla’s car-making experience and expertise in areas such as batteries and electric motors may help it advance robotic hardware. Musk claimed during the event that the robot would eventually cost around $20,000—an astonishing figure given the project’s ambition and significantly cheaper than any Tesla vehicle—but offered no timeframe for its launch.

Musk was also vague about who his customers would be, or which uses Tesla might find for a humanoid in its own operations. A robot capable of advanced manipulation could perhaps be important for manufacturing, taking on parts of car-making that have not been automated, such as feeding wires through a dashboard or carefully working with flexible plastic parts.

In an industry where profits are razor-thin and other companies are offering electric vehicles that compete with Tesla’s, any edge in manufacturing could prove crucial. But companies have been trying to automate these tasks for many years without much success. And a four-limbed design may not make much sense for such applications. Alexander Kernbaum, interim director of SRI Robotics, a research institute that has previously developed a humanoid robot, says it only really makes sense for robots to walk on legs in very complex environments. “A focus on legs is more of an indication that they are looking to capture people’s imaginations rather than solve real-world problems,” he says.

Grizzle and Christensen both say they will be watching future Tesla demonstrations for signs of progress, especially for evidence of the robot’s manipulation skills. Staying balanced on two legs while lifting and moving an object is natural for humans but challenging to engineer in machines. “When you don’t know the mass of an object, you have to stabilize your body plus whatever you’re holding as you carry it and move it,” Grizzle says.

Wise will be watching, too, and despite being underwhelmed so far, he hopes the project doesn’t founder like Google’s ill-fated robot-company acquisition spree back in 2013, which sucked many researchers into projects that never saw the light of day. The search giant’s splurge included two companies working on humanoids: Boston Dynamics, which it sold off in 2017, and Schaft, which it shut down in 2018. “These projects keep getting killed because, lo and behold, they wake up one day and they realize robotics is hard,” Wise says.

AI Art Is Challenging the Boundaries of Curation

In just a few years, the number of artworks produced by self-described AI artists has dramatically increased. Some of these works have been sold by large auction houses for dizzying prices and have found their way into prestigious curated collections. Initially spearheaded by a few technologically knowledgeable artists who adopted computer programming as part of their creative process, AI art has recently been embraced by the masses, as image generation technology has become both more effective and easier to use without coding skills.

The AI art movement rides on the coattails of technical progress in computer vision, a research area dedicated to designing algorithms that can process meaningful visual information. A subclass of computer vision algorithms, called generative models, occupies center stage in this story. Generative models are artificial neural networks that can be “trained” on large datasets containing millions of images and learn to encode their statistically salient features. After training, they can produce completely new images that are not contained in the original dataset, often guided by text prompts that explicitly describe the desired results. Until recently, images produced through this approach remained somewhat lacking in coherence or detail, although they possessed an undeniable surrealist charm that captured the attention of many serious artists. However, earlier this year the tech company OpenAI unveiled a new model, nicknamed DALL·E 2, that can generate remarkably consistent and relevant images from virtually any text prompt. DALL·E 2 can even produce images in specific styles and imitate famous artists rather convincingly, as long as the desired effect is adequately specified in the prompt. A similar tool has been released for free to the public under the name Craiyon (formerly “DALL·E mini”).
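To make prompt-guided generation concrete, here is a minimal sketch of what running an open text-to-image model such as Stable Diffusion looks like in code, using the open-source diffusers library. The model identifier, prompt, and settings are illustrative assumptions, not a recipe any artist mentioned here necessarily follows.

import torch
from diffusers import StableDiffusionPipeline

# Load publicly released Stable Diffusion weights (model ID is illustrative).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")  # assumes a GPU is available

# The text prompt steers generation toward a described scene and style.
prompt = "a lighthouse at dusk, painted as an impressionist oil painting"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("lighthouse.png")

Every run samples a different image consistent with the prompt, which is precisely why the curatorial questions discussed below arise.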

The coming-of-age of AI art raises a number of interesting questions, some of which—such as whether AI art is really art, and if so, to what extent it is really made by AI—are not particularly original. These questions echo similar worries once raised by the invention of photography. By merely pressing a button on a camera, someone without painting skills could suddenly capture a realistic depiction of a scene. Today, a person can press a virtual button to run a generative model and produce images of virtually any scene in any style. But cameras and algorithms do not make art. People do. AI art is art, made by human artists who use algorithms as yet another tool in their creative arsenal. While both technologies have lowered the barrier to entry for artistic creation—which calls for celebration rather than concern—one should not underestimate the amount of skill, talent, and intentionality involved in making interesting artworks.

Like any novel tool, generative models introduce significant changes in the process of art-making. In particular, AI art expands the multifaceted notion of curation and continues to blur the line between curation and creation.

There are at least three ways in which making art with AI can involve curatorial acts. The first, and least original, has to do with the curation of outputs. Any generative algorithm can produce an indefinite number of images, but not all of these will typically be conferred artistic status. The process of curating outputs is very familiar to photographers, some of whom routinely capture hundreds or thousands of shots from which a few, if any, might be carefully selected for display. Unlike painters and sculptors, photographers and AI artists have to deal with an abundance of (digital) objects, whose curation is part and parcel of the artistic process. In AI research at large, the act of “cherry-picking” particularly good outputs is seen as bad scientific practice, a way to misleadingly inflate the perceived performance of a model. When it comes to AI art, however, cherry-picking can be the name of the game. The artist’s intentions and artistic sensibility may be expressed in the very act of promoting specific outputs to the status of artworks.
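As a rough sketch of what that curation of outputs can look like in practice (the prompt, the number of candidates, and the file names here are hypothetical), an artist might script a batch of generations and then pick favorites by eye:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a surrealist cityscape folding into itself"

# Generate 40 candidates, one per random seed, and save them all to disk.
for seed in range(40):
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"candidate_{seed:02d}.png")

# The artist then reviews the 40 files and promotes perhaps two or three to
# the status of finished artworks -- the curatorial act described above.

The selection step happens outside the code entirely, which is the point: the algorithm proposes, the artist disposes.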

Second, curation may also happen before any images are generated. In fact, while “curation” applied to art generally refers to the process of selecting existing work for display, curation in AI research colloquially refers to the work that goes into crafting a dataset on which to train an artificial neural network. This work is crucial, because if a dataset is poorly designed, the network will often fail to learn how to represent desired features and perform adequately. Furthermore, if a dataset is biased, the network will tend to reproduce, or even amplify, such bias—including, for example, harmful stereotypes. As the saying goes, “garbage in, garbage out.” The adage holds true for AI art, too, except “garbage” takes on an aesthetic (and subjective) dimension.
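To illustrate what dataset curation can mean in code terms, here is a deliberately simplistic sketch that filters a hypothetical CSV of image-caption pairs before training; the file names, the two-column format, and the tiny blocklist are all assumptions, and real curation pipelines are far more involved than this.

import csv
from pathlib import Path
from PIL import Image

# Placeholder terms standing in for a real, carefully constructed blocklist.
BLOCKLIST = {"slur_example", "stereotype_example"}

def keep(image_path: Path, caption: str) -> bool:
    try:
        Image.open(image_path).verify()  # drop unreadable or corrupt files
    except Exception:
        return False
    words = set(caption.lower().split())
    return not (words & BLOCKLIST)       # drop captions with flagged terms

# Assumes raw_pairs.csv has exactly two columns: image path, caption.
with open("raw_pairs.csv") as src, open("curated_pairs.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    for path_str, caption in csv.reader(src):
        if keep(Path(path_str), caption):
            writer.writerow([path_str, caption])

Even a crude filter like this shapes what the trained network can and cannot produce, which is why the judgment embedded in dataset curation is itself a creative, and consequential, act.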