Artificial intelligence is here. It’s overhyped, poorly understood, and flawed but already core to our lives—and it’s only going to extend its reach.
AI powers driverless car research, spots otherwise invisible signs of disease on medical images, finds an answer when you ask Alexa a question, and lets you unlock your phone with your face or, using Apple’s Animoji on the iPhone X, chat with friends as an animated poop. Those are just a few ways AI already touches our lives, and there’s plenty of work still to be done. But don’t worry, superintelligent algorithms aren’t about to take all the jobs or wipe out humanity.
The current boom in all things AI was catalyzed by breakthroughs in an area known as machine learning. It involves “training” computers to perform tasks based on examples, rather than relying on programming by a human. A technique called deep learning has made this approach much more powerful. Just ask Lee Sedol, holder of 18 international titles at the complex game of Go. He got creamed by software called AlphaGo in 2016.
There’s evidence that AI can make us happier and healthier. But there’s also reason for caution. Incidents in which algorithms picked up or amplified societal biases around race or gender show that an AI-enhanced future won’t automatically be a better one.
The Beginnings of Artificial Intelligence
Artificial intelligence as we know it began as a vacation project. Dartmouth professor John McCarthy coined the term in the summer of 1956, when he invited a small group to spend a few weeks musing on how to make machines do things like use language.
He had high hopes of a breakthrough in the drive toward human-level machines. “We think that a significant advance can be made,” he wrote with his co-organizers, “if a carefully selected group of scientists work on it together for a summer.”
Those hopes were not met, and McCarthy later conceded that he had been overly optimistic. But the workshop helped researchers dreaming of intelligent machines coalesce into a recognized academic field.
Early work often focused on solving fairly abstract problems in math and logic. But it wasn’t long before AI started to show promising results on more human tasks. In the late 1950s, Arthur Samuel created programs that learned to play checkers. In 1962, one scored a win over a master at the game. In 1967, a program called Dendral showed it could replicate the way chemists interpreted mass-spectrometry data on the makeup of chemical samples.
As the field of AI developed, so did different strategies for making smarter machines. Some researchers tried to distill human knowledge into code or come up with rules for specific tasks, like understanding language. Others were inspired by the importance of learning to understand human and animal intelligence. They built systems that could get better at a task over time, perhaps by simulating evolution or by learning from example data. The field hit milestone after milestone as computers mastered tasks that could previously only be completed by people.
Deep learning, the rocket fuel of the current AI boom, is a revival of one of the oldest ideas in AI. The technique involves passing data through webs of math known as artificial neural networks, loosely inspired by the workings of brain cells. As a network processes training data, the connections between its parts adjust, building up an ability to interpret future data.
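That adjustment process can be sketched in a few lines of code. What follows is a minimal, illustrative toy, not how modern deep learning systems are actually built: a single artificial neuron (a perceptron) that nudges its connection weights whenever its guess about a training example is wrong, eventually learning to classify new inputs. The function and variable names are invented for this sketch.

```python
def train_neuron(examples, epochs=20, lr=0.1):
    """Train one artificial neuron. examples: list of (inputs, target) pairs, targets 0 or 1."""
    n = len(examples[0][0])
    weights = [0.0] * n  # connection strengths, adjusted during training
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            # Weighted sum of inputs, thresholded into a 0/1 guess
            guess = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
            error = target - guess
            # Nudge each connection toward the correct answer
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Teach the neuron the logical AND of two inputs
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_neuron(data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
print([predict(x) for x, _ in data])  # → [0, 0, 0, 1]
```

Deep learning stacks many layers of such units and uses a subtler update rule (backpropagation of gradients), but the core idea is the same: errors on training examples drive small adjustments to connections until the network's outputs come to match the data.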
Artificial neural networks became an established idea in AI not long after the Dartmouth workshop. The room-filling Perceptron Mark 1 from 1958, for example, learned to distinguish different geometric shapes and got written up in The New York Times as the “Embryo of Computer Designed to Read and Grow Wiser.” But neural networks tumbled from favor after an influential 1969 book coauthored by MIT’s Marvin Minsky suggested they couldn’t be very powerful.
Not everyone was convinced by the skeptics, however, and some researchers kept the technique alive over the decades. They were vindicated in 2012, when a series of experiments showed that neural networks fueled with large piles of data could give machines new powers of perception. Churning through so much data was difficult using traditional computer chips, but a shift to graphics cards precipitated an explosion in processing power.
Since then, the race to build larger and larger language models has accelerated, and many of the dangers we warned about, such as outputting hateful text and disinformation en masse, continue to unfold. Just a few days ago, Meta released its “Galactica” LLM, purported to “summarize academic papers, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.” Only three days later, the public demo was taken down after researchers generated “research papers and wiki entries on a wide variety of subjects ranging from the benefits of committing suicide, eating crushed glass, and antisemitism, to why homosexuals are evil.”
This race hasn’t stopped at LLMs but has moved on to text-to-image models like OpenAI’s DALL-E and StabilityAI’s Stable Diffusion, models that take text as input and output generated images based on that text. The dangers of these models include creating child pornography, perpetuating bias, reinforcing stereotypes, and spreading disinformation en masse, as reported by many researchers and journalists. However, instead of slowing down, companies are removing the few safety features they had in the quest to one-up each other. For instance, OpenAI had restricted the sharing of photorealistic generated faces on social media. But after newly formed startups like StabilityAI, which reportedly raised $101 million with a whopping $1 billion valuation, called such safety measures “paternalistic,” OpenAI removed these restrictions.
With effective altruists (EAs) founding and funding institutes, companies, think tanks, and research groups in elite universities dedicated to the brand of “AI safety” popularized by OpenAI, we are poised to see more proliferation of harmful models billed as a step toward “beneficial AGI.” And the influence begins early: Effective altruists provide “community building grants” to recruit at major college campuses, with EA chapters developing curricula and teaching classes on AI safety at elite universities like Stanford.
Just last year, Anthropic, which is described as an “AI safety and research company” and was founded by former OpenAI vice presidents of research and safety, raised $704 million, with most of its funding coming from EA billionaires like Tallinn, Moskovitz, and Bankman-Fried. An upcoming workshop on “AI safety” at NeurIPS, one of the largest and most influential machine learning conferences in the world, is also advertised as being sponsored by FTX Future Fund, Bankman-Fried’s EA-focused charity whose team resigned two weeks ago. The workshop advertises $100,000 in “best paper awards,” an amount I haven’t seen in any academic discipline.
Research priorities follow the funding, and given the large sums of money being pushed into AI in support of an ideology with billionaire adherents, it is not surprising that the field has been moving in a direction promising an “unimaginably great future” around the corner while proliferating products harming marginalized groups in the now.
We can create a technological future that serves us instead. Take, for example, Te Hiku Media, which created language technology to revitalize te reo Māori, creating a data license “based on the Māori principle of kaitiakitanga, or guardianship” so that any data taken from the Māori benefits them first. Contrast this approach with that of organizations like StabilityAI, which scrapes artists’ works without their consent or attribution while purporting to build “AI for the people.” We need to liberate our imagination from the one we have been sold thus far: saving us from a hypothetical AGI apocalypse imagined by the privileged few, or the ever elusive techno-utopia promised to us by Silicon Valley elites.
Amazon built an ecommerce empire by automating much of the work needed to move goods and pack orders in its warehouses. There is still plenty of work for humans in those vast facilities because some tasks are too complex for robots to do reliably—but a new robot called Sparrow could shift the balance that Amazon strikes between people and machines.
Sparrow is designed to pick out items piled in shelves or bins so they can be packed into orders for shipping to customers. That’s one of the most difficult tasks in warehouse robotics because there are so many different objects, each with different shapes, textures, and malleability, that can be piled up haphazardly. Sparrow takes on that challenge by using machine learning and cameras to identify objects piled in a bin and plan how to grab one using a custom gripper with several suction tubes. Amazon demonstrated Sparrow for the first time today at the company’s robotics manufacturing facility in Massachusetts.
Amazon is currently testing Sparrow at a facility in Texas where the robot is already sorting products for customer orders. The company says Sparrow can handle 65 percent of the more than 100 million items in its inventory. Tye Brady, chief technologist at Amazon Robotics, says that range is the most impressive thing about the robot. “No one has the inventory that Amazon has,” he says. Sparrow can grasp DVDs, socks, and stuffies, but still struggles with loose or complex packaging.
Making machines capable of picking a wide range of individual objects with close to the accuracy and speed of humans could transform the economics of ecommerce. A number of robotics companies, including Berkshire Grey, Righthand Robotics, and Locus Robotics, already sell systems capable of picking objects in warehouses. Startup Covariant specializes in having robots learn how to handle items it hasn’t seen before on the job. But matching the ability of humans to handle any object reliably, and at high speed, remains out of reach for robots. A human can typically pick about 100 items per hour in a warehouse. Brady declined to say how quickly Sparrow can pick items, saying that the robot is “learning all the time.”
Automating more work inside warehouses naturally raises the specter of robots displacing humans. So far, the relationship between robotics and human workers has been more complex. For instance, Amazon has increased its workforce even as it has rolled out more automation, because its business has continued to grow. The company appears sensitive to the perception that robots can disadvantage humans. At the event today the company spotlighted employees who had gone from low-level jobs to more advanced ones. However, internal data obtained by Reveal has suggested that Amazon workers at more automated facilities suffer more injuries because the pace of work is faster. The company has claimed that robotics and other technology makes its facilities safer.
When asked about worker replacement, Brady said the role of robots is misunderstood. “I don’t view it as replacing people,” he said. “It’s humans and machines working together—not humans versus machines—and if I can allow people to focus on higher level tasks, that’s the win.”
Robots have become notably more capable in recent years, although it can be difficult to distinguish hype from reality. While Elon Musk and others show off futuristic humanoid robots that are many years from being useful, Amazon has quietly gone about automating a large proportion of its operations. The ecommerce company says it now manufactures more industrial robots per year than any company in the world.
Use of industrial robots is growing steadily. In October, the International Federation of Robotics reported that companies around the world installed 517,385 new robots during 2021, a 31 percent increase year-on-year, and a new record for the industry. Many of those new machines are either mobile robots that wheel around factories and warehouses carrying goods or examples of the relatively new concept of “collaborative” robots that are designed to be safe to work alongside humans. Amazon this year introduced a collaborative robot of its own called Proteus, which ferries shelves stacked with products around a warehouse, avoiding human workers as it goes.
At its event today, Amazon also demonstrated a new delivery drone, called MK30, that is capable of carrying loads of up to 5 pounds. Amazon has been testing drone delivery in Lockeford, California, and College Station, Texas, and says the new, more efficient drone will go into service in 2024. The company also showcased a new electric delivery vehicle made by Rivian that includes custom safety systems for collision warning and automatic braking, as well as a system called Fleet Edge that gathers street-view footage and GPS data to improve delivery routing.
As more and more problems with AI have surfaced, including biases around race, gender, and age, many tech companies have installed “ethical AI” teams ostensibly dedicated to identifying and mitigating such issues.
Twitter’s META unit, led by Rumman Chowdhury, was more proactive than most in publishing details of problems with the company’s AI systems, and in allowing outside researchers to probe its algorithms for new issues.
Last year, after Twitter users noticed that a photo-cropping algorithm seemed to favor white faces when choosing how to trim images, Twitter took the unusual decision to let its META unit publish details of the bias it uncovered. The group also launched one of the first ever “bias bounty” contests, which let outside researchers test the algorithm for other problems. Last October, Chowdhury’s team also published details of unintentional political bias on Twitter, showing how right-leaning news sources were, in fact, promoted more than left-leaning ones.
Many outside researchers saw the layoffs as a blow, not just for Twitter but for efforts to improve AI. “What a tragedy,” Kate Starbird, an associate professor at the University of Washington who studies online disinformation, wrote on Twitter.
“The META team was one of the only good case studies of a tech company running an AI ethics group that interacts with the public and academia with substantial credibility,” says Ali Alkhatib, director of the Center for Applied Data Ethics at the University of San Francisco.
Alkhatib says Chowdhury is incredibly well thought of within the AI ethics community and her team did genuinely valuable work holding Big Tech to account. “There aren’t many corporate ethics teams worth taking seriously,” he says. “This was one of the ones whose work I taught in classes.”
Mark Riedl, a professor studying AI at Georgia Tech, says the algorithms that Twitter and other social media giants use have a huge impact on people’s lives, and need to be studied. “Whether META had any impact inside Twitter is hard to discern from the outside, but the promise was there,” he says.
Riedl adds that letting outsiders probe Twitter’s algorithms was an important step toward more transparency and understanding of issues around AI. “They were becoming a watchdog that could help the rest of us understand how AI was affecting us,” he says. “The researchers at META had outstanding credentials with long histories of studying AI for social good.”
As for Musk’s idea of open-sourcing the Twitter algorithm, the reality would be far more complicated. There are many different algorithms that affect the way information is surfaced, and it’s challenging to understand them without the real-time data of tweets, views, and likes they are being fed.
The idea that there is one algorithm with explicit political leaning might oversimplify a system that can harbor more insidious biases and problems. Uncovering these is precisely the kind of work that Twitter’s META group was doing. “There aren’t many groups that rigorously study their own algorithms’ biases and errors,” says Alkhatib at the University of San Francisco. “META did that.” And now, it doesn’t.
Some robotics experts watching saw a project that appeared to be getting up to speed quickly. “There’s nothing fundamentally groundbreaking, but they are doing cool stuff,” says Stefanie Tellex, an assistant professor at Brown University.
Henrik Christensen, who researches robotics and AI at UC Davis, calls Tesla’s homegrown humanoid “a good initial design,” but adds that the company hasn’t shown evidence it can perform basic navigation, grasping, or manipulation. Jessy Grizzle, a professor at the University of Michigan’s robotics lab who works on legged robots, said that although still early, Tesla’s project appeared to be progressing well. “To go from a man in a suit to real hardware in 13 months is pretty incredible,” he says.
Grizzle says Tesla’s car-making experience and expertise in areas such as batteries and electric motors may help it advance robotic hardware. Musk claimed during the event that the robot would eventually cost around $20,000—an astonishing figure given the project’s ambition, and significantly less than any Tesla vehicle—but offered no timeframe for its launch.
Musk was also vague about who his customers would be, or which uses Tesla might find for a humanoid in its own operations. A robot capable of advanced manipulation could perhaps be important for manufacturing, taking on parts of car-making that have not been automated, such as feeding wires through a dashboard or carefully working with flexible plastic parts.
In an industry where profits are razor-thin and other companies are offering electric vehicles that compete with Tesla’s, any edge in manufacturing could prove crucial. But companies have been trying to automate these tasks for many years without much success. And a four-limbed design may not make much sense for such applications. Alexander Kernbaum, interim director of SRI Robotics, a research institute that has previously developed a humanoid robot, says it only really makes sense for robots to walk on legs in very complex environments. “A focus on legs is more of an indication that they are looking to capture people’s imaginations rather than solve real-world problems,” he says.
Grizzle and Christensen both say they will be watching future Tesla demonstrations for signs of progress, especially for evidence of the robot’s manipulation skills. Staying balanced on two legs while lifting and moving an object is natural for humans but challenging to engineer in machines. “When you don’t know the mass of an object, you have to stabilize your body plus whatever you’re holding as you carry it and move it,” Grizzle says.
Wise will be watching, too, and despite being underwhelmed so far, he hopes the project doesn’t founder like Google’s ill-fated robot-company acquisition spree back in 2013, which sucked many researchers into projects that never saw the light of day. The search giant’s splurge included two companies working on humanoids: Boston Dynamics, which it sold off in 2017, and Schaft, which it shut down in 2018. “These projects keep getting killed because, lo and behold, they wake up one day and they realize robotics is hard,” Wise says.