Boston Dynamics, BTS, and Ballet: The Next Act for Robotics

There’s a scene in Swan Lake where the hunky, crossbow-toting protagonist, Prince Siegfried, loses his swan princess, Odette, in an enchanted forest. Suddenly, he finds himself confronted by dozens of identical ballerina swans. Bedazzled and confused, Siegfried runs uselessly up and down the doppelgänger ranks searching for his betrothed. He is beguiled by the multiplicity of swans and the scale of their shared, robotically precise movements.

By the time Swan Lake premiered in the late 19th century, the princely protagonist’s confusion amidst a slew of synchronous ballerinas was already a trope. Romantic ballets are littered with such moments, but they can be found in more contemporary choreographies as well. The American choreographer and director Busby Berkeley became famous for films such as 42nd Street that featured dozens of dancers uncannily executing the same movements. In the last few decades, the Rockettes and any number of boy bands have brought similar styles to the stage. And throughout history, military marches, parades, and public demonstrations have brought the strategy to the streets. Choreographing groups so the part moves like the whole is both a technique and a tactic.

It is through this Venn diagram intersection of ballet, boy bands, and battalions that we may consider “Spot’s on It,” the latest dance video from robotics manufacturer Boston Dynamics. The clip, which commemorates the company’s acquisition by the Hyundai Motor Company, features quadrupedal “Spot” robots dancing to “IONIQ: I’m on It,” a track by Hyundai global ambassador and mega-boyband BTS, promoting the company’s niche electric car series. In the video, several Spot robots bop with astonishing synchronicity in a catchy-yet-dystopian minute and 20 seconds.

The video opens with five robots in a line, one behind the other, so that only the front Spot is fully visible. The music starts: a new age-y cadence backed by synth clapping and BTS’ prayer-like intoning of the word “IONIQ.” The robots’ heads rise and blossom with the music, pliably shaping themselves into a wavering star, then a helix, then a floral pose that breathes with the melodic line. Their capacity for robotic exactitude allows otherwise simple gestures (the lift of the head, a 90-degree rotation, the opening of Spot’s “mouth”) to create mirrored complexity across all of the robot performers. “Spot’s on It,” à la Busby Berkeley, makes it difficult to distinguish between the robots, and at times it’s unclear which robot “head” belongs to which robot body.

The choreography, by Monica Thomas, takes advantage of the robots’ ability to move exactly like one another. For the Rockettes, BTS, and the corps of many ballets, individual virtuosity is a function of one’s ability to move indistinguishably within a group. The Spot robots, however, are functionally, kinesthetically, and visually identical to one another. Human performers can play at such similitude, but robots fully embody it. It’s Siegfried’s uncanny swan valley amidst a robot ballet.

From a technical perspective, the robots’ capacity for movement variation demonstrates the increasing subtlety of Boston Dynamics’ choreography software, a component of its Spot Software Development Kit (SDK) appropriately called “Choreography.” In it, the robot’s user can select a choreo-robotic movement sequence such as a “bourree”—defined in the SDK as “cross-legged tippy-taps like the ballet move”—and modify its relative velocity, yaw, and stance length. In application across an entire dance, one move, such as the “bourree,” can be inverted, reversed, mirrored, done wide or narrow, fast or slow, with increased or diminished distortion across the group. Thomas’ choreography fully utilizes this capacity to execute all manner of kaleidoscopic effects.
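
To make the idea of parameterized moves concrete, here is a minimal sketch of how a single move such as the “bourree” might be varied and mirrored across a group. This is an illustration only, not the actual Spot SDK API; the class, field, and function names are hypothetical.

```python
from dataclasses import dataclass, replace
from typing import List

@dataclass
class Move:
    """Hypothetical stand-in for one choreography move with tunable parameters."""
    name: str             # e.g. "bourree"
    velocity: float       # relative speed of the step cycle
    yaw: float            # body rotation, in radians
    stance_length: float  # distance between front and rear feet, in meters

def mirror(move: Move) -> Move:
    """Flip the move left/right by negating its yaw."""
    return replace(move, yaw=-move.yaw)

def spread_across_group(move: Move, n_robots: int, scale_step: float = 0.1) -> List[Move]:
    """Give each robot a progressively faster copy of the same move,
    producing the kind of kaleidoscopic distortion described above."""
    return [replace(move, velocity=move.velocity * (1 + i * scale_step))
            for i in range(n_robots)]

bourree = Move(name="bourree", velocity=1.0, yaw=0.2, stance_length=0.35)
for robot_id, variant in enumerate(spread_across_group(mirror(bourree), n_robots=5)):
    print(robot_id, variant)
```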

Such complexity and subtlety mark “Spot’s on It” as a significant departure from previous Boston Dynamics dances. First and foremost, it’s clear this video had a more intense production apparatus behind it: “Spot’s on It” is accompanied by a friendly corporate blog post that, for the first time, narrates how Boston Dynamics deploys choreography in its marketing and engineering processes. It’s also, notably, the first time Thomas is publicly credited as the choreographer of Boston Dynamics’ dances. Her labor in viral videos like “Uptown Spot” and “Do You Love Me?” was rendered practically invisible, so Boston Dynamics’ decision to underline Thomas’ role in this latest video is a substantial shift in posture. Scholar Jessica Rajko has previously pointed out the company’s opaque labor politics and fuzzy rationale for not crediting Thomas, in contrast to choreo-robotic researchers like Catie Cuan and Amy LaViers, who clearly foreground dancerly contributions to their work. “Spot’s on It” signals Boston Dynamics’ deepening, increasingly complex engagement with choreographics.

Even though Boston Dynamics’ dancing robots are currently relegated to the realm of branded spectacle, I am consistently impressed by the company’s choreographic strides. In artists’ hands, these machines are becoming eminently capable of expression through performance. Boston Dynamics is a company that takes dance seriously, and, per its blog post, uses choreography as “a form of highly accelerated lifecycle testing for the hardware.” All this dancing is meant to be fun and functional.

An Algorithm That Predicts Deadly Infections Is Often Flawed

A complication of infection known as sepsis is the number one killer in US hospitals. So it’s not surprising that more than 100 health systems use an early warning system offered by Epic Systems, the dominant provider of US electronic health records. The system throws up alerts based on a proprietary formula tirelessly watching for signs of the condition in a patient’s test results.

But a new study using data from nearly 30,000 patients in University of Michigan hospitals suggests Epic’s system performs poorly. The authors say it missed two-thirds of sepsis cases, rarely found cases medical staff did not notice, and frequently issued false alarms.

Karandeep Singh, an assistant professor at the University of Michigan who led the study, says the findings illustrate a broader problem with the proprietary algorithms increasingly used in health care. “They’re very widely used, and yet there’s very little published on these models,” Singh says. “To me that’s shocking.”

The study was published Monday in JAMA Internal Medicine. An Epic spokesperson disputed the study’s conclusions, saying the company’s system has “helped clinicians save thousands of lives.”

Epic’s is not the first widely used health algorithm to trigger concerns that technology supposed to improve health care is not delivering, or is even actively harmful. In 2019, a system used on millions of patients to prioritize access to special care for people with complex needs was found to lowball the needs of Black patients compared to white patients. That prompted some Democratic senators to ask federal regulators to investigate bias in health algorithms. A study published in April found that statistical models used to predict suicide risk in mental health patients performed well for white and Asian patients but poorly for Black patients.

The way sepsis stalks hospital wards has made it a special target of algorithmic aids for medical staff. Guidelines on sepsis from the Centers for Disease Control and Prevention encourage health providers to use electronic medical records for surveillance and prediction. Epic has several competitors offering commercial warning systems, and some US research hospitals have built their own tools.

Automated sepsis warnings have huge potential, Singh says, because key symptoms of the condition, such as low blood pressure, can have other causes, making it difficult for staff to spot early. Starting sepsis treatment such as antibiotics just an hour sooner can make a big difference to patient survival. Hospital administrators often take special interest in sepsis response, in part because it contributes to US government hospital ratings.

Singh runs a lab at Michigan researching applications of machine learning to patient care. He got curious about Epic’s sepsis warning system after being asked to chair a committee at the university’s health system created to oversee uses of machine learning.

As Singh learned more about the tools in use at Michigan and other health systems, he became concerned that they mostly came from vendors that disclosed little about how they worked or performed. His own health system had a license to use Epic’s sepsis prediction model, which the company told customers was highly accurate. But there had been no independent validation of its performance.

Singh and Michigan colleagues tested Epic’s prediction model on records for nearly 30,000 patients covering almost 40,000 hospitalizations in 2018 and 2019. The researchers noted how often Epic’s algorithm flagged people who developed sepsis as defined by the CDC and the Centers for Medicare and Medicaid Services. And they compared the alerts that the system would have triggered with sepsis treatments logged by staff, who did not see Epic sepsis alerts for patients included in the study.

The researchers say their results suggest Epic’s system wouldn’t make a hospital much better at catching sepsis and could burden staff with unnecessary alerts. The company’s algorithm did not identify two-thirds of the roughly 2,500 sepsis cases in the Michigan data. It would have alerted for 183 patients who developed sepsis but had not been given timely treatment by staff.
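
As a rough illustration of the arithmetic behind these findings, the sketch below computes sensitivity from the figures reported above; the alert total used for the false-alarm calculation is a made-up placeholder, since the study’s exact count isn’t given here.

```python
# Illustrative arithmetic only; values marked "placeholder" are not from the study.

total_sepsis_cases = 2500                    # "roughly 2,500 sepsis cases" in the Michigan data
detected_cases = total_sepsis_cases // 3     # the model missed about two-thirds of them

sensitivity = detected_cases / total_sepsis_cases
print(f"Sensitivity: {sensitivity:.0%}")     # ~33% of true cases flagged

# Positive predictive value: of all alerts fired, how many pointed at a real case?
total_alerts = 10_000                        # placeholder, chosen only to show the calculation
ppv = detected_cases / total_alerts
print(f"Alerts per true case caught: {total_alerts / detected_cases:.1f}")
print(f"Positive predictive value (with placeholder alert count): {ppv:.0%}")
```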

The All-Seeing Eyes of New York’s 15,000 Surveillance Cameras

A new video from human rights organization Amnesty International maps the locations of more than 15,000 cameras used by the New York Police Department, both for routine surveillance and in facial-recognition searches. A 3D model shows the 200-meter range of a camera, part of a sweeping dragnet capturing the unwitting movements of nearly half of the city’s residents, putting them at risk for misidentification. The group says it is the first to map the locations of that many cameras in the city.

Amnesty International and a team of volunteer researchers mapped cameras that can feed NYPD’s much criticized facial-recognition systems in three of the city’s five boroughs—Manhattan, Brooklyn, and the Bronx—finding 15,280 in total. Brooklyn is the most surveilled, with over 8,000 cameras.

A video by Amnesty International shows how New York City surveillance cameras work.

“You are never anonymous,” says Matt Mahmoudi, the AI researcher leading the project. The NYPD has used the cameras in almost 22,000 facial-recognition searches since 2017, according to NYPD documents obtained by the Surveillance Technology Oversight Project, a New York privacy group.

“Whether you’re attending a protest, walking to a particular neighborhood, or even just grocery shopping, your face can be tracked by facial-recognition technology using imagery from thousands of camera points across New York,” Mahmoudi says.

The cameras are often placed on top of buildings, on street lights, and at intersections. The city itself owns thousands of cameras; in addition, private businesses and homeowners often grant access to police.

Police can compare faces captured by these cameras to criminal databases to search for potential suspects. Earlier this year, the NYPD was required to disclose the details of its facial-recognition systems for public comment. But those disclosures didn’t include the number or location of cameras, or any details of how long data is retained or with whom data is shared.

The Amnesty International team found that the cameras are often clustered in majority nonwhite neighborhoods. NYC’s most surveilled neighborhood is East New York, Brooklyn, where the group found 577 cameras in less than 2 square miles. More than 90 percent of East New York’s residents are nonwhite, according to city data.

Facial-recognition systems often perform less accurately on darker-skinned people than lighter-skinned people. In 2016, Georgetown University researchers found that police departments across the country used facial recognition to identify nonwhite potential suspects more than their white counterparts.

In a statement, an NYPD spokesperson said the department never arrests anyone “solely on the basis of a facial-recognition match,” and only uses the tool to investigate “a suspect or suspects related to the investigation of a particular crime.”
 
“Where images are captured at or near a specific crime, comparison of the image of a suspect can be made against a database that includes only mug shots legally held in law enforcement records based on prior arrests,” the statement reads.

Amnesty International is releasing the map and accompanying videos as part of its #BantheScan campaign urging city officials to ban police use of the tool ahead of the city’s mayoral primary later this month. In May, Vice asked mayoral candidates if they’d support a ban on facial recognition. Most didn’t respond to the inquiry, but candidate Dianne Morales told the publication she supported a ban, and candidates Shaun Donovan and Andrew Yang suggested auditing for disparate impact before deciding on any regulation.


AI Could Soon Write Code Based on Ordinary Language

In recent years, researchers have used artificial intelligence to improve translation between programming languages or automatically fix problems. The AI system DrRepair, for example, has been shown to solve most issues that spawn error messages. But some researchers dream of the day when AI can write programs based on simple descriptions from non-experts.

On Tuesday, Microsoft and OpenAI shared plans to bring GPT-3, one of the world’s most advanced models for generating text, to programming based on natural language descriptions. This is the first commercial application of GPT-3 since Microsoft invested $1 billion in OpenAI in 2019 and gained exclusive licensing rights to the model.

“If you can describe what you want to do in natural language, GPT-3 will generate a list of the most relevant formulas for you to choose from,” said Microsoft CEO Satya Nadella in a keynote address at the company’s Build developer conference. “The code writes itself.”

Microsoft VP Charles Lamanna told WIRED the sophistication offered by GPT-3 can help people tackle complex challenges and can empower those with little coding experience. GPT-3 will translate natural language into Power Fx, a fairly simple programming language similar to Excel commands that Microsoft introduced in March.
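
As a hypothetical illustration of the kind of mapping described, the sketch below pairs plain-English requests with Excel-style formulas and assembles them into a few-shot prompt for a text-generation model. The example formulas, table names, and prompt format are invented for illustration and are not output from Microsoft’s feature.

```python
# Invented request/formula pairs illustrating natural-language-to-formula translation.
examples = [
    ("show customers whose subscription expires this year",
     "Filter(Customers, Year(ExpiryDate) = Year(Today()))"),
    ("count orders over 100 dollars",
     "CountRows(Filter(Orders, Amount > 100))"),
]

def build_prompt(request: str) -> str:
    """Assemble a few-shot prompt: prior request/formula pairs, then the new request.
    A large language model would be asked to complete the final 'Formula:' line."""
    shots = "\n".join(f"Request: {nl}\nFormula: {fx}" for nl, fx in examples)
    return f"{shots}\nRequest: {request}\nFormula:"

print(build_prompt("list employees hired after 2019"))
```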

This is the latest demonstration of applying AI to coding. Last year at Microsoft’s Build, OpenAI CEO Sam Altman demoed a language model fine-tuned with code from GitHub that automatically generates lines of Python code. As WIRED detailed last month, startups like SourceAI are also using GPT-3 to generate code. IBM last month showed how its Project CodeNet, with 14 million code samples from more than 50 programming languages, could reduce the time needed to update a program with millions of lines of Java code for an automotive company from one year to one month.

Microsoft’s new feature is based on a neural network architecture known as Transformer, used by big tech companies including Baidu, Google, Microsoft, Nvidia, and Salesforce to create large language models using text training data scraped from the web. These language models continually grow larger. The largest version of Google’s BERT, a language model released in 2018, had 340 million parameters, the building blocks of neural networks. GPT-3, which was released one year ago, has 175 billion parameters.

Such efforts have a long way to go, however. In one recent test, the best model succeeded only 14 percent of the time on introductory programming challenges compiled by a group of AI researchers.

Dumbed Down AI Rhetoric Harms Everyone

When the European Commission released its regulatory proposal on artificial intelligence last month, much of the US policy community celebrated. Their praise was at least partly grounded in truth: The world’s most powerful democratic states haven’t sufficiently regulated AI and other emerging tech, and the document marked something of a step forward. Mostly, though, the proposal and responses to it underscore democracies’ confusing rhetoric on AI.

Over the past decade, high-level stated goals about regulating AI have often conflicted with the specifics of regulatory proposals, and what end states should look like isn’t well articulated in either case. Coherent and meaningful progress on developing internationally attractive democratic AI regulation, even as that may vary from country to country, begins with resolving the discourse’s many contradictions and unsubtle characterizations.

The EU Commission has touted its proposal as an AI regulation landmark. Executive vice president Margrethe Vestager said upon its release, “We think that this is urgent. We are the first on this planet to suggest this legal framework.” Thierry Breton, another commissioner, said the proposals “aim to strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”

This is certainly better than many national governments, especially the US, stagnating on rules of the road for the companies, government agencies, and other institutions. AI is already widely used in the EU despite minimal oversight and accountability, whether for surveillance in Athens or operating buses in Málaga, Spain.

But to cast the EU’s regulation as “leading” simply because it’s first only masks the proposal’s many issues. This kind of rhetorical leap is one of the first challenges at hand with democratic AI strategy.

Of the many “specifics” in the 108-page proposal, its approach to regulating facial recognition is especially consequential. “The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement,” it reads, “is considered particularly intrusive in the rights and freedoms of the concerned persons,” as it can affect private life, “evoke a feeling of constant surveillance,” and “indirectly dissuade the exercise of the freedom of assembly and other fundamental rights.” At first glance, these words may signal alignment with the concerns of many activists and technology ethicists about the harms facial recognition can inflict on marginalized communities and the grave mass-surveillance risks it poses.

The commission then states, “The use of those systems for the purpose of law enforcement should therefore be prohibited.” However, it would allow exceptions in “three exhaustively listed and narrowly defined situations.” This is where the loopholes come into play.

The exceptions include situations that “involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localization, identification or prosecution of perpetrators or suspects of the criminal offenses.” This language, for all that the scenarios are described as “narrowly defined,” offers myriad justifications for law enforcement to deploy facial recognition as it wishes. Permitting its use in the “identification” of “perpetrators or suspects” of criminal offenses, for example, would allow precisely the kind of discriminatory uses of often racist and sexist facial-recognition algorithms that activists have long warned about.

The EU’s privacy watchdog, the European Data Protection Supervisor, quickly pounced on this. “A stricter approach is necessary given that remote biometric identification, where AI may contribute to unprecedented developments, presents extremely high risks of deep and non-democratic intrusion into individuals’ private lives,” the EDPS statement read. Sarah Chander from the nonprofit organization European Digital Rights described the proposal to The Verge as “a veneer of fundamental rights protection.” Others have noted how these exceptions mirror legislation in the US that on the surface appears to restrict facial-recognition use but in fact has many broad carve-outs.