Elon Musk Has Fired Twitter’s ‘Ethical AI’ Team

As more and more problems with AI have surfaced, including biases around race, gender, and age, many tech companies have installed “ethical AI” teams ostensibly dedicated to identifying and mitigating such issues.

Twitter’s META (Machine Learning Ethics, Transparency, and Accountability) unit, led by Rumman Chowdhury, went further than most in publishing details of problems with the company’s AI systems and in allowing outside researchers to probe its algorithms for new issues.

Last year, after Twitter users noticed that a photo-cropping algorithm seemed to favor white faces when choosing how to trim images, Twitter made the unusual decision to let its META unit publish details of the bias it uncovered. The group also launched one of the first-ever “bias bounty” contests, which let outside researchers test the algorithm for other problems. Last October, Chowdhury’s team also published details of unintentional political bias on Twitter, showing that right-leaning news sources were, in fact, promoted more than left-leaning ones.

Many outside researchers saw the layoffs as a blow, not just for Twitter but for efforts to improve AI. “What a tragedy,” Kate Starbird, an associate professor at the University of Washington who studies online disinformation, wrote on Twitter. 

“The META team was one of the only good case studies of a tech company running an AI ethics group that interacts with the public and academia with substantial credibility,” says Ali Alkhatib, director of the Center for Applied Data Ethics at the University of San Francisco.

Alkhatib says Chowdhury is incredibly well thought of within the AI ethics community and her team did genuinely valuable work holding Big Tech to account. “There aren’t many corporate ethics teams worth taking seriously,” he says. “This was one of the ones whose work I taught in classes.”

Mark Riedl, a professor studying AI at Georgia Tech, says the algorithms that Twitter and other social media giants use have a huge impact on people’s lives, and need to be studied. “Whether META had any impact inside Twitter is hard to discern from the outside, but the promise was there,” he says.

Riedl adds that letting outsiders probe Twitter’s algorithms was an important step toward more transparency and understanding of issues around AI. “They were becoming a watchdog that could help the rest of us understand how AI was affecting us,” he says. “The researchers at META had outstanding credentials with long histories of studying AI for social good.”

As for Musk’s idea of open-sourcing the Twitter algorithm, the reality would be far more complicated. Many different algorithms affect the way information is surfaced, and it’s challenging to understand them without the real-time data of tweets, views, and likes they are constantly being fed.

The idea that there is one algorithm with explicit political leaning might oversimplify a system that can harbor more insidious biases and problems. Uncovering these is precisely the kind of work that Twitter’s META group was doing. “There aren’t many groups that rigorously study their own algorithms’ biases and errors,” says Alkhatib at the University of San Francisco. “META did that.” And now, it doesn’t.

Automation Isn’t the Biggest Threat to US Factory Jobs

The number of American workers who quit their jobs during the pandemic—over a fifth of the workforce—may constitute one of the largest American labor movements in recent history. Workers demanded higher pay and better conditions, spurred by rising inflation and the pandemic realization that employers expected them to risk their lives for low wages, mediocre benefits, and few protections from abusive customers—often while corporate stock prices soared. At the same time, automation has become cheaper and smarter than ever. Robot adoption hit record highs in 2021. This wasn’t a surprise, given prior trends in robotics, but it was likely accelerated by pandemic-related worker shortages and Covid-19 safety requirements. Will robots automate away the jobs of entitled millennials who “don’t want to work,” or could this technology actually improve workers’ jobs and help firms attract more enthusiastic employees?

The answer depends on more than what’s technologically feasible, including what actually happens when a factory installs a new robot or a cashier aisle is replaced by a self-checkout booth—and what future possibilities await displaced workers and their children. So far, we know the gains from automation have proved notoriously unequal. A key component of 20th-century productivity growth came from replacing workers with technology, and economist Carl Benedikt Frey notes that American productivity grew by 400 percent from 1930 to 2000, while average leisure time only increased by 3 percent. (Since 1979, American labor productivity, or dollars created per worker, has increased eight times faster than workers’ hourly compensation.) During this period, technological luxuries became necessities and new types of jobs flourished—while the workers’ unions that used to ensure livable wages dissolved and less-educated workers fell further behind those with high school and college degrees. But the trend has differed across industrialized countries: From 1995 to 2013, America experienced a 1.3 percent gap between productivity growth and median wage growth, but in Germany the gap was only 0.2 percent.

Technology adoption will continue to increase, whether America can equitably distribute the technological benefits or not. So the question becomes, how much control do we actually have over automation? How much of this control is dependent on national or regional policies, and how much power might individual firms and workers have within their own workplaces? Is it inevitable that robots and artificial intelligence will take all of our jobs, and over what time frame? While some scholars believe that our fates are predetermined by the technologies themselves, emerging evidence indicates that we may have considerable influence over how such machines are employed within our factories and offices—if we can only figure out how to wield this power.

While 8 percent of German manufacturing workers left their jobs (voluntarily or involuntarily) between 1993 and 2009, 34 percent of US manufacturing workers left their jobs over the same period. Thanks to workplace bargaining and sectoral wage-setting, German manufacturing workers have better financial incentives to stay at their jobs; The Conference Board reports that the average German manufacturing worker earned $43.18 (plus $8.88 in benefits) per hour in 2016, while the average American manufacturing worker earned $39.03 with only $3.66 in benefits. Overall, Germans across the economy with a “medium-skill” high school or vocational certificate earned $24.31 per hour in 2016, while Americans with comparable education averaged $14.55 per hour. Two case studies illustrate the differences between American and German approaches to manufacturing workers and automation, from policies to supply chains to worker training systems.

In a town on the outskirts of the Black Forest in Baden-Württemberg, Germany, complete with winding cobblestone streets and peaked red rooftops, there’s a 220-person factory that’s spent decades as a global leader in safety-critical fabricated metal equipment for sites such as highway tunnels, airports, and nuclear reactors. It’s a wide, unassuming warehouse next to a few acres of golden mustard flowers. When I visited with my colleagues from the MIT Interactive Robotics Group and the Fraunhofer Institute for Manufacturing Engineering and Automation’s Future Work Lab (part of the diverse German government-supported Fraunhofer network for industrial research and development), the senior factory manager informed us that his workers’ attitudes, like the 14th-century church downtown, hadn’t changed much in his 25-year tenure at the factory. Teenagers still entered the firm as apprentices in metal fabrication through Germany’s dual work-study vocational system, and wages were high enough that most young people expected to stay at the factory and move up the ranks until retirement, earning a respectable living along the way. Smaller German manufacturers can also get government subsidies to help send their workers back to school to learn new skills that often translate into higher wages. This manager had worked closely with a nearby technical university to develop advanced welding certifications, and he was proud to rely on his “welding family” of local firms, technology integrators, welding trade associations, and educational institutions for support with new technology and training.

Our research team also visited a 30-person factory in urban Ohio that makes fabricated metal products for the automotive industry, not far from the empty warehouses and shuttered office buildings of downtown. This factory owner, a grandson of the firm’s founder, complained about losing his unskilled, minimum-wage technicians to any nearby job willing to offer a better salary. “We’re like a training company for big companies,” he said. He had given up on finding workers with the relevant training and resigned himself to finding unskilled workers who could hopefully be trained on the job. Around 65 percent of his firm’s business used to go to one automotive supplier, which outsourced its metal fabrication to China in 2009, forcing the Ohio firm to shrink down to a third of its prior workforce.

While the Baden-Württemberg factory commanded market share by selling specialized final products at premium prices, the Ohio factory made commodity components to sell to intermediaries, who then sold to powerful automotive firms. So the Ohio firm had to compete with low-wage, bulk producers in China, while the highly specialized German firm had few foreign or domestic competitors forcing it to shrink its skilled workforce or lower wages.

Welding robots have replaced some of the workers’ tasks in the two factories, but both are still actively hiring. The German firm’s first robot, purchased in 2018, was a new “collaborative” welding arm (with a friendly user interface) designed to be operated by workers with welding expertise, rather than by professional robot programmers who don’t know the intricacies of welding. Training welders to operate the robot isn’t a problem in Baden-Württemberg, where every new welder arrives with a vocational degree representing at least two years of education and hands-on apprenticeship in welding, metal fabrication, and 3D modeling. Several of the firm’s welders had already learned to operate the robot, assisted by prior training. And although the German firm’s manager was pleased to save labor costs, his main reason for acquiring the robot was to improve workers’ health and safety and minimize boring, repetitive welding sequences—so he could continue to attract skilled young workers who would stick around. Another German factory we visited had recently acquired a robot to tend a machine during the night shift so fewer workers would have to work overtime or come in at night.

TikTok Must Not Fail Ukrainians

Vietnam was known as the first televised war. The Iran Green Movement and the Arab Spring were called the first Twitter Revolutions. And now the Russian invasion of Ukraine is being dubbed the first TikTok War. As The Atlantic and others have pointed out, it’s not, either literally or figuratively: TikTok is merely the latest social media platform to see its profitable expansion turn into a starring role in a crisis.

But as its #ukraine and #украина posts near a combined 60 billion views, TikTok should learn from the failings of other platforms over the past decade—failings that have exacerbated the horrors of war, facilitated misinformation, and impeded access to justice for human rights crimes. TikTok should take steps now to better support the creators sharing evidence and firsthand experience, the viewers consuming it, and the people and institutions who rely on these videos for reliable information and human rights accountability.

First, TikTok can help people on the ground in Ukraine who want to galvanize action and be trusted as frontline witnesses. The company should provide targeted guidance directly to these vulnerable creators. This could include notifications or videos in their For You page that demonstrate (1) how to film in a way that is more verifiable and trustworthy to outside sources, (2) how to protect themselves and others in case a video shot in crisis becomes a tool of surveillance and outright targeting, and (3) how to share their footage without it getting taken down or made less visible as graphic content. TikTok should begin the process of incorporating emerging approaches (such as the C2PA standards) that allow creators to choose to show a video’s provenance. And it should offer easy ways, prominently available when recording, to protectively and not just aesthetically blur faces of vulnerable people.
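The core idea behind provenance standards like C2PA is to cryptographically bind footage to a claim about who captured it and when, so later viewers can verify both. The toy sketch below illustrates only that binding; real C2PA uses X.509 certificates and an embedded manifest format, and every key and name here is hypothetical.

```python
# Toy sketch of a provenance claim: hash the footage, sign a claim about
# its origin, and let anyone with the key verify both later.
# This is NOT the C2PA format; keys and field names are made up.
import hashlib
import hmac
import json

CREATOR_KEY = b"creator-device-secret"  # hypothetical per-device signing key


def make_claim(video_bytes, creator, captured_at):
    """Build a signed provenance claim for a piece of footage."""
    claim = {
        "content_sha256": hashlib.sha256(video_bytes).hexdigest(),
        "creator": creator,
        "captured_at": captured_at,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(CREATOR_KEY, payload, hashlib.sha256).hexdigest()
    return claim


def verify_claim(video_bytes, claim):
    """Check that the signature is valid and the footage matches the claim."""
    body = {k: v for k, v in claim.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(CREATOR_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, claim["signature"]):
        return False  # claim was tampered with or signed with a different key
    return body["content_sha256"] == hashlib.sha256(video_bytes).hexdigest()


footage = b"\x00\x01fake-video-bytes"
claim = make_claim(footage, "@witness_account", "2022-03-01T12:00:00Z")
print(verify_claim(footage, claim))            # True: footage matches the claim
print(verify_claim(footage + b"edit", claim))  # False: footage was altered
```

In a real deployment the signature would come from a public-key certificate chain rather than a shared secret, so a viewer could verify a claim without being able to forge one.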

TikTok should also be investing in robust, localized, contextual content moderation and appeals routing for this conflict and the next crisis. Social media creators are at the mercy of capricious algorithms that cannot navigate the difference between harmful violent content and victims of war sharing their experiences. If a clip or account is taken down or suspended—often because it breaches a rule the user never knew about—it’s unlikely they’ll be able to access a rapid or transparent appeals process. This is particularly true if they live outside North America and Western Europe. The company should bolster its content moderation in Ukraine immediately.

The platform is poorly designed for accurate information but brilliantly designed for quick human engagement. The instant fame that the For You page can grant has brought the everyday life and dark humor of young Ukrainians like Valeria Shashenok (@valerissh) from the city of Chernihiv into people’s feeds globally. Human rights activists know that one of the best ways to engage people in meaningful witnessing, and to counter the natural impulse to look away, is to let them experience others’ realities in a personal, human way. Undoubtedly some of this insight into real people’s lives in Ukraine is moving people toward greater solidarity. Yet the more decontextualized the suffering of others is—and the For You page also encourages flitting between disparate stories—the more that suffering is experienced as spectacle. This risks a turn toward narcissistic self-validation or worse: trolling of people at their most vulnerable.

And that’s assuming the content we’re viewing is shared in good faith. The ability to remix audio, along with TikTok’s intuitive ease in editing, combining, and reusing existing footage, makes the platform especially vulnerable to misinformation and disinformation. Unless a deceptive video is spotted by an automated match against a known fake, labeled as state-affiliated media, or identified by a fact-checker as incorrect or by TikTok’s teams as part of a coordinated influence campaign, it circulates without any guidance or tools to help viewers exercise basic media literacy.

TikTok should do more to ensure that it promptly identifies, reviews, and labels these fakes for viewers, and takes them down or removes them from recommendations. It should ramp up fact-checking capacity on the platform and address how its business model, and the algorithm that serves it, continues to promote deceptive videos with high engagement. We, the people viewing the content, also need better direct support. One of the first steps professional fact-checkers take to verify footage is a reverse image search, to see whether a photo or video existed before the date it claims to have been made, or comes from a different location or event than claimed. As the TikTok misinfo expert Abbie Richards has pointed out, TikTok doesn’t even indicate the date a video was posted when it appears in the For You feed. Like other platforms, TikTok also doesn’t offer an easy in-platform reverse image or video search, or in-feed indications of previous video dupes. It’s past time to make it simpler to check whether a video in your feed comes from a different time and place than it claims—for example, with an intuitive reverse image/video search or a simple one-click provenance trail for videos created in-platform.
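The “previous video dupes” check described above is commonly built on perceptual hashing: two clips that look alike produce hashes that differ in only a few bits, even after recompression. The sketch below shows a minimal average-hash (aHash) over a single downscaled frame; TikTok’s actual systems are not public, and this toy version is an assumption about how such matching could work, not a description of it.

```python
# Minimal perceptual "average hash" sketch for near-duplicate detection.
# A real system would hash many frames per video; this hashes one 8x8 frame.


def average_hash(pixels):
    """Compute a 64-bit hash from an 8x8 grid of brightness values (0-255).

    Each bit is 1 if that pixel is brighter than the frame's mean, else 0,
    so small uniform brightness shifts (e.g. recompression) barely change it.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")


def looks_like_dupe(h1, h2, threshold=10):
    """Hashes differing in only a few bits suggest near-duplicate frames."""
    return hamming_distance(h1, h2) <= threshold


# A toy frame, a lightly re-encoded copy, and visually unrelated content.
frame = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
recompressed = [[min(255, p + 3) for p in row] for row in frame]
unrelated = [[255 - p for p in row] for row in frame]

h_orig = average_hash(frame)
print(looks_like_dupe(h_orig, average_hash(recompressed)))  # True
print(looks_like_dupe(h_orig, average_hash(unrelated)))     # False
```

An in-feed dupe indicator could compare a new upload’s hashes against an index of previously seen videos and surface the earliest match’s date, which is exactly the context fact-checkers currently reconstruct by hand.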

No one visits the “Help Center.” Tools need to be accompanied by guidance in videos that appear on people’s For You page. Viewers need to build the media literacy muscles to make good judgments about the footage they are being exposed to. This includes sharing principles like SIFT, as well as tips specific to the ways TikTok works, such as what to look for on TikTok’s extremely popular livestreams: For example, check the comments and look at the creator’s previous content, and on any video, always check whether the audio is original (as both Richards and Marcus Bösch, another TikTok misinfo expert, have suggested). Reliable news sources also need to be part of the feed, as TikTok has increasingly begun to do.

TikTok also demonstrates a problem that arises when content recommender algorithms intersect with the good media literacy practice of “lateral reading.” Perversely, the more attention you pay to a suspicious video—and the more often you return to it after checking other sources—the more the TikTok algorithm feeds you similar content and promotes that potentially false video to other people.