Admit It: The Facebook Oversight Board Is Kind of Working

Judging from the press releases filling my inbox and the tweets lighting up my timeline, no one is happy with Facebook right now. On Friday, the company issued its response to the Facebook Oversight Board’s recommendations on the indefinite ban of Donald Trump. We learned that Trump’s account is now frozen for precisely two years from his original January 7 suspension date, at which point Facebook will reassess the risks of letting him back on. The response also includes a number of other policy changes. Opinions on the announcement range from calling it a pointless bit of “accountability theater” to suggesting that it’s cowardly and irresponsible. Republicans are, of course, outraged that Trump hasn’t been reinstated.

I confess to finding myself in a different camp. The Oversight Board is performing a valuable, though very limited, function, and the Trump situation illustrates why.

When the board first published its ruling last month, it issued both a binding command—Facebook must articulate a specific action on Donald Trump’s account and could not continue an indefinite suspension—and nonbinding recommendations, most notably that the platform abandon its policy of treating statements by politicians as inherently “newsworthy” and thus exempt from the rules that apply to everyone else. As I wrote at the time, Facebook’s response to the nonbinding part would probably prove more important. It would apply more broadly than to just Trump’s account, and it would show whether the company is willing to follow the Oversight Board’s advice even when it doesn’t have to.

Now we know that the answer to that last question is yes. In its announcement on Friday, Facebook says it is committed to fully following 15 of the 19 nonbinding recommendations. Of the remaining four, it is rejecting one, partially following another, and doing more research on two.

The most interesting commitments are around the “newsworthiness allowance.” Facebook says it will keep the exception in place, meaning it will still allow some content that violates its Community Standards to stay up if it is “newsworthy or important to the public interest.” The difference is that the platform will no longer treat posts by politicians as more inherently newsworthy than posts by anyone else. It is also increasing transparency by creating a page explaining the rule; beginning next year, it says it will publish an explanation each time the exception is applied to content that otherwise would have been taken down.

Let this sink in for a moment: Facebook took detailed feedback from a group of thoughtful critics, and Mark Zuckerberg signed off on a concrete policy change, plus some increased transparency. This is progress!

Now, please don’t confuse this for a complete endorsement. There is plenty to criticize about Facebook’s announcement. On the Trump ban, while the company has now articulated more detailed policies around “heightened penalties for public figures during times of civil unrest and ongoing violence,” the fact that it came up with a two-year maximum suspension seems suspiciously tailored to potentially allow Trump back on the platform just when he’s getting ready to start running for president again. And Facebook’s new commitments to transparency leave much to be desired. Its new explanation of the newsworthiness allowance, for example, provides zero information about how Facebook defines “newsworthy” in the first place—a pretty important detail. Perhaps the case-by-case explanations beginning next year will shed more light, but until then the policy is about as transparent as a fogged-over bathroom window.

Indeed, as with any announcement from Facebook, this one will be impossible to evaluate fully until we see how the company follows through in practice. In several cases, Facebook claims that it’s already following the Oversight Board’s recommendations. This can strain credulity. For instance, in response to a suggestion that it rely on regional linguistic and political expertise in enforcing policies around the world, the company declares, “We ensure that content reviewers are supported by teams with regional and linguistic expertise, including the context in which the speech is presented.” And yet a Reuters investigation published this week found that posts promoting gay conversion therapy, which Facebook’s rules prohibit, continue to run rampant in Arab countries, “where practitioners post to millions of followers through verified accounts.” As the content moderation scholar Evelyn Douek puts it, with many of its statements “Facebook gives itself a gold star, but they’re really borderline passes at best.”

The All-Seeing Eyes of New York’s 15,000 Surveillance Cameras

A new video from the human rights organization Amnesty International maps the locations of more than 15,000 cameras used by the New York Police Department, both for routine surveillance and in facial-recognition searches. A 3D model shows the roughly 200-meter range of a single camera; together, the cameras form a sweeping dragnet that captures the unwitting movements of nearly half of the city’s residents and puts them at risk of misidentification. The group says it is the first to map the locations of that many cameras in the city.

Amnesty International and a team of volunteer researchers mapped cameras that can feed the NYPD’s much-criticized facial-recognition systems in three of the city’s five boroughs—Manhattan, Brooklyn, and the Bronx—finding 15,280 in total. Brooklyn is the most surveilled, with over 8,000 cameras.
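Amnesty’s model treats each camera as covering a radius of roughly 200 meters. As a minimal sketch of that idea (the coordinates below are invented, and a flat 200-meter radius ignores sight lines and obstructions), here is how one might check which mapped cameras could plausibly capture a given street corner:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in meters."""
    phi1, phi2 = radians(lat1), radians(lat2)
    a = sin(radians(lat2 - lat1) / 2) ** 2 + \
        cos(phi1) * cos(phi2) * sin(radians(lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def cameras_in_range(point, cameras, radius_m=200):
    """Return the cameras whose assumed range covers the given point."""
    lat, lon = point
    return [c for c in cameras if haversine_m(lat, lon, c[0], c[1]) <= radius_m]

# Invented example: two camera locations in central Brooklyn
cameras = [(40.6782, -73.9442), (40.6795, -73.9410)]
print(cameras_in_range((40.6785, -73.9440), cameras))  # -> [(40.6782, -73.9442)]
```

Run against all 15,280 mapped points, a check like this is roughly what it takes to estimate how much of a neighborhood sits inside at least one camera’s range.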

A video by Amnesty International shows how New York City surveillance cameras work.

“You are never anonymous,” says Matt Mahmoudi, the AI researcher leading the project. The NYPD has used the cameras in almost 22,000 facial-recognition searches since 2017, according to NYPD documents obtained by the Surveillance Technology Oversight Project, a New York privacy group.

“Whether you’re attending a protest, walking to a particular neighborhood, or even just grocery shopping, your face can be tracked by facial-recognition technology using imagery from thousands of camera points across New York,” Mahmoudi says.

The cameras are often placed on top of buildings, on street lights, and at intersections. The city itself owns thousands of cameras; in addition, private businesses and homeowners often grant access to police.

Police can compare faces captured by these cameras to criminal databases to search for potential suspects. Earlier this year, the NYPD was required to disclose the details of its facial-recognition systems for public comment. But those disclosures didn’t include the number or location of cameras, or any details of how long data is retained or with whom data is shared.
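The NYPD has not published the mechanics of its matching pipeline. As a generic sketch of how one-to-many face search is commonly built (the 128-dimensional embeddings, the gallery format, and the 0.6 threshold here are all illustrative assumptions, not details from the department), a “probe” face is encoded as a vector and ranked against precomputed vectors for every mugshot:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_gallery(probe: np.ndarray, gallery: dict, threshold: float = 0.6):
    """Rank gallery embeddings by similarity to a probe face.

    `gallery` maps a record ID to a precomputed embedding. Only entries
    scoring above `threshold` come back as candidate matches.
    """
    candidates = [(rid, cosine_similarity(probe, emb))
                  for rid, emb in gallery.items()]
    return sorted([c for c in candidates if c[1] >= threshold],
                  key=lambda c: c[1], reverse=True)

# Synthetic demo: 1,000 random "mugshot" embeddings plus one near-duplicate
rng = np.random.default_rng(0)
gallery = {f"record_{i}": rng.normal(size=128) for i in range(1_000)}
probe = rng.normal(size=128)
gallery["record_42"] = probe + 0.1 * rng.normal(size=128)  # the true match
print(search_gallery(probe, gallery)[:3])  # record_42 scores near 1.0
```

Where that threshold is set matters: lower it and more innocent people surface as “candidates,” which is one reason the accuracy disparities described below translate directly into misidentification risk.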

The Amnesty International team found that the cameras are often clustered in majority nonwhite neighborhoods. NYC’s most surveilled neighborhood is East New York, Brooklyn, where the group found 577 cameras in less than 2 square miles. More than 90 percent of East New York’s residents are nonwhite, according to city data.

Facial-recognition systems often perform less accurately on darker-skinned people than on lighter-skinned people. In 2016, Georgetown University researchers found that police departments across the country used facial recognition to identify nonwhite potential suspects more often than their white counterparts.

In a statement, an NYPD spokesperson said the department never arrests anyone “solely on the basis of a facial-recognition match,” and only uses the tool to investigate “a suspect or suspects related to the investigation of a particular crime.”
 
“Where images are captured at or near a specific crime, comparison of the image of a suspect can be made against a database that includes only mug shots legally held in law enforcement records based on prior arrests,” the statement reads.

Amnesty International is releasing the map and accompanying videos as part of its #BantheScan campaign urging city officials to ban police use of the tool ahead of the city’s mayoral primary later this month. In May, Vice asked mayoral candidates if they’d support a ban on facial recognition. While most didn’t respond to the inquiry, candidate Dianne Morales told the publication she supported a ban, while candidates Shaun Donovan and Andrew Yang suggested auditing for disparate impact before deciding on any regulation.



AI Could Soon Write Code Based on Ordinary Language

In recent years, researchers have used artificial intelligence to improve translation between programming languages or automatically fix problems. The AI system DrRepair, for example, has been shown to solve most issues that spawn error messages. But some researchers dream of the day when AI can write programs based on simple descriptions from non-experts.

On Tuesday, Microsoft and OpenAI shared plans to bring GPT-3, one of the world’s most advanced models for generating text, to programming based on natural language descriptions. It is the first commercial application of GPT-3 since Microsoft invested $1 billion in OpenAI last year and gained exclusive licensing rights to the model.

“If you can describe what you want to do in natural language, GPT-3 will generate a list of the most relevant formulas for you to choose from,” said Microsoft CEO Satya Nadella in a keynote address at the company’s Build developer conference. “The code writes itself.”


Microsoft VP Charles Lamanna told WIRED the sophistication offered by GPT-3 can help people tackle complex challenges and empower those with little coding experience. GPT-3 will translate natural language into Power Fx, a fairly simple, Excel-like programming language that Microsoft introduced in March.
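Microsoft hasn’t detailed how the Power Apps feature calls GPT-3 under the hood. As a rough sketch of the general pattern, few-shot prompting a completions endpoint, here is what natural-language-to-formula generation can look like with the 2021-era openai Python client (the engine name, the prompt, and the example formulas are illustrative assumptions, not Microsoft’s implementation):

```python
import openai  # 2021-era completions API; openai.api_key must be set first

# Few-shot prompt: show the model a couple of English -> Power Fx pairs,
# then ask it to complete the pattern for a new request.
PROMPT = """English: Show customers from Canada sorted by purchase amount.
Formula: SortByColumns(Filter(Customers, Country = "Canada"), "PurchaseAmount", Descending)

English: Find products whose name contains "widget".
Formula: Filter(Products, "widget" in Name)

English: {request}
Formula:"""

def suggest_formula(request: str) -> str:
    """Ask a GPT-3 completions endpoint for a candidate Power Fx formula."""
    resp = openai.Completion.create(
        engine="davinci",        # engine name is an assumption
        prompt=PROMPT.format(request=request),
        max_tokens=64,
        temperature=0,           # keep output deterministic for formulas
        stop=["\n"],             # stop at the end of the formula line
    )
    return resp.choices[0].text.strip()

print(suggest_formula("Show open orders placed in the last 30 days."))
```

Nadella’s framing, “a list of the most relevant formulas for you to choose from,” suggests the production feature surfaces several ranked candidates rather than a single completion, with a human picking the one to run.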

This is the latest demonstration of applying AI to coding. Last year at Microsoft’s Build, OpenAI CEO Sam Altman demoed a language model fine-tuned with code from GitHub that automatically generates lines of Python code. As WIRED detailed last month, startups like SourceAI are also using GPT-3 to generate code. IBM last month showed how its Project CodeNet, with 14 million code samples from more than 50 programming languages, could reduce the time needed to update a program with millions of lines of Java code for an automotive company from one year to one month.

Microsoft’s new feature is based on a neural network architecture known as the Transformer, used by big tech companies including Baidu, Google, Microsoft, Nvidia, and Salesforce to create large language models using text training data scraped from the web. These language models continually grow larger. The largest version of Google’s BERT, a language model released in 2018, had 340 million parameters, the adjustable values a neural network learns during training. GPT-3, which was released one year ago, has 175 billion parameters.
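Those headline figures follow almost mechanically from each model’s depth and width. A common back-of-envelope count for a transformer is roughly 12 × layers × width² weights in the attention and feed-forward blocks, plus vocabulary × width for the embedding table. Plugging in the published configurations recovers both numbers; a quick sketch (ignoring biases, layer norms, and position embeddings):

```python
def transformer_params(n_layers: int, d_model: int, vocab: int) -> int:
    """Back-of-envelope parameter count for a standard transformer.

    Each block holds ~4*d^2 weights for attention (query, key, value,
    and output projections) and ~8*d^2 for the feed-forward layers
    (expansion factor 4), i.e. ~12*d^2 per layer, plus vocab*d for
    the token embeddings.
    """
    return n_layers * 12 * d_model ** 2 + vocab * d_model

# Published configurations: (layers, width, vocabulary size)
print(f"BERT-large: {transformer_params(24, 1_024, 30_522) / 1e6:.0f}M")   # ~333M, vs. 340M published
print(f"GPT-3:      {transformer_params(96, 12_288, 50_257) / 1e9:.0f}B")  # ~175B
```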

Such efforts have a long way to go, however. In one recent test, the best model succeeded only 14 percent of the time on introductory programming challenges compiled by a group of AI researchers.

A New Antitrust Case Cuts to the Core of Amazon’s Identity

“I founded Amazon 26 years ago with the long-term mission of making it Earth’s most customer-centric company,” Jeff Bezos testified before the House Antitrust Subcommittee last summer. “Not every business takes this customer-first approach, but we do, and it’s our greatest strength.”

Bezos’ obsession with customer satisfaction is at the center of Amazon’s self-mythology. Every move the company makes, in this account, is designed with only one goal in mind: making the customer happy. If Amazon has become an economic juggernaut, the king of ecommerce, that’s not because of any unfair practices or sharp elbows; it’s simply because customers love it so much.

The antitrust lawsuit filed against Amazon on Tuesday directly challenges that narrative. The suit, brought by Karl Racine, the Washington, DC, attorney general, focuses on Amazon’s use of a so-called most-favored-nation clause in its contracts with third-party sellers, who account for most of the sales volume on Amazon. A most-favored-nation clause requires sellers not to offer their products at a lower price on any other website, even their own. According to the lawsuit, this harms consumers by artificially inflating prices across the entire internet, while preventing other ecommerce sites from competing against Amazon on price. “I filed this antitrust lawsuit to put an end to Amazon’s ability to control prices across the online retail market,” Racine said in a press conference announcing the case.

For a long time, Amazon openly did what DC is alleging; its “price parity provision” explicitly restricted third-party sellers from offering lower prices on other sites. It stopped in Europe in 2013, after competition authorities in the UK and Germany began investigating it. In the US, however, the provision lasted longer, until Senator Richard Blumenthal wrote a letter to antitrust agencies in 2018 suggesting Amazon was violating antitrust law. A few months later, in early 2019, Amazon dropped price parity.

But that wasn’t the end of the story. The DC lawsuit alleges that Amazon simply substituted a new policy that uses different language to accomplish the same result as the old rule. Amazon’s Marketplace Fair Pricing Policy informs third-party sellers that they can be punished or suspended for a variety of offenses, including “setting a price on a product or service that is significantly higher than recent prices offered on or off Amazon.” This rule can protect consumers when used to prevent price-gouging for scarce products, as happened with face masks in the early days of the pandemic. But it can also be used to inflate prices for items that sellers would prefer to offer more cheaply. The key phrase is “off Amazon.” In other words, Amazon reserves the right to cut off sellers if they list their products more cheaply on another website—just as it did under the old price parity provision. According to the final report filed by the House Antitrust Subcommittee last year, based on testimony from third-party sellers, the new policy “has the same effect of blocking sellers from offering lower prices to consumers on other retail sites.”

The main form this price discipline takes, according to sellers who have spoken out against Amazon either publicly or in anonymous testimony, is manipulation of access to the Buy Box—the Add to Cart and Buy Now buttons at the top right of an Amazon product listing. When you go to buy something, there are often many sellers competing to make the sale. Only one can “win the Buy Box,” meaning they get the sale when you click one of those buttons. Because most customers don’t scroll down to see the other sellers offering a product, winning the Buy Box is crucial for anyone trying to make a living by selling on Amazon. As James Thomson, a former Amazon employee and a partner at Buy Box Experts, a brand consultancy for Amazon sellers, told me in 2019, “If you can’t earn the Buy Box, for all intents and purposes, you’re not going to earn the sale.”

Jason Boyce, another longtime Amazon seller turned consultant, explained to me how this works. He and his partners were excited when the last third-party seller contract they signed with Amazon, to sell sporting goods on the site, didn’t include the price parity provision. “We thought, ‘This is great! We can offer discounts on Walmart, and Sears, and wherever else,’” he said. But then something odd happened. Boyce (who spoke with House investigators as part of the antitrust inquiry) noticed that once his company lowered prices on other sites, sales on Amazon started tanking. “We went to the listing, and the Add to Cart button was gone, the Buy Now button was gone. Instead, there was a gray box labeled ‘See All Buying Options.’ You could still buy the product, but it was an extra click. Now, an extra click on Amazon is an eternity—they’re all about immediate gratification.” Moreover, his company’s ad spending plummeted, which he realized was because Amazon doesn’t show users ads for products without a Buy Box. “So what did we do? We went back and raised our prices everywhere else, and within 24 hours everything came back. Traffic improved, clicks improved, and sales came back.”
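Amazon’s actual ranking code is not public, so any rendering of it is guesswork. But the behavior Boyce describes is easy to state as a rule, and the sketch below exists only to make the alleged logic concrete, not as a claim about Amazon’s implementation:

```python
from typing import List

def buy_box_eligible(amazon_price: float,
                     off_amazon_prices: List[float],
                     tolerance: float = 0.01) -> bool:
    """Illustrative sketch of the cross-site price check sellers describe.

    If the same item is listed meaningfully cheaper anywhere off Amazon,
    the offer loses Buy Box placement. The 1 percent tolerance is an
    invented parameter, not a known Amazon value.
    """
    cheapest_elsewhere = min(off_amazon_prices, default=amazon_price)
    return amazon_price <= cheapest_elsewhere * (1 + tolerance)

# Boyce's experience, in these terms: discount the item on Walmart or
# Sears and the Amazon offer fails the check; raise the off-Amazon price
# back up and eligibility (and with it traffic, ads, and sales) returns.
print(buy_box_eligible(29.99, [24.99]))  # False: cheaper elsewhere
print(buy_box_eligible(29.99, [29.99]))  # True: parity restored
```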

Florida’s New Social Media Law Will Be Laughed Out of Court

Florida’s new social media legislation is a double landmark: It’s the first state law regulating online content moderation, and it will almost certainly become the first such law to be struck down in court.

On Monday, Governor Ron DeSantis signed into law the Stop Social Media Censorship Act, which greatly limits large social media platforms’ ability to moderate or restrict user content. The bill is a legislative distillation of Republican anger over recent episodes of supposed anti-conservative bias, like Twitter and Facebook shutting down Donald Trump’s account and suppressing the spread of the infamous New York Post Hunter Biden story. Most notably, it imposes heavy fines—up to $250,000 per day—on any platform that deactivates the account of a candidate for political office, and it prohibits platforms from taking action against “journalistic enterprises.”

It is very hard to imagine any of these provisions ever being enforced, however.

“This is so obviously unconstitutional, you wouldn’t even put it on an exam,” said A. Michael Froomkin, a law professor at the University of Miami. Under well-established Supreme Court precedent, the First Amendment prohibits private entities from being forced to publish or broadcast someone else’s speech. Prohibiting “deplatforming” of political candidates would likely be construed as an unconstitutional must-carry provision. “This law looks like a political freebie,” Froomkin said. “You get to pander, and nothing bad happens, because there’s no chance this will survive in court.” (The governor’s office didn’t respond to a request for comment.)

The Constitution isn’t the only problem for the new law. It also conflicts with Section 230 of the Communications Decency Act, a federal law that generally holds online platforms immune from liability over their content moderation decisions. Section 230 has become an object of resentment on both sides of the political aisle, but for different reasons. Liberals tend to think the law lets online platforms get away with leaving too much harmful material up. Conservative critics, on the other hand, argue that it lets them get away with taking too much stuff down—and, worse, that it allows them to censor conservatives under the guise of content moderation.

Regardless of the merits of these critiques, the fact is that Section 230 remains in effect, and, like many federal statutes, it explicitly preempts any state law that conflicts with it. That is likely to make any attempt to enforce the Stop Social Media Censorship Act an expensive waste of time. Suppose a candidate for office in Florida repeatedly posts statements that violate Facebook’s policies against vaccine misinformation, or racism, and Facebook bans their account. (Like, say, Laura Loomer, a self-described “proud Islamophobe” who ran for Congress last year in Florida after being banned from Facebook and many other platforms.) If she sues under the new law, she will be seeking to hold Facebook liable for a decision to remove user content. But Section 230 says that platforms are free “to restrict access to or availability of material” as long as they do so in good faith. (Facebook and Twitter declined to comment on whether they plan to comply with the Florida law or fight it in court. YouTube didn’t respond to a request for comment.)

Section 230 will probably preempt other aspects of the Florida law that are less politically controversial than the prohibition on deplatforming politicians. For example, the Florida statute requires platforms to set up elaborate due process rights for users, including giving them detailed information about why a certain piece of content was taken down, and to let users opt into a strictly chronological newsfeed with no algorithmic curation. Both of these ideas have common-sense appeal among tech reformers across the political spectrum, and versions of them are included in proposed federal legislation. But enforcing those provisions as part of a state law in court would most likely run afoul of Section 230, because it would boil down to holding a platform liable for hosting, or not hosting, a piece of user-generated content. Florida’s legislature has no power to change that.