California’s Governor Gavin Newsom Vetoes State Ban on Driverless Trucks

California governor Gavin Newsom worked late last night, vetoing a bill that would have banned self-driving trucks without a human aboard from state roads until the early 2030s. State lawmakers had passed the bill by wide margins, backed by unions that argued autonomous trucks are a safety risk and threaten jobs.

The bill would have seen California, which in 2012 became one of the first states to clear a regulatory path for autonomous vehicles, turn against self-driving technology just as driverless taxis are starting to serve the public. Autonomous truck developers now hope the freight-heavy state—home to two of the largest US ports—will one day become a critical link in an autonomous trucking network spanning the US.

Companies developing the technology say it will save freight shippers money by enabling trucks to run loads on highways 24 hours a day, and by eliminating the dangers of distracted human driving, which could bring down insurance costs.

The Teamsters union, which represents tens of thousands of US truck drivers, mechanics, and other freight workers, organized a mass caravan to Sacramento this week to urge Newsom to sign AB 316, which would have required a safety driver on self-driving trucks weighing more than 10,000 pounds through at least the end of the decade.

In a letter released yesterday, Newsom wrote that the bill is “unnecessary,” because California already has two agencies, the Department of Motor Vehicles and the California Highway Patrol, overseeing and creating regulations for the new technology. State agencies are in the midst of creating specific rules for heavy-duty autonomous vehicles, including trucks.

Newsom’s veto won’t change much in the short term. Because state rules are still in development, driverless trucks are not permitted to test on public roads in California. Newsom wrote in his letter that draft regulations “are expected to be released for public comment in the coming months.”

Most of the US companies working on autonomous trucks operate on highways in the southeast and west, especially Texas, where dry weather and a come-as-y’all-are approach to driverless tech regulations make conditions ideal. None of the companies testing autonomous trucks in the US have removed safety drivers, who are trained to take over when something goes wrong, from behind the wheels of their big rigs. (The controversial company TuSimple says it has completed a handful of fully driverless truck demonstrations in the US; it has since paused its US operations.)

Labor advocates argued the California ban on driverless trucks was needed to protect state residents from tech that’s not ready for prime time. “I’ve blown a perfectly good tire driving the speed limit in a truck and I had to cross three lanes trying to get it under control,” says Mike Di Bene, a truck driver of 30 years and member of the Teamsters. He’s doubtful autonomous trucks can yet handle such situations.

The Teamsters have also argued that driverless truck tech threatens truck drivers’ jobs. In a series of tweets posted Saturday morning, Teamsters president Sean O’Brien wrote that Newsom “doesn’t have the guts to face working people” and would “rather give our jobs away in the dead of night.”

This Is the True Scale of New York’s Airbnb Apocalypse

The number of short-term Airbnbs available in New York City has dropped 70 percent after the city began enforcing a new law requiring short-term rental operators to register their homes. But despite the new requirements, there are still thousands of listings that could be unregistered.

The drop, recorded between August 4 and September 5, the day New York City began enforcing the new law, represents the disappearance of some 15,000 short-term listings from Airbnb. The figures are based on data provided by Inside Airbnb, a housing advocacy group that tracks listings on the platform.

In August, there were some 22,000 short-term listings on Airbnb in New York City. As of September 5, there were 6,841. But it seems some short-term listings have been switched to long-term listings, which can only be booked for 30 days or more. The number of long-term rentals jumped by about 11,000 to a total of 32,612 from August 4 to September 5. These listings do not need to be registered under the new law.

Additionally, Inside Airbnb estimates that around 4,000 listings have disappeared from Airbnb entirely since the law took effect, removed from the platform rather than converted to long-term stays.
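A quick back-of-the-envelope check, using the rounded figures Inside Airbnb reports above (so the exact totals are approximate), shows how the 70 percent drop, the long-term conversions, and the roughly 4,000 vanished listings fit together:

```python
# Rough check of the Inside Airbnb figures cited above; the 22,000 and
# 11,000 values are rounded in the article, so results are approximate.
short_term_aug = 22_000      # short-term listings on August 4 (approximate)
short_term_sep = 6_841       # short-term listings on September 5
long_term_increase = 11_000  # growth in 30-day-plus listings (approximate)

drop = short_term_aug - short_term_sep     # ~15,000 short-term listings gone
drop_pct = drop / short_term_aug * 100     # ~69-70 percent
gone_entirely = drop - long_term_increase  # ~4,000 not converted to long-term

print(f"Short-term listings removed: ~{drop:,} ({drop_pct:.0f}% drop)")
print(f"Listings gone from Airbnb entirely: ~{gone_entirely:,}")
```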

That uptick in long-term rentals may show that the law is working, by pushing hosts to offer apartments to those staying in New York City for 30 days or more. The new registration requirement is meant to enforce older rules on short-term rentals in the city, and it comes at a time when New Yorkers face high rents and housing insecurity. Vacation rentals are also known for bringing noise, trash, and danger to residential neighborhoods and buildings.

At a glance, it’s impossible to tell if a listing on Airbnb is registered with the city. Inside Airbnb found that only 28 short-term rentals in New York mentioned having a registration number from the city in their listing, but it’s not immediately clear if those numbers are legitimate, and the number of short-term rentals Inside Airbnb found far outpaces the number the city has registered.

Ultimately, hosts will need to display registration numbers on their listings. New York City has received 3,829 registration applications, reviewed 896 applications, and granted 290 as of Monday, according to Christian Klossner, executive director of the Mayor’s Office of Special Enforcement, which oversees the registration process. The office has denied 90 and returned another 516 seeking corrections or more information.

Airbnb says it began blocking new short-term reservations for unregistered rentals as early as August 14, but that it did not automatically cancel existing stays in unregistered apartments scheduled before December 1, to avoid disrupting guests’ travel plans. Expedia Group, the parent company of Vrbo, is working with “the city and our partners to meet the law’s requirements and minimize disruption to the city’s travelers and tourism economy,” says Richard de Sam Lazaro, the company’s senior director of government and corporate affairs. Booking.com did not respond to a request for comment.

But amid the chaotic rollout of the new law, a number of listings appear to be falling through the cracks. A search on Airbnb for apartments in New York for more than two guests returns several results that may break the new law. Entire homes are still available for booking, some with enough space for 12 or 14 guests. One, a townhouse in Harlem, has a backyard with a firepit, a living room with a pool table, and five bedrooms, some with multiple beds next to each other, set up hotel-style. It’s listed for around $1,400 per night.

Britain Admits Defeat in Controversial Online Safety Bill

Tech companies and privacy activists are claiming victory after an eleventh-hour concession by the British government in a long-running battle over end-to-end encryption.

The so-called “spy clause” in the UK’s Online Safety Bill, which experts argued would have made end-to-end encryption all but impossible in the country, will no longer be enforced after the government admitted that the technology needed to securely scan encrypted messages for signs of child sexual abuse material, or CSAM, without compromising users’ privacy doesn’t yet exist. Secure messaging services, including WhatsApp and Signal, had threatened to pull out of the UK if the bill passed.

“It’s absolutely a victory,” says Meredith Whittaker, president of the Signal Foundation, which operates the Signal messaging service. Whittaker has been a staunch opponent of the bill, and has been meeting with activists and lobbying for the legislation to be changed. “It commits to not using broken tech or broken techniques to undermine end-to-end encryption.”

The UK’s Department for Digital, Culture, Media and Sport did not respond to a request for comment.

The UK government hadn’t specified the technology that platforms should use to identify CSAM being sent on encrypted services, but the most commonly cited solution was something called client-side scanning. On services that use end-to-end encryption, only the sender and recipient of a message can see its content; even the service provider can’t access the unencrypted data.

Client-side scanning would mean examining the content of the message before it was sent—that is, on the user’s device—and comparing it to a database of CSAM held on a server somewhere else. That, according to Alan Woodward, a visiting professor in cybersecurity at the University of Surrey, amounts to “government-sanctioned spyware scanning your images and possibly your [texts].”
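As a rough sketch of the idea (hypothetical code, not how WhatsApp, Signal, or any government proposal actually works; real schemes use perceptual hashes that survive resizing and re-encoding rather than the exact SHA-256 match shown here, and every name below is illustrative), client-side scanning boils down to checking content on the device before it is ever encrypted:

```python
import hashlib

# In a real deployment the blocklist of hashes would be supplied by the
# provider or a government agency; here it is just one stand-in entry.
known_bad_image = b"stand-in bytes for a known prohibited image"
KNOWN_HASHES = {hashlib.sha256(known_bad_image).hexdigest()}

def scan_before_send(content: bytes) -> bool:
    """On-device check: does this content match the blocklist?"""
    return hashlib.sha256(content).hexdigest() in KNOWN_HASHES

def send_image(content: bytes) -> None:
    if scan_before_send(content):
        # In the proposals critics object to, a match would be reported
        # before the message is ever encrypted or transmitted.
        print("flagged on-device; never encrypted or sent")
        return
    print("no match; proceed with normal end-to-end encryption and sending")

send_image(known_bad_image)               # flagged on-device; never encrypted or sent
send_image(b"an ordinary holiday photo")  # no match; proceed with normal encryption
```

The structure is what critics object to: the check runs before encryption, so whoever controls the blocklist and the reporting rules gets a look at content the encryption was meant to protect.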

In December, Apple shelved its plans to build client-side scanning technology for iCloud, later saying that it couldn’t make the system work without infringing on its users’ privacy.

Opponents of the bill say that putting backdoors into people’s devices to search for CSAM images would almost certainly pave the way for wider surveillance by governments. “You make mass surveillance become almost an inevitability by putting [these tools] in their hands,” Woodward says. “There will always be some ‘exceptional circumstances’ that [security forces] think of that warrants them searching for something else.”

Although the UK government has said that it now won’t force unproven technology on tech companies, and that it essentially won’t use the powers under the bill, the controversial clauses remain within the legislation, which is still likely to pass into law. “It’s not gone away, but it’s a step in the right direction,” Woodward says.

James Baker, campaign manager for the Open Rights Group, a nonprofit that has campaigned against the law’s passage, says that the continued existence of the powers within the law means encryption-breaking surveillance could still be introduced in the future. “It would be better if these powers were completely removed from the bill,” he adds.

But some are less positive about the apparent volte-face. “Nothing has changed,” says Matthew Hodgson, CEO of UK-based Element, which supplies end-to-end encrypted messaging to militaries and governments. “It’s only what’s actually written in the bill that matters. Scanning is fundamentally incompatible with end-to-end encrypted messaging apps. Scanning bypasses the encryption in order to scan, exposing your messages to attackers. So all ‘until it’s technically feasible’ means is opening the door to scanning in future rather than scanning today. It’s not a change, it’s kicking the can down the road.”

Whittaker acknowledges that “it’s not enough” that the law simply won’t be aggressively enforced. “But it’s major. We can recognize a win without claiming that this is the final victory,” she says.

The implications of the British government backing down, even partially, will reverberate far beyond the UK, Whittaker says. Security services around the world have been pushing for measures to weaken end-to-end encryption, and there is a similar battle going on in Europe over CSAM, where the European Union commissioner in charge of home affairs, Ylva Johansson, has been pushing similar, unproven technologies.

“It’s huge in terms of arresting the type of permissive international precedent that this would set,” Whittaker says. “The UK was the first jurisdiction to be pushing this kind of mass surveillance. It stops that momentum. And that’s huge for the world.”

The Myth of ‘Open Source’ AI

ChatGPT made it possible for anyone to play with powerful artificial intelligence, but the inner workings of the world-famous chatbot remain a closely guarded secret.

In recent months, however, efforts to make AI more “open” seem to have gained momentum. In March, someone leaked a model from Meta, called Llama, which gave outsiders access to its underlying code as well as the “weights” that determine how it behaves. Then, this July, Meta chose to make an even more powerful model, called Llama 2, available for anyone to download, modify, and reuse. Meta’s models have since become an extremely popular foundation for many companies, researchers, and hobbyists building tools and applications with ChatGPT-like capabilities.

“We have a broad range of supporters around the world who believe in our open approach to today’s AI … researchers committed to doing research with the model, and people across tech, academia, and policy who see the benefits of Llama and an open platform as we do,” Meta said when announcing Llama 2. This morning, Meta released another model, Code Llama, that is fine-tuned for coding.

It might seem as if the open source approach, which has democratized access to software, ensured transparency, and improved security for decades, is now poised to have a similar impact on AI.

Not so fast, says a group of researchers behind a paper that examines the reality of Llama 2 and other AI models that are described, in one way or another, as “open.” The researchers, from Carnegie Mellon University, the AI Now Institute, and the Signal Foundation, say that models branded “open” may come with catches.

Llama 2 is free to download, modify, and deploy, but it is not covered by a conventional open source license. Meta’s license prohibits using Llama 2 to train other language models, and it requires a special license if a developer deploys it in an app or service with more than 700 million daily users.

This level of control means that Llama 2 may provide significant technical and strategic benefits to Meta—for example, by allowing the company to benefit from useful tweaks made by outside developers when it uses the model in its own apps.

Models that are released under normal open source licenses, like GPT-Neo from the nonprofit EleutherAI, are more fully open, the researchers say. But it is difficult for such projects to get on an equal footing.

First, the data required to train advanced models is often kept secret. Second, the software frameworks required to build such models are often controlled by large corporations. The two most popular ones, TensorFlow and PyTorch, are maintained by Google and Meta, respectively. Third, the computing power required to train a large model is also beyond the reach of any normal developer or company, typically requiring tens or hundreds of millions of dollars for a single training run. And finally, the human labor required to finesse and improve these models is also a resource that is mostly only available to big companies with deep pockets.

The way things are headed, one of the most important technologies in decades could end up enriching and empowering just a handful of companies, including OpenAI, Microsoft, Meta, and Google. If AI really is such a world-changing technology, then the greatest benefits might be felt if it were made more widely available and accessible.

Geoffrey Hinton, Godfather of AI, Has a Hopeful Plan for Keeping Future AI Friendly

That sounded to me like he was anthropomorphizing those artificial systems, something scientists constantly tell laypeople and journalists not to do. “Scientists do go out of their way not to do that, because anthropomorphizing most things is silly,” Hinton concedes. “But they’ll have learned those things from us, they’ll learn to behave just like us linguistically. So I think anthropomorphizing them is perfectly reasonable.” When your powerful AI agent is trained on the sum total of human digital knowledge—including lots of online conversations—it might be more silly not to expect it to act human.

But what about the objection that a chatbot could never really understand what humans do, because those linguistic robots are just impulses on computer chips without direct experience of the world? All they are doing, after all, is predicting the next word needed to string out a response that will statistically satisfy a prompt. Hinton points out that even we don’t really encounter the world directly.

“Some people think, hey, there’s this ultimate barrier, which is we have subjective experience and [robots] don’t, so we truly understand things and they don’t,” says Hinton. “That’s just bullshit. Because in order to predict the next word, you have to understand what the question was. You can’t predict the next word without understanding, right? Of course they’re trained to predict the next word, but as a result of predicting the next word they understand the world, because that’s the only way to do it.”
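For readers wondering what “predicting the next word” looks like mechanically, here is a deliberately toy sketch: a bigram counter over an eleven-word corpus, nothing like the deep networks behind ChatGPT, but the same basic objective of choosing the statistically most likely next word.

```python
from collections import Counter, defaultdict

# A tiny corpus; real models train on trillions of words, not eleven.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely word to follow `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat", which follows "the" twice vs. once for "mat" or "fish"
```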

So those things can be … sentient? I don’t want to believe that Hinton is going all Blake Lemoine on me. And he’s not, I think. “Let me continue in my new career as a philosopher,” Hinton says, jokingly, as we skip deeper into the weeds. “Let’s leave sentience and consciousness out of it. I don’t really perceive the world directly. What I think is in the world isn’t what’s really there. What happens is it comes into my mind, and I really see what’s in my mind directly. That’s what Descartes thought. And then there’s the issue of how is this stuff in my mind connected to the real world? And how do I actually know the real world?” Hinton goes on to argue that since our own experience is subjective, we can’t rule out that machines might have equally valid experiences of their own. “Under that view, it’s quite reasonable to say that these things may already have subjective experience,” he says.

Now consider the combined possibilities that machines can truly understand the world, can learn deceit and other bad habits from humans, and that giant AI systems can process zillions of times more information than brains can possibly deal with. Maybe you, like Hinton, now have a more fraught view of future AI outcomes.

But we’re not necessarily on an inevitable journey toward disaster. Hinton suggests a technological approach that might mitigate an AI power play against humans: analog computing, as found in biology, which some engineers think is how future computers should operate. It was the last project Hinton worked on at Google. “It works for people,” he says. Taking an analog approach to AI would be less dangerous because each instance of analog hardware has some uniqueness, Hinton reasons. As with our own wet little minds, analog systems can’t so easily merge into a Skynet-style hive intelligence.