China Built Your iPhone. Will It Build Your Next Car?

Rumors of an Apple electric car project have long excited investors and iPhone enthusiasts. Almost a decade after details of the project leaked, the Cupertino-mobile remains mythical—but that hasn’t stopped other consumer electronics companies from surging ahead. On the other side of the world, people will soon be able to order a vehicle from the Taiwanese company that mastered manufacturing Apple’s gadgets in China. Welcome to the era of the Foxconn-mobile.

In October 2021, Hon Hai Technology Group, better known internationally as Foxconn, announced plans to produce three of its own electric vehicles in collaboration with Yulon, a Taiwanese automaker, under the name Foxtron. Foxconn, which is best known for assembling 70 percent of iPhones, has similar ambitions for the auto industry: to become the manufacturer of choice for a totally new kind of car. To date it has signed deals to make cars for two US-based EV startups, Lordstown Motors and Fisker.

Foxconn’s own vehicles—a hatchback, a sedan, and a bus—don’t especially ooze Apple-chic, but they represent a big leap for the consumer electronics manufacturer. Foxconn’s ambitious expansion plan also reflects a bigger shift across the auto world, in terms of technology and geography. The US, Europe, and Japan have defined what cars are for the last 100 years. Now the changing nature of the automobile, with increased electrification, computerization, and autonomy, means that China may increasingly decide what car making is.

If Foxconn succeeds in building a major auto-making business, it would contribute to China becoming an automotive epicenter capable of eclipsing the conventional powerhouses of the US, Germany, Japan, and South Korea. Foxconn did not respond to requests for an interview.

The automobile industry is expected to undergo big transformations in the coming years. An October 2020 report from McKinsey concluded that carmakers will dream up new ways of selling vehicles and generating revenues through apps and subscription services. In some ways, the car of the future sounds an awful lot like a smartphone on wheels.

That’s partly why there’s no better moment than now for an electronics manufacturer to try car making, says Marc Sachon, a professor at IESE Business School in Barcelona who studies the automotive industry. Electric vehicle powertrains are simpler than internal combustion ones, with fewer components and fewer steps involved in assembly. The EV supply chain is also easier to manage than the conventional supply chain, whose mastery is one of the core competencies of established carmakers. China, Sachon adds, has a strong EV ecosystem, from batteries to software to the manufacturing of components.

China is especially well positioned to lead the charge towards electrification. The country already has some of the world’s most advanced battery manufacturers, including CATL and BYD, the latter of which also produces cars. Carmakers in the region may gain an edge in terms of understanding and harnessing new battery technologies simply by virtue of proximity—much in the same way as software companies benefit from being close to chip design firms. 

An Apple Store Votes to Unionize for the First Time

In a statement sent before the results were announced, Apple spokesperson Josh Lipton wrote, “We are fortunate to have incredible retail team members and we deeply value everything they bring to Apple. We are pleased to offer very strong compensation and benefits for full time and part time employees, including health care, tuition reimbursement, new parental leave, paid family leave, annual stock grants and many other benefits.”

Members penned an open letter to CEO Tim Cook announcing their union, called Coalition of Organized Retail Employees, or CORE, and asking him not to wage an anti-union campaign. It went unheeded. The company retained the union avoidance firm Littler Mendelson, the same firm used by Starbucks. A near-daily parade of anti-union rhetoric followed, some at daily meetings, called “downloads,” and some in one-on-one asides. Managers would take individuals out of the store for walk-and-talks, sometimes as frequently as every hour, says DiMaria. In late May, Apple sent a video to all its US stores featuring vice president of retail Deirdre O’Brien. A union, she warned employees, “could limit our ability to make immediate, widespread changes to improve your experience.”

DiMaria says Apple deployed scare tactics to try to mislead workers into believing that if the union won, they might lose their benefits, that the attendance policy would become stricter, and that they wouldn’t be able to meet with their managers without the union. He says they appeared to be tailoring their messaging to individual employees, which a worker in the Atlanta store says happened there too.

Apple did take a different approach from Atlanta in its scheduling of group meetings to discuss the union. In Atlanta, those meetings were required, according to store workers there. In Towson they were billed as voluntary, although they automatically appeared on employees’ schedules, and workers had to actively opt out. The change in tactics follows a memo from National Labor Relations Board general counsel Jennifer Abruzzo saying those so-called captive audience meetings were illegal. In light of that guidance, the union representing the Atlanta store filed an unfair labor practice charge with the NLRB.

Members of the suspended union effort in Atlanta have been in touch with Apple employees at other stores, including Towson, to advise them on what to expect from Apple and how to fight back. “When a manager says something in a public forum, it’s not enough to say it’s not true,” says Atlanta staffer and organizing committee member Derrick Bowles. Workers need to go the further step of explaining why the statement is illogical as well.

Bowles says managers attempted to paint union organizers in Atlanta as aggressors, frequently throwing around terms like “tension” and “bullying,” which he disputed in meetings. He says other Apple workers running union campaigns need to put these managers on the spot. “Like, ‘You say we might lose benefits. Is that a threat? Is that something you’d be willing to put into writing?’ You have to put leadership on the defensive. If you are on the defensive, you will lose.”

No One Knows How Safe New Driver-Assistance Systems Really Are

This week, a US Department of Transportation report detailed the crashes that advanced driver-assistance systems have been involved in over the past year or so. Tesla’s advanced features, including Autopilot and Full Self-Driving, accounted for 70 percent of the nearly 400 incidents—many more than previously known. But the report may raise more questions about this safety tech than it answers, researchers say, because of blind spots in the data.

The report examined systems that promise to take some of the tedious or dangerous bits out of driving by automatically changing lanes, staying within lane lines, braking before collisions, slowing down before big curves in the road, and, in some cases, operating on highways without driver intervention. The systems include Autopilot, Ford’s BlueCruise, General Motors’ Super Cruise, and Nissan’s ProPilot Assist. While the report does show that these systems aren’t perfect, there’s still plenty to learn about how this new breed of safety features actually works on the road.

That’s largely because automakers have wildly different ways of submitting their crash data to the federal government. Some, like Tesla, BMW, and GM, can pull detailed data from their cars wirelessly after a crash has occurred. That allows them to quickly comply with the government’s 24-hour reporting requirement. But others, like Toyota and Honda, don’t have these capabilities. Chris Martin, a spokesperson for American Honda, said in a statement that the carmaker’s reports to the DOT are based on “unverified customer statements” about whether their advanced driver-assistance systems were on when the crash occurred. The carmaker can later pull “black box” data from its vehicles, but only with customer permission or at law enforcement request, and only with specialized wired equipment.

Of the 426 crash reports detailed in the government report’s data, just 60 percent came through cars’ telematics systems. The other 40 percent came through customer reports and claims—sometimes trickled up through diffuse dealership networks—media reports, and law enforcement. As a result, the report doesn’t allow anyone to make “apples-to-apples” comparisons between safety features, says Bryan Reimer, who studies automation and vehicle safety at MIT’s AgeLab.

Even the data the government does collect isn’t placed in full context. The government, for example, doesn’t know how often a car using an advanced assistance feature crashes per mile driven. The National Highway Traffic Safety Administration, which released the report, warned that some incidents could appear more than once in the data set. And automakers with high market share and good reporting systems in place—especially Tesla—are likely overrepresented in crash reports simply because they have more cars on the road.

It’s important that the NHTSA report doesn’t disincentivize automakers from providing more comprehensive data, says Jennifer Homendy, chair of the federal watchdog National Transportation Safety Board. “The last thing we want is to penalize manufacturers that collect robust safety data,” she said in a statement. “What we do want is data that tells us what safety improvements need to be made.”

Without that transparency, it can be hard for drivers to make sense of, compare, and even use the features that come with their car—and for regulators to keep track of who’s doing what. “As we gather more data, NHTSA will be able to better identify any emerging risks or trends and learn more about how these technologies are performing in the real world,” Steven Cliff, the agency’s administrator, said in a statement.

After Layoffs, Crypto Startups Face a ‘Crucible Moment’

In May, the venture capital firm Sequoia circulated a memo among its startup founders. The 52-page presentation warned of a challenging road ahead, paved by inflation, rising interest rates, a Nasdaq drawdown, supply chain issues, war, and a general weariness about the economy. Things were about to get tough, and this time, venture capital would not be coming to the rescue. “We believe this is a Crucible Moment,” the firm’s partners wrote. “Companies who move the quickest and have the most runway are most likely to avoid the death spiral.”

Plenty of startups seem to be taking Sequoia’s advice. The mood has become downright funereal as founders and CEOs cut the excesses of 2021 from their budgets. Most crucially, these reductions have affected head count. More than 10,000 startup employees have been laid off since the start of June, according to Layoffstracker.com, which catalogs job cuts. Since the start of the year, the tally is closer to 40,000.

The latest victims have been crypto companies, and the carnage is not small. On Tuesday, Coinbase laid off 1,100 employees, abruptly cutting their access to corporate email accounts and locking them out of the company’s Slack. Those layoffs came just days after Coinbase rescinded job offers from more than 300 people who planned to start working there in the coming weeks. Two other crypto startups—BlockFi and Crypto.com—each cut hundreds of jobs on Monday; the crypto exchange Gemini also laid off about 10 percent of its staff earlier this month. Collectively, more than 2,000 employees of crypto startups have lost their jobs since the start of June—about one-fifth of all startup layoffs this month.

The conversation around crypto companies has changed abruptly in the past year. In 2021, they were the darling of venture capitalists, who showered them with billions of dollars to fund aggressive growth. Coinbase, which went public in April 2021 at $328 a share, seemed to suggest an emerging gold mine in the sector. Other companies, like BlockFi, started hiring aggressively with ambitions to go public. Four crypto startups took out expensive prime-time ads in the most recent Super Bowl.

Coinbase was also focused on hypergrowth, scaling its staff from 1,250 at the beginning of 2021 to about 5,000 in 2022. “It is now clear to me that we over-hired,” Brian Armstrong, Coinbase’s CEO, wrote in a blog post on Tuesday, where he announced the layoffs. “We grew too quickly.”

“It could be that crypto is the canary in the coal mine,” says David A. Kirsch, associate professor of strategy and entrepreneurship at the University of Maryland’s Robert H. Smith School of Business. He describes the contractions in crypto startups as one potential signal of “a great unraveling,” where more startups are evaluated for how well they can deliver on their promises. If history is any indication, those that can’t are fated for “the death spiral.”

Kirsch has spent years studying the lessons of past crashes; he is also the author of Bubbles and Crashes, a book about boom-bust cycles in tech. Kirsch says that the bubble tends to pop first in high-leverage, high-growth sectors. When the Nasdaq fell in 2000, for example, the value of most ecommerce companies vanished “well in advance of the broader market decline.” Companies like Pets.com and eToys.com—which had made big, splashy public debuts—eventually went bankrupt.

LaMDA and the Sentient AI Trap

Now head of the nonprofit Distributed AI Research, Gebru hopes that going forward people focus on human welfare, not robot rights. Other AI ethicists have said that they’ll no longer discuss conscious or superintelligent AI at all.

“Quite a large gap exists between the current narrative of AI and what it can actually do,” says Giada Pistilli, an ethicist at Hugging Face, a startup focused on language models. “This narrative provokes fear, amazement, and excitement simultaneously, but it is mainly based on lies to sell products and take advantage of the hype.”

The consequence of speculation about sentient AI, she says, is an increased willingness to make claims based on subjective impression instead of scientific rigor and proof. It distracts from “countless ethical and social justice questions” that AI systems pose. While every researcher has the freedom to research what they want, she says, “I just fear that focusing on this subject makes us forget what is happening while looking at the moon.”

What Lemoine experienced is an example of what author and futurist David Brin has called the “robot empathy crisis.” At an AI conference in San Francisco in 2017, Brin predicted that within three to five years, people would claim AI systems were sentient and insist that they had rights. Back then, he thought those appeals would come from a virtual agent that took the appearance of a woman or child to maximize human empathic response, not “some guy at Google,” he says.

The LaMDA incident is part of a transition period, Brin says, where “we’re going to be more and more confused over the boundary between reality and science fiction.”

Brin based his 2017 prediction on advances in language models. He expects that the trend will lead to scams. If people were suckers for a chatbot as simple as ELIZA decades ago, he says, how hard will it be to persuade millions that an emulated person deserves protection or money?

“There’s a lot of snake oil out there, and mixed in with all the hype are genuine advancements,” Brin says. “Parsing our way through that stew is one of the challenges that we face.”

And as empathetic as LaMDA seemed, people who are amazed by large language models should consider the case of the cheeseburger stabbing, says Yejin Choi, a computer scientist at the University of Washington. A local news broadcast in the United States reported on a teenager in Toledo, Ohio, who stabbed his mother in the arm in a dispute over a cheeseburger. But the headline “Cheeseburger Stabbing” is vague, and knowing what occurred requires some common sense. Attempts to get OpenAI’s GPT-3 model to generate text from the prompt “Breaking news: Cheeseburger stabbing” produce words about a man getting stabbed with a cheeseburger in an altercation over ketchup, and a man being arrested after stabbing a cheeseburger.

Language models sometimes make mistakes because deciphering human language can require multiple forms of common-sense understanding. To document what large language models are capable of doing and where they can fall short, last month more than 400 researchers from 130 institutions contributed to a collection of more than 200 tasks known as BIG-Bench, or Beyond the Imitation Game. BIG-Bench includes some traditional language-model tests like reading comprehension, but also logical reasoning and common sense.

Researchers at the Allen Institute for AI’s MOSAIC project, which documents the common-sense reasoning abilities of AI models, contributed a task called Social-IQa. They asked language models—not including LaMDA—to answer questions that require social intelligence, like “Jordan wanted to tell Tracy a secret, so Jordan leaned towards Tracy. Why did Jordan do this?” The team found that large language models performed 20 to 30 percent less accurately than people.