Apple macOS Ventura Bug Breaks Third-Party Security Tools

The release of Apple’s new macOS 13 Ventura operating system on October 24 brought a host of new features to Mac users, but it’s also causing problems for those who rely on third-party security programs like malware scanners and monitoring tools. 

In the process of patching a vulnerability in the 11th Ventura developer beta, released on October 11, Apple accidentally introduced a flaw that cuts off third-party security products from the access they need to do their scans. And while there is a workaround to grant the permission, those who upgrade their Macs to Ventura may not realize that anything is amiss or have the information needed to fix the problem. 

Apple told WIRED that it will resolve the issue in the next macOS software update but declined to say when that would be. In the meantime, users could be unaware that their Mac security tools aren’t functioning as expected. The confusion has left third-party security vendors scrambling to understand the scope of the problem.

“Of course, all of this coincided with us releasing a beta that was supposed to be compatible with Ventura,” says Thomas Reed, director of Mac and mobile platforms at the antivirus maker Malwarebytes. “So we were getting bug reports from customers that something was wrong, and we were like, ‘crap, we just released a flawed beta.’ We even pulled our beta out of circulation temporarily. But then we started seeing reports about other products, too, after people upgraded to Ventura, so we were like, ‘uh oh, this is bad.’”

Security monitoring tools need system visibility, known as full disk access, to conduct their scans and detect malicious activity. This access is significant and should be granted only to trusted programs, because it could be abused in the wrong hands. As a result, Apple requires users to go through multiple steps and authenticate before they grant permission to an antivirus service or system monitoring tool. This makes it much less likely that an attacker could somehow circumvent these hurdles or trick a user into unknowingly granting access to a malicious program. 
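Security vendors typically detect this kind of breakage by probing for the permission at runtime rather than trusting what System Settings reports. The following is a minimal, hypothetical sketch in Python of one common approach, attempting to list a directory that macOS only exposes to programs with Full Disk Access; the specific paths and the standalone script framing are illustrative assumptions, not any vendor's actual implementation.

```python
import os

# Directories that macOS's privacy controls (TCC) only expose to programs
# granted Full Disk Access. These paths are illustrative assumptions.
PROTECTED_PATHS = [
    os.path.expanduser("~/Library/Mail"),
    os.path.expanduser("~/Library/Safari"),
]

def has_full_disk_access() -> bool:
    """Return True if at least one protected directory can be listed."""
    for path in PROTECTED_PATHS:
        try:
            os.listdir(path)
            return True
        except PermissionError:
            # Listing was blocked: Full Disk Access is not being honored.
            continue
        except FileNotFoundError:
            # Directory absent on this machine; try the next candidate.
            continue
    return False

if __name__ == "__main__":
    if has_full_disk_access():
        print("Full Disk Access appears to be granted.")
    else:
        print("Full Disk Access is missing; scans of protected data will fail.")
```

A check like this, run at startup, is one way a scanner could alert Ventura users that its Full Disk Access is no longer effective, rather than failing silently.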

Longtime macOS security researcher Csaba Fitzl found, though, that while these setup protections were robust, he could exploit a vulnerability in the macOS user privacy protection known as Transparency, Consent, and Control to easily deactivate or revoke the permission once granted. In other words, an attacker could potentially disable the very tools users rely on to warn them about suspicious activity. 

Apple attempted to fix the flaw multiple times throughout 2022, but each time, Fitzl says, he was able to find a workaround for the company’s patch. Finally, Apple took a bigger step in Ventura and made more comprehensive changes to how it manages the permission for security services. In doing that, though, the company made a different mistake that’s now causing the current issues.

“Apple fixed it, and then I bypassed the fix, so they fixed it again, and I bypassed it again,” Fitzl says. “We went back and forth like three times, and eventually they decided that they will redesign the whole concept, which I think was the right thing to do. But it was a bit unfortunate that it came out in the Ventura beta so close to the public release, just two weeks before. There wasn’t time to be aware of the issue. It just happened.”

The Uber Data Breach Conviction Shows Security Execs What Not to Do

“This is a unique case because there was that ongoing FTC investigation,” says Shawn Tuma, a partner in the law firm Spencer Fane who specializes in cybersecurity and data privacy issues. “He had just given sworn testimony and was most certainly under a duty to further supplement and provide relevant information to the FTC. That’s how it works.”

Tuma, who frequently works with companies responding to data breaches, says that the more concerning conviction in terms of future precedent is the misprision of felony charge. While the prosecution was seemingly motivated primarily by Sullivan’s failure to notify the FTC of the 2016 breach during the agency’s investigation, the misprision charge could create a public perception that it is never legal or acceptable to pay ransomware actors or hackers attempting to extort payment to keep stolen data private.

“These situations are highly charged and CSOs are under immense pressure,” Vance says. “What Sullivan did seems to have succeeded at keeping the data from coming out, so in their minds, they succeeded at protecting user data. But would I personally have done that? I hope not.”

Sullivan told The New York Times in a 2018 statement, “I was surprised and disappointed when those who wanted to portray Uber in a negative light quickly suggested this was a cover-up.”

The facts of the case are somewhat specific in the sense that Sullivan didn’t simply lead Uber to pay the criminals. His plan also involved presenting the transaction as a bug bounty payout and getting the hackers—who pleaded guilty to perpetrating the breach in October 2019—to sign an NDA. While the FBI has been clear that it doesn’t condone paying hackers off, US law enforcement has generally sent a message that what it values most is being notified and brought into the process of breach response. Even the Treasury Department has said that it can be more flexible and lenient about payments to sanctioned entities if victims notify the government and cooperate with law enforcement. In some cases, as with the 2021 Colonial Pipeline ransomware attack, officials working with victims have been able to trace payments and attempt to recoup the money. 

“This is the one that gives me the most concern, because paying a ransomware attacker could be viewed out in the public as criminal wrongdoing, and then over time that could become a sort of default standard,” Tuma says. “On the other hand, the FBI highly encourages people to report these incidents, and I’ve never had an adverse experience with working with them personally. There’s a difference between making that payment to the bad guys to buy their cooperation and saying, ‘We’re going to try to make it look like a bug bounty and have you sign an NDA that’s false.’ If you have a duty to supplement to the FTC, you could give them relevant information, comply with breach notification laws, and take your licks.”

Tuma and Vance both note, though, that the climate in the US for handling data extortion situations and working with law enforcement on ransomware investigations has evolved significantly since 2016. For executives tasked with protecting the reputation and viability of their company—in addition to defending users—the options for how to respond a few years ago were much murkier than they are now. And this may be exactly the point of the Justice Department’s effort to prosecute Sullivan.

“Technology companies in the Northern District of California collect and store vast amounts of data from users. We expect those companies to protect that data and to alert customers and appropriate authorities when such data is stolen by hackers,” US attorney Stephanie Hinds said in a statement about the conviction on Wednesday. “Sullivan affirmatively worked to hide the data breach from the Federal Trade Commission and took steps to prevent the hackers from being caught. Where such conduct violates the federal law, it will be prosecuted.”

Sullivan has yet to be sentenced—another chapter in the saga that security executives will no doubt be watching extremely closely.

The Challenge of Cracking Iran’s Internet Blockade

Some communication services have systems in place for attempting to skirt digital blockades. The secure messaging app Signal, for example, offers tools so people around the world can set up proxy servers that securely relay Signal traffic to bypass government filters. Proxy support had previously been available only for Signal on Android, but the platform added iOS support on Wednesday. 

Still, if people in Iran don’t already have the Signal app installed on their phones or haven’t registered their phone numbers, the connectivity outages make it difficult to download the app or receive the SMS code used for account setup. Android users who can’t connect to Google Play can also download the app directly from Signal’s website, but this creates the possibility that malicious versions of the Signal app could circulate on other forums and trick people into downloading them. In an attempt to address this, the Signal Foundation created the email address “getsignal@signal.org” that people can message to request a safe copy of the app. 

The anonymity service Tor is largely inaccessible in Iran, but some activists are working to establish Tor bridges within Iran to connect internal country networks to the global platform. The work is difficult without infrastructure and resources, though, and is extremely dangerous if the regime detects the activity. Similarly, other efforts to establish clandestine infrastructure within the country are fraught because they often require too much technical expertise for a layperson to carry out safely. Echoing the issue with safely downloading apps like Signal, it can also be difficult for people to determine whether circumvention measures they learn about are legitimate or tainted.

Users in Iran have also been leaning on other services that have proxies built in. For example, Firuzeh Mahmoudi, executive director of the US-based nonprofit United for Iran, says that the law enforcement-tracking app Gershad has been in heavy use during the connectivity blackouts. The app, which has been circulating in Iran since 2016 and is now developed by United for Iran, lets users crowdsource information about the movements of the regime’s “morality police” and is now also being used to track other security forces and checkpoints.

The basic issue of connectivity access is still a fundamental challenge. Efforts to provide satellite service as an alternative could theoretically be very fruitful and threaten the totality of internet blackouts. SpaceX CEO Elon Musk tweeted last week that he was “activating” the company’s Starlink satellite internet service for people in Iran. In practice, though, the option isn’t a panacea. To use Starlink or any satellite internet, you need hardware that includes base stations to pick up and translate the signal. Procuring and setting up this infrastructure takes resources and is especially infeasible in a place like Iran, where sanctions and trade blockades drastically limit access to equipment and the ability to pay for subscription services or other connectivity fees. And even if users can overcome these hurdles, jamming is also a potential issue. The French satellite operator Eutelsat said yesterday, for example, that two of its satellites were being jammed from Iran. In addition to providing internet services, the satellites also broadcast two prominent Iranian dissident television channels.

“There are just so many challenges of installing this in Iran,” Miaan Group’s Rashidi says. “If you have a terminal, my understanding is that Starlink is working, but getting those terminals into the country is a challenge. And then they are a security risk because the government can locate those terminals. And then, who is going to pay for all of it and how, given the sanctions? But even if you ignore all those issues, satellite base stations don’t solve the problem that mobile data is part of the shutdown. You can’t put a Starlink terminal in your backpack to go to a protest. So satellite connectivity would be helpful, but it doesn’t solve the issues.”

Though the problem is nuanced, human rights advocates and Iranian activists emphasize that the global community can make a difference by raising awareness and continuing to work on creative solutions to the problem. With digital censorship and connectivity blackouts being used as levers for authoritarian control, developing circumvention tools is increasingly vital. As United for Iran’s Mahmoudi puts it, “We all need to keep the lights on.”

Slack and Teams’ Lax App Security Raises Alarms

Collaboration apps like Slack and Microsoft Teams have become the connective tissue of the modern workplace, tying together users with everything from messaging to scheduling to video conference tools. But as Slack and Teams become full-blown, app-enabled operating systems of corporate productivity, one group of researchers has pointed to serious risks in what they expose to third-party programs—at the same time as they’re trusted with more organizations’ sensitive data than ever before.

A new study by researchers at the University of Wisconsin-Madison points to troubling gaps in the third-party app security model of both Slack and Teams, which range from a lack of review of the apps’ code to default settings that allow any user to install an app for an entire workspace. And while Slack and Teams apps are at least limited by the permissions they seek approval for upon installation, the study’s survey of those safeguards found that hundreds of apps’ permissions would nonetheless allow them to potentially post messages as a user, hijack the functionality of other legitimate apps, or even, in a handful of cases, access content in private channels when no such permission was granted.
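To make concrete what a granted permission allows, here is a minimal, hypothetical sketch using Slack’s standard slack_sdk Python client; the token, channel, and message are placeholders, and the code is not drawn from the study itself, only an illustration of how little stands between a chat:write scope on a user token and posting in that user’s name.

```python
from slack_sdk import WebClient

# Hypothetical user OAuth token issued when a workspace member authorizes the app.
# With the chat:write scope on a user token, posts appear to come from that user.
client = WebClient(token="xoxp-placeholder-user-token")

# One API call posts into the channel as the installing user. Nothing about the
# app's server-side code is reviewed at this point, and it can change at any time.
client.chat_postMessage(
    channel="#general",
    text="Reminder: please re-authenticate at https://example.com/login",
)
```

An app whose hosted code turns malicious after installation could use exactly this kind of call to impersonate the person who installed it.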

“Slack and Teams are becoming clearinghouses of all of an organization’s sensitive resources,” says Earlence Fernandes, one of the researchers on the study who now works as a professor of computer science at the University of California at San Diego, and who presented the research last month at the USENIX Security conference. “And yet, the apps running on them, which provide a lot of collaboration functionality, can violate any expectation of security and privacy users would have in such a platform.”

When WIRED reached out to Slack and Microsoft about the researchers’ findings, Microsoft declined to comment until it could speak to the researchers. (The researchers say they communicated with Microsoft about their findings prior to publication.) Slack, for its part, says that the approved apps available in its Slack App Directory do receive security reviews before inclusion and are monitored for any suspicious behavior. It “strongly recommends” that users install only these approved apps and that administrators configure their workspaces so that apps can be installed only with an administrator’s permission. “We take privacy and security very seriously,” the company says in a statement, “and we work to ensure that the Slack platform is a trusted environment to build and distribute apps, and that those apps are enterprise-grade from day one.”

But both Slack and Teams nonetheless have fundamental issues in their vetting of third-party apps, the researchers argue. Both allow integration of apps hosted on the app developer’s own servers, with no review of the apps’ actual code by Slack or Microsoft engineers. Even the apps reviewed for inclusion in Slack’s App Directory undergo only a more superficial review that checks whether they work as described, examines elements of their security configuration such as their use of encryption, and runs automated scans of their interfaces for vulnerabilities.

Despite Slack’s own recommendations, both collaboration platforms by default allow any user to add these independently hosted apps to a workspace. An organization’s administrators can switch on stricter security settings that require the administrators to approve apps before they’re installed. But even then, those administrators must approve or deny apps without themselves having any ability to vet their code, either—and crucially, the apps’ code can change at any time, allowing a seemingly legitimate app to become a malicious one. That means attacks could take the form of malicious apps disguised as innocent ones, or truly legitimate apps could be compromised by hackers in a supply chain attack, in which hackers sabotage an application at its source in an effort to target the networks of its users. And with no access to apps’ underlying code, those changes could be undetectable to both administrators and any monitoring system used by Slack or Microsoft.

The January 6 Secret Service Text Scandal Turns Criminal

As the United States midterm elections near, lawmakers and law enforcement officials are on high alert about violent threats targeted at election officials across the country—domestic threats that have taken first billing over foreign influence operations and meddling as the primary concern for the 2022 elections. In another arena, though, Congress is making progress on generating bipartisan support for sorely needed and overdue privacy legislation in the form of the American Data Privacy and Protection Act.

Iranian women’s rights activists sounded the alarm this week that Meta has not been responsive to their concerns about targeted bot campaigns flooding their Instagram accounts during a crucial moment for the country’s feminist movement. And investigators looking at attacks on internet cables in Paris have still not determined who was behind the vandalism or what their motive was, but new details have emerged about the extent of the sabotage, making the situation all the more concerning and intriguing. 

The ACLU released documents this week that detail the Department of Homeland Security’s contracts with phone-tracking data brokers who peddle location information. And if you’re worried about Big Brother snooping on your reproductive data, we have a ranking of the most popular period-tracking apps by their data privacy protections. 

And there’s more. Each week we round up the news that we didn’t break or cover in-depth. Click on the headlines to read the full stories. And stay safe out there!

The Department of Homeland Security Inspector General told the Secret Service on Thursday to halt its investigation into the deletion of January 6 insurrection-related text messages because of an “ongoing criminal investigation” into the situation. Secret Service spokespeople have said conflicting things: that data on the phones was erased during a planned phone migration or factory reset, and that the erased messages were not relevant to the January 6 investigation. The Secret Service said it provided agents with a guide to backing up their data before initiating the overhaul process, but noted that it was up to the individuals to complete this backup. 

Zero Day spoke to Robert Osgood, director of the forensics and telecommunications program at George Mason University and a former FBI digital forensics examiner, about the situation. “Osgood said that telling agents to back up their own phones ‘makes absolutely no sense’—particularly for a government agency engaged in the kind of work the Secret Service does and required to retain records. The agency is not only charged with protecting the president, vice president and others, it also investigates financial crimes and cybercrime,” reports Zero Day author Kim Zetter. “I’m pro-government, and [telling agents to back up their own phones] sounds strange,” Osgood told Zetter. “If that did happen, the IT manager that’s responsible for that should be censured. Something should happen to that person because that’s one of the dumbest things I’ve ever heard in my life.”

The Federal Communications Commission’s Robocall Response Team said on Thursday that it is ordering phone companies to block robocalls that warn about expiring car warranties and offer renewal deals. The FCC said that the calls, which are familiar to people around the US, have come from “Roy Cox Jr., Aaron Michael Jones, their Sumco Panama companies, and international associates.” Since 2018 or possibly earlier, their operations have resulted in more than 8 billion prerecorded message calls to Americans, the FCC said. “We are not going to tolerate robocall scammers or those that help make their scams possible,” FCC chairperson Jessica Rosenworcel said in a statement. “Consumers are out of patience and I’m right there with them.”

After Apple warned a number of Thai activists and their associates in November that their devices might have been targeted with NSO Group’s notorious Pegasus spyware, several of them reached out to human rights groups and researchers who established a broader picture of a campaign in Thailand. In all, more than 30 Thai victims have been identified. The targets worked with the local human rights group iLaw, which found that two of its own members had been victims of the campaign, as well as with the University of Toronto’s Citizen Lab and Amnesty International. The researchers did not attribute the Pegasus campaigns to a particular operator, but they found that much of the targeting occurred around the same time the targets were participating in protests against government policies.

Google’s Threat Analysis Group reported this week that it has seen Russia’s digital meddling continue apace, both in Ukraine as the Kremlin’s invasion rages on and in Eastern Europe more broadly. TAG detected the Russia-linked hacking group Turla attempting to spread two different malicious Android apps through sites that masqueraded as being Ukrainian. The group tried to market the apps by claiming that downloading them would play a role in launching denial of service attacks on Russian websites, an interesting twist given the civilian efforts in Ukraine to mount cyberattacks against Russia. TAG also detected activity from other known Russian hacking groups that were exploiting vulnerabilities to target Ukrainian systems and launching disinformation campaigns in the region.

Ukrainian officials also said this week that Russia had conducted an attack on Ukraine’s TAVR Media, hacking nine popular radio stations to spread false information that Ukrainian President Volodymyr Zelensky was in intensive care because of a critical ailment. The broadcast further claimed that Ruslan Stefanchuk, chairperson of the Verkhovna Rada, was in command in Zelensky’s stead. TAVR put out a statement on Facebook saying that the broadcasts did “not correspond to reality.” And Zelensky posted a video on his Instagram attributing the attack to Russia and saying that he is in good health.