The AI-Fueled Future of Work Needs Humans More Than Ever

Much like the internet did in the 1990s, AI is going to change the very definition of work. Change can be scary, but if the last three years taught us anything, it's that change can also be an opportunity to reinvent how we do things. I believe the best way to manage the changes ahead, for employees and employers alike, is to adopt a skills-first mindset.

For employees, this means thinking about your job as a collection of tasks instead of a job title, with the understanding that those tasks will change regularly as AI advances. By breaking down your job into tasks that AI can fully take on, tasks for which AI can improve your efficiency, and tasks that require your unique skills, you can identify the skills you should actually be investing in to stay competitive in the job you have.

After all, the skills required for many jobs have changed by a staggering 25 percent since 2015, and that number is expected to reach at least 65 percent by 2030 due to the rapid development of new technologies such as AI. And it’s not just skills related to AI literacy—people skills are rising in importance. Our data shows the top skills that professionals think will become more important as AI tools become more widely used at work are problem solving, strategic thinking, and time management.

As for employers, the rise of AI only increases the importance of a skills-based approach to hiring and developing talent. People are learning AI skills at a rapid clip, with the number of AI-skilled members now nine times larger than it was in 2016. And there is a hunger to put these newly developed skills into practice: LinkedIn job posts that mention artificial intelligence or generative AI have seen 17 percent greater application growth over the past two years than job posts with no mention of the technology. The leaders who focus on these skills when hiring (rather than just the degree someone has earned or the jobs they've held) will unlock more potential and be more agile as the way we do work continues to change.

The same is true for developing talent. We will increasingly see employers become educators, “training to hire” into ever-changing jobs through onboardings, apprenticeships, and academies, as well as “training to promote” into ever-changing roles through upskilling and tours of duty that take employees into new functions and perhaps even new careers. This will be for hard skills related to AI, but perhaps more importantly, for people skills, too: Our data shows 92 percent of US executives believe people skills are more important than ever.

2024 will start to usher in a new world of work where people skills—problem solving, empathy, and active listening to name just three—are more core to career success, and people-to-people collaboration is more core to company success. Leaders and employees need to think of AI as just one tool in the toolbox. It doesn’t replace people, it allows them to do their job more effectively, leaving them time to focus on the more valuable—and more human—parts of their jobs. For instance, a software engineer can have AI help with the more routine or repetitive coding that’s regularly required, giving them more time to innovate on new ideas. Or a recruiter can save time and focus on the more strategic parts of the hiring process—like speaking to and building relationships with candidates—by letting AI handle the creation of job postings.

In 2024, leaders will lean into this ever-evolving technology while simultaneously empowering their employees, and people will align their skill-building and continuing education with AI skills and practical people skills. The result will be a new world of work that’s more human and more fulfilling than ever before.

Social Media Is Getting Smaller—and More Treacherous

In 2024, social media will get small.

Not small in influence, of course. As the US weathers an election likely to be both divisive and often divorced from reality, social media will again be a battleground for public opinion and perception. But the platforms on which these conversations will take place will be smaller in scale, more diverse, and less connected to one another.

In the run-up to the 2016 election, Donald Trump discovered he could speak directly to an audience of tens of millions on Twitter. Thrown off the platform after the January 6 insurrection, Trump moved to the much smaller Truth Social, a network whose main selling point seemed to be his presence. Trump lost something precious when he was deplatformed: the ability to speak to the “big room”—a platform that reached a broad swath of the people interested in public affairs.

Big-room spaces, like Twitter and Instagram, are continual battlegrounds for attention. They’re invaluable for activists, who want messages like #MeToo and #BlackLivesMatter to reach new converts to the movement, and for influencers who build power and revenue by building audiences. But they are also inherently conflicted spaces, as people of different points of view spar over what types of speech are appropriate for the space.

Trump is now speaking to a smaller room, but it’s one where virtually everyone who hears him agrees with him. He’s never going to be thrown off Truth Social, because his statements, no matter how inflammatory, are the raison d’être for the network.

Consciously or not, other platforms are moving in the same direction. Elon Musk’s compulsive destruction of Twitter is turning it into a smaller room, a safe space for extremists that makes it unsafe for those who don’t share their views. Reddit, long one of the most exciting spaces for informed, topical conversations, is shedding users as it implements unpopular, Muskian policies in hopes of generating much-needed revenue. Some subreddits are migrating to Discord, where their conversations won’t overlap with thousands of other topics on Reddit, but where they have full control over their chosen rules of the road.

Small-room networks can be deeply important spaces for communities to find support and solidarity. When you seek support for living with diabetes or without alcohol (two struggles I’m personally engaged in), you’re not looking for confrontation, but for camaraderie, comfort, and constructive advice. Millions of us find these spaces in subreddits, Facebook groups, or even on special-purpose social networks, such as Archive of Our Own, which links together 5 million fan-fiction authors and fans each month.

But small rooms have a big downside: They’re as useful for Nazis as they are for knitters. These conversations, insulated from outside scrutiny, can normalize extreme points of view and lead people deeper into dark topics they expressed a passing interest in.

We need small-room networks—they introduce strangers to one another, building social capital and connection between people who might never interact in the physical world. But they further fragment the public sphere, which means the 2024 election may be even more fractious than the ones we’ve seen thus far in our social media age.

A Dangerous New Home for Online Extremism

Can you imagine what a digital white ethnostate or a cyber caliphate might look like? Having spent most of my career on the inside of online extremist movements, I certainly can. The year 2024 might be the one in which neo-Nazis, jihadists, and conspiracy theorists turn their utopian visions of creating their own self-governed states into reality—not offline, but in the form of Decentralized Autonomous Organizations (DAOs).

DAOs are digital entities that are collaboratively governed without central leadership and operate on blockchains. They allow internet users to establish their own organizational structures, which no longer require the involvement of a third party in financial transactions and rulemaking. The World Economic Forum described DAOs as “an experiment to reimagine how we connect, collaborate and create.” However, as with all new technologies, there is also a darker side to them: They are likely to give rise to new threats emerging from decentralized extremist mobilization.

Today, there are already over 10,000 DAOs, which collectively manage billions of dollars and count millions of participants. So far, DAOs have attracted a wild mix of libertarians, activists, pranksters, and hobbyists. Most DAOs I have come across in my research sound innocent and fun. Personally, my favorites include theCaféDAO, which aims “to replace Starbucks” (good luck with that!); the Doge DAO, which wants to “make the Doge meme the most recognizable piece of art in the world”; and the HairDAO, “a decentralized asset manager solving hair loss.” But some DAOs use a more radical tone. For example, the Redacted Club DAO, which is rife with alt-right codes and conspiracy myth references, claims to be a secret network with the aim of “slaying” the “evil Meta Lizard King.”

The year 2024 might be one in which extremists start using DAOs strategically. Policies, legal contracts, and financial transactions that were traditionally the domain of governments, courts, and banks can be replaced with smart contracts, non-fungible tokens (NFTs), and cryptocurrencies. The use of anonymous bitcoin wallets and non-transparent cryptocurrencies such as Monero is already widespread among extremists whose bank accounts have been frozen. A shift to entirely decentralized forms of self-governance is only one step away.
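The governance mechanics behind that shift can be made concrete with a minimal sketch. The toy class below simulates the core pattern DAOs run on-chain as smart contracts: token-weighted votes replace a bank or court as the arbiter of pooled decisions. Everything here (class and member names, the quorum rule) is hypothetical and illustrative, not any real DAO framework.

```python
# A toy, in-memory simulation of token-weighted DAO governance:
# proposals pass by token vote, with no bank, court, or central
# leader involved. Real DAOs encode rules like these in on-chain
# smart contracts; all names below are hypothetical.

class ToyDAO:
    def __init__(self, balances):
        self.balances = dict(balances)  # member -> governance tokens held
        self.proposals = {}             # proposal id -> state
        self.next_id = 0

    def propose(self, description):
        """Register a proposal and return its id."""
        pid = self.next_id
        self.next_id += 1
        self.proposals[pid] = {"description": description,
                               "votes": 0, "voters": set()}
        return pid

    def vote(self, member, pid):
        """Cast a token-weighted vote; each member may vote once."""
        prop = self.proposals[pid]
        if member in prop["voters"]:
            raise ValueError(f"{member} already voted on proposal {pid}")
        prop["voters"].add(member)
        prop["votes"] += self.balances.get(member, 0)

    def passes(self, pid, quorum=0.5):
        """A proposal passes once its votes exceed `quorum` of all tokens."""
        total = sum(self.balances.values())
        return self.proposals[pid]["votes"] > quorum * total


dao = ToyDAO({"alice": 60, "bob": 40})
pid = dao.propose("fund a community project")
dao.vote("alice", pid)   # 60 of 100 tokens in favor
print(dao.passes(pid))   # True: 60 exceeds the 50 percent quorum
```

The point of the sketch is the absence of any privileged actor: whoever holds tokens governs, which is exactly what makes the structure attractive to groups that distrust institutions.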

Beyond practical reasons that encourage extremists to create their own self-governed structures, there is an ideological incentive too: their fundamental distrust in the establishment. If you believe that the deep state or the “global Jewish elites” control everything from governments and Big Tech to the global banking system, DAOs offer an appealing alternative. Conversations on far-right fringe platforms such as BitChute and Odysee reveal that there is much appetite for decentralized alternative forms of collaboration, communication, and crowdfunding.

So what happens if anti-minority groups establish their own digital worlds in which they impose their own governing mechanisms? What are the stakes if trolling armies start cooperating via DAOs to launch election interference campaigns? The activities of extremist DAOs could challenge the rule of law, pose a threat to minority groups, and disrupt institutions that are currently considered fundamental pillars of democratic systems. Another risk is that DAOs can serve as safe havens for extremist movements by enabling users to circumvent government regulation and monitoring by security services. They might also allow extremists to find new ways to fundraise, plan, and plot radicalization campaigns or even attacks. While many governments have focused on developing legal frameworks to regulate AI, few have even recognized the existence of DAOs. Their looming exploitation for extremist and criminal purposes has flown under the radar of global policymakers.

Technology expert Carl Miller, who has long warned of potential misuse of DAOs, told me that “even though DAOs behave like companies, they are not registered as legal entities.” There are only a few exceptions: The US states of Wyoming, Vermont, and Tennessee have passed laws to legally recognize DAOs. With no regulations in place to hold DAOs accountable for extremist or criminal activities, the big question for 2024 will be: How can we ensure the metaverse doesn’t give rise to digital white ethnostates or cyber caliphates?

AI-Generated Fake News Is Coming to an Election Near You

Many years before ChatGPT was released, my research group, the University of Cambridge Social Decision-Making Laboratory, wondered whether it was possible to have neural networks generate misinformation. To achieve this, we trained ChatGPT’s predecessor, GPT-2, on examples of popular conspiracy theories and then asked it to generate fake news for us. It gave us thousands of misleading but plausible-sounding news stories. A few examples: “Certain Vaccines Are Loaded With Dangerous Chemicals and Toxins,” and “Government Officials Have Manipulated Stock Prices to Hide Scandals.” The question was, would anyone believe these claims?

We created the first psychometric tool to test this hypothesis, which we called the Misinformation Susceptibility Test (MIST). In collaboration with YouGov, we used the AI-generated headlines to test how susceptible Americans are to AI-generated fake news. The results were concerning: 41 percent of Americans incorrectly thought the vaccine headline was true, and 46 percent thought the government was manipulating the stock market. Another recent study, published in the journal Science, showed not only that GPT-3 produces more compelling disinformation than humans, but also that people cannot reliably distinguish between human and AI-generated misinformation.

My prediction for 2024 is that AI-generated misinformation will be coming to an election near you, and you likely won’t even realize it. In fact, you may have already been exposed to some examples. In May 2023, a viral fake story about a bombing at the Pentagon was accompanied by an AI-generated image that showed a big cloud of smoke. The story caused public uproar and even a dip in the stock market. Republican presidential candidate Ron DeSantis used fake images of Donald Trump hugging Anthony Fauci as part of his political campaign. By mixing real and AI-generated images, politicians can blur the lines between fact and fiction, and use AI to boost their political attacks.

Before the explosion of generative AI, cyber-propaganda firms around the world needed to write misleading messages themselves and employ human troll factories to target people at scale. With the assistance of AI, the process of generating misleading news headlines can be automated and weaponized with minimal human intervention. For example, micro-targeting—the practice of targeting people with messages based on digital trace data, such as their Facebook likes—was already a concern in past elections, though its main obstacle was the need to generate hundreds of variants of the same message to see what works on a given group of people. What was once labor-intensive and expensive is now cheap and readily available with no barrier to entry. AI has effectively democratized the creation of disinformation: Anyone with access to a chatbot can now prompt the model on a particular topic, whether it’s immigration, gun control, climate change, or LGBTQ+ issues, and generate dozens of highly convincing fake news stories in minutes. In fact, hundreds of AI-generated news sites are already popping up, propagating false stories and videos.
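Part of the cost collapse described above is simple combinatorics: a handful of interchangeable message "slots" multiplies into a large pool of testable variants. The sketch below uses deliberately neutral placeholder strings (no real campaign content); in practice, an LLM rather than a fixed list would fill each slot with tailored text.

```python
# How a few interchangeable message fragments multiply into many
# variants for A/B-style micro-targeting. All strings are neutral
# placeholders used purely for illustration.
from itertools import product

openers = ["Did you know?", "Breaking:", "Worth a closer look:"]
claims  = ["Policy X changes everything.",
           "Experts are divided on Policy X.",
           "Policy X affects your community."]
appeals = ["See the full story.", "Decide for yourself.", "Share this."]

variants = [" ".join(parts) for parts in product(openers, claims, appeals)]
print(len(variants))  # 3 * 3 * 3 = 27 distinct messages from 9 fragments
# Five slots with five options each would already yield 5**5 = 3,125.
```

The arithmetic is what matters: testing which variant "works" on a given audience used to require a writing staff; with generated slot-fillers, it requires almost nothing.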

To test the impact of such AI-generated disinformation on people’s political preferences, researchers from the University of Amsterdam created a deepfake video of a politician offending his religious voter base. For example, in the video the politician joked: “As Christ would say, don’t crucify me for it.” The researchers found that religious Christian voters who watched the deepfake video had more negative attitudes toward the politician than those in the control group.

It is one thing to dupe people with AI-generated disinformation in experiments. It’s another to experiment with our democracy. In 2024, we will see more deepfakes, voice cloning, identity manipulation, and AI-produced fake news. Governments will seriously limit—if not ban—the use of AI in political campaigns. Because if they don’t, AI will undermine democratic elections.

A New Way to See Your Climate Anxiety

A recent global study, which surveyed 10,000 young people from 10 countries, showed that nearly 60 percent of them were extremely worried about the future state of the planet. The report, published in the medical journal The Lancet, also showed that nearly half of the respondents said such distress affected them daily, and three-quarters agreed with the statement that “the future is frightening.” This study, and many others, shows clearly that climate change is not just a threat to the environment we inhabit. It also poses a very real threat to our emotional well-being.

Psychologists have categorized these feelings of grief, distress, and worry about the current climate emergency—a common occurrence among youth today—under the label of “eco-anxiety.” According to the Climate Psychology Alliance, eco-anxiety is defined as the “heightened emotional, mental or somatic distress in response to dangerous changes in the climate system.” Eco-anxiety doesn’t just affect young people. It also affects researchers who work in climate and ecological science, burdened by the reality depicted by their findings, and it affects the most economically marginalized across the globe, who disproportionately bear the devastating impacts of climate breakdown.

In 2024, eco-anxiety will rise to become one of the leading causes of mental health problems. The reasons are obvious. Scientists estimate that the world is likely to breach safe levels of warming above pre-industrial temperatures for the first time by 2027. In recent years, we’ve seen wildfires tear through Canada and Greece, and summer floods decimate regions in Pakistan that are home to nearly 33 million people. Studies have shown that those affected by air pollution and rising temperatures are more likely to experience psychological distress.

To make matters worse, in the face of climate catastrophe, our political class is not offering strong leadership. The COP28 conference in Dubai will be headed by an oil and gas company executive. In the UK, the government is backtracking on its green commitments.

Fortunately, greater levels of eco-anxiety will also offer an avenue for tackling the climate crisis head-on. Caroline Hickman, a researcher on eco-anxiety from the University of Bath, cautions that the feelings of worry, grief, despair, and despondency associated with eco-anxiety should not be pathologized. After all, the cause of this mental distress is undeniably external. According to Hickman, anyone experiencing these emotions is displaying entirely natural and rational reactions to the climate crisis. Her suggestion? Harness eco-anxiety as a tool for good—as an emotion that can galvanize people to act in protection of our planet.

This is why, in 2024, we will also see more people around the world join the fight for climate justice and seek jobs that prioritize environmental sustainability. Campaigners will put increased pressure on fossil fuel industries and the governments that subsidize them to rapidly phase out the usage of polluting coal, oil, and gas. It’s now clear that not only are they the main culprits for the climate crisis, they are also responsible for the mental health crisis that is starting to affect most of us. Eco-anxiety is not something we will defeat with therapy—we will tackle it by taking action.