This concern over the risk of foreign influence has prompted social media companies – TikTok, X and Meta, which owns Instagram and Facebook – to invest in measures they say will protect users from online manipulation.

All of these companies have told me that, this election, they tried to ensure users got reliable information. Some have removed posts and accounts following my various investigations.

During the 2024 election, for the first time since Elon Musk took over X, the social media site responded to allegations I raised – and took action, too.

But lots of the tactics I’ve uncovered were deployed and finessed by political activists long before Rishi Sunak stood in the pouring rain to call the general election.

The group of accounts sharing the faked clips and false comments about Wes Streeting had shared similar posts about Keir Starmer, for example, during a by-election back in February 2024.

As someone who investigates social media’s real-world impact all year round, it feels to me as though some of these companies only really wake up and take action during an election period.

The problem is that the concept of the “social media election” is dead. Instead, the world is constantly shaped by what’s happening on our feeds and group chats long before and long after any vote.

And so in the end this wasn’t a deepfake election – it was an election in which the same old questions about social media regulation went unanswered. The warnings about AI were a distraction from the lack of clear solutions to problems posed by algorithms and well-practised misinformation tactics online.
