As a global election season expected to be mired in misconceptions and falsehoods approaches, tech platforms in the US are walking back policies meant to curb them, stoking alarm.
The social media giants are demonstrating a growing reluctance to act as the sheriffs of the internet's Wild West.
The changes have come amid layoffs, cost-cutting measures, and pressure from right-wing groups that accuse the likes of Meta and YouTube owner Google of suppressing free speech.
This has led tech companies to loosen content moderation policies, reduce trust and safety teams and, in the case of Elon Musk's platform, restore accounts known for pushing bogus conspiracy theories.
Researchers say these moves have eroded the platforms' ability to tackle what is expected to be a deluge of misinformation during more than 50 major elections around the world next year, not only in the United States but also in India, Africa and the European Union.
Tech platforms are not 'ready for the 2024 election tsunami,' the watchdog Global Coalition for Tech Justice said in a report this month.
YouTube said in June it would stop removing content that falsely claims the 2020 US presidential election was plagued by 'fraud, errors or glitches', a move widely criticized by misinformation researchers.
In November, Twitter, now known as X, said it would no longer enforce its COVID misinformation policy.
Musk has restored thousands of accounts once suspended for spreading misinformation, and introduced a paid verification system that researchers say has served to boost conspiracy theorists.
Last month, the platform said it would allow paid political advertising from US candidates, reversing a previous ban and sparking concerns about misinformation and hate speech in next year's election.
Musk's control over Twitter has helped usher in a new era of recklessness by large tech platforms, said Nora Benavidez of the nonpartisan group Free Press.
The platforms are also under pressure from conservative US advocates who accuse them of colluding with the government to remove or suppress right-wing content under the guise of fact-checking.
'They may just stop causing themselves problems when all they're doing is increasing their own vulnerability,' said Berin Szoka, president of the think tank TechFreedom.
For years, Facebook's algorithm moved posts lower in the feed if they were flagged by one of the platform's third-party fact-checking partners, including AFP, reducing the visibility of false or misleading content.
Facebook recently gave US users the option to have such flagged content ranked higher in their feeds if they wish, a potentially significant move that the platform said gives users more power over its algorithm.
Political polarization in the United States has made content moderation on social media a hot-button issue.
The Supreme Court on Monday temporarily put on hold an order limiting the ability of President Joe Biden's administration to contact social media companies to remove content it considers to be misinformation.
A lower court panel of Republican-nominated judges had granted that order, ruling that US officials went too far in their efforts to get platforms to censor certain posts.
Misinformation researchers from prominent institutions such as the Stanford Internet Observatory face a Republican-led congressional inquiry as well as lawsuits from conservative activists who accuse them of promoting censorship, a charge they deny.
Downsizing across the tech sector, which has gutted trust and safety teams, and poor access to platform data have added to researchers' challenges, Ramya Krishnan, a professor at Columbia University's Knight First Amendment Institute, told AFP.