
Roger-Luc Chayer (Image: AI / Gay Globe)
Meta’s Algorithms and the Spread of False Information
All users of Meta’s services, whether Facebook, Instagram, Threads, or others, know very well that Meta analyzes everything they read in order to bombard them with similar content. The algorithm assumes that this is what they want, when in reality it can be extremely irritating.
How many times do we scroll past thousands of fake news stories about health discoveries, about Trump, about the war in Ukraine, or about any statement made by politicians around the world? This is social pollution that obviously fuels disinformation, but above all serves to generate clicks and views in order to monetize these sites. Meta does nothing to remove this content from its platforms, since it is highly profitable for the company.
For a long time, I believed there was nothing that could be done and that this was simply the price to pay for using social media. I was wrong!
Why Disinformation Posts Spread
The reason we see so many misleading posts on social media comes down to the algorithm and the business model of these platforms. Networks like Meta, X (formerly Twitter), or TikTok do not simply show what is true or relevant; they optimize for whatever keeps your attention the longest. Sensational, polarizing, or provocative content generates far more clicks, shares, and comments than neutral or factual information.
This mechanism has several consequences. First, disinformation spreads very quickly, because it is often more shocking or emotional than the truth. Second, these platforms monetize visibility: more clicks and interactions mean more advertising revenue. Finally, there are few effective filters to stop the spread of false or misleading content, because removing it could reduce engagement, which runs counter to the platform’s economic interests.
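The logic described above can be captured in a toy sketch. To be clear, this is not Meta’s actual ranking code: the Post fields and scoring weights below are invented for illustration, and real feed rankers use far more signals.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int
    comments: int
    shares: int
    is_accurate: bool  # known to fact-checkers, ignored by the ranker

def engagement_score(post: Post) -> float:
    # Hypothetical weights: comments and shares keep people on the
    # platform longer than clicks, so they count for more.
    return post.clicks + 3 * post.comments + 5 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Note what is absent: is_accurate never enters the ordering.
    return sorted(posts, key=engagement_score, reverse=True)
```

The point of the sketch is the omission: when the objective only counts engagement, accuracy has no way to influence what you see.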
Cognitive Biases and the Virality of Fake News
The fact that many people believe these misleading publications can be explained by several factors related to human psychology and the way algorithms function. Social media platforms exploit natural cognitive biases: for example, we tend to believe what confirms our preexisting ideas (confirmation bias), to share what triggers strong emotions such as fear, anger, or surprise, and to trust what our friends or communities post, even if it is false.
Algorithms amplify these tendencies by prioritizing the most engaging content, not necessarily the most accurate. This creates what are known as filter bubbles, where each user is exposed almost exclusively to information that reinforces their beliefs and opinions. Over time, this makes disinformation more convincing and harder to challenge, because it is constantly repeated and validated by one’s virtual circle.
In practice, this combination of psychological predispositions and algorithmic logic transforms disinformation into a viral phenomenon: the more content shocks or reassures, the more it is shown, the more it is shared, and the more “real” it becomes in the eyes of those who see it.
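A rough way to see why this loop goes viral is a back-of-the-envelope simulation. The share rates and the amplification factor below are assumptions chosen only to show the shape of the curve, not measured values.

```python
def simulate_views(initial_views: int, share_rate: float, rounds: int) -> int:
    """Toy model: each round, a fraction of viewers share the post, and the
    algorithm shows it to roughly ten new people per share (assumed factor)."""
    views = initial_views
    for _ in range(rounds):
        shares = views * share_rate
        views += int(shares * 10)
    return views

# A shocking post that 5% of viewers share versus a neutral one shared by 1%.
print(simulate_views(1000, share_rate=0.05, rounds=5))  # ~7,600 views
print(simulate_views(1000, share_rate=0.01, rounds=5))  # ~1,600 views
```

With identical starting audiences, the more emotional post ends up shown to several times as many people after only a few rounds, which is the compounding effect the paragraph above describes.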
Concrete Example: Fake Health News
Asked about this situation, ChatGPT explains:
“Let’s take a concrete example related to health. Suppose a post claims that a natural remedy X ‘permanently cures’ a common disease. This claim, even without scientific proof, will immediately attract attention: it promises a simple solution to a serious problem, triggering curiosity, hope, and emotion. People who believe it will like, comment, and share it with friends, reinforcing its spread.
The social media algorithm detects this high engagement and then shows the post to even more people, often to profiles that have shown interest in similar topics. They see the content in their feed, sometimes multiple times, creating an illusion of truth: the more often we see a piece of information repeated, the more our brain tends to consider it reliable.
If the post is linked to polarizing personalities or events, such as Trump or the war in Ukraine, the effect is even stronger: fear, anger, or outrage encourage immediate sharing without verification. Thus, false information can spread very quickly, reach millions of people, and appear ‘true’ simply because the social network optimizes the visibility of content that provokes an emotional reaction, not its accuracy.”
How to Take Back Control of Your News Feed
A small revelation I had recently: believing that we are powerless in the face of social media is completely false. In reality, it is possible to play with the algorithm and reduce the reach of these undesirable posts. Here is how to proceed.
First, I cut off all interaction with unwanted content. No clicks, no comments, not even to denounce it. Even a critical comment feeds the algorithm, because it interprets any interaction as a signal of interest. Indifference is far more powerful than outrage.
Next, I systematically use the available tools: I hide posts, select “I’m not interested,” unfollow a page without necessarily blocking it, and report clearly false content. Each action sends a negative signal to the system. Repeated over time, these actions significantly reshape my news feed.
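As a mental model of why these small actions work, here is a sketch of how a ranker might maintain a per-source affinity score. The action names and multipliers are assumptions for illustration, not Meta’s real values.

```python
from collections import defaultdict

# Hypothetical per-source affinity scores a ranker might maintain.
affinity = defaultdict(lambda: 1.0)

def record_signal(source: str, action: str) -> None:
    """Toy model of how user actions could adjust a source's weight.
    The multipliers below are invented, not Meta's real parameters."""
    multipliers = {
        "hide": 0.5,
        "not_interested": 0.6,
        "unfollow": 0.1,
        "report": 0.2,
        "like": 1.3,
        "share": 1.5,
    }
    affinity[source] *= multipliers.get(action, 1.0)

record_signal("fake-news-site.example", "hide")
record_signal("fake-news-site.example", "report")
print(affinity["fake-news-site.example"])  # 0.1: far less likely to surface
```

The same mechanism works in reverse: likes and shares push a source’s weight up, which is exactly what the next step exploits.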
At the same time, I deliberately strengthen the content I want to see. I follow reliable media outlets, interact with credible sources, comment on or share serious analyses. The algorithm works through learning: it amplifies what I feed it.
I also do regular cleanup: I clear out old subscriptions, leave certain groups, and review the pages I follow. Much disinformation comes from subscriptions forgotten years ago.
From the very first day I began using “Unfollow…” and “Hide all from…,” applying these functions to about fifty posts in barely an hour, I noticed an immediate, sharp decline in fake news and fraudulent sites. For two days, nothing similar appeared. Today, every time I encounter such a post, I systematically remove it from my feed. The result: not only does this content disappear from my view, but these sites can no longer profit from me in any way.
I am sharing this so that my readers stop enduring algorithms and take back control over what they see.
