Beware of Fake Debates: Spotting Bots on Social Media

Image of robots

Roger-Luc Chayer (Image: AI / Gay Globe)

You’ve probably noticed it if you’re a regular user of Facebook, Instagram, X, or other social media platforms: the same false, borderline-fraudulent message is sometimes spread by dozens or even hundreds of different pages. These posts are often accompanied by a flood of comments, creating the impression of a real conversation around the false claims and lending them credibility.

No topic is off-limits, but political, social, and economic issues receive the most attention.

Unfortunately, many well-intentioned internet users, wishing to express their opinion or indignation, add a comment to these conversations without realizing that the posts and subsequent messages are generated by bots. They end up interacting with automated programs whose objective is never neutral. Those who operate these bots seek to manipulate public opinion, sow doubt about current issues, and even shape your perceptions and beliefs.

What is a bot on Facebook and other social networks?

According to ChatGPT, a conversational and search bot, a bot on Facebook and other social networks is an automated program designed to post, comment, and interact like a real user. It can spread false information, manipulate public opinion, or artificially amplify certain ideas by creating the illusion of widespread support.

Some bots are used for advertising purposes or to automate legitimate tasks, but others serve to influence political, economic, or social debates. Their goal is often to sow doubt and subtly shape users’ perceptions.

Just because many are wrong doesn’t mean they’re right!

This quote from French comedian Coluche, which I like to use in online debates, reminds us that the majority can be wrong: truth is not determined by popularity or by the sheer number of people holding an opinion.

Use of bots to influence public opinion on LGBTQ+ issues

An excellent recent example of the use of bots concerns U.S. President Donald Trump’s announcement that any mention of LGBTQ+ would be removed from federal government websites and all government documentation.

Immediately following this announcement, hundreds of Facebook groups and other pages sprang up to support the new policy with hundreds of messages. While the topic had rarely been discussed before, social media gave the impression that the entire American population supported the president on the issue: a false unanimity that serves only very particular interests.

Who are the manipulators behind these bots?

They are often entities or organized groups with specific political, economic, or ideological interests. They may be governments or private actors, including businesses, seeking to influence public opinion, sow confusion, or amplify certain ideologies. These manipulations are often carried out by intelligence agencies, cyberactivist groups, or even private companies specializing in image management and communication.

The countries most often identified in these acts are those with authoritarian regimes or governments seeking to control information and manipulate public opinion for their own advantage. Nations such as Russia, China, or certain Middle Eastern states have frequently been mentioned for their involvement in disinformation campaigns using bots on social media.

These countries use these techniques to influence elections, sow unrest in democratic societies, or even promote their geopolitical agendas. And it works very well, thanks to the credulity of genuine internet users, who too often take what they read at face value.

How to detect these bots?

Detecting bots on social media can be difficult, but several signs should raise suspicion. These bots often generate a high volume of repetitive messages that seem disconnected from the context of human discussion. They use hastily created accounts with generic names, few or no profile photos, scant personal details, and either only two or three posts or a long history of meaningless photos.

The posts shared by these accounts are often emotional, polarizing, or aim to manipulate public opinion on a specific topic, while spreading dubious information. Moreover, bots often interact at unusual hours, outside of peak human activity, which betrays their automated nature.

Analyzing their activity, which often focuses on a specific subject, and the speed at which these accounts react to events, can also be indicators of bot manipulation. Verifying the credibility of sources and messages is essential for spotting these behaviors. It is not uncommon to see a Facebook group appear at 9 a.m. and display more than 14,000 comments by noon. This is highly abnormal and should raise caution among internet users.
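For readers curious how these warning signs might be applied systematically, they can be sketched as a simple checklist. The account fields and thresholds below are illustrative assumptions for this sketch, not data any platform actually exposes or an official detection method:

```python
# A minimal, hypothetical heuristic inspired by the warning signs above.
# Every field name and threshold here is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int            # how recently the account was created
    has_profile_photo: bool  # bots often lack a profile picture
    post_count: int          # total posts in the account's history
    posts_per_hour: float    # average posting rate
    night_post_ratio: float  # share of posts made at unusual hours

def bot_suspicion_score(a: Account) -> int:
    """Count how many of the article's red flags an account exhibits (0-5)."""
    flags = 0
    if a.age_days < 30:           # freshly created account
        flags += 1
    if not a.has_profile_photo:   # missing profile picture
        flags += 1
    if a.post_count < 5:          # almost no posting history
        flags += 1
    if a.posts_per_hour > 10:     # inhuman posting volume
        flags += 1
    if a.night_post_ratio > 0.5:  # mostly active outside peak human hours
        flags += 1
    return flags
```

A score of 4 or 5 would not prove automation, but, as the article suggests, it should prompt a reader to verify the account before engaging with it.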

Remember, it’s not always necessary to comment on everything; restraint remains the best protection against manipulation!
