Al-aalem Al-jadeed

A newspaper free from partisanship,
sectarianism and the influence of its owners

Biased Algorithms: How Digital Platforms Reinforce Abuse Against Female Politicians in Lebanon

This investigation reveals how gender-based discrimination in social media algorithms sabotages the election campaigns of female politicians in Lebanon. It also exposes the circulation of misleading information, smear campaigns and online harassment that have hampered female politicians' success, their ability to reach voters and their chance to engage in fair political competition.

“I received personal threats from fake accounts, which forced me to move away from my hometown for a while, so I wouldn’t be at risk of physical harm.” Those were the words of Dima Abou Daya, who was a candidate for the Shia seat on the “Zahle for Sovereignty” list in the Beqaa Valley, backed by the Lebanese Forces Party, in the most recent parliamentary elections held in Lebanon in 2022. She had decided to take on the parties controlling her region, and as a result found herself in the midst of a fierce electoral battle.

There are at least three cases of Lebanese women who stood for election and experienced online violence and real-life threats from internet groups backed by rival candidates.

Although over half a century has passed since Lebanese women won the legal right to vote and to run for office in 1952, and despite Lebanon’s signing of the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW), gender inequality remains blatant. Only eight of the 115 Lebanese women who stood as candidates won seats in the 2022 parliamentary elections, leaving female representation in decision-making in Lebanon at a mere six percent (eight of 128 parliamentary seats), and five percent in municipal councils (663 of 12,139 municipal seats).

Political divisions in the country are reflected in electoral alliances. During the time of this investigation, party lists in most constituencies were made up of those loyal to the dual alliance of Hezbollah and the Amal Movement, along with the Free Patriotic Movement, versus lists supported by the opposition. These included the Lebanese Forces and the Kataeb Party, as well as the “Change” lists, which formed a political front separate from the other two factions. This produced a new parliamentary map in which no single bloc emerged as a clear winner or loser.

Dima Abou Daya says that various websites conducted a systematic campaign against her, prompting her family to issue a statement disavowing her candidacy in the 2022 elections. As a result, her campaign became vulnerable to direct attack by other saboteurs through email and social media trolling.

The reporter behind this investigation analysed data from 20 randomly selected social media accounts belonging to male and female candidates in the 2022 Lebanese parliamentary elections.

The analysis showed that derogatory comments and smear campaigns on social media focused on female candidates, and included casting doubt on women’s eligibility and ability to be politically active, as well as defaming female candidates by publishing personal photos or information.

The analysis also revealed low engagement with female candidates’ accounts once they had announced their intention to stand for election.

Online Targeting of Women

A joint study conducted in 2022 by the Maharat Foundation and Madanyat, entitled “Media and Gender Monitoring in the 2022 Elections – Violence against Women in Politics”, identifies the types of online violence practised against women working in politics.

By monitoring cases of gender-based violence against female candidates during the election period, the study found that victims were subjected to all types of online abuse, including defamation, hate speech, psychological abuse, threats of physical harm and sexual violence, and bullying.

Behind this systematic campaign were “electronic armies” of fake accounts set up by political parties with the aim of silencing female candidates and preventing them from expressing opinions that ran counter to the interests of these parties.

The study concluded that a contextual and cultural approach was needed, one that takes gender into consideration when moderating content. It also called for information to be provided on the conduct of “electronic armies” and on algorithms linked to gender-based violence against women. The study also urged greater engagement by civil society organisations to strengthen reporting mechanisms and boost measures to reduce online abuse.

Social media algorithm expert John Kevin describes algorithms as a series of instructions designed to solve specific problems, perform tasks, make decisions or issue directives, a description that applies directly to the case of female politicians in Lebanon.

“These algorithms fuel hate speech and spread misogynistic content, resulting in wider dissemination without any filtering or warning put on the content,” says Kevin.

How Algorithms Amplify Harassment and Hate Speech

So how do social media algorithms work? Are things like harassment, bullying, targeting of women, smearing, and misinformation an inherent part of these algorithms? And how do they amplify this type of discourse?

In answer to these questions, digital media and communication specialist Bachir Al Teghrini explains that each social media application has its own algorithms for directing content. Facebook has done this, for instance, as witnessed during past American elections and in the current war in Gaza.

“In Lebanon, there are so-called electronic armies run by political parties. When they attack a male or female MP who has gone against the views of a particular group, a hashtag appears at the top of the page. Interaction with it starts off positively, but then the group seeking to influence opinion posts its own content, and the algorithms begin to pick up this content and circulate it widely without analysing it,” says Teghrini.
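To illustrate the dynamic Teghrini describes, the sketch below shows a minimal, hypothetical engagement-weighted ranking: posts that accumulate interactions quickly rise in a feed regardless of what they say, so a coordinated burst of activity from fake accounts is treated the same as organic interest. The scoring weights and function names are assumptions made for illustration, not the actual ranking code of any platform.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int
    posted_at: datetime

def engagement_score(post: Post, now: datetime) -> float:
    """Hypothetical score: interactions push a post up, age pulls it down.
    Nothing in this calculation inspects what the post actually says."""
    interactions = post.likes + 2 * post.comments + 3 * post.shares
    age_hours = max((now - post.posted_at).total_seconds() / 3600, 1.0)
    return interactions / age_hours

def rank_feed(posts: list[Post]) -> list[Post]:
    now = datetime.now(timezone.utc)
    # A coordinated wave of comments and shares raises a post's score
    # exactly as genuine interest would, which is how abusive content
    # can be amplified without any analysis of its meaning.
    return sorted(posts, key=lambda p: engagement_score(p, now), reverse=True)
```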

The team at Meta commented: “We realise that, while some comments may be offensive, they may not violate our policies. That’s why we use technology to prioritise the most important content to be reviewed, whether it’s something that has been reported to us or picked up proactively by our systems.”

The Meta team added that this helps them detect harmful content and block it from hundreds or thousands of users: “Meta has invested in technology that can proactively detect content that violates our guidelines, helping us take proactive measures against 98 percent of hate speech on Instagram during the last quarter of 2023, before it was reported to us.”

In an attempt to plug this gap, companies have developed algorithms through artificial intelligence (AI) and machine learning, according to Abed Qataya, director of the media programme at the SMEX organisation, which promotes digital rights. He said that privacy and usage policies, which companies term “community guidelines”, have begun to consider subjects like harassment, bullying and defamation by using keywords as indicators informing algorithms to prevent specific content from being published.
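As a rough illustration of the keyword-as-indicator approach Qataya describes, the sketch below flags a post for review when it contains a term from a moderation list. The word list, threshold and function names are purely illustrative assumptions, not any platform’s real filter, which would rely on far larger multilingual lists combined with machine-learning classifiers.

```python
import re

# Illustrative, hypothetical keyword list standing in for a real
# moderation vocabulary of slurs, threats and harassment terms.
FLAGGED_TERMS = {"slur_example", "threat_example", "harassment_example"}

def flag_for_review(text: str) -> bool:
    """Return True if the text contains any flagged keyword.

    A keyword hit does not remove the post by itself; it only signals
    the moderation pipeline that the content needs review.
    """
    words = set(re.findall(r"\w+", text.lower()))
    return bool(words & FLAGGED_TERMS)

posts = ["an ordinary campaign update", "a post containing threat_example"]
for post in posts:
    print(post, "->", "flag for review" if flag_for_review(post) else "allow")
```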

Qataya said that current algorithms in practice amplify harassment and hate speech against the accounts of female candidates. Leaks from Meta have shown that content flagged as harmful on Facebook has failed to be removed when it receives significant traction. He pointed to multiple posts containing hate speech, defamation and fake news that Meta would not take down, citing technical error as the reason.

Qataya asserted that matters of misinformation and disinformation are harder to deal with, although efforts are underway to tackle them.
