Recent findings reveal that Facebook and X have allowed advertisements promoting hate speech and violence against minorities, raising concerns around content moderation policies ahead of Germany's federal elections.
Research conducted by a German organization has found that Facebook and X, formerly known as Twitter, approved advertisements containing extreme hate speech and calls for violence against minorities in the run-up to Germany's federal elections.
The investigation found that Meta, the parent company of Facebook, approved half of the submitted ads, while X approved all ten ads presented for review.
The approved advertisements included phrases comparing Muslim refugees to 'viruses' and 'vermin,' along with calls for their extermination or sterilization.
One particularly egregious advertisement called for synagogues to be set on fire in order to 'stop the Jewish globalist agenda.' The researchers emphasized that all of these ads passed through moderation checks before being cleared for distribution, a finding that raises serious questions about the effectiveness of both platforms' content moderation policies.
The organization has submitted its findings to the European Commission, which is expected to launch an investigation into potential violations of the European Digital Services Act by Meta and X.
The timing of these revelations is particularly sensitive due to the imminent federal elections in Germany, with concerns mounting over the potential influence of hate speech on the democratic process.
Social media platforms have previously faced scrutiny for the dissemination of hate speech during politically charged moments.
Facebook was previously embroiled in the Cambridge Analytica scandal, in which the political consulting firm was revealed to have harvested user data to influence elections around the world; the affair ultimately cost Facebook a $5 billion fine.
Elon Musk, the owner of X, has also faced allegations of overtly intervening in German elections, including calls to support the far-right Alternative for Germany (AfD) party.
However, it remains unclear whether the approval of these specific ads reflects Musk's political leanings or the broader principle of 'freedom of expression' he has championed for X. Musk has dismantled the platform's previous content oversight mechanisms, relying instead on community notes, a system that lets users append comments offering countering views to those expressed in posts.
Mark Zuckerberg, CEO of Meta, has since announced that he too will implement similar mechanisms on
Facebook, although he maintains that there will still be content oversight through AI-based tools aimed at addressing hate speech and illegal content such as child exploitation.
These developments are seen as part of a broader, troubling trend, coinciding with other research indicating a rise in extreme right-wing content on X and TikTok that could sway public opinion in future elections.
The ongoing economic instability and a recent surge of violent incidents attributed to Muslim migrants have further fueled tensions surrounding these issues.
It remains difficult to ascertain whether the rise in extreme content is a product of the socio-political climate or if social media algorithms are amplifying such messages to drive user engagement.
Both Musk and Zuckerberg have recently demonstrated a willingness to relax content moderation measures that are ostensibly required by regulations from the European Union and Germany.
It remains uncertain whether the current research will prompt the European Union to tighten its oversight of X, Facebook, and TikTok, but it stands as a notable indicator that extreme content often serves political agendas rather than genuine free-speech principles.
Musk himself has criticized the community notes mechanism when it was applied to his own posts, notably after users appended survey results showing strong support for Ukrainian President Volodymyr Zelenskyy, claiming that the notes could be manipulated by political actors.
Ultimately, while the accountability of social media platforms for the moderation of their content is under scrutiny, the discourse raises broader questions about whether government or independent bodies should play a role in ensuring transparency and adherence to local laws regarding content on social media.
There is broad agreement in public discourse that hate speech must be recognized and condemned, yet expecting companies to self-regulate and police their own users may be an idealistic viewpoint.
Just as traditional media outlets are bound by advertising laws and ethical standards, similar accountability may be warranted for content published on social media platforms.