Kenya court allows lawsuit against Meta over hate speech

The case alleges that Facebook’s recommendation algorithms amplified violent content during the Tigray war

A Kenyan court has ruled that Meta (META.O), Facebook’s parent company, can be sued in the country over its alleged role in promoting content that contributed to ethnic violence in Ethiopia.

Thursday's ruling could have far-reaching implications for Meta's global content moderation practices, particularly concerning Facebook's role during the 2020-2022 civil war in northern Ethiopia's Tigray region.

The case, initiated by the Katiba Institute and two Ethiopian researchers, alleges that Facebook’s recommendation algorithms amplified violent content during the Tigray war. One plaintiff, Abrham Meareg, claims his father was killed in 2021 following threatening posts on Facebook, while another plaintiff, Fisseha Tekle, a researcher with Amnesty International, states he was targeted with online hate for his human rights work in Ethiopia.

The plaintiffs are demanding that Meta establish a restitution fund for victims of hate and violence and change Facebook's algorithms to stop amplifying harmful content. Meta has defended its record, saying it has invested heavily in content moderation and has removed hateful content from the platform.

Meta had argued that local courts lack jurisdiction over cases involving the company in countries where it is not registered. Kenya's High Court rejected that argument, holding that such global matters can be addressed in local courts.

“The court here has refused to shy away from determining an important global matter, recognising that homegrown issues must be addressed directly in our courts,” said Nora Mbagathi, executive director of the Katiba Institute.

This lawsuit is not Meta's first legal challenge in Kenya. The company also faces lawsuits from content moderators employed through a local contractor, who say they endured poor working conditions and were wrongfully dismissed after attempting to form a union.

Meta has stated that it requires its partners to maintain industry-leading conditions.

The ruling marks a significant development for Meta, which, despite investing billions and employing thousands of content moderators worldwide, has recently shifted its approach. In January, the company ended its U.S. fact-checking program and announced it would no longer proactively scan for hate speech, instead reviewing content only when users report it.

Monitoring Desk