🔹 Summary of Key Facts
- In April 2025, Kenya's High Court ruled that it has jurisdiction to hear a landmark case against Meta over harmful content on Facebook.
- The case was filed by Abraham Meareg (whose father, an Ethiopian academic, was killed after being doxxed), activist Fisseha Tekle, and Kenya’s Katiba Institute.
- Plaintiffs argue that Facebook’s algorithms and its Kenya-based content moderation fuelled ethnic violence and human rights abuses in Ethiopia.
- The court held that Kenyan judges can rule on Meta’s actions if they impact human rights locally or globally.
- This challenges the long-standing doctrine of platform immunity (like Section 230 in the US) and signals a paradigm shift toward platform accountability.
- Kenya offers no equivalent of the US and EU “safe harbour” protections; its Constitution emphasizes human dignity, social justice, and accountability, making platforms potentially liable.
- The decision offers new hope for victims of online harm where platforms often escape liability.
In a ruling that could reshape the future of social media accountability, Kenya’s High Court (Constitutional and Human Rights Division) declared in April 2025 that it has jurisdiction to hear a case against Meta, the parent company of Facebook, over harmful content allegedly linked to violence and human rights abuses.
The case, filed in 2022, was brought by Abraham Meareg (whose father, an Ethiopian academic, was murdered after being doxxed on Facebook), alongside Ethiopian activist Fisseha Tekle and the Katiba Institute, a Kenyan constitutional rights group. They argue that Facebook’s algorithms and its Kenya-based content moderation not only endangered individuals but also helped fuel conflict in Ethiopia, spreading hate speech, incitement, and discrimination far beyond Kenya’s borders.
At the heart of the case is whether a global giant like Meta can profit from content that violates constitutional rights while avoiding responsibility for it. The plaintiffs insist the platform has a duty to remove content that violates both Kenya’s Constitution and its own Community Standards.
In siding with the claimants on jurisdiction, the court emphasized that Kenya’s Constitution empowers judges to scrutinize the actions or omissions of foreign corporations when those actions affect human rights locally or internationally. This interpretation signals a paradigm shift: instead of granting blanket immunity, Kenyan courts will ask whether platform decisions uphold or undermine fundamental rights.
This marks a sharp departure from the status quo. In the United States, platforms are shielded by Section 230 of the Communications Decency Act, which courts—including in the Rohingya genocide case—have used to block similar lawsuits against Meta. Even in the European Union, platforms benefit from limited liability under safe harbour provisions. Kenya’s move stands out by placing human dignity and social justice above corporate shields.
For years, critics have argued that tech giants, now among the world’s wealthiest corporations, no longer need the legal immunity once meant to protect “nascent” internet services. Instead, they contend, these companies have the financial and technical capacity to prioritize human rights, but often choose profit instead.
As the Kenyan case proceeds, digital rights advocates say it could inspire other African nations to use constitutional and human rights law as tools to hold tech companies accountable. For victims of online harm, particularly in regions where platforms have little physical presence, the ruling represents a rare glimmer of hope.