The internet giant Meta is facing legal challenges that could change the way we think about legal jurisdiction and corporate responsibility in the digital age, writes Mumbi Kiragu.
Kenya has emerged as a leading tech hub in Africa, celebrated for its vibrant digital ecosystem, widespread innovation, and globally competitive tech talent. The country has also become a destination for the outsourcing of the tech industry’s most psychologically taxing and poorly compensated work: content moderation. Meta, the parent company of Facebook, Instagram, and WhatsApp, outsourced its content moderation work to Kenya under conditions that have triggered three landmark lawsuits. These cases have not only exposed critical regulatory gaps but also underscored an urgent need for digital governance frameworks capable of addressing emerging harms across Africa’s digital economy.
A snapshot of the cases against Meta
The three cases against Meta expose the consequences of regulatory gaps that allow global tech companies to engage in regulatory arbitrage: strategically selecting locations to avoid stricter regulations elsewhere and exploiting weaker oversight.
At the core of these cases lie several pressing regulatory blind spots. Kenya has no law governing how digital platforms moderate content or bear liability for harm arising from it. It has no requirement that major digital platforms establish a legal presence in Kenya despite serving millions of Kenyan users. There are no digital labour protections tailored to the unique risks of content moderation work, including mental health safeguards. Finally, there are no standards requiring platforms to assess and mitigate risks in their content moderation systems, even though uniform algorithmic moderation often fails in non-dominant languages, leaving content review inadequate across Africa’s diverse linguistic landscape.
The first case, filed by South African content moderator Daniel Motaung, alleges that Meta’s outsourcing partners subjected workers to unsafe conditions, deceptive recruitment, and union busting. A central claim is that Meta failed to implement adequate mental health safeguards for Kenyan workers. Meta has already established such protections for US-based moderators following an earlier settlement, suggesting the company is willing to apply safeguards selectively depending on where identical work is performed.
The second case challenges Meta’s attempt to distance itself from employer responsibilities when it switched outsourcing partners from Sama to Majorel. The switch left workers in limbo: they were not only out of work but also barred from employment with the new contractor. The case revealed Meta’s use of layered contractors to avoid direct accountability – a practice that would face greater scrutiny in more regulated jurisdictions.
The third case is brought by Ethiopian survivors of ethnic violence who argue that Meta failed to swiftly remove hate speech that led to killings, including that of Professor Meareg Amare. While regulators abroad typically demand rapid takedowns, backed by the threat of substantial fines for non-compliance, African users experienced delays that proved fatal.
Courts alone can’t solve the problem
In all three cases, Kenyan courts agreed that Meta could be sued locally, despite the company’s lack of a legal presence in Kenya and its outsourcing tactics. This is a significant win as it establishes a crucial precedent that jurisdiction flows from impact, not incorporation. However, the real test lies in what comes next. While the substantive rulings on digital labour protections, content moderation, and algorithmic responsibility can clarify rights and assign liability, their impact will be limited without supporting regulatory frameworks.
Courts can award damages or order specific changes, but they cannot design, implement, or enforce the standards and oversight mechanisms that effective digital governance requires. What is missing is a preventive, policy-driven architecture that ensures content moderation practices are humane, companies are accountable for their algorithms, and corporate actors cannot hide behind layers of subcontracting to escape responsibility.
Kenya’s standard-setting moment
The judiciary’s willingness to hear the cases gives Kenyan politicians an opportunity to embrace regulatory reform. To achieve this, legislators must confront the flawed assumption that light-touch regulation is the necessary price for attracting tech investment. This approach has facilitated the exploitation and regulatory evasion the Meta cases have brought to light. Kenya’s leadership must reframe the narrative and demonstrate that robust digital governance is not antithetical to innovation but is essential for building an equitable and sustainable digital economy.
President Ruto’s administration faces a critical choice between maintaining the status quo and seizing the opportunity to demonstrate that strong regulation and tech leadership are complementary, not contradictory. Accepting exploitative working conditions sets a precedent that African workers’ dignity is negotiable, and ultimately attracts extractive investment rather than capacity-building partnerships. Strong regulation that rejects such practices sends the opposite message: that Kenya and Africa demand partnerships based on mutual benefit, not exploitation.
Going it alone isn’t enough
Kenya cannot solve this challenge through national action alone. No matter how sophisticated its approach, Kenya represents a small fraction of Meta’s African user base and an even smaller portion of its global revenue. A single country in Africa, even one as technologically progressive and influential as Kenya, lacks the economic leverage to compel meaningful changes in how global platforms operate.
Regional coordination is therefore essential. Africa must chart a path that reflects its unique political and institutional realities. International frameworks like the EU’s Digital Services Act offer valuable benchmarks, including requirements for timely responses to reports of illegal content, algorithmic transparency, and significant penalties for infringements tied to global revenue. However, these features are not directly transferable to the African context, where comparable supranational enforcement mechanisms and levels of political integration are still evolving.
A more realistic path forward lies in coordinating national standards that establish consistent expectations across jurisdictions. African countries should align on core regulatory principles such as minimum safeguards for content moderation, standardised transparency reporting, digital labour protections, and requirements for local legal representation by platforms with substantial user bases. The African Continental Free Trade Area’s Digital Trade Protocol offers a promising starting point. While it currently addresses issues such as data governance, cross-border data flows, and cybersecurity, it could evolve to incorporate platform accountability as a shared regulatory priority.
Ultimately, building a coherent regional framework, grounded in African realities but informed by global best practices, offers the most promising path to safeguarding digital rights, strengthening user protections, and ensuring that digital transformation across the continent is equitable, sustainable, and accountable.
Photo credit: Wikimedia, used with permission (CC BY-SA 3.0)