Unchained from the algorithm: Global developments in social media platform liability

Opinion

The jury answered yes to all seven questions on verdict forms for both companies, finding that Meta and YouTube were negligent in the design and operation of their platforms and that their negligence was a substantial factor in causing harm to the plaintiff.


By Dario Milo, Neo Moerane and Ayesha Hokee

IT’S been a challenging period for social media platforms. The past year has seen numerous developments in the laws that govern them globally.

 First, Australia has led the way in introducing age-related legislation to regulate the use of social media, with other countries adopting the same approach. Then, just over two weeks ago, Meta and Google were on the receiving end of adverse judgments with profound implications for their business models.  

The verdicts issued in the United States cases of Kaley G.M. v Meta and Google, and the State of New Mexico v Meta and Others, if upheld on appeal, could dramatically change the way platforms operate.

In New Mexico v Meta Platforms, the New Mexico Department of Justice sought to hold Meta accountable under the New Mexico Unfair Practices Act for its platform design choices, which were alleged to harm child-users of Meta's platforms, and for its alleged failure to safeguard child-users from bad actors online.

 It was argued that Meta’s platform design features created and sustained an environment where predators can locate, contact, and exploit minors at scale, and that Meta was repeatedly warned by internal employees and external child safety experts that its recommendation systems, messaging tools, and inadequate safeguards were being used for child sexual exploitation. 

Meta's disregard for these warnings was said to reflect a prioritisation of profit over curbing the proliferation of such content.

Furthermore, testimony from witnesses, together with expert evidence, was led to demonstrate that Meta engineers its platforms to maximise youth engagement through addictive design features such as the infinite scroll design, ephemeral content (including the 24-hour story function) and short-form video content.

These deceptive acts were said to have misled users regarding the safety of Meta's platforms, further evidenced by Meta concealing internal assessments that revealed safety and addiction risks to its users. The jury found in favour of New Mexico and imposed penalties of USD 375 million, calculated across several thousand violations at a cap of USD 5,000 per violation.

Just a day after the verdict in the New Mexico case, a California-based woman achieved a David-versus-Goliath victory over both Meta and YouTube. The addictive nature of social media, and whether it is intentionally designed as such, was the subject of a six-week trial in Los Angeles in K.G.M. v Meta and Others.

The complainant, known as Kaley, argued that Meta and YouTube's features contributed to her anxiety, body dysmorphia, and suicidal ideation.  

The platforms denied causation and maintained that her mental health challenges were attributable to personal factors unrelated to the platforms' design.

The jury was required to decide whether the platforms were intentionally designed to be addictive, particularly to children, and, if so, whether they should be held liable for the claimant’s harms. The verdict was highly anticipated, not least because a number of similar claims are in the pipeline.

 Facebook co-founder and Meta CEO, Mark Zuckerberg, was extensively questioned on engagement targets, and whether particular design choices encourage compulsive use.  

Zuckerberg emphasised that teenage users contribute a minute share of advertising revenue and that the company has moved away from targets focused on time spent in-app, towards a focus on utility and value.

Instagram head, Adam Mosseri, expressed the view that Kaley's Instagram screen time, including a peak of 16 hours in a single day, was indicative of problematic use rather than evidence of addiction.  

YouTube submitted that it should not bear duties of care comparable to Meta as its features are akin to those of a streaming service such as Netflix or Disney+.

 The jury was not swayed, concluding that the platforms were negligent in their design and caused harm to the claimant’s mental health.  Kaley was awarded compensatory damages of USD 3 million with a 70/30 split between Meta and YouTube and a further USD 3 million in punitive damages.

 The above cases illustrate that the umbrella of near-absolute immunity for harms caused by platforms is collapsing. Growing concerns around social media addiction have prompted many countries to shift their focus toward platforms themselves. The product liability strategy appears to have drawn inspiration from the landmark cases over two decades ago involving the tobacco industry, where the focus was on the inherent danger posed by the product.  

For instance, in the 2002 case of Williams v Philip Morris Inc, the jury found the tobacco company liable for deceptively marketing its products and concealing the harms of tobacco.

South African courts have yet to be confronted with a similar scenario. Currently, the common law recognises the liability of platforms as secondary publishers of content: they become liable once they have been notified of unlawful third-party content they are hosting. There are also various statutes, namely the Electronic Communications and Transactions Act, the Cybercrimes Act and the Protection of Personal Information Act, but these generally target the user rather than the platform itself.

 A product liability focus may also not yield results in South Africa.  Section 61(1) of the Consumer Protection Act imposes strict liability on producers, importers, distributors and retailers for any harm caused by unsafe goods or by a product failure, defect or hazard. 

The Act’s definition of “goods” expressly includes information, data, software, code and other intangible products embodied in any medium, which could arguably extend to social media platforms. 

Furthermore, section 61(5) defines harm to include death, injury or illness, damage to property, and related economic loss, without reference to psychological harm. This omission is not necessarily decisive, however, as section 61(1) imposes liability for “any harm” without limiting harm to the categories listed in section 61(5).

However, until a superior court clarifies this position, consumers may be subject to wayward and inconsistent applications of this section.

Even if the definitional issues are overcome, the question of causation would remain. Section 61(1) contemplates claims for harm sustained wholly or partly as a result of the defect but provides no guidance on how causation is to be established. Courts must therefore rely on common law principles of delict, applied with judicial discretion.

Where harm arises from the use of a social media platform, factual causation is unlikely to pose difficulty. Applying the “but for” test, it may be possible, depending on the evidence, to conclude that, had the individual not engaged with the platform, the harm would not have occurred.

However, establishing legal causation may be problematic. Users rarely have access to the platform’s design specifications needed to identify the feature responsible for the harm, and the blurred boundaries between the platform, its users and user generated content may complicate efforts to establish the clear and decisive causal link required for liability.

South Africa is not blind to the changing social media landscape and the harms that accompany it, as reflected in the 2025 Draft White Paper on Audio and Audiovisual Media Services and Online Safety. The White Paper extends to social media platforms by virtue of their distribution of audio and video content to the public over electronic communications networks, aligning with the definition of broadcasting in the Electronic Communications Act.

It proposes a regulatory framework that allows for licensing where appropriate, or alternatively, registration or notification supported by codes of conduct and co-regulation or self-regulation.  

The White Paper contemplates expedient measures such as an online safety ombudsman for accessible dispute resolution, alongside practical platform duties such as user reporting tools, age-appropriate safeguards, and transparency about actions taken on flagged content.

 South Africa's proposal is influenced by the measured regulatory approaches adopted in the United Kingdom through its Online Safety Act 2023 and in the European Union through its Digital Services Act.

 The Online Safety Act 2023 is centred on providing protection from online harms, with a particular focus on safeguarding minors.  Social media companies must determine whether minors can access their service and, if so, carry out a children’s risk assessment. 

They must also implement measures to prevent the exposure of minors to coercive behaviour, extreme pornography and the proceeds of criminal offences; and to mitigate the risk of “harm” to minors.

 The EU's Digital Services Act also contains a range of protections for minors, including a duty on online platforms to adopt and maintain safeguards that deliver a high level of privacy and safety for those users.  Online platforms are also prohibited from displaying advertisements based on profiling when they are reasonably certain that the user is a minor.

 Taken together, the recent jury verdicts in the US, and the UK and EU statutes signal a clear shift from platform immunity toward accountability for design choices that cause or contribute to harming children.  The jury is still out on whether South Africa will take a clear legislative stance in recognising product liability of social media platforms.  The White Paper marks the beginning of a very complex engagement on these important issues.

Dario Milo (Partner), Neo Moerane (Associate) and Ayesha Hokee, Webber Wentzel.