What Is Next For Section 230, Meta, and Google?
A California jury found Meta and Google liable for harm caused by their apps, but the most important tool in their toolbox still has work to do.
A California jury handed a 20-year-old woman (referred to as “KGM”) a key victory in a suit against Meta and Google. An NPR article summarizes things nicely, but here’s the bottom line:
[The jury] found that Meta and Google were to blame for the depression and anxiety of a woman who compulsively used social media as a small child, awarding her $6 million…
Meta and Google were found liable under a theory of negligence—negligent both in how they designed the apps and in failing to warn about the apps’ dangers.
The award included punitive damages, which require a finding that the companies acted with “malice, oppression or fraud.”
Why were Meta and Google targeted here?
The thrust of the woman’s argument was that she became addicted to Instagram, owned by Meta, and YouTube, owned by Google, at a young age. Snap and TikTok were also defendants, but they settled (their interests at the appellate level are still protected by their competitors, after all).
Why products liability?
This case was brought as a product liability case, the basic theory being that the apps at issue were designed in such a way that harm resulted. Part of the reason this is such a big deal is that it’s a victory in a pretty fundamental, well-developed area of law.
By contrast, Meta also just lost a case in New Mexico for willfully violating the state’s unfair practices act. That case will cost them $375 million (pending appeals, of course), and the facts are arguably even more disturbing than those in the California case (the New Mexico attorney general created fake Facebook profiles of kids and encountered a variety of explicit solicitations).
But the New Mexico case—however important it is—is tied to a pretty specific set of circumstances. Meta obviously needs to improve some of its child protection features, but that’s a far cry from the more fundamental issue at stake in the California case—can social media companies be held liable when the basic design of their product contributes to anxiety and depression?
Why only $6 million?
It was ultimately up to the jury how much to award, and they settled on $6 million. That’s a minuscule sum for these companies, but I don’t think it’s something we need to fret over.
First of all, I think it’s a reasonable amount for an individual to win in a case like this. Second, the possibility of more lawsuits and class action suits means the deterrent impact on companies will be real (pending some of the stuff I’ll discuss below).
Finally, we shouldn’t underestimate the value of the moral victory here. A jury of regular Californians sat in a box, heard from Meta, Google, KGM, and experts, and found Meta and Google liable.
It’s easy to be cynical about the power of these companies and the hold they have over us, but the win reminds us that, to some extent, it’s still juries (and the law) that will have the final say.
What comes next in the case?
This is why I had to hedge so much above. First, Google and Meta plan to appeal. In a statement, Meta said:
Teen mental health is profoundly complex and cannot be linked to a single app. We will continue to defend ourselves vigorously as every case is different, and we remain confident in our record of protecting teens online.
Checking The Boxes For An Appeal
Appeals test at least three things:
Was this a good legal theory to begin with?
Was this a good test case for the legal theory?
Did the lawyers do their jobs in handling the case?
Let’s quickly work our way back up this list.
If the lawyers made some fundamental mistake (like misrepresenting evidence or allowing a significantly flawed jury instruction to be used), which seems unlikely at this point, the appeal isn’t likely to have broad significance. The case would be resolved on those narrow grounds.
Based on the evidence presented at trial, this looks to be a good test case. That is, it turns on pretty broadly applicable facts—KGM seems like a pretty typical user, and a lot of the evidence actually focused on corporate decision-making that would apply in any similar case.
If the lawyers did their jobs and this turns out to be a good test case, the appellate process will tell us if this is a good legal theory to begin with. There are several nuanced theories Meta and Google will likely pursue, but arguably the most significant will be related to Section 230.
Is This A Good Section 230 Case? A Bad Section 230 Case? Not A Section 230 Case?
Section 230 states, in part:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. 47 U.S.C. § 230(c)(1)
Section 230 essentially prevents Meta and Google from being held liable for the things users say on their platforms, including in state courts. Google and Meta attempted to get the KGM case dismissed before trial on Section 230 grounds, but the judge denied their motions on the grounds that the challenge was to the design of the apps, not the content presented to KGM.
This is the crux of the legal theory at play here—Meta and Google aren’t being sued because the content on their sites caused anxiety and depression, they’re being sued because the decisions they made designing the apps put users at risk of anxiety and depression.
This is a narrow path to tread for the plaintiffs and courts. Consider four questions (and “answers”):
Can a typical web forum be held liable if one user posts defamatory content about another? No. (Section 230)
Can a social media company be held liable if it intentionally promoted defamatory content (generated by users) about a rival CEO onto all its users’ feeds? Almost certainly yes.
Can a social media company be held liable if it knew its algorithm would lead to addictive behavior that can cause anxiety and depression? Maybe; we’ll have to see what the courts say.
Can a social media company be held liable if it knew its algorithm was designed in a way that some users would be fed content that could lead to anxiety and depression? Maybe; we’ll have to see what the courts say.
The crux-within-the-crux of this case is whether we’re dealing with Question 3 or Question 4. The distinction between those questions is pretty significant to the Section 230 issue.
If Social Media Apps Are Bad Simply Because They’re Addictive, What Next?
If the answer to Question 3 is “yes” and social media companies can be held liable for creating an addictive product regardless of what content users see, then the Section 230 issue is avoided entirely but the case would reach far beyond its apparent scope.
There is what we might call a burgeoning field of “addiction law”, as lawyers and regulators seek to hold peddlers of a wide range of addictive or allegedly addictive products—from opioids and sugary foods to gambling platforms and video games—liable for the harms that result from those addictions. If content issues are discarded entirely, the KGM case may turn on questions that arise in this nascent area of law.
Of course, the reason that area of law is underdeveloped even though addiction is a tale as old as time is that regulators tend to step in and deal with addictive products in a way that preempts traditional tort law. Gambling, drugs, and food all have robust regulatory regimes, and a finding that social media companies can be liable for the addictiveness of their products might be just another indication that broader social media regulation is overdue.
If The Content Is Contributing To The Harm, What Next?
Handling the case as presenting Question 4 brings Section 230 into play, and it accords more with how the case was actually handled at trial. Specific harms KGM claimed, like body dysmorphia, are inextricably tied to the content she saw.
If Section 230 bars all reliance on that content—or if it bars juries from considering the content as much as they did in this case—then we could see Meta and Google handed a victory at the appellate level.
This outcome would also be likely to lead to calls for reform. Even two Supreme Court justices have criticized social media platforms for using Section 230 as a “get-out-of-jail-free” card. Especially with the rise of AI, which allows for both more addictive platforms and more addictive and harmful content, it’s unlikely a Section 230 that overly restricts consideration of content on a platform will be long for this world.
Finally, there’s the possibility that Section 230 allows content to be considered in conjunction with underlying platform design. This would probably require the Supreme Court to eventually hear the case and hand down one of its much-beloved (/s) balancing tests. Courts will need to balance how much the harm comes from content users will inevitably (or intentionally) come across, and how much of it comes from the algorithm feeding this content to foster addiction. A court could theoretically handle a case of, say, body dysmorphia differently based on whether a user first (or repeatedly) searched for beauty content or whether the algorithm just started pushing it.
This brings us briefly back to one issue passed over above—the award of punitive damages. Because punitive damages in California require “malice, oppression or fraud,” the case may already be teed up with weight on one side of the balancing test scale. It would be reasonable for a balancing test to consider something like malice, and a balancing test that allowed liability where malice is found would leave many cases in the hands of juries.
Something like this—a balancing of content and platform considerations—may be a likely outcome, and perhaps the long-term question is whether such a reading of Section 230 would avoid some sort of legislative and regulatory overhaul.
I suspect not. No one likes balancing tests. Google and Meta do not want to wonder how every algorithmic change will be weighed by a balancing test. Users aren’t comforted hopping onto a platform knowing companies will show them just the right amount of harmful content.
Conclusions
As we learn more and more about how technological addictions are impacting society, we’re going to want clear rules that define these relationships. We need to mature past the “grey area” stage of lawmaking. Many people don’t like the seeming dissonance in how, for example, drugs and gambling are regulated. But it has to be said that, for a large majority of use cases, the rules are clear.
KGM’s victory at the trial level may be a bellwether for larger reform. Courts will struggle to deal with all these issues—what facts to consider, whether Section 230 matters at all, and how much it matters. Whatever the case, users, parents, lawyers, and advocates should be prepared for reform and keep their eyes on the road ahead.