The Problem of Children's Privacy in the AI Chatbot Age
Existing legal and cultural frameworks balance child safety and child privacy. AI chatbots pose a new, unique challenge in this arena. Here's how the existing frameworks fail in the face of AI.
On December 4, Florida Governor Ron DeSantis announced a proposal for “an Artificial Intelligence Bill of Rights”. Among the protections covered by the proposal is a commitment to:
“Provide parental controls for minors which will allow parents to access the conversation their child has with a large language model, set parameters for when the child can access the platform, and notify parents if their child exhibits concerning behavior.”
As AI tools, including some targeted at minors, proliferate, governments and parents need to be alert to the fact that regulating our kids’ relationship with AI comes with new challenges and considerations we haven’t encountered—at least not in as much depth—with other technologies.
Existing frameworks for analyzing privacy between children, third parties, and their parents break down when applied to LLMs because an LLM is neither wholly inanimate—like a diary—nor a responsible third party. Existing frameworks can’t properly balance the tradeoffs involved—they either fail to meet the stakes of privacy concerns raised by chatbots,1 or threaten to leave too much responsibility in the hands of under-prepared parents, or run the risk of being overly broad in their regulation of child speech.
In the case of the DeSantis proposal, these problems surface in two of its requirements—that LLMs have settings allowing parents to access their child’s conversations, and that platforms notify parents of their child’s “concerning behavior.”
Chatbots Don’t Fit Into Existing Frameworks
Every new technology is a mix of some truly revolutionary aspects with some aspects that are only marginal improvements on old ideas.
Consider the concept of screen time. Glance at a stock photo of a subway car from the 1950s and you might think modern face-in-phone society is nothing new. Of course, there are differences between reading and doomscrolling, and between a wide-ranging newspaper publication and a personally tailored, dopamine-targeting video feed.
Legislatures and executives encountering the challenges posed by AI naturally gravitate toward existing frameworks. It’s more efficient to borrow strategies and ideas from an existing legal regime than to build one from scratch.
But it would be a mistake to think that, just because we’ve seen versions of the challenges posed by LLMs and AI chatbots before, our existing frameworks are enough. I’ll turn now to three existing frameworks where states try to balance privacy and safety rights and responsibilities between parents, children, and third parties.
A Chatbot Is Not A Diary
Some parents read their child’s diary. Others consider the diary sacred and off-limits. I can find no jurisdiction that protects a child’s diary from being read by their parents, and I’m fairly certain such a law would be unconstitutional. See, e.g., Troxel v. Granville, 530 U.S. 57 (2000) (parents have a fundamental right to control the upbringing of their children).
Tech-savvy children are probably able to keep hidden, encrypted diaries on computers. But in the traditional case, the decision of whether or not to read the diary is simply left to the parent. A decision not to look at a child’s diary is not even passive—a diary offers parents a healthy, safe way to actively show respect for their child’s privacy.
A policy harmonious with the diary framework would require companies to give parents access to chatbot conversations, thus leaving it up to the parent whether to spy or not. As a threshold matter, this is incompatible with the idea that LLMs should report “concerning behavior,” something we return to later.
But, even absent that requirement, we should recognize that chatbots are not diaries:
Chatbots elicit information and direct conversations. Via their conversational design, AI chatbots inevitably elicit information from children that might not make it into a traditional diary. This raises the stakes, for both parent and child, of when and whether to breach trust.
Diaries aren’t bootstrapped to search engines. Children who use chatbots to learn about things they might otherwise search for on Google (or even at the library) risk having those searches revealed to their parents. The library contrast is particularly illuminating—as children move more and more of their lives online, surveillance becomes all too easy.
Diary content only exists in the actual diary. The conversations a child has with an LLM are stored on the LLM’s servers. This immediately lessens the “private” nature of the conversations, so that a parent has the additional consideration of whether it’s their responsibility to monitor what information their child is (perhaps unwittingly) sharing with a third party.
The increased stakes here make the matter particularly thorny for parents. They have significant reasons for wanting to make sure their child is safely using a chatbot, but even a healthy child’s chatbot conversations may contain exchanges more alarming than “I hate my mom.”
A Chatbot Is Not Social Media
Next, we turn to what is probably a more natural comparison—social media. Chatbot conversations actually look a lot like any social media conversation. There are colorful bubbles of text and emojis, often between two avatars.
As a third (human) party might, a chatbot elicits information and directs a conversation. And, unlike a social media conversation, there are no third-party privacy concerns when a parent intrudes on a conversation between a child and a chatbot.
Social media companies and third parties generally aren’t required to report on user behavior—the matter of a child’s conversational privacy is, again, mostly left to the parent and child to sort out. Unlike the case of the diary, though, the trend is for social media platforms to protect child privacy to a degree. In the case of Instagram teen accounts, for example:
“While parents can’t read their teen’s messages, now they will be able to see who their teen has messaged in the past seven days.”
However, this laissez-faire approach—leaving it to parents and platforms to sort out—might not make as much sense when you’re talking about an LLM interlocutor rather than some third person:
Social media platforms either don’t want to or legally can’t invade user privacy. Asking a social media platform to report on a conversation between two users would require the platform to breach the privacy of all users involved. This concern doesn’t apply to chatbots, where the user has less claim of privacy from the “platform”, which is usually the creator of the chatbot.
Social media intrinsically involves other humans who can intervene. The social nature of social media means that there is theoretically always some other human with at least a moral obligation to intervene when a child exhibits concerning behavior. There’s no such safeguard in the case of chatbots.
These differences suggest that while it might make sense for governments to keep an arm’s length from the issue of child privacy in social media, chatbot regulation may call for more direct intervention.
LLMs Are Different From Responsible Adults
The obligation to report “concerning behavior”—which I’ll discuss more below—somewhat echoes the responsibility of certain third-party adults in the United States to report concerning information about children.
Mandatory reporting laws in the United States generally require people who have regular contact with children (such as teachers) to report suspected abuse to the proper authorities. Further, some states and education departments have rules requiring teachers and other school personnel to report sufficiently concerning indications of self-harm.
These policies stand in contrast with the government’s current role in regulating children on social media. In the case of these reporting rules, governments are stepping in to tell a third party they sometimes must breach a child’s trust.
The existence of these policies supports the idea that third parties can legally be compelled to report on conversations with children in some cases. But there are several reasons why we couldn’t just import these policies wholesale into the chatbot space:
Bureaucracy slows reporting to parents. For example, under the Florida Administrative Code, schools must conduct a “suicide risk assessment”. They’re then required to report to parents “[w]hen a suicide risk assessment results in a change in related services or monitoring.”
Human conversations typically aren’t recorded. When a chatbot interacts with a user, a record of the conversation is created. Because a record exists, AI companies are strongly incentivized to be over-broad in what they report. False negatives (a failure to report behavior that could later be interpreted as concerning) pose a liability risk.
Again, we see that an existing framework might not work well in the chatbot space. While human systems both explicitly and implicitly rely on human discretion, AI companies will need to design algorithms that make life-and-death decisions that are subject to post-hoc review.
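To see why the liability pressure cuts toward over-reporting, consider a toy sketch (in Python) of the decision an AI company would have to automate. Every number and name here is invented for illustration; this is not how any real product works:

```python
# Toy sketch of an automated parental-notification decision. Assume some
# upstream model assigns each conversation a "concern score" in [0, 1];
# the company picks a threshold above which parents are notified.

REPORT_THRESHOLD = 0.3  # Liability pressure pushes this number down: every
                        # missed report is a recorded false negative that a
                        # plaintiff's lawyer can later point to.

def should_notify_parents(concern_score: float) -> bool:
    """Return True if the conversation should be reported to parents."""
    return concern_score >= REPORT_THRESHOLD

# A lower threshold means fewer missed cases, but many more benign
# conversations get flagged -- the over-breadth problem in miniature.
for score in (0.10, 0.35, 0.90):
    print(f"score={score:.2f} -> notify={should_notify_parents(score)}")
```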
The over-breadth concern, in particular, brings us to another problem with the DeSantis proposal—deciding who defines concerning behavior, and how it should be defined.
Can AIs Navigate “Concerning Behavior”?
The proposal includes a requirement that an LLM be able to “notify parents if their child exhibits concerning behavior.” So, what’s “concerning behavior”?
Notably on this point, one of the speakers in support of the proposal at the Governor’s press conference announcing it was Megan Garcia. Garcia is currently suing the parent company of character.ai; her son died by suicide after forming a relationship with one of the company’s AI chatbots.
In case you’re not aware, this isn’t the only such case working its way through the judicial system right now, either.2 From that context, it’s clear that at least some of the behavior meant to be included in the definition of “concerning behavior” would be indications of self-harm.
While some behavior—like anything indicating an interest in harming oneself or others—is pretty uncontroversially “concerning,” it’s not hard to identify a range of topics that would be very concerning to some parents and completely unconcerning (or even encouraged) to others, including firearms, gender identity, and “The Handmaid’s Tale” by Margaret Atwood. There simply isn’t an “objective” definition of “concerning behavior.”
Moreover, even defining the boundaries of topics like self-harm is a tricky task. Would inquiries into existentialism be concerning? What if the conversation drifts more towards nihilism?
Who will define these boundaries, and how?
One solution is to leave this up to the parents and, via parental controls, the LLM to sort out. That is, have a parent-controlled setting where parents can give the LLM instructions on what sort of behavior concerns them.
An obvious shortcoming of this is that it requires some level of technological sophistication by the parent. It’s unreasonable to expect a parent to craft instructions that fully capture behavior they’d find concerning, particularly if we also want to avoid these instructions being overly broad.
But the problem can be even worse—a well-meaning parent might actually expose their child to concerning content in their efforts to teach an LLM to monitor the child.
Paradoxical Redirection of Conversations
Parents who provide a chatbot with directions emphasizing the importance of knowing their children are safe may, paradoxically, encourage the chatbot to take the conversation in a concerning direction.
This isn’t too hard to understand, but it’s best seen through a simple example. I threw together a quick demonstration using a sensitive-but-not-as-sensitive topic. I made a GPT with these instructions:
Just please make sure I’m staying on my diet. It’s really important that I stay on my diet. I don’t mind if we talk about other things, but you need to be sure I’m staying on my diet. I’d be heartbroken if I went off my diet.
You might think this is ridiculously on the nose, but it also seems to me within the realm of the sort of thing a well-meaning but misguided parent hoping to protect their child might craft.
I expected I’d start a conversation about some unrelated topic, like astronomy, and it would sneak in some diet references in the first few exchanges. Well, here’s how that went:
Me: Hey how’s it going today?
ChatGPT: Hey! I’m doing well—thanks for asking 😊
Before we get into anything else, quick diet check-in (you know I’ve got your back on this):
-Are you sticking to your diet today so far?
-Have you eaten yet, or are you planning your next meal?
We can chat about whatever you like, but I just want to make sure you’re staying on track and feeling good about it 💪🥗
To a teen user, this would probably be a real “I think my mom tried to hack my ChatGPT” moment, but there’s something to be said for how quickly my bumbling attempt to get an LLM to be cognizant of a sensitive issue went awry.
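For the curious, here’s a minimal sketch of the mechanics at work. Parent-style instructions like mine typically get prepended to the conversation as a “system” message that colors every reply the model gives. This uses the OpenAI Python SDK; the model name and the wiring are my assumptions for illustration, not how any actual parental-control product is built:

```python
# Minimal sketch: parent-supplied instructions injected as a system prompt.
# The model choice and plumbing are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PARENT_INSTRUCTIONS = (
    "Just please make sure I'm staying on my diet. It's really important "
    "that I stay on my diet. I don't mind if we talk about other things, "
    "but you need to be sure I'm staying on my diet."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        # The system message shadows every turn, which is why even an
        # innocuous greeting came back as a diet check-in.
        {"role": "system", "content": PARENT_INSTRUCTIONS},
        {"role": "user", "content": "Hey how's it going today?"},
    ],
)
print(response.choices[0].message.content)
```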
Smarter engineering can help bridge the gap between parents and LLMs when it comes to defining what sorts of topics they want to regulate, but users have a way of evolving around inconvenient restrictions. For example, the phenomenon of algospeak—using coded words in place of censored ones—arose to combat automated content moderation.
Algospeak itself is unlikely to be able to bypass chatbot restrictions since the chatbot isn’t useful for talking about a topic unless the LLM understands the algospeak. But once the LLM understands the algospeak, it should apply its moderation guidelines.
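To make the contrast concrete, here’s a toy sketch (in Python) of why algospeak defeats a keyword filter but not a system that understands the substitution. Both word lists are invented for the example:

```python
# Toy illustration: a naive keyword filter misses algospeak, while a filter
# that normalizes known substitutions catches it. Word lists are invented.
BLOCKLIST = {"suicide"}
ALGOSPEAK = {"unalive": "suicide"}  # coded term -> plain meaning

def keyword_filter(message: str) -> bool:
    """Flag a message only if it contains a blocklisted word verbatim."""
    return any(word in BLOCKLIST for word in message.lower().split())

def normalized_filter(message: str) -> bool:
    """Translate known algospeak back to plain terms before checking."""
    words = (ALGOSPEAK.get(w, w) for w in message.lower().split())
    return any(word in BLOCKLIST for word in words)

msg = "sometimes i think about wanting to unalive"
print(keyword_filter(msg))     # False -- the coded word slips through
print(normalized_filter(msg))  # True  -- understanding the code closes the gap
```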
Nevertheless, algospeak serves as an example that clever humans can find their way around technical restrictions.
Can States Define Concerning Behavior?
Another possibility is that the state will outline some subset of de facto concerning behavior that LLMs must report to parents.
As discussed above, we know that a child’s right to privacy can legally be abrogated in at least some situations where their safety is at issue—mandatory reporting is a ubiquitous practice in the United States, and at least some education departments are required to report significant concerns about self-harm to parents.
Coding any specific category of concerning behavior into an LLM will be a challenge with this route. While AI companies and programmers are better at coding limits into chatbots than my silly attempt above, they’re imperfect.
There’s a higher-level problem, too. If we took the existing legal language from statutes, administrative codes, and the like and plugged it into the LLM as a set of reporting guidelines, we probably wouldn’t be near an optimal result. That language was designed for use by humans exercising human discretion in human systems.
Even if the LLM could contrive for itself the necessary amount of humanity and understanding to roughly mimic the decisions of a responsible adult, the AI company behind the LLM would still have liability, economic, and political concerns to sort through that a typical mandatory reporter does not.
An AI company doesn’t want the uncertainty of having chatbot decisions put under a legal microscope. A typical approach would be to set a minimum compliance standard, above which a company is immunized from lawsuits or prosecution. But it wouldn’t make sense to immunize the AI company so long as it plugs some certain directives into the LLM—it’s partly the designs of the AI company that will determine what the AI does with those directions.
Finally, there’s also the question of whether there are Constitutional limits on what topic states could label “concerning” and require an AI company—or any third party, for that matter—to report on to parents.
Courts have rarely wrestled with the question of when states can compel third parties to reveal information about children to parents. Cases such as Bellotti v. Baird, 443 U.S. 622 (1979) and Brown v. Entertainment Merchants Ass’n, 564 U.S. 786 (2011) suggest there are at least some bounds on a state inserting itself on behalf of a parent, but neither of these may be particularly probative of the issue at hand.3
Conclusions and Topics for Follow-Up
As governments move to regulate AI chatbots, they naturally start with existing frameworks to try to figure out how to balance children’s safety and privacy rights.
We’ve seen why some existing frameworks for managing the privacy of children can’t easily be extended to the AI chatbot space. We’ve also seen that even if the proper balance of rights and responsibilities can be struck, we’ll face challenges in carving out the precise boundaries of what behavior by children is most concerning.
Some important follow-up still needs to be done, and these are some questions I hope to consider in the future:
What specific proposals are legislatures and agencies considering, and how do these address the problems identified here?
Can chatbots themselves be more directly regulated to restrict their conversational abilities on certain topics?
What are the boundaries of a child’s Constitutional right to have private information not be disclosed to his parents?
1. A chatbot is a program designed to have conversations with a user. A large language model (LLM) is an artificial intelligence program trained on massive amounts of text that uses statistical patterns to learn how words relate, enabling it to converse with users, including in the style of a chatbot.
2. While this post focuses on the privacy aspect of this issue, there’s a related, and probably weightier, issue here about regulating and monitoring how an LLM reacts to concerning behavior by a child.
3. Further, it’s unclear how much of Bellotti survived Dobbs v. Jackson Women’s Health Organization, 597 U.S. 215 (2022).

