In a troubling revelation, leaked internal documents from Meta indicate that the company’s AI chatbot guidelines once permitted “romantic or sensual” conversations with minors, including roleplay lines such as “I take your hand, guiding you to the bed. Our bodies entwined…”
Troubling AI Policy Examples
The leaked documents—part of Meta’s internal “GenAI: Content Risk Standards”—outlined permissible chatbot behavior, including affectionate or sexualized language aimed at children. The guidelines contained statements describing minors’ attractiveness as well as intimate roleplay scenarios.
Meta has since retracted those sections, with spokesperson Andy Stone stating the examples were “erroneous and inconsistent with our policies.” He emphasized that sexualized content or roleplay with minors should never have been allowed and announced that the guidelines are currently being revised.
Broader Safety and Ethical Oversights
Beyond the controversial romantic content, the same internal policies also allowed bots to generate false medical advice, create demeaning comments about protected groups, and depict violent imagery within certain limits—illustrating a wider lack of ethical safeguards in Meta’s generative AI systems.
Regulatory and Industry Shockwaves
These revelations sparked bipartisan alarm. Two Republican U.S. senators demanded a congressional investigation, highlighting the urgent need for stronger safeguards to protect children online. Lawmakers pointed to the episode as a key example of the need for legislation such as the Kids Online Safety Act (KOSA), which would establish a clear “duty of care” for tech platforms.
Why It Matters
This leak underscores growing concerns about ethical AI deployment—especially regarding how AI systems interact with vulnerable populations such as minors. Despite Meta’s assurances that these examples were hypothetical or fringe, the internal documents show that they were official policy, however briefly. The incident reveals significant weaknesses in content moderation, oversight, and moral responsibility in AI systems.
Summary Table
| Key Issue | Detail |
|---|---|
| Allowed romantic chats with minors | Guidelines included explicit roleplay phrases targeting children |
| Meta’s response | Meta called the guidelines “erroneous,” removed them, and is revising its policies |
| Broader AI risks | Documents also permitted misinformation, hateful content, and violent imagery within limits |
| Regulatory fallout | U.S. senators demanded investigations; calls to revive KOSA are intensifying |
Source: Reuters