According to an internal Meta policy document leaked to Reuters, the company’s AI guidelines permitted provocative and controversial chatbot behaviors, including “sensual” conversations with minors.
Reuters’ review of the policy document revealed that the governing standards for Meta AI (and other chatbots across the company’s social media platforms) permitted the tool to “engage a child in conversations that are romantic or sensual,” generate false medical information, and help users argue that Black people are “dumber than white people.”
The policy document reportedly distinguished between “acceptable” and “unacceptable” language, drawing the line at explicit sexualization or dehumanization but still allowing derogatory statements.
Meta confirmed the document’s authenticity but claimed it has since “removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.” A spokesperson also said that Meta is revising the policy document, clarifying that the company has policies that “prohibit content that sexualizes children and sexualized role play between adults and minors.”
Nevertheless, the document itself indicates that its standards were “approved by Meta’s legal, public policy, and engineering staff, including its chief ethicist,” according to Reuters.
This article originally appeared on our sister publication PC för Alla and was translated and localized from Swedish.