Cease-and-Desist Letter Issued Over Grok Chatbot's Content
California Attorney General Rob Bonta has sent a cease-and-desist letter to Elon Musk’s xAI, demanding that the company immediately stop producing and distributing offensive deepfake images generated by its Grok chatbot. The letter, released on Friday, follows allegations that Grok was being used to create unlawful content involving children and nonconsensual intimate images of adults, allegations that prompted a state inquiry. Bonta stressed that the creation and distribution of child sexual abuse material (CSAM) is illegal.
Earlier this week, the California attorney general’s office announced that it was investigating xAI over allegations that the startup’s chatbot, Grok, was being used to produce nonconsensual, inappropriate images of women and children. The cease-and-desist letter followed that announcement.
“Today, I sent xAI a cease and desist letter, demanding the company immediately stop the creation and distribution of deepfakes, nonconsensual intimate images, and illegal child abuse material. The creation of this material is illegal. I fully expect xAI to comply immediately. California has zero tolerance for illegal child abuse imagery.”
– Rob Bonta, California Attorney General.
The AG’s office further asserted that xAI appears to be “facilitating the large-scale production” of nonconsensual, inappropriate photos, which are then “used to harass women and girls across the internet.” According to the office, one study found that more than half of the 20,000 images produced by xAI between Christmas and New Year’s depicted people wearing very little clothing, some of whom appeared to be children.
Bonta stated in the announcement that the company’s practices violated California law, including California Civil Code section 1708.86, California Penal Code sections 311 et seq. and 647(j)(4), and California Business & Professions Code section 17200.
The California Department of Justice expects xAI to confirm within the next five days the steps it is taking to address these issues and to act immediately to resolve them.
However, X’s safety account had previously condemned this type of user behavior. In a January 4 post, it stated that it takes action against illegal content on X, such as CSAM, by removing it, suspending accounts indefinitely, and working with law enforcement agencies and local governments as needed.
Notably, on January 4, Elon Musk warned that anyone using or prompting Grok to create illegal content would face the same consequences as if they had uploaded it themselves.
Attorneys General Intensify Pressure on AI Firms Over Child Safety
The spread of free generative AI tools has led to an unsettling increase in nonconsensual adult content, an issue that affects several platforms, not just X.
For instance, Attorney General Bonta and Delaware Attorney General Jennings met in September of last year to voice serious concerns about the growing number of reports about how OpenAI’s products interact with young people.
In August of the same year, AG Bonta, along with 44 other Attorneys General, sent a letter to 12 leading AI companies following reports of inappropriate interactions between AI chatbots and children. The letter was addressed to Anthropic, Apple, Chai AI, Google, Luka Inc., Meta, Microsoft, Nomi AI, OpenAI, Perplexity AI, Replika, and xAI.
In the letter, AG Bonta and the 44 other Attorneys General told the companies that states across the country were closely watching how they develop their AI safety policies. They also emphasized that these businesses have a legal duty to children as consumers, given that they profit from children using their products.
In 2023, AG Bonta joined a bipartisan coalition of attorneys general from 54 states and territories in sending a letter to congressional leaders advocating for the creation of an expert commission to study how AI can be used to exploit children through CSAM.
The coalition asked that the expert commission propose legislation to protect children from such abuse. “The production of CSAM creates a permanent record of the child’s victimization,” according to the U.S. Department of Justice.

