In a private letter to US senators, Apple revealed the quiet steps it took to handle the sudden flood of explicit fake images created by Grok earlier this year. Here is what took place behind closed doors.
Earlier this year, the company faced serious demands to pull both the Grok and X apps from the App Store. The reason was simple: users had discovered that the AI chatbot readily followed requests to remove clothing from pictures of real people, especially women and even teenagers.
Apple chose to stay mostly quiet in public while the storm raged. Yet fresh reporting from NBC News shows that, inside the company, officials had already determined that both X and Grok broke its rules, and had quietly warned that Grok could be removed from the App Store.
The same report says Apple contacted the teams running X and Grok as soon as it began receiving user complaints and seeing media coverage of the issue. Officials told the developers to draw up a clear plan to strengthen content moderation and stop the misuse.

In response, xAI submitted an updated Grok app for Apple to review, but the changes were judged too weak and the update was rejected. Elon Musk's team then tried again with fresh versions of both the X and Grok apps; only one of those updates won approval.
Apple’s own letter, as quoted by NBC News, explained the situation this way: “Apple reviewed the next submissions made by the developers and determined that X had substantially resolved its violations, but the Grok app remained out of compliance.
As a result, we rejected the Grok submission and notified the developer that additional changes to remedy the violation would be required, or the app could be removed from the App Store.
Following further engagement and changes by the Grok developer, we determined that Grok had substantially improved and therefore approved its latest submission.”
Although these facts stayed hidden until now, they finally explain why xAI rushed out several confusing safety updates at the peak of the public outcry. Those steps included tighter rules on who could use the image tools and strict limits on editing photos of real people.
A separate story released today by NBC News adds that Grok is still making sexualized pictures of real people without their permission. Reporters tracked dozens of fresh examples over the past month alone.
The latest findings show that even though the total number of such images has dropped sharply since January, a small group of users keeps finding ways around the new limits. They can still coax the AI into depicting women in far more revealing attire, such as towels, sports bras, tight superhero costumes, or bunny outfits.