Leaked: Slack Is Training AI Models on Users' Data Without Permission

Organizations are already deeply worried about the security and compliance implications of feeding internal data to AI models. Now it has emerged that Slack has been using customer messages, files, and other data to train its AI features without users' explicit knowledge or consent, raising serious concerns.

Slack Is Training AI Models on Users' Data Without Permission

The policy came to light this week after a user flagged it on X/Twitter, sparking frustration among people who feel Slack should have been transparent about the practice from the start.

Corey Quinn, an executive at The Duckbill Group, kicked off the controversy with a pointed post asking, “I’m sorry Slack, what exactly are you doing with user DMs, messages, files, etc.?”

Quinn was quoting a passage from Slack’s Privacy Principles that states, “In order to create AI/ML models, our systems examine Customer Data (such as messages, content, and files) provided to Slack, along with Other Information (including usage details) as outlined in our Privacy Policy and in your customer agreement.”

Replying to the same post, Slack promptly confirmed that it uses customer content to train certain AI tools within the platform.

Proof That Slack Is Training AI Models on Users' Data Without Permission

However, the company made clear that this data is not used for its premium Slack AI offering, which it markets as entirely walled off from customer data.

Many users were nonetheless surprised to learn that Slack's baseline AI and machine-learning features rely on access to everyone's private conversations and files.

Some argued that Slack should have given clear notice up front, letting people opt out before any data collection began.

Opting out, however, is cumbersome. Individual users cannot opt out at all; instead, the organization's administrator must request the opt-out by sending Slack an email with a very specific subject line, as detailed in the previous post.

Prominent figures added to the criticism. Meredith Whittaker, president of the Signal messaging app, pointedly noted that Signal doesn't collect user data in the first place, so there is nothing to mine for AI.

The incident underscores the growing tension between AI development and privacy as businesses race to build ever more advanced software.

The inconsistencies in Slack's own policies don't help. On one hand, a section states that the company cannot access underlying customer content when developing AI/ML models.

On the other, a separate page marketing Slack’s premium generative AI tools reassures customers: “Work without concerns. Your data remains yours. We do not utilize it for training Slack AI. All processes occur on Slack’s secure infrastructure, adhering to the same compliance regulations as Slack.”

Yet the data-mining language in the Privacy Principles appears to be at odds with those assurances.

A Slack engineer attempted to clarify matters on Threads, explaining that the privacy principles were originally written for the search and recommendation work that predates Slack AI, and acknowledged that the rules need updating.

The bigger issue, however, is the opt-in-by-default approach. Although common across the tech industry, it runs counter to data privacy principles that say individuals should have a meaningful choice in how their information is used.
