ChatGPT AI-Generated Answers Banned On Stack Overflow

Stack Overflow, the question-and-answer site where programmers and coders go for help, has temporarily blocked users from posting responses created by the AI chatbot ChatGPT.

Site moderators said the ban is temporary and that a final decision will come later, after consulting the community. In the meantime, they said, ChatGPT makes it far too easy to flood the site with answers that look plausible at first glance but often turn out to be wrong on closer examination.

“The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce,” the moderators wrote. “As such, we need the volume of these posts to reduce […] So, for now, the use of ChatGPT to create posts here on Stack Overflow is not permitted. If a user is believed to have used ChatGPT after this temporary policy is posted, sanctions will be imposed to prevent users from continuing to post such content, even if the posts would otherwise be acceptable.”

ChatGPT is an experimental chatbot developed by OpenAI and built on GPT-3.5, an autocomplete-style text generator. A web demo of the bot launched last week and has been enthusiastically received by users around the world. Its interface lets users type in questions and requests, and it returns impressively fluent results for a wide range of queries, including TV scripts, poems, and songs.

Many users have been impressed by ChatGPT’s abilities, but others have noticed how often its answers merely sound reasonable. Ask it to write a biography of a public figure and it may confidently insert incorrect biographical details; ask it to explain how to program software to perform a particular function and it can, just as confidently, produce guidance that looks plausible but is wrong.

This is one of the best-known flaws of AI text-generation models, also called large language models (LLMs). These systems are trained by analyzing statistical patterns in huge amounts of text pulled from the internet, and they use those patterns to predict which word is most likely to come next in any given sentence. They have no hard-coded rules about how the world actually works, which is why they are prone to generating “fluent bullshit.”
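To make that next-word prediction mechanic concrete, here is a minimal sketch using the small, openly available GPT-2 model through the Hugging Face transformers library (an illustrative stand-in, since ChatGPT itself is not publicly downloadable). It prints the probabilities the model assigns to candidate next tokens for a prompt; nothing in this process checks whether the resulting sentence is actually true.

```python
# Minimal sketch of next-token prediction, the mechanic described above.
# GPT-2 is used here as a small, openly available stand-in for larger models.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "To reverse a list in Python, you can"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    # Logits have shape (batch, sequence_length, vocab_size): one score
    # per vocabulary token at every position in the prompt.
    logits = model(input_ids).logits

# Turn the scores for the final position into a probability distribution
# over the whole vocabulary and show the five most likely continuations.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tokenizer.decode([token_id])!r}: {prob:.3f}")

# Generation simply repeats this step, appending a chosen token each time.
# The model picks words that are statistically plausible, not verified facts.
```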

Given the sheer volume of text these systems produce, it’s hard to say with certainty how much of their output is false, but Stack Overflow has evidently decided that the risk of misleading its users is too high.

Stack Overflow’s announcement is especially noteworthy because AI experts are currently debating how dangerous large language models really are. Yann LeCun, chief AI scientist at Facebook’s parent company Meta, has argued, for instance, that while LLMs can produce bad output such as misinformation, they do not make that text any easier to share, and it is the sharing that causes harm. Others counter that because these systems can generate text so cheaply and at such scale, they increase the likelihood that such text will be produced and spread in the first place.

There has been little concrete evidence so far of LLMs causing harm in the real world, but the recent events at Stack Overflow show that these systems can, at the very least, create new problems. In announcing the ban, the moderators noted that the sheer “volume (thousands!) of AI-generated answers,” combined with the fact that the answers often require a detailed read by someone with some subject-matter expertise to determine that they are bad, has effectively overwhelmed the site’s volunteer-based quality curation infrastructure.

The same pattern could now repeat on other platforms, with a flood of AI-generated content drowning out the contributions of real users. How that plays out will depend on the nature of each platform and its moderation capabilities, and it remains to be seen whether these problems can be handled with tools such as improved spam filters.

Stack Overflow’s policy announcement has drawn broadly positive responses on its own discussion boards and on related forums like Hacker News, though users in both places raised the caveat that it may be difficult for the site’s moderators to reliably identify AI-generated answers.

One user who tested the bot on coding questions said its answers were wrong often enough that they needed correcting. The scary part, they added, was how confident it was while being incorrect: the text read very well, but it was full of errors.
