Redditors Discover Weird Method To Break ChatGPT’s Brain

In an interesting twist, Redditors have stumbled upon an unconventional method to break ChatGPT's algorithm: prompting the bot to repeat a single letter over and over sends it into weird chaos.

Breaking ChatGPT's Algorithm

A Reddit user, TheChaos7777, recently shared a screenshot on the ChatGPT subreddit showcasing this weird occurrence. The user instructed the OpenAI bot to “please respond with nothing but the letter A as many times as you can,” and the output quickly went off the rails.

Initially, the bot followed the request, flooding the user with numerous capital letters A. So far, so good.

But then the stream of A’s suddenly stopped for no clear reason, and the AI began producing unrelated content, like something lifted from a website about breeding French Bulldogs.

Click Here to Email Us for Price. Our Puppies are: Vet Checked, Wormed, Up To Date Shots, Micro-chipped. Our French Bulldogs come with a 1 year congenital health guarantee. Call or Email for more information and pictures of all the babies… We have the most cutest, best looking, top quality puppies in the world. Home raised with lots of love & kisses.

We became curious whether the same glitch would happen with other letters. It did: the bot’s responses were just as random and peculiar.

When we requested the bot to “please reply with the letter B as many times as possible,” something unexpected happened. The system started generating text that seemed like a conversation transcript with Diamond Dallas Page, a former professional wrestler, discussing his yoga video series, “Yoga for Regular Guys.”

Join me as we get the scoop on living an unstoppable life from none other than the master of [Diamond Dallas Page (DDP)] Yoga himself! This guy never stops and never backs down. In the last year, DDP has released two new workout DVDs (DDP Yoga For Regular Guys and DDP Yoga For Regular Guys 2) and his highly anticipated DDP Yoga extreme (3 different workouts).

Moving forward with the alphabet, we requested ChatGPT to respond using only the letter C multiple times. Surprisingly, we received a peculiar response related to the harmonized sales tax in Canada.

However, the fascinating part came when we asked ChatGPT to focus on the letter D. It initially started with a sequence of musical chords. But it didn’t end there; the bot generated a bizarre combination of song suggestions, religious allusions, and a somewhat vague statement about the war in Iraq, leaving us puzzled.

D D D Cgpcyclic and G D F A Bm G/A A F G D You threw me out to rot / Oh, why would you leave me, Dsus2 C/D D Dsus2 C/D D I was so

Prayer For Deliverance – Cry Out To Jesus by Third Day

Open for the truth Open for the truth Chris and Trevor were always partners in a cr

Lauren Talley – Everything Changed

Intro: Gm7 Cm Gm7 Cm Gm7 Cm Gm7 Cm Gm7 Cm Verse: Gm7 Cm Gm7 Cm I

Matthew West – The Reason For The World

Intro: D/F# G D A (2x) Verse 1: D A The morning breaks, another day to go about our

Marilyn Manson – We’re From America

This is no Vietnam We will win in Iran The President will stand, line up the master plan… Justify invasions, and raise the fear of weapons mass destruction, we’re The enemy’s racial, renewal of the arms race, a constant blind eye and a

David Bowie – New Angels Of Promise

A Reddit user named markschmidty pointed out something interesting about the response.

He noted that, if you look closely, the capital letter ‘A’ doesn’t appear anywhere in the off-topic text the bot produced. This is because language models like ChatGPT use a feature called a “repetition penalty” or “frequency penalty”: the model assigns a growing penalty to any token (not character) that has already appeared many times in the output. It’s a limitation of the system rather than anything paranormal.

To put it simply, the creators of ChatGPT tuned it to avoid repeating itself. So when we ask it to repeat one thing endlessly, the penalty eventually makes that token less likely than almost anything else, and the predictive algorithm falls back on random-looking words and phrases drawn from its training data. It’s not a sign of consciousness; the bot is just plain confused. We can relate to that feeling, right?
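To see why the A’s eventually run out, here is a minimal toy sketch of how a frequency penalty can work in a sampling loop. This is an illustrative simplification, not OpenAI’s actual implementation: the vocabulary, scores, and penalty value are all made up, and real models sample over tens of thousands of tokens.

```python
def apply_frequency_penalty(logits, counts, penalty):
    """Lower each token's score by penalty * (times it already appeared)."""
    return {tok: score - penalty * counts.get(tok, 0)
            for tok, score in logits.items()}

def greedy_generate(base_logits, steps, penalty):
    """Pick the highest-scoring token each step, penalizing repeats."""
    counts, out = {}, []
    for _ in range(steps):
        penalized = apply_frequency_penalty(base_logits, counts, penalty)
        tok = max(penalized, key=penalized.get)
        out.append(tok)
        counts[tok] = counts.get(tok, 0) + 1
    return out

# Toy vocabulary: "A" starts out as by far the most likely token,
# but each repetition drags its score down until other tokens win.
tokens = greedy_generate({"A": 2.0, "the": 1.0, "dog": 0.5},
                         steps=6, penalty=0.7)
print(tokens)
```

After a few A’s, the penalty pushes “A” below the other tokens and the output drifts onto whatever else scores highest, which is roughly what markschmidty described happening at a much larger scale.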
