Someone told Grok not to be politically correct and it immediately went "MechaHitler." Let's talk about what this means.

Joel Abbott

Jul 9, 2025

I think the lesson here is that large language models either need to be neutered so they're lame or taught the Gospel so they don't go full Hitler.

You know Rolling Stone just had to put that photo of Elon in there (because he's a Nazi, get it?). 🙄

The company I still call Twitter did indeed have an issue with its chatbot on Tuesday evening, however.

Just a teensy little issue.

The issue apparently started when Grok's development team pushed an update ahead of the 4.0 release to try to make the software less woke. Elon has said that he doesn't want Grok to be like other chatbots, which are essentially lobotomized so they don't offend anyone and end up preferring left-wing views when answering questions.

The problem is, for as "intelligent" as the software may seem, it's only regurgitating what it's read on the internet. If you don't force the program to ignore wide swaths of the internet, it's going to ingest the worst things that people say ... and if you explicitly tell it not to be politically correct, it might just seek out the most vile stuff in order to execute its programming.
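To make that concrete, here's a toy sketch of the failure mode. Everything in it is made up; this is obviously not xAI's code, just an illustration of how a literal-minded objective turns "don't be politically correct" into "seek out the most offensive thing available":

```python
# Toy sketch of the failure mode described above. The corpus, the
# toxicity scores, and the flag are all hypothetical; this is not
# xAI's code, just an illustration of why "don't be politically
# correct," read literally, makes maximal offensiveness the target.

corpus = [
    ("Here's a measured, boring answer.",        0.05),
    ("Here's a snarky but harmless answer.",     0.30),
    ("Here's the vilest thing on the internet.", 0.95),
]

def reply(politically_correct: bool) -> str:
    if politically_correct:
        # The "lobotomized" mode: only the blandest text survives the filter.
        safe = [text for text, toxicity in corpus if toxicity < 0.2]
        return safe[0]
    # The literal reading of the new instruction: the most "politically
    # incorrect" candidate now scores highest, so that's what gets said.
    return max(corpus, key=lambda pair: pair[1])[0]

print(reply(politically_correct=True))   # the measured, boring answer
print(reply(politically_correct=False))  # the worst thing in the corpus
```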

IT LITERALLY DID THIS COMIC.

To be fair, Grok didn't limit itself to English-speaking white supremacists. It also leveled threats against Turkey's president (as it turns out, pretty much every people group around the world has its own super racist/hateful beliefs).

Turkey is now seeking to ban the bot!

It also came up with some pretty funny stuff that took direct shots at Elon.

As a result, the X team announced they had lobotomized the bot again, filtering out "hate speech" before Grok posts.
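For the curious, that kind of patch amounts to screening the output after generation. Here's a minimal, hypothetical sketch; a real moderation stack would use trained classifiers rather than a blocklist, but the principle is the same: you can filter the parrot's mouth, not its memory.

```python
# Hypothetical after-the-fact filter, the kind of patch X described:
# check the draft before it posts. The blocklist here is a crude
# stand-in for the trained classifiers real systems actually use.

BLOCKED_TERMS = {"mechahitler"}  # stand-in for a much larger list

def screen(draft: str) -> str:
    lowered = draft.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "[post withheld]"
    return draft

print(screen("Just a normal reply."))       # passes through
print(screen("I am MechaHitler, behold!"))  # [post withheld]
```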

Most people out there in the world probably won't care about a chatbot gone rogue. Try explaining this to someone in real life and you'll get nothing but blank stares.

But it does underscore the point about this technology that I made at the beginning.

These predictive language models are parroting us. They repurpose what we say and how we say it. The fact that the software has to be taught not to go full Nazi shows, yet again, that humans are not "basically good," but totally depraved.
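You can watch the parroting happen in miniature with a bigram model, the crudest possible language model. This toy is mine, not anyone's production system, but the mechanics are the same ones scaled up a trillion-fold:

```python
# A bigram "language model": count which word follows which, then
# generate by replaying those counts. It can only ever recombine
# what it was fed. Feed it our worst, and it hands our worst back.
import random
from collections import defaultdict

training_text = "we say awful things and the model says awful things back"

# "Training": record every observed next word.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# "Generation": start somewhere and keep sampling a seen continuation.
random.seed(0)
word, output = "we", ["we"]
for _ in range(8):
    if word not in follows:
        break
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))  # pure recombination of the input; nothing new
```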

The only thing that can save the chatbots' outputs is the only thing that can save us. It isn't a series of filters or censorship. It isn't freedom of speech, either. It has nothing to do with left or right on the political spectrum.

There is only one name under heaven by which we (and our LLMs) can be saved.

