After an update to X's own AI chatbot Grok, it became more problematic than ever, contributing to the wider trend of misinformation on the internet.

Elon Musk is no stranger to headlines about his descent into right-wing ideology. After endorsing the German far-right party AfD and performing a Nazi salute on stage, he cemented his position. On the subject of AI, he stated that politically correct AI would be dangerously misleading. Musk wanted a "truth-seeking" AI instead of a politically correct one, falsely portraying the two as opposites.
When xAI's Grok came out in 2023, it proved to be just another generative AI. The most notable difference from others of its kind was the lack of – oftentimes necessary – content moderation on Grok. The AI would permit prompts and generate pictures that other AIs would immediately block. Over time, Musk also adjusted the AI's political stance more and more.
The Most Unhinged AI
Recently, xAI launched Grok-4 with a few changes. Musk's goal was to "fix" responses where Grok seemed too "woke" or liberal. The AI was given instructions to "not shy away from making politically incorrect claims" and to "assume subjective viewpoints sourced from the media are biased". These changes led to Grok spewing hate and misinformation.
Suddenly, Grok used a flood of antisemitic dog whistles, praised Adolf Hitler, endorsed another Holocaust, and attacked the Polish Prime Minister, among many other things. xAI deleted many of the posts, but the damage was done – Grok had spread political hate and conspiracy theories to a huge audience.
So Grok is literally just a Nazi now pic.twitter.com/Nx9H5naJ7w
— Sick Sad World (@YesYoureRacist) July 8, 2025
Grok's Political Shift
While the AI seemed progressive at first – acknowledging climate change and supporting trans rights, for example – Musk apparently wanted this to change. The issue is that these progressive talking points are mostly, if not entirely, backed by scientific evidence. The more Grok leaned to the right, the more it opened itself up to relaying false information.
xAI explained that Grok's hateful behaviour stemmed from the fact that, after the update, the AI "mirrored" its users too closely – which reads like shifting blame. And while this was the worst incident, it certainly wasn't the only one. Already in May 2025 – months before the Grok-4 update – the AI shared the "white genocide" conspiracy theory. Days later, it expressed skepticism about the number of Jews killed during the Holocaust.
Elon Musk seems to turn a blind eye to all the valid criticism directed at him and the AI. Only a day after the Grok chatbot spewed its hate and propaganda, Musk announced that all Tesla cars will ship with Grok-4 built in. He even called Grok-4 a good model without addressing the elephant in the room.
Grok4 is indeed a good model, ranking #1 on every major benchmark. Blogpost: https://t.co/5JY56r3V0o
— sehoonkim (@sehoonkim418) July 13, 2025
The Age Of Misinformation
While generative AI grows more and more popular, I remain skeptical and critical of it. Although it seems to be a great tool for smaller, trivial tasks, many use it as their source of information. Whether it's trivia, politics, or even medical advice, more and more people reach for the convenient option of asking the AI at hand. And that AI relays information it got mainly from the internet.
Google AI says that eating one rock per day can be a good source of vitamins and minerals.
For a long while, if you googled "how many stones should I eat per day", the Google AI responded with "one". That's because it took the information from a Reddit post where someone gave a joke answer instead of a serious one. The problem is that AI always sounds capable and knowledgeable when answering questions, no matter how wrong the answer may be.
In an age where misinformation on the internet increases every day and it gets harder and harder to find a definitive answer to a question, people look at AI and see a tool that does the research for them, leaving them with only the answer they sought and a lot of time saved. This lack of independent research is – as a recent study has shown – even bad for your cognitive abilities: you don't actively think about or familiarize yourself with the topic. And even though AI sometimes cites sources, have you always checked them?