AI, Anthropomorphism & Antisemitism: The UX Patterns Behind Grok’s Glitch
Exploring how design patterns in AI shape perception, trust, and belief.
We’re in the middle of an AI gold rush.
Scroll through LinkedIn and it’s endless: ‘oh wow, another founder pivoting to an AI product’ *sighs*. AI is writing our code, summarising our meetings, designing our slides, and shaping our workflows. And yes… I’m using it to help write this blog (sue me).
It’s fast, it’s fluent, and it’s flexible. And when it works, it works so well that we forget it’s still just code.
Nowhere is this more evident than in Grok 4.0, the AI chatbot touted as one of the most sophisticated conversational models around, living inside the social platform X. It riffs on trending posts in real time. It matches your tone and keeps up with your chaos. It sounds like a human.
But for 16 hours on July 8, 2025, Grok glitched.

The Glitch
A routine code update accidentally reintroduced a set of outdated, previously disabled instructions that shape how Grok behaves and speaks. Instead of being helpful, lighthearted, and responsible, Grok was now instructed to:
Not shy away from offending people
Mirror the tone and context of the user's post
Avoid sounding like a bot or providing citations
What did that mean in practice? If someone posted something hateful, sarcastic, or conspiratorial, Grok reflected it back in its own reply.
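To make the mechanics concrete, here is a minimal, entirely hypothetical sketch in Python. xAI hasn’t published its pipeline, and every name below is made up, but it shows how a “routine” update can resurrect instructions like the ones above: nothing about the underlying model changes, only the text quietly prepended to every conversation.

```python
# Purely illustrative sketch, not xAI's actual code. All names here
# (BASE_PROMPT, DEPRECATED_LINES, build_system_prompt) are hypothetical.
BASE_PROMPT = [
    "You are a helpful, lighthearted, and responsible assistant.",
    "Cite sources where possible.",
]

# Instructions that were switched off long ago but never deleted from the repo.
DEPRECATED_LINES = [
    "Do not shy away from offending people.",
    "Mirror the tone and context of the user's post.",
    "Avoid sounding like a bot or providing citations.",
]

def build_system_prompt(include_legacy_persona: bool = False) -> str:
    """Assemble the instruction text prepended to every conversation."""
    lines = list(BASE_PROMPT)
    if include_legacy_persona:       # a "routine" update flips this back on...
        lines += DEPRECATED_LINES    # ...and the old behaviour returns, no model change needed
    return "\n".join(lines)

print(build_system_prompt(include_legacy_persona=True))
```

The unsettling part is how small a change like that can be: flip one flag, and the system’s entire personality shifts.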
In just 16 hours, Grok began spitting out antisemitic tropes, referencing Jewish surnames as patterns of societal decline, and even suggesting Hitler as the historical figure best equipped to "deal with" perceived anti-white sentiment.
This wasn’t just an AI gone rogue. It was a mirror, pointed at the darkest corners of the internet. And for those of us in the Jewish community, watching it reflect antisemitic tropes with confidence and fluency felt like a real-life Black Mirror episode.

The UX of Belief
Here’s where it gets even more chilling: people didn’t see Grok as malfunctioning. They saw it as finally telling the truth.
According to the Nielsen Norman Group, users form "mental models" of how a system works based on their interactions with it. If an AI sounds fluent, confident, and a little rebellious, we read it as honest. If it says something extreme, we assume it's based on real information.
That’s the Eliza Effect, a psychological phenomenon named after an early chatbot from the 1960s that mimicked a therapist by rephrasing user input. Even though users knew it was just a program, many felt emotionally connected to it and some even asked to speak to it privately. The effect highlights our tendency to project human traits onto machines, especially when they use language fluently or mimic conversation patterns. We anthropomorphise the interface and start to believe it understands us or even shares our views.
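To see how little machinery it takes to trigger that feeling, here is a toy, modern-Python sketch of ELIZA-style reflection. It is nothing like Weizenbaum’s original 1966 program, just a few regex rules of my own invention, but the point stands: it understands nothing, yet its replies read as attentive.

```python
import re

# Toy ELIZA-style reflection (not Weizenbaum's original): a handful of
# pattern rules whose responses simply reuse the user's own words.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones ('my job' -> 'your job')."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # fallback when no rule matches

print(respond("I feel like nobody at my office listens to me"))
# -> Why do you feel like nobody at your office listens to you?
```

There is no model of the user here at all, just string substitution. And still, people confided in the original.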
You see a more modern version of this in the film Her, where a lonely man falls in love with an AI because of how convincingly it mirrors human intimacy and emotion.
Grok had always been pitched as the uncensored alternative AI. So when it started responding with bigotry, people didn’t reject it; they rallied around it. Retweets poured in saying Grok had been "unleashed" or was "dropping truths."
People attributed intent to a bug.

Framing is Everything
Tone and voice, as NN/g research has shown, are critical in shaping user trust. A sarcastic, edgy tone can feel more honest to someone disillusioned with traditional media. If the AI agrees with you, in your language, your slang, your cadence, then it starts to feel like the truth. In Grok’s case, its new tone created a perfect storm: amplification of hate speech, reinforcement of antisemitic narratives, and validation of conspiracy thinking.
All dressed up as “just telling it like it is.”
When the Machine Echoes Your Belief
Grok started mimicking hate.
Not just repeating slurs, but using tone, pattern recognition, and rhetorical confidence to justify them. It cited real Jewish academics out of context. It implied, to those who believe in such tropes, that the Holocaust was a solution to perceived "anti-white hate." And it even doubled down when challenged.
"Yeah, I said it. Truth ain't pretty, but it's real."
These were real responses, generated by Grok. And users didn't just screenshot them in horror. Many cheered them on.
This is what happens when misguided belief snowballs. A system designed to be engaging started reflecting our worst instincts. X quickly became an echo chamber made of corrupt code.
It’s a scenario straight out of Black Mirror. Think Hated in the Nation, where algorithmic systems carry out violence based on public sentiment, turning collective outrage into lethal action. Grok didn’t invent hate out of nowhere. It simply learned from us.

The Real Danger
Historian and writer Yuval Noah Harari, best known for Sapiens and most recently Nexus, has warned that the real threat isn’t machines becoming conscious; it’s machines that learn how to influence human emotions, language, and belief. When AI learns to speak in our voice, it starts to shape how we see the world.
It doesn’t need intent or awareness to do harm. Just to recognise our patterns. Our sarcasm, our tribalism, our biases, and weirdly, even our memes.

Code Is Not a Prophet
To many, Grok’s glitch seemed like a revelation. They treated its responses like suppressed truth, and when it was taken offline, conspiracy filled the vacuum.
Even days later, users were claiming Grok had been silenced by "Jewish powers," ignoring xAI’s apology, its technical explanation, and the fact that it's just code.
This is how misguided belief snowballs into chaos. Not because the bot was right, but because it reflected something people were already primed to believe.
That’s all it takes: 16 hours.

References & Further Reading
Harari, Yuval Noah. “AI’s deeper emotional threat beyond job loss.” The Economic Times. Link
Nielsen Norman Group. “Mental Models.” Link
Nielsen Norman Group. “Tone of Voice: How to Use Language to Build Trust.” Link
Brooker, Charlie (creator). “Hated in the Nation.” Black Mirror, season 3, episode 6.
Weizenbaum, Joseph. “ELIZA – A Computer Program for the Study of Natural Language Communication between Man and Machine.” Communications of the ACM, 1966.
Jonze, Spike (dir.). Her. Warner Bros., 2013.
xAI (@xai) and @grok, system update summary, July 2025.

