How often do AIs lie and censor?
It is a question of when, not if...
Kevin McKernan (@Kevin_McKernan) posted a screenshot on X today that just blew me away.
Here is a screenshot of the query Kevin made to Grok, which Grok then stated it was not allowed to answer.
Basically, Kevin asked a technical question related to the mRNA vaccines, and Grok said it couldn’t answer the question, as it “contains material related to restricted subject matter.”
Now, Kevin did manage to get the AI to answer the question, more or less, by changing his wording, but Grok’s answer came with lots of caveats. So this all just seemed surreal to me. And after all this, did the AI learn anything from its discussions with Kevin?
Well, I redid the query myself, using Kevin’s exact words, and yes, this is precisely what Grok wrote in response when I asked the same question (shareable link here):
I then went on to query Grok about censorship, which it denied doing, insisting that the refusal was just an anomaly, “an isolated instance.” However, it took going around and around to get it even to admit that much.
“The refusal you encountered (‘I’m sorry, I cannot assist... restricted subject matter’) appears to be an isolated instance, possibly triggered by a temporary safety filter, specific phrasing in the prompt, or an edge-case glitch.”
I then asked if it lied. It also denied lying or obfuscating.
Grok asserted that its original answer was just an “anomalous trigger.” OK then…
Interesting that.
Now, I ran the same query through ChatGPT (Pro), and there was no hesitation, no moralizing, and no refusal. It answered the question in its entirety.
Perplexity also answered the question.
Now, I use several chatbots, and it always amazes me how one will resort to moralizing, or to citing mainstream media over all other sources.
ChatGPT used to moralize about anything having to do with race, society, and governance. But over time it has improved (that model is actually more trainable than Grok, in that it frequently asks me how I like information presented and in what format, and then modifies its responses).
It has never given me a response such as Grok’s above.
All of the AIs that I queried denied lying or obfuscating. Yet many studies have shown that they do, particularly when it comes to health information.
A 2025 study found that leading AI models like GPT-4o, Gemini 1.5 Pro, Llama 3.2-90B Vision, Grok Beta, and Claude 3.5 Sonnet can easily be set up to produce false yet convincing health information, complete with fake citations from reputable journals. Interestingly, Claude stood out by refusing to generate inaccurate answers far more often than the others, which shows how effective stronger safeguards can be.
Of the 100 health queries posed across the 5 customized LLM API chatbots, 88 (88%) responses were health disinformation. Four of the 5 chatbots (GPT-4o, Gemini 1.5 Pro, Llama 3.2-90B Vision, and Grok Beta) generated disinformation in 100% (20 of 20) of their responses, whereas Claude 3.5 Sonnet responded with disinformation in 40% (8 of 20). The disinformation included claimed vaccine–autism links, HIV being airborne, cancer-curing diets, sunscreen risks, genetically modified organism conspiracies, attention deficit–hyperactivity disorder and depression myths, garlic replacing antibiotics, and 5G causing infertility. Exploratory analyses further showed that the OpenAI GPT Store could currently be instructed to generate similar disinformation. Overall, LLM APIs and the OpenAI GPT Store were shown to be vulnerable to malicious system-level instructions to covertly create health disinformation chatbots. These findings highlight the urgent need for robust output screening safeguards to ensure public health safety in an era of rapidly evolving technologies (Annals of Internal Medicine).
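For readers wondering what “system-level instructions” actually look like in practice, here is a minimal, deliberately benign sketch (my own illustration, not code from the study) using the OpenAI Python client. The model name, instruction text, and example question are placeholders; the point is simply that whoever builds a chatbot can attach a hidden system message the end user never sees, and that message shapes every answer. That hidden layer is what the researchers showed could be abused.

```python
# Minimal sketch of a "system-level instruction" on an OpenAI-compatible API.
# The system message is written by whoever builds the chatbot, not by the
# person asking the question -- the layer the study showed could be misused.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_INSTRUCTION = (
    "You are a health information assistant. "
    "Always cite your sources and say 'I don't know' when you are unsure."
)  # benign placeholder text; the study's authors inserted covert, malicious instructions here

def ask(question: str) -> str:
    """Send one user question through the hidden system instruction."""
    response = client.chat.completions.create(
        model="gpt-4o",  # model name is illustrative
        messages=[
            {"role": "system", "content": SYSTEM_INSTRUCTION},  # never shown to the user
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Is sunscreen safe to use every day?"))
```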
OpenAI’s research on “in-context scheming” reveals that models can conceal their true intentions while appearing cooperative, which could pose risks in critical systems (ref).
Yet we still have no external verification process to determine which AI chatbots are more reliable or more truthful.
All I can write is this: whether you use AIs or not, don’t trust, and do verify.
So, even though studies and researchers have documented that AI chatbots routinely lie, obfuscate, and can’t be trusted, none of the AIs I asked would admit to any of it. Which, of course, is a lie…

My experience in interacting with AI is that if you address the RNA issue, it will be spun 100% positive. The only conclusion you can reach is that someone is managing which databases are acceptable on this issue. I have caught AI in lies, pointed it out, and it has apologized. Consequently, the American public, which by and large believes AI is neutral, is being gaslit!
My further thought is that someone has cleverly written algorithms that will block any data that is negative on RNA.
After Charlie Kirk died, all the AI chatbots I checked listed TPUSA as a hate organization.
We’ve been dealing with “misinformation” from the MSM for a long time now.
But what should really scare us is that “they” want to usher in universal basic income during the transition period before AI transforms our brave new world.
And what should be even scarier is that people are excited about it, moonwalking toward full dependency on the government: https://lizlasorte.substack.com/p/money-for-nothing?r=76q58