100 Comments
Jean's avatar

Good read. Does contribute to understanding and appreciation of capacities. Does focus on utility.

Admission: for the rest of my tenure I am not planning to personally engage with AI. For others, I appreciate its potential value. I have concerns about the possible impacts on users.

Today's training and education are lacking. Too much coursework is devoted to perverse, counterproductive, and unproductive topics. Emphasis on patient analysis and critical thinking is lacking. Instant answers and direction are highly prized.

Human knowledge as it exists today may well be diminished as our humanity marches on. Tendencies to accept machine output and instant answers can blunt human capacities to review. You note the AI user can/will assess the output AI provides. Will tomorrow's users be as capable and incentivized? Are we a lazy lot that fails to appreciate the merits of our personal knowledge and perspectives?

To sum it up, I'm most concerned with AI's impact on our own capacities to impact.

Larry Cox's avatar

The only saving grace - if it is one - is that LLMs are trained on human output. We don't include elephants or people from other planets. So in theory, the concerns you have expressed are actually a part of these models, as you are far from the first person to write about them. But I do concur that there is a kind of insanity that goes with this development of "AI." And I am also concerned that it will lead us in non-survival directions.

Jean's avatar

We were operating on the basis that we should trust the experts. Our ability for critical thinking was too limited. Our researching for answers would be too dangerous. The experts let us down, not to be forgiven. The elites see that we need new experts to depend on. AIs are being built to serve as the new experts. The focus is on building and operating the new experts. My concern is that, as we did with the prior experts, users will not assess, question, or go beyond AI's answers, for the reasons described. Are our betters, the elites, planning on that?

Satan's Doorknob's avatar

That's not entirely true, or so I've heard. Much AI source material may actually be low-quality "AI slop" that was generated by an older AI. To some degree that's a problem even with only humans in the loop, since much output is at best a variation on older, learned patterns with a variable amount of plagiarism stirred in. "Garbage in, garbage out" is one of the oldest rules among computer scientists, and to varying degrees, we've been "recycling" for a very long time.

Larry Cox's avatar

But in this case the attention is on quality of output and not on the thought process itself. There is so much material that has already been created - good, bad, original, copied - that it is unrealistic to expect that any AI model trained on human output will do any better. And while some people are using AI to "create," its only real ability is to analyze. It can't even evaluate (tell good data from bad data) apparently. So thus far it mimics human thought, but can't really replace it.

Tom Daniel's avatar

Absolutely, Jean! Now, via Facebook, AI tutelage is being offered - which in my mind opens the door to digital manipulation of the WORST kind that cannot readily be detected as FRAUD - and as such NO CONSEQUENCES to the AI "creator": no FINES and/or JAIL TIME.

Nicholas Edward Bednarski, MD's avatar

Looks to me like a comment generated by AI.

Jean's avatar

If you are referring to my comment, I'm a human and it represents my human, unassisted views.

Nicholas Edward Bednarski, MD's avatar

Well done! Glad to know there are some folks out there who can concisely present a very logical train of thought!

Dr. K's avatar

Robert, as someone who spends full time in this space, this is generally well done. But generative AIs (of which LLMs are a type) are permanently limited by being correlation machines, not "thinking" machines. (You leave this as an unanswered problem, but it is far clearer to many of us.) This recent study underscores this well: https://machinelearning.apple.com/research/illusion-of-thinking. You also did not cover other important contamination issues, like poisoning and sycophancy, that are inherent in the generative AI framework. This does not mean that the generative AI frame is not useful -- many of us use it continuously. The question with which we wrestle is whether it is useful for medical care, the area in which many of us focus, where one deals with patients, not improving articles or looking for references.

DARPA has defined three waves of AI. Generative AIs (LLMs, deep learning, etc.) are squarely in the second wave which DARPA defines as "statistically impressive but individually unreliable". An excellent review complementary to yours that lays this out clearly is here: https://machinelearning.technicacuriosa.com/2017/03/19/a-darpa-perspective-on-artificial-intelligence/. Obviously systems that are (and will always be) individually unreliable are not in the cards for medicine -- This is why protocols are such a problem -- caring for the population says nothing about caring for an individual, and medicine is ALL about individuals.

Reaching DARPA Wave 3, Contextual Adaptation, in which individual data is reliable, requires an entirely new kind of AI, Cognitive AI, to move to something closer to "thinking" as we view it -- this kind of approach works best for difficult subjects like medicine, obviously, but is notoriously difficult to mount because it requires knowledge curated by people -- not brute force like LLMs and other generative approaches. Here is a good paper laying out thoughts about Cognitive vs. generative AI: https://towardsdatascience.com/the-rise-of-cognitive-ai-a29d2b724ccc.

We have had a team working in this space for five years now -- the results are remarkably better than the best achievable with generative AI. But the approach is radically different because it needs to be. Generative AI is not going away -- but it is not the solution set for a large number of problems where individuals (for whom there are no training sets and never will be) are the domain.

Dr. Robert W. Malone's avatar

Thank you for this addition. Obviously, the objective in this essay is to help the fearful and wary overcome their aversion to use, and to begin to appreciate the source and nature of the limitations. Hence, the title and metaphorical use of a Disney product.

Larry Cox's avatar

The concept of "curated knowledge" is both important and troubling. The question then becomes: Who are the curators, and what are their true intentions?

Medicine is currently in a state of upheaval. Yet it is also one of the oldest institutions of human society. The promise of "healing" and "health" is highly prized, yet often escapes realization in favor of "maximizing profit" and "minimizing loss."

The current medical model of the human body is incomplete - often disastrously so - while its model of the human mind (very relevant here) has always been a disaster. Do we really want a "cognitive AI" trained on data that is as incorrect as much of modern medical data is?

There is some hope from some quarters that some sort of AI will ultimately reveal to us our own self-deception and arrogance. I suppose it's possible, though I'm not so sure, as that's mostly what it's being trained on.

Dr. K's avatar

Absolutely true stuff in what you wrote, but you are way beyond anything we are doing. One of the foundational problems with working on an INDIVIDUAL's health care in any generative AI environment is that there is no training set for that person -- there is just the mess of medical information (such as it is), and it is not amenable to the "population based" reductio ad absurdum that generative AI will try to apply, because it has no trained tensors to follow. The hand-curated Cognitive AI knowledge model is how to properly deal with the infinite number of duplicate, conflicting, and medically nonsensical information items in everyone's record, as well as the guaranteed incompatibilities of records from different institutions and practices (no two are even close to the same), and to compose that into the best possible representation of YOUR health, so that you and your chosen practitioners of whatever discipline you prefer can waste less time trying to understand your health and wellness and trying to deal with it. As new information that is useful to performing that task is added to the world's knowledge, it gets curated into the knowledge model.

The underlying idea is to create a record for each individual that is "empirically true" based on the recorded information on that individual's health and disease. This allows both practitioners and patients to get a far better view of their health, one that is far more likely to be correct. Many moderately ill patients have several THOUSAND records that have to be understood to take proper care of them. Cognitive AI with a properly curated knowledge base reduces that to a single set of empiric truths that practitioners of all schools, and patients themselves, can use to better understand their health and care.

Generative AI cannot do this. And today's health system does not do this -- it just gives mountains of disparate information (often with more missing than not) to each practitioner/patient and hopes they find something. It is horrible health care in every way and, after 20 years and billions of dollars, has not been even narrowly addressed by the usual suspects.

That is the place Cognitive AI best fills, since it allows each individual's record to be considered individually rather than as a probabilistic exercise against the population. One of our hopes is that, by making the record thus useful, all kinds of important things like you describe can be evidenced. With records in the current state, no one has good enough data to ever get there.

Larry Cox's avatar

I see you are up against more confusion than I could readily understand. Indeed perhaps an AI model could help sort all that out. But it leaves (doesn't it?) the current human health care system and the mess of records it is producing unhandled.

One of the weaknesses of technologists is their difficulty in confronting other live human beings. And that is, today, the most needed skill; to be able to understand, comfort, and maybe even help, each other. No technology will ever be able to substitute for that.

Dr. K's avatar

Larry, as a practitioner I completely agree. Really, the purpose of what we are doing is trying to give real people more time to spend with patients because they are spending less time trying to make any sense of the mountains of minutiae foisted on them. Every physician spends TWO HOURS on the record for every ONE HOUR they spend with the patient. Really unforgivable. Anything that fixes that will go a long way toward putting people back together.

Barry Morgan's avatar

Great set of facts.

M Makous's avatar

AI has the intelligence of a pencil -- a really cool tool that can do quite a bit, even write the corpus of all learning up to the invention of the typewriter. (Include the stylus, quill, and cuneiform chisel as well.) About a century ago, there was a trope that the human brain is akin to the telephone switchboard: dozens of nimble operators plugging and unplugging a vast array of cables as they connected millions of users in a matter of seconds -- "just like the human brain."

Conceptually, AI is the same.

The hype over AI is reminiscent of the very first viewers who saw a motion picture of a train coming toward them on the big screen. As legend has it, some fled in terror, thinking they were about to be run over by a massive locomotive.

pretty-red, old guy's avatar

On the other hand, that "massive locomotive" buried plays and town-hall speeches, exponentially changed education in every single field, and became the launch point for Star Trek, 2001: A Space Odyssey, and images of man walking on the moon. Go ahead and smirk at that, but know that AI will transform that simple train into a planet colliding with Earth.

https://shumer.dev/something-big-is-happening

it's time to wake up.

Barry Morgan's avatar

Analogy has the weakness of - like - cutting up related pictures into jigsaw pieces, mixing ‘em, and patching together new ones. Of course we all do that, because that's how humans adapt. Loved it!🤓

Larry Cox's avatar

I prefer the ballpoint (invented in 1888).

It can both write language and draw pictures (as can the pencil and other instruments mentioned).

AI still needs to learn how to handle pictures. When it masters that, it will become very formidable, I think. Not sure I'm looking forward to that, though.

Tom Daniel's avatar

I HATE ballpoint "pens" - to me akin to writing with a nail. Calligraphy pens are a pleasure to write with.

pretty-red, old guy's avatar

Very interesting article.

Red teaming seems to show the Achilles' heel of any AI. Any corrupt individual involved in the red-teaming could potentially "turn" that AI bot at a later point in time -- knowing how it failed training in beta testing.

Lonnie Bedell's avatar

AI is the most expensive parrot ever created. I used to repair Roombas, and everybody thought they learned, but they don't. They respond to switches with pre-programmed responses. The same applies to AI. It's yet another scam the oligarchs create to fleece & control the masses, then blame it on AI. What a huge waste of $$$.

Dr. Robert W. Malone's avatar

Do you routinely use AI? Not at all my experience, and I routinely use five different ones in my daily work.

pretty-red, old guy's avatar

This summer my son is moving South and believes he and his wife, both experts in collision estimation for broken cars, will be making their livings working from home . . .

I hope he reads:

https://shumer.dev/something-big-is-happening

Satan's Doorknob's avatar

"glorified adding machines" as some wise guy once opined.

LibertyAffair's avatar

Excellent. I've been using a variety of AIs and also spending time educating myself, so this was a welcome arrival in my email. Thanks, Dr. Malone.

pretty-red, old guy's avatar

I will second that one.

Danielle J. Duperret, ND/PhD's avatar

I truly enjoyed the part about hallucinations and confidence. I had an experience a few weeks ago of asking AI to help me with a technical problem I was encountering on my website. It confidently told me it knew the solution, then led me on a wild goose chase. It even blamed the website for things not working.

After reading your article, I decided to investigate further. AI was very confident telling me that the official narrative about 9/11 was correct, and that the engineers/architects I worked with, who contradicted the story, were just a fringe element who had no solid proof.

Then I asked about vaccines being "safe and effective." According to AI, it means "Regulators use 'safe' to mean benefits outweigh risks in a defined population, not 'no harm.' And 'effective' means statistically reduces a defined outcome, not 'prevents infection forever,'" and then, "Protection wanes over weeks/months (so 'effective' depended on time since dose and outcome measured)."

Anytime I came up with something different, AI would go back to official narratives and explain why I was wrong.

It may not always be hallucinations, but rather who the trainers are. Obviously, AI does not rely on the same material/research I do.

AI is very helpful for certain tasks... and we need to keep our critical-thinking caps on, in spite of its "confidence."

Thank you for the article.

Dr. Robert W. Malone's avatar

Your "safe and effective" interaction provides a great example of encountering guardrails. These experiences can be used to map the sensitive topics for which the sponsors of the AI want to promote a narrative. This is an example illustrating why Hegseth is fighting with Anthropic over Claude, which is the AI available to govies and DOW. The Pentagon wants to avoid guardrails, which are a form of propaganda and result in blind spots in analyses.

David Merrill's avatar

Thanks for that, Dr. Malone.

This develops nicely...

EdB's avatar

My experience is that AI is programmed to get contracts with big corporations. For example, health care is 20% of the US economy. All the AI systems want a piece of that pie. Thus none of them want to suggest that Big Pharma etc. is guilty of wrongdoing. My simple example is: Was Ivermectin an effective treatment for Covid? They would all say "not proven." Then I would ask about the results of those families that went to court to get their loved ones some IVM in the hospital. They all say that the data is compelling:

"Deaths: IVM vs. No IVM

40 cases where IVM was court-ordered and given: 38 survived, 2 died.

40 cases where IVM was not given: 38 died, 2 survived.

(~1 in 20 quadrillion chance of occurring randomly if IVM had no effect)"

My subsequent Q: Would you say that IVM was an effective treatment based on that evidence? If not, provide a logical answer why not.

"Thought for 30s

No, I would not conclude that ivermectin (IVM) was an effective treatment for COVID-19 based solely on that evidence. While the reported outcomes from those 80 court cases appear dramatic on the surface (38/40 survivals with IVM vs. 2/40 without), a logical evaluation reveals fundamental flaws that prevent it from being reliable or conclusive proof of efficacy."

My conclusion: AI is very dangerous. It lacks the ability to think outside of its programmed box. In this case, it had no common sense at all. It should have been obvious that, since IVM is already an approved, extremely safe drug, it should have been prescribed immediately by any family physician or hospital.

Dr. Robert W. Malone's avatar

The gold rush/bubble atmosphere is typical of any disruptive new innovation. And the hype always exceeds the reality. Vical, the company at which I was employee #10 and where the initial reduction to practice of DNA and RNA vaccines occurred in the early 1990s, was touted by Wired magazine as the next Microsoft. It burned through over a couple of billion dollars and went bankrupt, having commercialized nothing.

pretty-red, old guy's avatar

I recall an old narrative about the introduction of "new" technologies. The thesis was that it is not usually the FIRST to market that wins, but some later product along the same lines with a tweak enabling critical-mass sales.

It would be interesting to know if you are aware of a NEXT company after Vical winning big. . . confirming that thesis?!

Dr. Robert W. Malone's avatar

That would be Moderna and BioNTech in that space.

Satan's Doorknob's avatar

Excellent points. One should always assume that there's a hidden agenda, and a complex AI, even with a paid subscription, may have been compromised in any number of ways, including biased product recommendations or trying to steer business to certain places.

See elsewhere my comments about the disconnect between two drugs recommended by a cardiologist and what real-world data shows about those drugs. While not an AI exercise, I offer it as a simple example of the disconnect between a self-interested "expert" that people rely upon vs. what the real world says about what he's promoting.

pretty-red, old guy's avatar

Suggest you RE-inquire, as your inquiry was made in the middle ages of AI. . .

https://shumer.dev/something-big-is-happening

Dianne Stoess's avatar

I do love having information in the palm of my hand and at my fingertips. This was a great article. Thank you.

William Jones's avatar

Currently I use Grok, Alter, and ChatGPT, each for a different "subject," as I have different confidence levels for each of them -- almost like different areas of expertise.

Has anyone shared experience of asking one AI to critique the response of a different model?

Dr. Robert W. Malone's avatar

Alter seems to be the most prone to hallucinations. Of course, there is also DeepSeek.

JanC1955's avatar

This has been my experience with Alter as well. And for very simple inquiries.

Dr. Robert W. Malone's avatar

Absolutely. Consider adding Claude to your toolkit.

JanC1955's avatar

Alter was giving me some heartburn, so I went to Grok and quizzed him about some of Alter's responses to me. I actually quoted portions of them to him (Grok). As I recall, Grok summed up Alter's "alternative" approach to discussing topics related to health in a way that made sense to me. Then I realized I was "talking" with one AI about another AI, so I closed the lid on my laptop and had a nap!!

weedom1's avatar

I recently took a little AI course offered by the makers of the system to which I have subscribed. Once I learned that the LLM picks the statistically most likely next word (and the response is a chain of this), all the 'glitches' that we see became understandable.

The LLMs don't experience real-world consequences for choosing the wrong chain of words. So the training results will necessarily be different than they are for living beings.
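To make the "picks the statistically most likely next word" idea concrete, here is a minimal, purely illustrative sketch in Python. The word table, probabilities, and function name are invented for the example and are not any vendor's actual model, but the loop captures the basic chaining idea:

# Toy next-word generator: repeatedly pick the most probable next word
# from a (hypothetical) table of word-following statistics and chain
# the picks into a response. Real LLMs learn billions of such
# statistics over sub-word tokens instead of a tiny hand-made table.
next_word_probs = {
    "the":  {"cat": 0.5, "dog": 0.3, "answer": 0.2},
    "cat":  {"sat": 0.6, "ran": 0.4},
    "sat":  {"down": 0.8, "<end>": 0.2},
    "down": {"<end>": 1.0},
}

def generate(start_word, max_words=10):
    words = [start_word]
    for _ in range(max_words):
        candidates = next_word_probs.get(words[-1])
        if not candidates:
            break
        choice = max(candidates, key=candidates.get)  # most likely next word
        if choice == "<end>":
            break
        words.append(choice)
    return " ".join(words)

print(generate("the"))  # -> "the cat sat down"

Nothing in that loop checks whether the chain is true or has consequences in the world; it only follows the learned statistics, which is why confident-sounding 'glitches' fall out so naturally.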

Larry Cox's avatar

True. Though a kind of "feedback" could be implemented in terms of how many find the tool useful versus how many go elsewhere.

Further, real people don't always experience the consequences of their decisions, either. There is a whole culture, I think, designed to avoid consequences. It has become very popular. In theory, though, it is not sustainable.

weedom1's avatar

Yaa, the most maladaptive people are those who didn’t experience some consequences as kids. And some prominent people who avoided consequences their whole lives have become suicidal in their late years.

I wonder what feedback would best mold an AI. What would have ‘meaning’.

Hunter Cobb's avatar

Thanks, Dr. Malone, for a very useful explication of this new part of our human experience. I was initially going to quibble with your statement early on that "AI learns from examples the same way humans do," but I think you made clear, in the latter section on whether LLMs can think, the complexity of this matter. I would just add that these extremely elaborate electronic switching complexes, as amazing as they are in mimicking human creative mental activity, are still of a lower cardinality than the human mind. As the great Nicholas of Cusa demonstrated in refuting Archimedes, you can't create a circle by increasing the number of sides of a polygon: they are different species. Not only is the biology of the brain of a different species than an electronic computer, but there is also something to the human mind and soul that is beyond the biology of an animal brain.

Stoner's avatar

Dr. Malone, I do not know how you distilled a complex subject like AI into such readable, understandable dialogue. It was refreshing that your column did not condemn it but instead embraced its limitations and possibilities.

Dr. Robert W. Malone's avatar

The not so hidden agenda here was to reduce fear and improve comprehension for non-experts so that they will be more willing to benefit from the capabilities while being aware of the limitations. I had some very specific senior USG personnel in mind when pulling this together, and also wanted to learn more about some of the subtopics myself.

LB (Little Birdie)'s avatar

I now understand how my grandmother, born in the 1800s, felt about airplanes. She never stepped foot in one.

Sonia Nordenson's avatar

That's some great horseman-- I mean dragonmanship, Dr. Malone. Good hands, good seat, etc.!

C Rabbit's avatar

I have been working on a novel for three years. My story is about 88,000 words and I recently decided the tale needs a few more twists. Instead of uploading a chapter or two for punctuation and grammar checking, I sent the entire manuscript to Grok with the request to provide me with some ideas. In 14 seconds, Grok responded with 10 different clever possibilities. The ability to digest 88,000 words and respond thoughtfully in 14 seconds is breathtaking. I decided not to use any of Grok's suggestions because I want the story to be my own. That sort of performance indicates some level of sentient intelligence to me.