Discussion about this post

Lonnie Bedell

AI is the most expensive parrot ever created. I used to repair Roombas, and everybody thought they learned, but they don't. A Roomba responds to switches with pre-programmed responses. The same applies to AI. It's yet another scam the oligarchs create to fleece and control the masses, then blame it on AI. What a huge waste of $$$.

Dr. K

Robert, as someone who works full time in this space, I think this is generally well done. But generative AIs (of which LLMs are one type) are permanently limited by being correlation machines, not "thinking" machines. (You leave this as an open question, but it is far clearer to many of us.) This recent study underscores the point well: https://machinelearning.apple.com/research/illusion-of-thinking. You also did not cover other important contamination issues, such as poisoning and sycophancy, that are inherent in the generative AI framework. This does not mean that the generative AI frame is not useful -- many of us use it continuously. The question we wrestle with is whether it is useful for medical care, the area in which many of us focus, where one deals with patients, not with improving articles or looking for references.

DARPA has defined three waves of AI. Generative AIs (LLMs, deep learning, etc.) sit squarely in the second wave, which DARPA characterizes as "statistically impressive but individually unreliable". An excellent review, complementary to yours, that lays this out clearly is here: https://machinelearning.technicacuriosa.com/2017/03/19/a-darpa-perspective-on-artificial-intelligence/. Systems that are (and will always be) individually unreliable are not in the cards for medicine. This is why protocols are such a problem: caring for the population says nothing about caring for an individual, and medicine is ALL about individuals.
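A minimal sketch of the "statistically impressive but individually unreliable" point, using hypothetical numbers (the 90% figure is an assumption for illustration, not from DARPA or the linked review): a system with strong aggregate accuracy can still be a poor bet for any one case, because the individual error compounds across repeated queries.

```python
import random

random.seed(0)

# Hypothetical model that answers correctly 90% of the time in aggregate.
N = 10_000
correct = sum(random.random() < 0.9 for _ in range(N))
print(f"aggregate accuracy: {correct / N:.1%}")  # ~90% across the population

# For one patient, suppose we rely on 10 such answers during a course of
# care. If each is independently 90% reliable, the chance that ALL of them
# are right for that individual is only:
p_all_correct = 0.9 ** 10
print(f"P(all 10 answers correct for one case): {p_all_correct:.1%}")  # ~34.9%
```

The population-level figure looks impressive; the per-individual figure is the one that matters at the bedside, and it degrades quickly.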

Reaching DARPA's Wave 3, Contextual Adaptation, in which individual data becomes reliable, requires an entirely new kind of AI -- Cognitive AI -- to move toward something closer to "thinking" as we view it. This kind of approach works best for difficult subjects like medicine, but it is notoriously difficult to mount because it requires knowledge curated by people, not the brute force of LLMs and other generative approaches. Here is a good paper laying out thoughts on Cognitive vs. generative AI: https://towardsdatascience.com/the-rise-of-cognitive-ai-a29d2b724ccc.

We have had a team working in this space for five years now; the results are remarkably better than the best achievable with generative AI. But the approach is radically different, because it needs to be. Generative AI is not going away -- but it is not the solution for a large class of problems where the domain is individuals, for whom there are no training sets and never will be.
