Discussion about this post

Jean:

Good read. It does contribute to understanding and appreciation of these capacities, and it does focus on utility.

An admission: for the rest of my tenure I am not planning to personally engage with AI. For others, I appreciate its potential value, but I have concerns about the possible impacts on users.

Today's training and education are lacking. Too much coursework is devoted to perverse, counterproductive, and unproductive topics. Emphasis on patient analysis and critical thinking is lacking. Instant answers and direction are highly prized.

Human knowledge as it exists today may well be diminished as humanity marches on. The tendency to accept machine output and instant answers can blunt our capacity to review it. You note that the AI user can and will assess the output AI provides. Will tomorrow's users be as capable and incentivized? Are we a lazy lot that fails to appreciate the merits of our personal knowledge and perspectives?

To sum it up, I'm most concerned with AI's impact on our own capacity to have an impact.

Dr. K:

Robert, as someone who spends full time in this space, I think this is generally well done. But generative AIs (of which LLMs are a type) are permanently limited by being correlation machines, not "thinking" machines. (You leave this as an unanswered problem, but it is far clearer to many of us.) This recent study underscores it well: https://machinelearning.apple.com/research/illusion-of-thinking. You also did not cover other important contamination issues, like poisoning and sycophancy, that are inherent in the generative AI framework. This does not mean that the generative AI frame is not useful -- many of us use it continuously. The question with which we wrestle is whether it is useful for medical care, the area in which many of us focus, where one deals with patients, not with improving articles or looking for references.

DARPA has defined three waves of AI. Generative AIs (LLMs, deep learning, etc.) are squarely in the second wave, which DARPA describes as "statistically impressive but individually unreliable". An excellent review, complementary to yours, that lays this out clearly is here: https://machinelearning.technicacuriosa.com/2017/03/19/a-darpa-perspective-on-artificial-intelligence/. Obviously, systems that are (and will always be) individually unreliable are not in the cards for medicine. This is why protocols are such a problem: caring for the population says nothing about caring for an individual, and medicine is ALL about individuals.

Reaching DARPA Wave 3, Contextual Adaptation, in which individual data is reliable, requires an entirely new kind of AI, Cognitive AI, to move to something closer to "thinking" as we view it. This kind of approach works best for difficult subjects like medicine, obviously, but it is notoriously difficult to mount because it requires knowledge curated by people, not brute force like LLMs and other generative approaches. Here is a good paper laying out thoughts about Cognitive vs. generative AI: https://towardsdatascience.com/the-rise-of-cognitive-ai-a29d2b724ccc.

We have had a team working in this space for five years now; the results are remarkably better than the best achievable with generative AI. But the approach is radically different, because it needs to be. Generative AI is not going away, but it is not the solution for a large number of problems where individuals (for whom there are no training sets, and never will be) are the domain.
