AI is the most expensive parrot ever created. I used to repair Roombas, and everybody thought they learned, but they don't. They respond to switches with pre-programmed responses. The same applies to AI. It's yet another scam the oligarchs create to fleece and control the masses, then blame on AI. What a huge waste of $$$.
Do you routinely use AI? That is not at all my experience, and I routinely use five different ones in my daily work.
This summer my son is moving South and believes he and his wife, both experts in collision estimation for broken cars, will be making their livings working from home . . .
I hope he reads:
https://shumer.dev/something-big-is-happening
Good read. Does contribute to understanding and appreciation of capacities. Does focus on utility.
Admission: for the rest of my tenure, I am not planning to personally engage with AI. For others, I appreciate its potential value. I have concerns about the possible impacts on users.
Today's training and education are lacking. Too much coursework is devoted to perverse, contrarian, and unproductive topics. Emphasis on patient analysis and critical thinking is lacking. Instant answers and direction are highly prized.
Human knowledge as it exists today may well be diminished as our humanity marches on. Tendencies to accept machine output and instant answers can blunt human capacities to review. You note the AI user can and will assess the output AI provides. Will tomorrow's users be as capable and incentivized? Are we a lazy lot that fails to appreciate the merits of our personal knowledge and perspectives?
To sum it up, I'm most concerned with AI's impact on our own capacities to have an impact.
Astute
Robert, as someone who spends full time in this space, this is generally well done. But generative AIs (of which LLMs are a type) are permanently limited by being correlation machines, not "thinking" machines. (You leave this as an unanswered problem, but it is far clearer to many of us.) This recent study underscores this well: https://machinelearning.apple.com/research/illusion-of-thinking. You also did not cover other important contamination issues, like poisoning and sycophancy, that are inherent in the generative AI framework. This does not mean that the generative AI frame is not useful -- many of us use it continuously. The question with which we wrestle is whether it is useful for medical care, the area in which many of us focus, where one deals with patients, not improving articles or looking for references.
DARPA has defined three waves of AI. Generative AIs (LLMs, deep learning, etc.) are squarely in the second wave, which DARPA defines as "statistically impressive but individually unreliable". An excellent review complementary to yours that lays this out clearly is here: https://machinelearning.technicacuriosa.com/2017/03/19/a-darpa-perspective-on-artificial-intelligence/. Obviously, systems that are (and will always be) individually unreliable are not in the cards for medicine -- this is why protocols are such a problem -- caring for the population says nothing about caring for an individual, and medicine is ALL about individuals.
Reaching DARPA Wave 3, Contextual Adaptation, in which individual data is reliable, requires an entirely new kind of AI -- Cognitive AI -- to move to something closer to "thinking" as we view it. This kind of approach works best for difficult subjects like medicine, obviously, but is notoriously difficult to mount because it requires knowledge curated by people -- not brute force like LLMs and other generative approaches. Here is a good paper laying out thoughts about Cognitive vs. generative AI: https://towardsdatascience.com/the-rise-of-cognitive-ai-a29d2b724ccc.
We have had a team working in this space for five years now -- the results are remarkably better than the best achievable with generative AI. But the approach is radically different because it needs to be. Generative AI is not going away -- but it is not the solution set for a large number of problems where individuals (for whom there are no training sets and never will be) are the domain.
Thank you for this addition. Obviously, the objective in this essay is to help the fearful and wary overcome their aversion to use, and to begin to appreciate the source and nature of the limitations. Hence, the title and metaphorical use of a Disney product.
AI has the intelligence of a pencil: a really cool tool that can do quite a bit. After all, the pencil (along with the stylus, quill, and cuneiform chisel) wrote the entire corpus of human learning up until the invention of the typewriter. About a century ago, there was a trope that the human brain was akin to the telephone switchboard: dozens of nimble operators plugging and unplugging a vast array of cables as they connected millions of users in a matter of seconds -- "just like the human brain."
Conceptually, AI is the same.
The hype over AI is reminiscent of the very first viewers who saw a motion picture of a train coming toward them on the big screen. As legend has it, some fled in terror, thinking they were about to be run over by a massive locomotive.
good analogy
On the other hand, that "massive locomotive" buried plays and town-hall speeches, exponentially changed education in every single field, and became the launch point for Star Trek, 2001: A Space Odyssey, and images of man walking on the moon. Go ahead and smirk at that, but know that AI will transform that simple train into a planet colliding with Earth.
https://shumer.dev/something-big-is-happening
It's time to wake up.
Very interesting article.
Red teaming seems to reveal the Achilles' heel of any AI. Any corrupt individual involved in the red-teaming could potentially "turn" that AI bot at a later point in time, knowing how it failed training in beta testing.
absolutely
I recently took a little AI course offered by the makers of the system to which I subscribe. Once I learned that the LLM picks the statistically most likely next word (and the response is a chain of these), all the 'glitches' that we see became understandable.
LLMs don't experience real-world consequences for choosing a wrong chain of words, so their training results will necessarily differ from those of living beings.
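That "pick the statistically most likely next word, then chain the picks" loop can be sketched in a few lines. This is a toy with a made-up bigram table, not a real model; actual LLMs learn such distributions over tens of thousands of tokens with deep networks:

```python
import random

# Toy "model": for each word, the observed counts of the word that follows it.
bigram_counts = {
    "the": {"cat": 3, "dog": 2, "end": 1},
    "cat": {"sat": 4, "ran": 2},
    "dog": {"ran": 3, "sat": 1},
    "sat": {"down": 5},
    "ran": {"away": 5},
}

def next_word(word, rng, greedy=False):
    """Return the next word: the single most likely one, or a weighted sample."""
    counts = bigram_counts[word]
    if greedy:  # always pick the statistically most likely continuation
        return max(counts, key=counts.get)
    words, weights = zip(*counts.items())
    return rng.choices(words, weights=weights, k=1)[0]  # sampling adds variety

def generate(start, n, rng, greedy=False):
    """Chain next-word picks, exactly as described above."""
    out = [start]
    for _ in range(n):
        w = next_word(out[-1], rng, greedy)
        out.append(w)
        if w not in bigram_counts:  # no known continuation: stop
            break
    return " ".join(out)

print(generate("the", 4, random.Random(0), greedy=True))  # "the cat sat down"
```

The "glitches" follow directly: a chain of locally plausible picks can wander into a globally false statement, because nothing in the loop checks facts.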
Excellent. I've been using a variety of AIs and also spending time educating myself, so this was a welcome arrival in my email. Thanks, Dr. Malone.
I will second that one.
So that's what they call it: the Hallucination Problem, with the RAG solution. And to think that all this time I was calling it BS and a pile of crap.
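For what it's worth, the RAG fix is simple in outline: retrieve the most relevant passage first, then tell the model to answer only from it. A toy keyword-overlap retriever gives the idea (real systems use learned embeddings and vector databases; the documents here are invented examples):

```python
import math
from collections import Counter

docs = [
    "Roombas navigate by bumping into obstacles and turning.",
    "Large language models predict the next token from context.",
    "Retrieval augmented generation grounds answers in retrieved text.",
]

def vec(text):
    """Bag-of-words vector: word -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm

def retrieve(query):
    """Return the document most similar to the query."""
    q = vec(query)
    return max(docs, key=lambda d: cosine(q, vec(d)))

best = retrieve("how does retrieval augmented generation work")
# The retrieved passage is stuffed into the prompt so the model must ground its answer:
prompt = f"Answer using only this passage:\n{best}\nQuestion: ..."
print(best)
```

Grounding reduces hallucination because the model is asked to paraphrase retrieved text rather than free-associate from its training statistics.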
Thank you for sharing this.
LLMs are the best we have right now, but diffusion models are gaining quickly, especially in video. Combining the best of both will probably be the next step.
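The intuition behind the diffusion side of that comparison: training data is progressively noised, and the model learns to undo the noise. The standard forward (noising) step can be sketched in one line of math, x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, here on a single scalar (the schedule values below are illustrative, not from any particular paper):

```python
import math
import random

def forward_noise(x0, alpha_bar, rng):
    """Sample x_t ~ q(x_t | x_0): mix the clean signal with Gaussian noise.
    alpha_bar is the cumulative schedule value at step t (1.0 = clean, 0.0 = pure noise)."""
    eps = rng.gauss(0.0, 1.0)
    return math.sqrt(alpha_bar) * x0 + math.sqrt(1.0 - alpha_bar) * eps

rng = random.Random(0)
x0 = 1.0
for alpha_bar in (0.99, 0.5, 0.01):  # early, middle, late in the schedule
    print(alpha_bar, forward_noise(x0, alpha_bar, rng))
```

A trained model runs this in reverse, predicting the noise at each step; that whole-signal denoising is why diffusion shines for images and video, where every pixel is generated jointly rather than one token at a time.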
AI as a term arrived about 70 years ago… It was based on the ridiculous idea that binary-switched logic resulted in thinking. Until quantum computing is mainstream, AI will still be zeros and ones. Today's AI is the next dot-com, and just a buzzword to get investors. Tomorrow's AI? Scary. Just my old geek opinion.
As one old geek to another, I definitely get your point, but you've got to admire how this post describes the development path (and logic) that got us from where we Were to where we Are!
I don't like your usage of the term "harm". It seems like you are opening the door to censorship. The AI should be designed to be truthful at all times and to express uncertainty when it exists. But it is always buyer beware, whether with people or a machine. What happened to the sticks-and-stones adage? If the AI is lying, how are you going to know anyway?
The gold-rush/bubble atmosphere is typical of any disruptive new innovation, and the hype always exceeds the reality. Vical, the company at which I was employee #10 and where the initial reduction to practice of DNA and RNA vaccines occurred in the early 1990s, was touted by Wired magazine as the next Microsoft. It burned through a couple of billion dollars and went bankrupt, having commercialized nothing.
I recall an old narrative about the introduction of "new" technologies. The thesis was that it is usually not the FIRST to market that wins, but some later product along the same lines with a tweak enabling critical-mass sales.
It would be interesting to know if you are aware of a NEXT company after Vical winning big. . . confirming that thesis?!
Suggest you RE-inquire, as your inquiry was made in the middle ages of AI. . .
https://shumer.dev/something-big-is-happening
Insightful, as one considers the education of children and why their logic and reasoning are too often flawed ... no one is checking their answers, and they are not required to cite credible sources for their facts.
I prefer dragons for analogy. I raised 4 kids and, what is it we parents do? I couldn’t call it “training”. 😂🤣😂
My experience is that AI is programmed to get contracts with big corporations. For example, health care is 20% of the US economy. All the AI systems want a piece of that pie. Thus none of them want to suggest that Big Pharma etc is guilty of wrong doing. My simple example is: Was Ivermectin an effective treatment for Covid? They would all say "not proven". Then I would ask about the results of those families that went to court to get their loved ones some IVM in the hospital. They all say that the data is compelling:
"Deaths: IVM vs. No IVM
40 cases where IVM was court-ordered and given: 38 survived, 2 died.
40 cases where IVM was not given: 38 died, 2 survived.
(~1 in 20 quadrillion chance of occurring randomly if IVM had no effect)"
My subsequent Q: Would you say that IVM was an effective treatment based on that evidence? If not, provide a logical answer why not.
"Thought for 30s
No, I would not conclude that ivermectin (IVM) was an effective treatment for COVID-19 based solely on that evidence. While the reported outcomes from those 80 court cases appear dramatic on the surface (38/40 survivals with IVM vs. 2/40 without), a logical evaluation reveals fundamental flaws that prevent it from being reliable or conclusive proof of efficacy."
My conclusion: AI is very dangerous. It lacks the ability to think outside its programmed box. In this case, it had no common sense at all. It should have been obvious that, since IVM is already an approved and extremely safe drug, it should have been prescribed immediately by any family physician or hospital.
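Whatever one makes of the dataset itself, the quoted quadrillions-scale figure is essentially Fisher's exact test on the stated 2x2 table, and anyone can recompute that part. This sketch checks only the arithmetic order of magnitude; it says nothing about the selection biases the model's answer pointed to:

```python
from math import comb

# Stated 2x2 table (fixed margins):
#            survived  died
# IVM           38       2
# no IVM         2      38
N, n_ivm, n_survived = 80, 40, 40

def hypergeom_p(k):
    """P(exactly k of the 40 survivors fall in the IVM group), given the margins."""
    return comb(n_ivm, k) * comb(N - n_ivm, n_survived - k) / comb(N, n_survived)

# One-sided tail: 38 or more survivors in the IVM group
p_one_sided = sum(hypergeom_p(k) for k in range(38, 41))
p_two_sided = 2 * p_one_sided  # the stated table is symmetric, so the tails are equal

print(f"two-sided p ~ {p_two_sided:.2e}")  # astronomically small, on the order of 1e-17
```

So the "couldn't be random" arithmetic holds for the table as given; the dispute in the quoted exchange is entirely about whether the 80 court cases are a representative sample, which no p-value can settle.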
GIGO has always plagued data processing, and it always will.
Good points.
Please forgive me for not reading this; we have chosen different paths. I'm more comfortable with the old Amish approach than jumping on the AI train. I do respect your position, that it's better to be informed and use it judiciously, so I'll just take your word re: conclusions.
This article came out a fortnight ago, perhaps you saw it:
https://shumer.dev/something-big-is-happening
This essay was designed for those who are considering or actively using LLMs, so that they can better understand what they are and what they are not, how they are created, and how to think about their limitations.
Holy crap. Everyone's children should read this. . .
Not trivial.
Moore's law multiplied. . .
However, 53rd Chapter, this article provides all the reasons for following AI vs. the Amish way. . . A consolation is that anything requiring direct human intervention (e.g., digging ditches, carpentry, etc.) will be the last to be compromised.
Here is a bit of possibility that was revealed to Dannion Brinkley during his death experience in 1975: "The Internet is in the process of developing a global consciousness of its own. This is due to the transference of extreme emotions and opinions of its users being expressed via the technology. The intensity of these emotions is being encoded upon the routing system of the Web (AI) at such high levels that it is causing the Internet to spawn self-awareness." And there is more! Welcome to the Matrix, Neo. This information isn't his imagination talking.