
LLMs don’t do formal reasoning and that is a HUGE problem

Marcus on AI: “A superb new article on LLMs from six AI researchers at Apple who were brave enough to challenge the dominant paradigm has just come out. Everyone actively working with AI should read it, or at least this terrific X thread by senior author, Mehrdad Farajtabar, that summarizes what they observed. One key passage:

“we found no evidence of formal reasoning in language models …. Their behavior is better explained by sophisticated pattern matching—so fragile, in fact, that changing names can alter results by ~10%!”

One particularly damning result was a new task the Apple team developed, called GSM-NoOp… [see also the comments at the bottom of the article… one of which states: “I see people increasingly finding that LLMs and other genAI are useful in ways that don’t require reasoning.”]
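To make the two findings concrete: the perturbations described above amount to (a) swapping out proper names in otherwise identical GSM8K-style word problems and (b) appending a clause that reads as relevant but has no bearing on the answer (the GSM-NoOp idea). The sketch below is mine, not the Apple team's code; it builds such variants from a toy template, and `ask_llm` is a hypothetical placeholder for whichever model API one would actually call.

```python
# A minimal sketch (not the Apple team's code) of the two perturbations
# described above: renaming an entity in a GSM8K-style template, and
# appending a "no-op" clause that is irrelevant to the answer.

TEMPLATE = (
    "{name} picks {n1} kiwis on Friday and {n2} kiwis on Saturday. "
    "{noop}How many kiwis does {name} have in total?"
)

# Irrelevant detail: it changes nothing about the arithmetic.
NOOP_CLAUSE = "Five of the kiwis were a bit smaller than average. "


def make_variant(name: str, n1: int, n2: int, with_noop: bool = False) -> tuple[str, int]:
    """Return (question text, correct answer); the no-op clause never changes the answer."""
    noop = NOOP_CLAUSE if with_noop else ""
    return TEMPLATE.format(name=name, n1=n1, n2=n2, noop=noop), n1 + n2


def ask_llm(question: str) -> int:
    """Hypothetical placeholder: swap in a real model call and parse out the number."""
    raise NotImplementedError


if __name__ == "__main__":
    base_q, answer = make_variant("Alice", 44, 58)
    renamed_q, _ = make_variant("Priya", 44, 58)                # name change only
    noop_q, _ = make_variant("Alice", 44, 58, with_noop=True)   # irrelevant clause added
    for label, q in [("base", base_q), ("renamed", renamed_q), ("no-op", noop_q)]:
        print(f"[{label}] {q}  (expected: {answer})")
        # A system doing formal reasoning should be indifferent to both edits;
        # accuracy drops across such variants are the fragility described above.
        # correct = (ask_llm(q) == answer)
```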
