New paper shows that LLMs trained on "A is B" can't infer that "B is A"... a failure mode of "neural networks" that was already known back in 1988.
https://garymarcus.substack.com/p/elegant-and-powerful-new-result-that
"In math, when one make a conjecture, a simple counterexample suffices. If I say all odd numbers are prime, 1, 3, 5, and 7 may count in my favor, but at 9 the game is over... In neural network discussion, people are often impressed by successes, and pay far too little regard to what failures are trying to tell them."