Why AI can't spell "strawberry".
techcrunch.com/2024/08/27/why-ai-cant-spell-strawberry/
I've said before that what we are currently calling AI models (really just Large Language Models, or LLMs) don't understand anything at all except language. They're language models. That's what they do, and it's all they do.
Except that's not quite true, because they don't understand language in any real way either.
>The failure of large language models to understand the concepts of letters and syllables is indicative of a larger truth that we often forget: These things don't have brains. They do not think like we do. They are not human, nor even particularly humanlike.
>Most LLMs are built on transformers, a kind of deep learning architecture. Transformer models break text into tokens, which can be full words, syllables, or letters, depending on the model.
>"LLMs are based on this transformer architecture, which notably is not actually reading text. What happens when you input a prompt is that it's translated into an encoding," Matthew Guzdial, an AI researcher and assistant professor at the University of Alberta, told TechCrunch. "When it sees the word 'the,' it has this one encoding of what 'the' means, but it does not know about 'T,' 'H,' 'E.'"
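The tokenization the quote describes can be sketched with a toy example. This is not any real LLM's tokenizer; the vocabulary and IDs below are made up purely to illustrate why letters are invisible to the model:

```python
# Toy subword vocabulary -- entirely hypothetical, for illustration only.
# A model trained on these integer IDs sees "strawberry" as two opaque
# numbers, never as a sequence of letters, so "how many r's are in
# strawberry?" has no direct answer in its input representation.
VOCAB = {"straw": 101, "berry": 102, "the": 7}

def tokenize(text):
    """Greedy longest-match split of text into known subword tokens."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest piece first
            if text[i:j] in VOCAB:
                tokens.append(VOCAB[text[i:j]])
                i = j
                break
        else:
            raise ValueError(f"no token for {text[i:]!r}")
    return tokens

print(tokenize("strawberry"))  # [101, 102]
print(tokenize("the"))         # [7] -- one ID, not 'T', 'H', 'E'
```

Real tokenizers (byte-pair encoding and similar) are far more elaborate, but the effect is the same: the model's input is a stream of token IDs, not characters.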
Typeahead with delusions of grandeur.
-
@Aether It would be better to call them brute-force pattern recognition models.