I wish we had a good metaphor for what LLMs do that doesn't involve pretending they have some intent
like a lot of the harsher criticism is phrased as "it makes things up", which implies malice or at least intent, and there isn't any
there's "it draws from a distribution fitted to its training data", which is true but not really illustrative in any way
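for what it's worth, the "draws from a distribution" part is literal: at each step the model assigns a score to every token in its vocabulary, and the next token is sampled from the probabilities those scores imply. a minimal numpy sketch of that step (the function name and the toy logits are made up for illustration):

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Draw one token id from the model's predicted distribution.

    `logits` are the raw scores the network assigns to each vocabulary
    entry given the context; softmax turns them into probabilities, and
    the next token is literally a random draw from that distribution.
    """
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# toy example: a 5-token vocabulary where token 2 is most likely
logits = np.array([0.1, 1.0, 3.0, 0.5, -1.0])
print(sample_next_token(logits))  # usually 2, sometimes something else
```

the "sometimes something else" is the whole point: nothing in that loop intends anything, it just samples, and sometimes the sample is a plausible-looking falsehood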