I remember us learning about that in middle school. There has probably been a lot of content in text/pamphlet/book form about the dangers. These models likely have all that material as training data, and in their attempt to make "new" recipes, the scoring/word prediction turns those old phrases into content.
It's a shining example of the fact that all this generative AI is not, and never will be, smart or intelligent. It kludges stuff together that approximates things humans have ranked as well-worded paragraphs. It's wrong answers, almost always.