Is anyone surprised? By definition, an LLM can't be 100% correct, and LLM hallucination poses significant challenges to generating accurate and reliable responses. "ChatGPT Answers Programming Questions Incorrectly 52% of the Time: Study." To make matters worse, programmers in the study would often overlook the misinformation. https://gizmodo.com/chatgpt-answers-wrong-programming-openai-52-study-1851499417
@nixCraft GitHub has started adding recommended fixes to security issues found by CodeQL and they're apparently generated by GitHub Copilot.
The one it found recently in my repo was an unsanitized string being logged with NLog. The recommendation was to do a String.Replace("\n", ""), which is fine, but it didn't really explain why that specific fix was appropriate or sufficient.
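For context, the class of finding described above is log injection: attacker-controlled text containing newlines can split one log call into multiple forged entries. A minimal sketch of the usual mitigation, in Java rather than C#/NLog, with illustrative names that are not from CodeQL or Copilot:

```java
// Illustrative sketch of newline sanitization before logging.
// Stripping CR/LF prevents user input from forging extra log lines;
// it does not address other concerns (e.g. control characters),
// which is why "sufficient" is a fair question to ask of the fix.
public class LogSanitizer {
    // Hypothetical helper; real code would call this before the log statement.
    static String sanitize(String userInput) {
        if (userInput == null) return "";
        return userInput.replace("\r", "").replace("\n", "");
    }

    public static void main(String[] args) {
        // A payload that would otherwise forge a second, fake log entry.
        String attack = "alice\n2024-01-01 INFO admin login succeeded";
        System.out.println("user=" + sanitize(attack));
    }
}
```

The same idea in C# is the String.Replace call the bot suggested; the point the reply makes is that the tool proposed the mechanics without explaining the threat model.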
@nixCraft Finally, some numbers to back up what I've been feeling all along. I genuinely find very little use for LLMs in my work as a software dev: it often spits out complete bullshit, and/or sends my own unaltered code back at me with "i made these changes".
@nixCraft It's sad to see so many people trying to justify the "toy" by blaming the people who asked the question for the AI's wrong answer.