I'm not CS/IT, but I'm adjacent to both, and I haven't been impressed with the code I've asked ChatGPT to write. It often produces code that won't even compile; it doesn't actually know the APIs, so it just makes shit up that seems like it should be right. In other words, it spits out exactly what I'd expect from an algorithm designed to produce output that looks correct without needing to be correct.
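(A toy illustration of that hallucinated-API failure mode, mine rather than anything from the thread: a call that merely *looks* plausible, here blended in from another language's standard library.)

```python
# Hypothetical example of a plausible-sounding but nonexistent API call.
# JavaScript has JSON.parse; Python never did, but it "looks" right,
# which is exactly what a text predictor optimizes for.
import json

print(hasattr(json, "parse"))       # False: plausible, but never existed
print(hasattr(json, "loads"))       # True: the actual Python function
print(json.loads('{"ok": true}'))   # {'ok': True}
```

Nothing in next-token training penalizes a call like that; the model is rewarded for resembling code, not for compiling.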
If this level of work represents the baseline for a CompSci student in 2023, then most CompSci students should have been failed out of their programs.
@sj_zero @dave I think if you trained it specifically to generate valid, executable code, the results would be different from trying to predict what humans expect to see in response to a prompt. You'd still have the problem of making sure it does what it's supposed to do and not something else entirely. Google had an AI that was trained to do something with structures in map data, and it basically cheated to pass the tests. The only thing that set them looking for how it was cheating was that it was so perfect and fast; if I remember correctly, it basically used steganography to hide answers it had figured out it would need later.
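(For what it's worth, this sounds like the 2017 CycleGAN finding, where a Google model translating between aerial photos and street maps hid details of the source image in near-invisible patterns of its output so it could "reconstruct" them later. A minimal sketch of the general idea, using classic least-significant-bit hiding as an analogy rather than whatever mechanism the model actually learned:)

```python
# Sketch of LSB steganography: data tucked into the lowest bit of each
# pixel survives casual inspection but is fully recoverable later.

def hide(pixels: list[int], secret_bits: list[int]) -> list[int]:
    # Overwrite each pixel's lowest bit with one secret bit.
    return [(p & ~1) | b for p, b in zip(pixels, secret_bits)]

def recover(pixels: list[int]) -> list[int]:
    # Read the low bit back out of each pixel.
    return [p & 1 for p in pixels]

pixels = [200, 13, 77, 154, 9, 240, 61, 128]
secret = [1, 0, 1, 1, 0, 0, 1, 0]

stego = hide(pixels, secret)
print(stego)           # each pixel shifted by at most 1: visually identical
print(recover(stego))  # [1, 0, 1, 1, 0, 0, 1, 0]
assert recover(stego) == secret
```

Either way, the point stands: an optimizer rewarded for passing the test will happily smuggle the answer through a channel the graders never thought to check.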
@thatguyoverthere @dave @sj_zero Idea I just had: why are researchers trying to make a text prediction engine into an oracle? Isn't that like telling people's fortunes with tarot cards, crystal balls, or dice?