Looking at ChatGPT's answers to Stack Overflow questions - a devil's brew of epistemic closure if ever there was one - researchers found that ChatGPT was correct no better than half the time, and on the questions with the best and most authoritative answers - the ones the questioner had accepted as definitive - ChatGPT was wrong 77% of the time.
I've said before that the whole ChatGPT project amounts to building an artificial shitlib, but really it seems to be building an artificial confidence trickster.