@arstechnica I'm very curious where this discussion goes, because I don't see any way to remove it reliably in every case. Perhaps we could try to give the model some "judgment" by having it parse and classify its own output, but even that would carry significant error.