> OpenAI should be well aware that LLMs are not able to accurately report *why* they did something. They can only produce post-hoc rationalizations based on their context window, including their own output.
That actually sounds pretty human-like to me.