LLMs *could* be cool, if people stopped focusing on applications that fall apart the moment you apply system 2 thinking. An LLM might look like a duck and quack like a duck, but it's actually just a wooden duck with a speaker in it, which can be fun to have around the house and all, but you'll be disappointed if you expect it to eat bread.
Why do so many people think it's acceptable not to have a system 2? Basically because system 2 shuts down under stress or low resources, and we live in a world that keeps everybody who does real work under a high level of artificial stress while denying them resources -- capitalism is keeping people from performing exactly the kinds of mental operations that purely-statistical systems like LLMs can't perform at all.
In other words: if you find that github copilot actually helps you code, that doesn't mean you should keep using copilot -- it means you're so burnt out that you can't think straight, & you need to take a long vacation.
Compare chatgpt to, like, Watson (which is like a wooden duck with a Cadbury Creme Egg machine in it instead of a speaker -- a Prolog-like formal inference engine).
Not that you should trust Watson, but at least it can *theoretically* *try* to check whether the things it says might be true.
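To make "formal inference engine" concrete: here's a minimal Prolog-flavored sketch in Python, with toy facts and rules I'm making up purely for illustration (nothing here reflects Watson's actual pipeline). The capability that matters is that it answers "does this follow?" rather than "does this sound right?":

```python
# A minimal sketch of Prolog-style forward chaining over made-up toy
# facts and rules; Watson's real internals are far more elaborate.

def saturate(facts, rules):
    """Apply every rule until no new facts appear (a fixed point)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, head in rules:
            # Try each subject we know anything about.
            for subj in {s for _, s in facts}:
                if all((p, subj) in facts for p in premises) \
                        and (head, subj) not in facts:
                    facts.add((head, subj))
                    changed = True
    return facts

facts = {("bird", "tweety"), ("penguin", "opus")}
rules = [
    (("penguin",), "bird"),       # every penguin is a bird
    (("bird",), "has_feathers"),  # every bird has feathers
]
known = saturate(facts, rules)

def entailed(claim):
    """True iff the claim is actually derivable from the facts and rules."""
    return claim in known

print(entailed(("has_feathers", "opus")))   # True: penguin -> bird -> feathers
print(entailed(("has_feathers", "bread")))  # False: no rule chain supports it
```

A purely statistical model has no analogue of `entailed` -- there's no step where it consults a store of facts and refuses to assert something unsupported.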
Purely statistical models are purely system 1, so even if one somehow reached human-level intelligence, it would be the intelligence of an extremely drunk & sleep-deprived person with severe untreated ADHD.