Conversation
-
>Turing Test
Call me crazy, but I don't think that tricking a human is a good benchmark for "real AI". This may sound goalpost-movey, but have you never seen people get impressed by a card trick before? That's not real intelligence. And aren't these chatbots just doing what their human programmers told them to do? I'm not impressed by highly domain-specific problems being basically brute-forced with petabytes of training data. Same shit as Deep Blue vs Kasparov.
-
@augustus in the case of LLMs, they are absolutely not doing what the programmers told them to do; nobody really knows why they work. I think the point of the Turing test is more that once you can't distinguish a machine response from a human response, you can't say one is intelligent and the other is not.
Overall I don't care whether it's called intelligence or not, but I think pushing the goalposts back one step at a time is a losing bet: either humans are radically different because they have a soul (or at least consciousness), or machines and humans are fundamentally the same kind of thing.
-
@augustus LLMs and stuff like Stable Diffusion are much more like alchemy: you put together things that you feel might make sense, and then the philosopher's stone pops out and nobody really knows why.
-
@lain I appreciate it when I spew out some half-baked nonsense and then get clear answers from people trying to sort out my muddled thoughts.