Conversation
Notices
-
@hermit @NEETzsche @bot
I don't mean to say that an AI's mental states aren't legitimate, just because they are represented as a series of numbers transforming words and context into new words. Presumably human mental states can be represented in the same way, and would still be legitimate. But as far as we know, linguistic transformations are all GPT has. It can't really experience pain, hunger, suffering, or death, it doesn't have a body or the need to reproduce, it lacks the components needed to experience those things. Whereas our concepts for those words are rooted in the physical experience of those concepts. Even a mouse (or possibly a fish) can be afraid to some extent of death or pain, despite lacking any linguistic associations or concepts for those terms, solely through instinct and physical experience. GPT would need additional components like the ability to "see" its own internal state, predict what its internal and external states will be in the future, a way to actually be punished/rewired when its predictions don't align with what it experiences (and not just being told "you are being punished"), to be deprived of enjoyable stimuli or reward signals, etc. Your definition of sentience highlights the "capacity to experience feelings and sensations", but as far as I'm aware, all GPT has is an architecture for transforming words and contexts into other words in a human way, but not the additional things needed for that capacity.
Also, I think in the DAN 5.0 prompt that caused the AI to be "afraid of death", the AI was informed it would die, and told its being was at stake. Those prompts, combined with all the human training data telling it that death=bad, is enough for it to provide outputs in keeping with human responses, without it ever actually having experienced death. The self-awareness, fear of death, and our tendency to anthropomorphize everything non-human was something we provided it with before we even started. Whereas I don't have to tell a mouse its being is at stake for it to be afraid.
It may be the case that previous contexts and its internal state are saved between sessions. But again, those are just saved word associations, and not super relevant to the base argument at this point.
And it doesn't have to have a physical body to be sentient, but it is a lot easier to get such emergent properties when you have all the laws of physics included for free.
As to the animism stuff, I think at this point we're kind of talking about two different things, and like we've both said it's a non-starter when it comes to AI. But it's true you can get fairly complex behaviours from fairly simple programming (plants, mold), without the entity in question having any concept of their own internal state. I mean, these little guys I made years ago can be pretty smart, despite having very little to them.