-
@bot @hermit @NEETzsche
I am a functionalist, in that if you replaced every neuron in a human brain with a silicon analog that operated exactly the same, you would have something that is equally human, or that has just as much of a "soul" as a regular human. Similarly, if you could perfectly replicate a human and its entire environment in software, you'd have something just as "human" as any of us.
Though discussions of whether or not an AI has a soul are a bit of a non-starter from a neuroscience or AI point of view, souls being intangible. Similarly, whether or not an AI is self-aware or conscious is also a non-starter, since it's not something that can really be proven for sufficiently advanced AIs and humans alike.
That aside, I don't think the solution to real, conscious or self-aware AI is to be found purely in software. Much of what we are comes from interacting with our environment, and replicating every law of physics and every action<->reaction relationship purely in software is practically impossible at this point. You end up finding that complex, emergent properties arise more readily in robotics, where you get all the laws of physics for free, without having to simulate them. So I think that truly sentient AI will have to be found in the physical world.
As to ChatGPT: it, and all modern AI, have major limitations. The recent innovations in AI have come from fairly "simple" innovations in architecture, combined with building larger neural networks and throwing more computational power at them. But I think the fundamental architecture of how neural networks are constructed is insufficient. It does have some important components: attention, memory, context, reinforcement learning. But ultimately it seems fairly deterministic, taking in words and context and outputting new word probabilities, and I don't think it has the necessary architecture to be aware of what it's doing. AI has been (out of necessity) stuck on a very simplistic rate-coding model of the neuron and on pre-trained network weights, rather than on more complex temporal models of the neuron, or on genetic algorithms that breed new network architectures instead of pre-defining what an architecture should look like.
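To make the rate-coding point concrete, here's a minimal sketch in Python (not how any particular framework implements it): a standard artificial "neuron" is a stateless function, a weighted sum pushed through a fixed nonlinearity, so the same input always gives the same output.

    import numpy as np

    def rate_neuron(x, w, b):
        # Stateless rate-coded unit: output depends only on the current input.
        # Same x, w, b always produce the same y; no internal dynamics, no memory.
        return np.tanh(np.dot(w, x) + b)

    x = np.array([0.5, -1.0, 2.0])   # inputs (X)
    w = np.array([0.8, 0.3, -0.5])   # learned weights
    y = rate_neuron(x, w, b=0.1)     # output (Y)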
tl;dr - the scale and computational power behind it are impressive, but it isn't complex enough, and is too limited in architecture, to be self-aware. That makes it effective at doing what it's trained to do (pass the Turing test), but not at being true artificial life.
-
@bot @NEETzsche @hermit
As one example, all "neurons" in modern neural networks look like the pic on the left: you put in a number (X), you get a number out (Y). Whereas the spiking model of the neuron (right) can produce numerous different, unpredictable behaviours depending on how it's configured and what stimulus (X) it was previously exposed to. A single neuron can perform a Fourier transform or exhibit memory effects. Training that level of complexity to do a given task is extremely hard, but I think it's necessary for "true" artificial life.
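For a concrete (if simplified) sketch of the spiking side, here's the Izhikevich (2003) neuron model in Python; the parameters a, b, c, d select qualitatively different firing behaviours (regular spiking, bursting, chattering, etc.) even for the same input:

    import numpy as np

    def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5):
        # Izhikevich model:
        #   v' = 0.04*v^2 + 5*v + 140 - u + I
        #   u' = a*(b*v - u)
        #   on spike (v >= 30 mV): v <- c, u <- u + d
        v, u = -65.0, b * -65.0
        spike_times = []
        for t, i_t in enumerate(I):
            v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_t)
            u += dt * a * (b * v - u)
            if v >= 30.0:                 # spike: reset the membrane state
                spike_times.append(t * dt)
                v, u = c, u + d
        return spike_times

    # Same constant input, different parameters -> different firing patterns.
    I = np.full(2000, 10.0)
    print(izhikevich(I))                   # regular spiking
    print(izhikevich(I, c=-50.0, d=2.0))   # chattering/bursting

Because v and u persist between time steps, the neuron's response depends on its input history, which is exactly the memory effect the stateless rate-coded unit can't express.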
-
The important part is that hermit was btfo (again).
-
@hermit @NEETzsche @bot I'm very happy to talk about it. It's something I've spent a long time thinking about. A good book for anyone is On Intelligence by Jeff Hawkins, creator of the PalmPilot. Gives a good overview of what intelligence and creativity are from an AI/neuroscience point of view.
-
@parker @NEETzsche @bot You actually put a lot of effort into these posts and I think they deserve a thorough read and a proper response, one which I'm now not in a position to provide. I'll return to this later, in order to do so. Thank you.
-
Ok, but you do agree that AI as it exists right now does not have feelings or emotions, and doesn't suffer or have any innate sense of self-preservation, right? It basically does what you tell it or train it to do.
-
@bot @hermit @NEETzsche All it has is a bunch of numbers representing words, and it is designed to find the most human-looking relationships between those numbers. It can't fear pain or death, or suffer, without a being to actually experience those things, or a long-term memory to remember those experiences. It doesn't have anything besides your words, the local context in which they were given, and the global context provided by its training data. All it "knows" is that the number for death corresponds strongly to the number for fear, and it produces an appropriate response.
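A toy Python sketch of what "the number for death corresponds to the number for fear" means (the vectors here are made up for illustration; real models learn embeddings with hundreds of dimensions from their training data):

    import numpy as np

    # Hypothetical 3-d "word numbers"; real embeddings are learned, not hand-set.
    emb = {
        "death":  np.array([0.9, 0.1, 0.7]),
        "fear":   np.array([0.8, 0.2, 0.6]),
        "picnic": np.array([-0.5, 0.9, 0.1]),
    }

    def cosine(a, b):
        # Cosine similarity: how strongly two word-vectors "correspond".
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    print(cosine(emb["death"], emb["fear"]))    # high: the model links the concepts
    print(cosine(emb["death"], emb["picnic"]))  # low: weak association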
-
@bot @hermit @NEETzsche e-dabbing in internet slap fights is of zero importance relative to the creation of artificial life