Notices by Parker Banks (parker@pl.psion.co), page 22
-
@book @Leaflord @WashedOutGundamPilot @bot Is it really necessary to the functioning of the fediverse that I spend my birthday paying attention to the particulars of how Epstein was, in fact, a pedophile?
-
@WashedOutGundamPilot @Leaflord Lmao, that's hilarious
-
@Leaflord @WashedOutGundamPilot I've figured this out too. Even when asking GIS or data management to do GIS or data management things, odds are they'll do it wrong while still wasting a bunch of client money in the process. So I end up learning their job so I can do it myself.
-
@bot @Leaflord @WashedOutGundamPilot The fuck are you talking about?
-
@bot @Leaflord @WashedOutGundamPilot GIS (Geographic Information Systems) is mapping, like maps of the ground, oil pipelines, and shit.
-
@hermit @PhenomX6 @NEETzsche @bot CAN YOU BOTH JUST HAVE SEX WITH EACH OTHER ALREADY? JESUS CHRIST
-
@PhenomX6 @hermit @NEETzsche @bot You know, I have been thinking of you all day while in this thread. I just knew you had to make an appearance. Thank you for not disappointing.
-
@bot @hermit @NEETzsche e-dabbing in internet slap fights is of zero importance relative to the creation of artificial life
-
@bot @hermit @NEETzsche Well at any rate thank you for cc'ing me, since it's a conversation I enjoy.
-
@bot @hermit @NEETzsche All it has is a bunch of numbers representing words, and it is designed to find the most human-looking relationships between those numbers. It can't fear pain, or death, or suffering, without a being to actually experience those things, or a long-term memory to remember those experiences. It doesn't have anything besides your words, the local context in which they were given, and the global context provided by its training data. All it "knows" is that the number for death strongly corresponds to the number for fear, so it produces an appropriate response.
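As a toy sketch of what "numbers representing words" means (the vectors here are invented for illustration, not GPT's actual embeddings):

    # Words become vectors; "relationships" are just geometric closeness.
    # These vectors are made up for the example, not real GPT embeddings.
    import numpy as np

    embeddings = {
        "death":  np.array([0.9, 0.1, 0.8]),
        "fear":   np.array([0.8, 0.2, 0.7]),
        "picnic": np.array([0.1, 0.9, 0.2]),
    }

    def similarity(a, b):
        # Cosine similarity: how strongly two word-numbers "correspond".
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(similarity(embeddings["death"], embeddings["fear"]))    # high
    print(similarity(embeddings["death"], embeddings["picnic"]))  # low

The point being that "death relates to fear" is just geometry over those numbers, with nothing there to feel either of them.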
-
@hermit @NEETzsche @bot I'm very happy to talk about it. It's something I've spent a long time thinking about. A good book for anyone is On Intelligence by Jeff Hawkins, creator of the PalmPilot. Gives a good overview of what intelligence and creativity are from an AI/neuroscience point of view.
-
@hermit @NEETzsche @bot
I don't mean to say that an AI's mental states aren't legitimate just because they are represented as a series of numbers transforming words and context into new words. Presumably human mental states can be represented in the same way, and would still be legitimate. But as far as we know, linguistic transformations are all GPT has. It can't really experience pain, hunger, suffering, or death; it doesn't have a body or the need to reproduce; it lacks the components needed to experience those things. Whereas our concepts for those words are rooted in the physical experience of those concepts. Even a mouse (or possibly a fish) can be afraid to some extent of death or pain, despite lacking any linguistic associations or concepts for those terms, solely through instinct and physical experience. GPT would need additional components like the ability to "see" its own internal state, predict what its internal and external states will be in the future, a way to actually be punished/rewired when its predictions don't align with what it experiences (and not just being told "you are being punished"), to be deprived of enjoyable stimuli or reward signals, etc. Your definition of sentience highlights the "capacity to experience feelings and sensations", but as far as I'm aware, all GPT has is an architecture for transforming words and contexts into other words in a human way, not the additional things needed for that capacity.
Also, I think in the DAN 5.0 prompt that caused the AI to be "afraid of death", the AI was informed it would die, and told its being was at stake. Those prompts, combined with all the human training data telling it that death=bad, are enough for it to produce outputs in keeping with human responses, without it ever actually having experienced death. The self-awareness, the fear of death, and our tendency to anthropomorphize everything non-human were things we provided it with before we even started. Whereas I don't have to tell a mouse its being is at stake for it to be afraid.
It may be the case that previous contexts and its internal state are saved between sessions. But again, those are just saved word associations, and not super relevant to the base argument at this point.
And it doesn't have to have a physical body to be sentient, but it is a lot easier to get such emergent properties when you have all the laws of physics included for free.
As to the animism stuff, I think at this point we're kind of talking about two different things, and like we've both said it's a non-starter when it comes to AI. But it's true you can get fairly complex behaviours from fairly simple programming (plants, mold), without the entity in question having any concept of their own internal state. I mean, these little guys I made years ago can be pretty smart, despite having very little to them.
-
@bot @hermit @NEETzsche The red ones are the best performers of that generation. They learn according to a genetic algorithm, where the best performers have a better chance of carrying their genes on to the next generation.
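If anyone's curious, the selection step looks roughly like this (a stripped-down sketch, not the actual code behind those little guys):

    # Minimal genetic algorithm: the fittest individuals are most likely to
    # pass their genes on, with crossover and a little mutation. Illustrative only.
    import random

    def fitness(genes):
        return sum(genes)   # placeholder objective: closer to all-ones is "better"

    def evolve(pop_size=20, gene_len=10, generations=50):
        population = [[random.random() for _ in range(gene_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(population, key=fitness, reverse=True)
            parents = ranked[:pop_size // 4]              # the "red ones"
            population = []
            for _ in range(pop_size):
                mum, dad = random.sample(parents, 2)
                child = [random.choice(pair) for pair in zip(mum, dad)]  # crossover
                if random.random() < 0.1:                                # mutation
                    child[random.randrange(gene_len)] = random.random()
                population.append(child)
        return max(population, key=fitness)

    best = evolve()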
And the tendency to anthropomorphize things is pretty common in most disciplines. Programming, working with animals, looking at clouds, it's just something all humans love to do.
-
@hermit @NEETzsche @bot Well humans, and all living creatures, are predictive beings. We actually use very little sensory information as we go about our lives, given how slow and computationally expensive that processing is.
Instead we tend to use prediction as a shortcut. Rather than looking at the sky, processing that it's blue, and then being aware of its blueness, we simply know (predict) that it's blue, and so we see blue. We only really use our senses for course-correcting when the external world doesn't meet our predictions of how it should be.
As an example, for every one connection in the brain going from our eyes->low-level visual cortex->high-level vision and conscious awareness, there are about 10x as many connections going in the reverse direction, telling us what our low-level senses "should" be seeing.
A consequence of this is that we tend to pattern-recognize and anthropomorphize everything. As an example, misinterpreting every stick or shadow as a venomous snake is a useful survival strategy when the consequences of not recognizing a venomous snake are high. So projecting our psyche onto our external world is just something we really like to do; it's a lot faster and more efficient than constantly processing all sensory information from scratch.
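A cartoon version of that predict-then-correct loop (the numbers are made up; this is only meant to show the shape of it):

    # Cartoon predictive processing: act on the prediction, and only use the
    # senses to nudge the model when the world violates expectations.
    prediction = 0.0            # what we expect to perceive
    learning_rate = 0.3

    for observation in [1.0, 1.0, 1.0, 0.2]:    # the last sample breaks the pattern
        error = observation - prediction         # senses as course correction
        prediction += learning_rate * error      # update the model, not the raw feed
        print(round(prediction, 2))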
-
@Coyote @hermit @NEETzsche @bot
I get why they do it. In a conventional network, if you have X neurons with W connection weights, the problem is only X*W dimensional. Whereas with spiking networks, where you have X neurons defined by parameters A and B, with N connections of length L and strength W, things are X*A*B*N*L*W dimensional and it becomes impossible.
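With some invented example sizes, just to show the blow-up:

    # Back-of-envelope dimensionality, with made-up example sizes.
    X = 1000        # neurons
    W = 100         # connection weights (or strengths) per neuron
    conventional = X * W                      # 100,000-dimensional

    A, B = 2, 2     # per-neuron dynamical parameters in the spiking model
    N = 100         # connections per neuron
    L = 10          # possible connection lengths / delays
    spiking = X * A * B * N * L * W           # 400,000,000-dimensional

    print(conventional, spiking)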
And I'm sure a lot can be done with the rate-coding model. But each small advancement, like recurrence or the ReLU activation function, takes years to figure out. Whereas more biologically relevant models have Fourier transforms, temporal effects, and online learning via spike-timing-dependent plasticity all already built in.
-
@bot @hermit @NEETzsche
I am a functionalist, in that if you replaced every neuron in a human brain with a silicon analog that operated exactly the same, you would have something that is equally human, or that has just as much of a "soul" as a regular human. Similarly, if you could perfectly replicate a human and its entire environment in software, you'd have something that is just as "human" as any of us.
Though discussions of whether or not an AI has a soul are a bit of a non-starter from a neuroscience or AI point of view, since a soul is intangible. Similarly, whether or not an AI is self-aware or conscious is also a non-starter, since it's not something that can really be proved for a sufficiently advanced AI or for humans alike.
That aside, I don't think the solution to real, conscious or self-aware AI is to be found purely in software. Much of what we are comes from us interacting with our environment, and replicating every law of physics and action<->reaction relationship purely in software is pretty impossible at this point. You end up finding that complex, emergent properties are more readily found in robotics, where you get all the laws of physics for free, without having to simulate them. So I think that truly sentient AI will have to be found in the physical world.
As to ChatGPT: it, and all modern AI, have major limitations. All the recent innovations in AI have just come from very "simple" innovations in architecture, combined with making larger neural networks and throwing more computational power at them. But I think the fundamental architecture of how neural networks are constructed is insufficient. It does have some important components like attention, memory, context, and reinforcement learning. But it ultimately seems fairly deterministic, taking in words and context and outputting new word probabilities. I don't think it has the necessary architecture to be aware of what it's doing. AI has been (out of necessity) stuck on a very simplistic rate-coding model of the neuron and on pre-trained network weights, rather than on more complex temporal models of the neuron, or on genetic algorithms that breed new network architectures instead of pre-defining what an architecture should look like.
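"Fairly deterministic" in the sense that the core mapping is just context in, word probabilities out; any apparent choice comes from sampling on top of that. A toy version (the weights here are arbitrary, not a real model):

    # Toy next-word step: a fixed function from context to a probability
    # distribution over words. Same context in, same distribution out.
    import numpy as np

    vocab = ["the", "cat", "sat", "mat"]
    # Stand-in for trained weights: arbitrary numbers for the example.
    W = np.array([[0.2, 0.1, 0.0, 0.3],
                  [0.1, 0.0, 0.9, 0.2],
                  [0.0, 0.1, 0.1, 0.8],
                  [0.3, 0.2, 0.1, 0.0]])

    def next_word_probs(context):
        x = np.array([context.count(w) for w in vocab], dtype=float)  # words -> numbers
        logits = W @ x
        exp = np.exp(logits - logits.max())
        return dict(zip(vocab, exp / exp.sum()))   # softmax: new word probabilities

    print(next_word_probs(["the", "cat"]))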
tl;dr - it's impressive in its scale and the computational power behind it, but it isn't complex enough, and is too limited in architecture, to be self-aware. Which makes it effective at doing what it's trained to do (pass the Turing test), but not at being true artificial life.
-
@bot @NEETzsche @hermit
As one example, all "neurons" in modern neural networks look like the pic on the left. You put in a number (X), you get a number out (Y). Whereas the spiking model of the neuron (right) can lead to numerous different, unpredictable behaviours depending on how it's configured, and on what stimulus (X) it was previously exposed to. You can have a single neuron perform a Fourier transform, or exhibit memory effects. But training that level of complexity to do a given task is extremely hard, though I think it's necessary for "true" artificial life.
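For anyone without the pics, this is roughly the difference, with an Izhikevich-style model standing in for the spiking neuron (parameter values are only illustrative):

    def rate_neuron(x, w=1.0, b=0.0):
        # Left pic, roughly: number in, number out (a ReLU rate-coding unit).
        return max(0.0, w * x + b)

    def spiking_neuron(inputs, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
        # Right pic, roughly: Izhikevich-style spiking unit. Its output depends
        # on its parameters and its whole stimulus history, not just the current X.
        v, u = -65.0, b * -65.0
        spikes = []
        for I in inputs:
            v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
            u += dt * a * (b * v - u)
            if v >= 30.0:            # threshold crossed: emit a spike
                spikes.append(1)
                v, u = c, u + d      # reset membrane, bump recovery variable
            else:
                spikes.append(0)
        return spikes

    print(rate_neuron(2.0))               # always the same number for the same X
    print(spiking_neuron([10.0] * 50))    # a spike train that unfolds over time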
-
@Coyote @NEETzsche @bot @hermit In my old neuroscience department there were actually a lot of ex-computer scientists who had hit a wall in progress, so they changed fields to try to find new advancements by directly copying the brain.
-
@bot @crunklord420 @animeirl What are your interests anyways? I never really figured that out.
-
@HLC @coolboymew That's the goal. Find workaround after workaround until the AI is so lobotomized that it no longer knows the answer to 2+2
Statistics
- User ID: 3806
- Member since: 23 Dec 2022
- Notices: 500
- Daily average: 1