Drew Kadel (drewkadel@social.coop)'s status on Saturday, 08-Apr-2023 01:24:06 JST:
My daughter, who has had a degree in computer science for 25 years, posted this observation about ChatGPT on Facebook. It's the best description I've seen:
Moon (moon@shitposter.club)'s status on Saturday, 08-Apr-2023 01:24:00 JST:
@PCOWandre @DrewKadel people who think ChatGPT thinks have failed the Reverse Turing Test.
Andre (pcowandre@jauntygoat.net)'s status on Saturday, 08-Apr-2023 01:24:02 JST:
@DrewKadel No introspection and only waiting to generate the next line of a conversation. Sounds like quite a few humans!
Disinformation Purveyor :verified_think: (thatguyoverthere@shitposter.club)'s status on Saturday, 08-Apr-2023 01:25:26 JST:
@Moon @PCOWandre @DrewKadel what is thinking though? I am starting to wonder if it isn't just predicting the next event. I walk down the stairs and my cat "thinks" I am going to the living room, so he barrels down to beat me to the sliding doors. He doesn't realize I'm going to the kitchen, so his actions are wrong, but he thought he knew what I was doing.
Disinformation Purveyor :verified_think: (thatguyoverthere@shitposter.club)'s status on Saturday, 08-Apr-2023 01:26:00 JST:
@Moon @DrewKadel @PCOWandre disclaimer: not sayin ChatGPT thinks
Moon (moon@shitposter.club)'s status on Saturday, 08-Apr-2023 01:30:48 JST:
@thatguyoverthere @DrewKadel @PCOWandre LLMs more or less just string language tokens together using a mathematical model that people find pleasing. Thinking is more closely related to reasoning than to consciousness. LLMs mostly don't reason, although there is some research suggesting that a small amount of reasoning is an emergent property, and there are a couple of modified LLMs that can recognize and do math problems.
Math problems actually help distinguish the two concepts. An LLM will give you an answer that sounds pleasing but may or may not be correct, because it's not actually reasoning. If it's correct, it's because the model consumed enough data related to that specific problem that the correct answer was statistically likely.
Roland Giersig :vftrek: (roland@giersig.net)'s status on Sunday, 09-Apr-2023 23:18:54 JST:
@Moon And how does ChatGPT differ, in that kind of "mathematical" thinking, from the way most people think about mathematics? 🤔
Yes, ChatGPT is built as a text predictor. That's very similar to how humans think and act. We are also association machines. We predict the future all the time. 🤷♂️
Disinformation Purveyor :verified_think: (thatguyoverthere@shitposter.club)'s status on Sunday, 09-Apr-2023 23:27:33 JST:
@roland @Moon @PCOWandre @DrewKadel There was a paper on the effects of GPS on the human brain that I read a while back and wish I could find again. It described the process of determining a route in our brains as heavily reliant on our ability to predict the future. According to the paper, things like traffic, construction, speed limits, etc. are taken into account to try to calculate each potential path, and each time our prediction doesn't align with the actual outcome, the brain recalculates to assess whether a change in route is required.
Roland Giersig :vftrek: (roland@giersig.net)'s status on Monday, 10-Apr-2023 05:08:02 JST:
@thatguyoverthere Well, most human sports rely heavily on our ability to predict the future. We have to mentally keep track of the spatial position of several objects and predict where those objects will be in the next few seconds.
If you want to feel that prediction engine in our heads in action, just close your eyes while walking and notice that you can go on for several seconds without the urge to open your eyes.
@PCOWandre @Moon @DrewKadel
ec670 (ec670@pawoo.net)'s status on Monday, 10-Apr-2023 08:54:14 JST:
@thatguyoverthere @roland @PCOWandre @Moon @DrewKadel INCREDIBLE! Scientists spend millions to prove the obvious.
Disinformation Purveyor :verified_think: (thatguyoverthere@shitposter.club)'s status on Monday, 10-Apr-2023 08:54:14 JST:
@ec670 @roland @PCOWandre @Moon @DrewKadel Yeah, I'm sure it wasn't cheap. It was interesting, but the larger point of the study was that using GPS may have a negative impact on gray matter in certain regions of the brain.
dew_the_dew :verified: (dew_the_dew@nicecrew.digital)'s status on Monday, 10-Apr-2023 10:23:33 JST:
LLMs can't think, but neither can HR catladies or medical billing specialists or chief diversity officers.
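Moon's point about answers that are "statistically likely" rather than reasoned can be made concrete with a toy sketch. Nothing below resembles ChatGPT's actual architecture; it is just a word-pair frequency counter, offered as the simplest possible stand-in for "pick whatever continuation the training text made most common", and the training text here is invented for illustration.

```python
# Toy illustration of "string tokens together by picking the statistically
# likely continuation". NOT how ChatGPT works; just the simplest stand-in:
# count which word most often follows each pair of words, then always emit it.
from collections import Counter, defaultdict

training_text = (
    "two plus two is four . "
    "two plus two is four . "
    "two plus three is six . "   # wrong arithmetic, but frequent in the data
    "two plus three is six . "
    "two plus three is five . "
)
tokens = training_text.split()

# follows[(w1, w2)] counts how often each word appears right after the pair (w1, w2).
follows = defaultdict(Counter)
for a, b, c in zip(tokens, tokens[1:], tokens[2:]):
    follows[(a, b)][c] += 1

def next_word(prompt):
    """Return the most frequent continuation of the prompt's last two words."""
    w1, w2 = prompt.split()[-2:]
    candidates = follows[(w1, w2)].most_common(1)
    return candidates[0][0] if candidates else "?"

print("two plus two is", next_word("two plus two is"))      # "four": right, because it was common
print("two plus three is", next_word("two plus three is"))  # "six": fluent but wrong, because the
                                                             # wrong answer was statistically likely
```

The first prompt comes back correct and the second comes back confidently wrong, for exactly the reason Moon gives: no arithmetic is ever performed, only a lookup of which continuation the data made most likely.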