Conversation
-
LS (lain@lain.com)'s status on Wednesday, 02-Aug-2023 23:31:26 JST LS Somewhat related thought:
Being able to explain all the observations is not what decides whether an explanation is true. For example, many weird things about the moon (its size, its distance from the earth, its density) can be explained very well by positing that the moon is an alien spacecraft.
Similarly, nearly everything can be fully explained by claiming that it's the will of god. Still, these explanations are unlikely to be seen as the truth by most, and the reason is, i think, that people think the prior probabilities of these explanations are so low that they can be dismissed out of hand.
But where can one get those priors from? It might be that this road of thinking leads to the thought that knowledge is actually impossible.
I wonder if this is just an informal way to get to Wolpert's "No Free Lunch" theorem, which leads to similar conclusions. -
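The role of the prior here can be made concrete with a toy Bayes calculation (a hypothetical sketch; all the numbers below are invented for illustration): even if the "alien spacecraft" hypothesis explains the observations perfectly, a small enough prior keeps its posterior negligible.

```python
# Toy Bayesian update over two hypotheses for the moon's odd properties.
# All probabilities are invented for illustration.
prior = {"natural": 0.999999, "alien": 0.000001}

# Likelihood of the observations (size, distance, density) under each
# hypothesis: the "alien" one explains them perfectly, by construction.
likelihood = {"natural": 0.01, "alien": 1.0}

# Bayes' rule: posterior(h) = prior(h) * likelihood(h) / evidence
evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

print(posterior)  # "alien" ends up around 1e-4 despite a perfect fit
```

The question in the thread is exactly where numbers like `prior["alien"]` come from; Bayes' rule only propagates them, it doesn't justify them.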
cell (cell@pl.ebin.zone)'s status on Wednesday, 02-Aug-2023 23:47:00 JST cell @lain somewhat tangentially related: for the past year or so i've been detaching preconceived meanings from things, and started (i hope) evaluating and perceiving them just as they are. no more "good" or "bad", no more kneejerk judgements or whatnot. still a long journey but it's leading to some interesting changes in my perception... funnily enough, the preconceptions were originally gained by applying models based on prior knowledge to the things i observe
...can i ever be neutral and unbiased if this is the case? -
cell (cell@pl.ebin.zone)'s status on Wednesday, 02-Aug-2023 23:49:55 JST cell @lain okay i'd rather phrase it like this much easier on the brain:
grug see cave painting, grug learn from shaman. grug take knowledge and apply to what grug see, but grug have bias due to grug mental model. grug brain divide into categories, "good" "bad" "yummy" "bleh", but due to grug a posteriori from cave painting and shaman grug never able to not have bias, grug forever have model on model on model on model... -
LS (lain@lain.com)'s status on Wednesday, 02-Aug-2023 23:49:55 JST LS @cell that's not easier at all -
cell (cell@pl.ebin.zone)'s status on Wednesday, 02-Aug-2023 23:52:40 JST cell @lain i was supposed to eat chips, drink beer and watch comfy slice of life tonight, why are you doing this to me -
l (lebronjames75@shitposter.club)'s status on Thursday, 03-Aug-2023 00:20:35 JST l @lain
"Will of God" - i think the "will of god" explanation is equivalent to the matrix explanation of the world. maybe it's true, and i cannot disprove it - but it's not an in-universe explanation. It's not an explanation that fits with observable phenomena in-universe, and it doesn't explain the in-universe "why did it come to be there?"
"There's a lemon on the table." "God willed it to be so." ok, but I physically put it there in-universe. The in-universe explanation is that i wanted to display that lemon on the table. I think this is the basis on which god/matrix explanations can be dismissed: by specifying the scope the explanations are operating in. Any miracle by god, or temporary change in the code of the matrix, comes from "outside" the universe. This could also be used to explain away history (rather, signs of history, e.g. craters), but i dismiss that: "10000s of miracles just to pretend there is an observable historical aftermath, just to prank you". [coincidentally, having just relistened to IHNMAIMS, this behaviour seems very AM-like, lol]
I think the No Free Lunch theorem is a bit unrelated, because it makes an example based on a digital understanding of the world (valid for maths and similar: today is either 0 or 1, and exactly 24 hours later it's either 0 or 1 at random). In an analog reality, it's not "my prediction was 100% wrong" or "100% right"; it's most often "i was Y% (this number does not exist) in the right ballpark, but off here by X amount", or "fuck, i was Z% correct". I'd have to ponder on this a bit to figure out if i'm even remotely correct in this paragraph's implications. (aka literally talking out of my ass)
I can't phrase this final paragraph well enough; something's wrong with it, but i'm still keeping it. If you can fix or denounce my thought (patterns) there into something more coherent, i'd be happy:
"Moon is an Alien Spacecraft" - We have lots of data to counter this assumption: the existence of every other moon, having been to our Moon, having seen no alien activity from the moon. There's lots of strong evidence against this claim, and very little weak evidence for it. BUT - "What if absolute knowledge is unknowable?" => this creates another scale. Where can i have absolute knowledge? Inside my head, where i create my knowledge, i have absolute knowledge. So the scales would be "in my head", "observable in-universe", and "not observable in-universe", and each one of these has different implications. And they overlap too lmfao! In my head is everything I think and observe and deduce. This is the only absolute.
Observable in-universe is everything i observe(d) and have shoved into my head.
Not observable in-universe: from god to the teapot in space. Both go in here, as they are never-provable-to-me in this current state. If i become schizophrenic, the teapot may move into the observable-in-universe and in-my-head scales. -
受不了包 (shibao@misskey.bubbletea.dev)'s status on Thursday, 03-Aug-2023 00:38:53 JST 受不了包 @lain@lain.com pretty sure this is just postmodernism: for any limited set of data, there is an infinite number of interpretations of that exact same set of data. you can say that only one interpretation is true, or that multiple interpretations are true (the earth was created by evolution and it was also god's will, by using evolution, etc), or even that all interpretations are equally true (true postmodernism).
i think the way out of postmodernism seems difficult but is pretty straightforward: once you start introducing functional criteria, the number of possible interpretations dramatically shrinks. the functional criteria matter a lot in this case, but they always matter, so i don't think there's any big change there
i think the no free lunch theorem makes a different claim: that any single claim/interpretation/algorithm can never account for infinite data. but i think it depends on what that data actually is; it might totally be able to account for it, even if infinite, if it's bounded, etc
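For reference, the off-training-set flavour of Wolpert's result can be illustrated by brute-force enumeration (a toy sketch over an invented 2-bit domain, not the general theorem): averaged uniformly over all boolean target functions, any fixed learner's accuracy on inputs it never saw is exactly 1/2.

```python
from itertools import product

# Toy domain: four inputs (a 2-bit world). Train on two, test on the rest.
inputs = [0, 1, 2, 3]
train, test = [0, 1], [2, 3]

def learner(train_labels, x):
    # An arbitrary deterministic learner: predict the majority training
    # label (ties -> 0). Any other fixed rule gives the same average.
    return int(sum(train_labels) > len(train_labels) / 2)

total_acc = 0.0
all_functions = list(product([0, 1], repeat=4))  # all 16 target functions
for f in all_functions:
    preds = [learner([f[i] for i in train], x) for x in test]
    acc = sum(p == f[x] for p, x in zip(preds, test)) / len(test)
    total_acc += acc

print(total_acc / len(all_functions))  # exactly 0.5
```

The reason is the pairing noted informally above: for every target function there is a twin that agrees on the training inputs and flips the test labels, so off-training-set successes and failures cancel when all functions are weighted equally.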
-