I keep seeing academics very excited for some bright and shiny #AI future where like all information self-assembles and you can know anything instantly and you write papers by just gesturing at a thought with a couple of sentences, and I have no idea how they don't see a) that playing out as them getting further trapped on extractive platforms, b) that it's not actually going to work even remotely like that, and c) that it sounds pretty bleak to me even if it did work 100% as intended
jonny (good kind) (jonny@neuromatch.social)'s status on Saturday, 11-Mar-2023 12:58:45 JST
-
Scott, Drowning in Information (vortex_egg@ioc.exchange)'s status on Saturday, 11-Mar-2023 12:58:44 JST @jonny One of the funny parts is that statistical generative language models are diametrically opposed to the production of knowledge at a logical and mathematical level.
Another funny part is: what is even the point of knowledge if nobody actually enacts it by doing the work to know it in the first place? There's some kind of weird conception of the nature of knowledge and science (as commodity?) going on here that is intensely anti-epistemic.
-
Adrian Cochrane (alcinnz@floss.social)'s status on Saturday, 11-Mar-2023 12:59:49 JST @vortex_egg @jonny I've heard this discrepancy described as "dataism" vs "sciencism".
Though "dataists" generally consider themselves "scientists".