so, some of my music has been in circulation on radio free fedi for many months now. has anybody caught it? should i submit more? i have a lot, so i won't submit more unless somebody actually wants it.
hot take but i think ISPs should return to giving every user a couple hundred megs of space on a public-facing web server & encourage them to build little homepages. the corporatization of the web really hit overdrive when the persistent web presence of the average user stopped being a bunch of handwritten HTML and random files they wanted to share and instead became a profile template on a social media site.
the fact that some people find LLMs useful for writing code is not a credit to LLMs but an indictment of the average signal-to-noise ratio of code: it means that most code is confusing boilerplate -- boilerplate because a statistical model can only reliably reproduce patterns that recur many times across its training corpus, and confusing because otherwise-intelligent people capable of convincing a hiring manager that they are competent programmers find it easier to ask a statistical model to produce an incorrect prototype they must debug than to write the code themselves. we all know from experience that most code is bad, but for LLMs to be able to write more-or-less working code at all indicates that code is much worse than we imagine, and that even what we consider 'good' code is, from a broader perspective, totally awful. (in my opinion, it is forced to be this way because of the way we design programming languages and libraries.)
@feld what part of it isn't true? that some professional programmers are trying to use LLMs, or that LLMs produce incorrect code? because I can verify both from experience.
LLMs *could* be cool, if people stopped focusing on applications that fall apart the moment you apply system 2 thinking. It might look like a duck and quack like a duck, but it's actually just a wooden duck with a speaker in it, which can be fun to have around the house and all but you'll be disappointed if you expect it to eat bread.
Compare chatgpt to, like, Watson (which is like a wooden duck that has a cadbury creme egg machine in it -- a prolog-like formal inference engine).
Not that you should trust Watson, but at least it *theoretically* can *try* to check whether or not the things it says might be true.
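To make that concrete, here's a toy sketch in Python of the general shape of a formal inference engine (nothing like Watson's actual internals, just an illustration of the idea): explicit facts, explicit rules, and a check step that can honestly answer "I don't know" instead of guessing.

    # toy sketch of symbolic inference: facts and rules are explicit,
    # so the system can at least attempt to verify a claim.
    facts = {
        ("socrates", "is", "human"),
    }
    rules = [
        # if X (is, human) then X (is, mortal)
        (("is", "human"), ("is", "mortal")),
    ]

    def derive(facts, rules):
        """Naive forward chaining: keep applying rules until no new facts appear."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for (pred_in, obj_in), (pred_out, obj_out) in rules:
                for subj, pred, obj in list(derived):
                    if pred == pred_in and obj == obj_in:
                        new_fact = (subj, pred_out, obj_out)
                        if new_fact not in derived:
                            derived.add(new_fact)
                            changed = True
        return derived

    def check(claim, facts, rules):
        """Report whether a claim actually follows from what the engine knows."""
        return "supported" if claim in derive(facts, rules) else "unknown"

    print(check(("socrates", "is", "mortal"), facts, rules))  # -> supported
    print(check(("socrates", "is", "a_duck"), facts, rules))  # -> unknown, not a confident guess

The point isn't the toy rules; it's that there's a verification step at all. A purely statistical text generator has nowhere in its pipeline to even pose the question "is this claim supported?"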
Purely statistical models are purely system 1, so even if one somehow got human-level intelligence, the form of that intelligence would be like an extremely drunk & sleep-deprived person with severe untreated ADHD.
Why do so many people think it's acceptable to not have a system 2? Basically because system 2 shuts down in response to stress or low resources, and we live in a world that keeps everybody who does real work under a high level of artificial stress while denying them resources -- capitalism is keeping people from performing the kinds of mental operations that purely-statistical systems like LLMs can't perform at all.
In other words: if you find that github copilot actually helps you code, that doesn't mean you should start using copilot -- it means you're so burnt out that you can't think straight, & you need to take a long vacation.
@thedextriarchy It's a direct side effect of the political struggle between the statistical/fuzzy side & the symbolic/GOFAI side of AI research.
Minsky shat on perceptrons in the 60s, so from then until the 90s, when backprop got really good for computer vision, all pop culture depictions of AI were influenced by expert system research (with the exception of stuff like Deadly Friend).
But backprop made neural nets more useful, & in the 90s they were getting to the point where they could run on commodity hardware; meanwhile, high-profile symbolic logic projects like cyc were failing. So statistical methods got a lot of hype.
Thing about statistical methods is they can't do reasoning except by poorly and expensively simulating reasoning they've observed somebody else do (which is why human beings are so bad at it), and the statistical simulations of reasoning are so overcomplicated that nobody can understand them. In other words, statistical methods are *only* really good for bullshit!
Technical people have known this forever -- it's obvious from first principles -- but pop culture is slow to catch up because pop culture is primarily shaped by writers half-listening to marketing people who half-listened to technical people 20 years ago.
@ajroach42 This last bit isn't exactly true: making computers accessible to and usable by students specifically was a big deal (and received enormous amounts of government funding), and it's exactly this research (rather than the defense research) that became the basis for the most important stuff in computer history.
BASIC came out of an attempt at Dartmouth to give non-technical users access to a computer they had obtained -- first Dartmouth students, then undergraduates at other universities along the east coast, and ultimately high school students and prisoners across the north-eastern United States. The process of making these time-sharing systems usable molded BASIC into a state where, years later, putting it on home computers was a no-brainer.
PARC's Alto & Smalltalk programs grew out of the same project at the ARPA level (though ARPA got rid of it not long after), hence the focus on catering to school children.
A small, tight-knit circle of developers and computer scientists was absolutely responsible for the bulk of the progress made during that period, but many of them cared deeply about non-technical users; the people who didn't (and still don't) care about non-technical users are not the researchers but the corporate devs, since in many ways business protects software from needing to become usable & functional.