Conversation
feld (feld@bikeshed.party)'s status on Thursday, 13-Jul-2023 06:35:59 JST:
I'm using it regularly, and it's rarely making mistakes I have to deal with.
-
-
𒀭𒂗𒆠 ENKI ][e (enkiv2@eldritch.cafe)'s status on Thursday, 13-Jul-2023 06:36:00 JST:
@feld what part of it isn't true? that some professional programmers are trying to use LLMs, or that LLMs produce incorrect code? because I can verify both from experience.
-
feld (feld@bikeshed.party)'s status on Thursday, 13-Jul-2023 06:36:01 JST:
> and confusing because otherwise-intelligent people capable of convincing a hiring manager that they are competent programmers find it easier to ask a statistical model to produce an incorrect prototype they must debug than write the code themselves.

come on, that's not even close to being true
-
𒀭𒂗𒆠 ENKI ][e (enkiv2@eldritch.cafe)'s status on Thursday, 13-Jul-2023 06:36:02 JST:
the fact that some people find LLMs useful for writing code is not a credit to LLMs but an indictment of the average signal-to-noise ratio of code: it means that most code is confusing boilerplate -- boilerplate because a statistical model can only reliably reproduce patterns that reoccur many times across its training corpus, and confusing because otherwise-intelligent people capable of convincing a hiring manager that they are competent programmers find it easier to ask a statistical model to produce an incorrect prototype they must debug than write the code themselves.

we all know from experience that most code is bad, but for LLMs to be able to write more-or-less working code at all indicates that code is much worse than we imagine, and that even what we consider 'good' code is from a broader perspective totally awful. (in my opinion, it is forced to be this way because of the way we design programming languages and libraries.)