Another day, another insufficiently introspective companies-are-abusing-the-generosity-of-FOSS-programmers post that doesn't attempt to address the role/culpability that individual developers employed at these companies have in this system.
See also, the aforementioned Photopea, which similarly eschews the conventional wisdom espoused by the self-appointed keepers of the sanctity of Modern JS "best" practices: <https://www.photopea.com/>
And finally, the prior art of the entire corpus of the JS powering Firefox (at least in the 1.x/2.x/3/4 era).
There's no comparison. One of these sets contains folks who know how to write full-featured browser-based applications that just so happen not to compromise the user experience; the other insists on clinging to the false promises associated with today's most popular frameworks+development methodologies, on the basis that this constitutes industry best practice. Nuts!
Something is wrong in webdev. Why did this have to be reiterated? Why is it unfathomable? There is a whole class of developers working in the industry today who are completely convinced that the trendiest of toolchains and frameworks amount to irreducible complexity. I've seen them say so, and then—bizarrely—turn around and mention how something minifies to ~24K or whatever. There's a basic lack of rigor in the conventional wisdom and conversational standards applied within this milieu.
An in-browser animation editor was posted to HN by @AshleyGullen. It's snappy, and it's obvious that it's handcrafted JS, à la Photopea. Super obvious.
An HNer asks what UI framework it's using. The answer is "none", and the founder replies saying so: "all written from the ground up in house". This is beyond the comprehension of another user who chimes in, so @AshleyGullen has to step in and reiterate that they wrote it all.
If Firefox jettisoned its devtools in favor of GToolkit integrated in the browser, that would be a tremendous boon not just to Web development, but computing generally.
Honorable mention: #GlamorousToolkit, which is not a browser development toolset, but it gets the basic idea right for its domain. It uses Miller columns to let you drill down in an object graph and switch between multiple viewers—plus create your custom viewers on-the-fly.
Every debugger should ship with this kind of tightly integrated inspector framework, to let you probe whatever's in scope like this when stopped at a breakpoint.
It boggles my mind that I seem to be the only one annoyed by the modal interaction style that modern browser devtools inherited from Firebug.
I see people constantly talk about how they can't imagine working without multiple displays because it affords the ability to put so much on the screen at once. So they say. Meanwhile, I do everything on a ~13″ laptop screen. But I'm the one who has to point out how crazy it is that you can't inspect the style rules of two elements at the same time?
This wasn't a problem in Joe Hewitt's original DOM Inspector. You could right click any object (e.g. DOM node) and bring up a new inspector window for that object. You could do the same for another object, and another. You could have multiple "viewers" trained on a single object.
Then Joe made Firebug and gave it a modal UI, and literally every browser has copied that same basic design error. Exasperating!
HTTP endpoints that process form data and accept text/html as input
Instead of having to send application/x-www-form-urlencoded, you can just POST the markup for the page containing the original form back to the server, except with the form already filled in: all the fields have their value attributes set appropriately, i.e. whatever the submitter actually wants that field to be.
Not unlike the way real (i.e. paper) forms work. Any other prior art? (And I do mean "art"—which this is.)
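A hypothetical endpoint in this scheme would pull the submitted values straight out of the posted markup. A minimal sketch, with everything here (the function name, the sample form, the regex-based extraction) being mine, purely illustrative; a real server would use an actual HTML parser rather than regexes:

```javascript
// Sketch: given the text/html body POSTed back to the endpoint -- the
// original page's form, but with value attributes filled in by the
// submitter -- recover a name -> value map of the submitted fields.
// Regex-based extraction is a stand-in for real HTML parsing.
function readFilledForm(html) {
  const values = {};
  for (const tag of html.match(/<input\b[^>]*>/gi) || []) {
    const name = (tag.match(/name="([^"]*)"/i) || [])[1];
    const value = (tag.match(/value="([^"]*)"/i) || [])[1];
    if (name !== undefined) values[name] = value ?? "";
  }
  return values;
}

// The page's form, sent back with the values already baked in:
const submitted = `<form method="POST" action="/signup">
  <input name="email" value="user@example.com">
  <input name="plan" value="free">
</form>`;

console.log(readFilledForm(submitted));
```

The nice property is symmetry: the document the server sent out is the same document it gets back, just filled in, like handing a paper form across a counter.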
Around the time that I wrote "How to displace JS" <https://www.colbyrussell.com/2019/03/06/how-to-displace-javascript.html>, I checked the then-latest README for instructions about doing a hello-world program using create-react-app. It was fuckin' 500 fuckin' megabytes *just* to be able to do a successful `npx create-react-app` and `npm start` and show the word "hello". Nuts!
@edsu you now have permission to respond with "STFU" to any insolent jabberwocky insisting that you have to follow what the NodeJS/NPM community deems to be best practices.
JS and its history of use for programming in the large (let's call it "industrial" use) is so much bigger (and better) than whatever kooky nonsense the NPMers have convinced themselves is essential/irreducible complexity.
Mozilla figured out how to put JS to serious use before `npm install` was ever a thing. The hundreds of thousands of lines of JS powering Firefox (at least up until I called it quits on my hope that Mozilla would stay out of the gutter ~10 years ago) were mostly just a bunch of script elements and pre-ES6 JSM `import` statements. Very little "build" work in sight.
In a front-page story about a potential vulnerability in an Android app decompiler, an HN commenter brings up vulnerabilities arising from memory corruption bugs in binutils.
Change my view: there's little reason to maintain+propagate the traditional C implementations of objdump, readelf, &c, or to recommend them—BUT I'm not saying they should be rewritten in Rust, either. Rather, the most sensible target to port these tools to is... JavaScript. (Specifically, browser-compatible JavaScript.)
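To make that concrete, here's a minimal sketch of the kind of thing such a port would start from: an ELF header reader written against `DataView`/`ArrayBuffer`, so the same code runs in a page or in Node. The function name and the fabricated sample bytes are mine, purely for illustration; the offsets and constants are from the ELF spec.

```javascript
// Sketch: read the start of an ELF header from an ArrayBuffer --
// memory-safe by construction, no matter how hostile the input.
function readElfHeader(buffer) {
  const view = new DataView(buffer);
  if (view.getUint32(0, false) !== 0x7f454c46) { // "\x7fELF" magic
    throw new Error("not an ELF file");
  }
  const is64 = view.getUint8(4) === 2;           // EI_CLASS
  const littleEndian = view.getUint8(5) === 1;   // EI_DATA
  return {
    class: is64 ? "ELF64" : "ELF32",
    endianness: littleEndian ? "little" : "big",
    type: view.getUint16(16, littleEndian),    // e_type: 2 = ET_EXEC, 3 = ET_DYN
    machine: view.getUint16(18, littleEndian)  // e_machine: 62 = EM_X86_64
  };
}

// Fabricated first 20 bytes of a 64-bit little-endian x86-64 shared object:
const bytes = new Uint8Array(20);
bytes.set([0x7f, 0x45, 0x4c, 0x46, 2, 1, 1, 0]); // magic, ELFCLASS64, LSB
bytes[16] = 3;  // e_type = ET_DYN
bytes[18] = 62; // e_machine = EM_X86_64
console.log(readElfHeader(bytes.buffer));
```

A malformed offset here yields a thrown `RangeError`, not memory corruption—which is the whole argument against keeping the C versions around as the default recommendation.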
@wizzwizz4 the thinking on the part of Google Engineers wrt the XSLT and Issue 1065085 links: exactly the mindset I'm talking about. They can't see the harm in rev'ing the "platform" because that's just, like, something intrinsic to software, right? (cf mobile)
The `<param>` element changes are a non-issue; by the time plugins are involved, you're already compromised. It's like relying on UB—that pain is self-inflicted.
Re: 1st paragraph—vague, dubious + almost def a double standard there...
@wizzwizz4 it's not good to embrace/roll with their attitude because it leads to more of the same (lackadaisical approach to just YOLOing along with yet another platform). This breeds and feeds on itself—on both sides.
Don't know what point your second paragraph is supposed to contain.
Chrome devs may very well choose to break their browser. Their loss.
@wizzwizz4 for the linked Show HN submission to suddenly stop working at some point is def. a failure condition of something—whether that's a failure of the development practices that created it, or (in the event that it uses only standard APIs and ends up breaking anyway) a failure of the browser vendors and their stewardship of the Web, no matter how tall the towers of abstraction involved.
@wizzwizz4 most programmers have this ephemeral mindset of, "yeah, it works now, but will probably break sometime—doesn't everything break?" because they're only used to dealing with things that are constantly needing to be fixed (and getting paid for it). It's just considered normal.
The real point of my remarks: this is a terrible mindset. To have the same expectations of the Web is a sort of fatalism that encourages (sloppy) practices that lead to _more_ (v. likely avoidable!) breakage.