@aral Not a clue. I’m guessing it’s yet another bug in their autocomplete system, so it must have been affecting everything of theirs that uses it.
@aral Yeah, PWAs installed to the home screen are essential if you want actually working offline apps. Hard to see how this isn’t malicious compliance, esp. since that’s their play for everything else they’re doing (alt app store requirements, weird web engine reqs, etc.)
“Apple broke web apps in iOS 17 beta and hasn’t fixed them • The Register”
What a fucking shit show. With shenanigans like this Apple is punishing both the end users and—if we are to be honest—the Safari team, because it’s their hard work that’s being disabled https://www.theregister.com/2024/02/08/apple_web_apps_eu/
- Uncluttered, 21k words on import maps and TDD
- Out of the Software Crisis, an ebook on systems-thinking in dev
- The Intelligence Illusion, an ebook on the biz risks of generative models
- Yellow, an oddball audio/video series
Just be really REALLY careful when you read anything from web.dev or a domain with Chrome or Google in the name about exciting new standard web APIs. Much too often what they are promoting is effectively a proprietary API that other browser vendors have serious concerns about.
It's absolutely irresponsible of Google to continue to promote Chromium-only APIs such as the File System Access API as if they were standard APIs you can use with the expectation that cross-browser support will eventually come.
Glossing over the difference between APIs that have genuine cross-browser implementation interest and Chrome-only APIs that are unlikely to ever get implemented in Firefox or Safari is tantamount to tricking devs into making their projects Chrome-only.
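If you do reach for one of these Chromium-only APIs, the defensive pattern is to feature-detect it and ship a cross-browser fallback rather than assume support. A minimal sketch (the helper name and the fallback flow are illustrative, not from any spec):

```javascript
// Feature-detect the Chromium-only File System Access API instead of
// assuming it exists in every browser.
function supportsFileSystemAccess(win = globalThis) {
  return typeof win.showOpenFilePicker === "function";
}

// Open a text file using the fancy API where available, otherwise fall
// back to the universally supported <input type="file"> element.
async function openTextFile() {
  if (supportsFileSystemAccess()) {
    // Chromium-only path: returns real file handles.
    const [handle] = await globalThis.showOpenFilePicker();
    const file = await handle.getFile();
    return file.text();
  }
  // Cross-browser fallback.
  return new Promise((resolve, reject) => {
    const input = document.createElement("input");
    input.type = "file";
    input.onchange = () => {
      const file = input.files[0];
      file ? resolve(file.text()) : reject(new Error("No file selected"));
    };
    input.click();
  });
}
```

The fallback loses the read/write handle, which is exactly the point: if your app's core flow depends on the Chromium-only half, it's a Chrome-only app no matter what the promo posts say.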
The current state of scrollbars is, on the face of it, objective evidence that the people who run tech do not give a single solitary fuck about usability or UX https://artemis.sh/2023/10/12/scrollbars.html
That's a really useful post but I think they're wrong on one point.
Training data leakage actually seems to be _the norm_. Most of the field ignores the cardinal rule of not testing on your training data, and it's caused a reproducibility crisis in ML-based science https://reproducible.cs.princeton.edu/
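The cardinal rule is easy to demonstrate with a toy model that memorises its training data: evaluated on that same data it looks perfect, while a held-out set reveals the real performance. A minimal sketch with made-up noisy data (all names and numbers are illustrative):

```javascript
// A 1-nearest-neighbour "model" that simply memorises its training set.
const fit = (points) => points; // points: [{ x, label }]

const predict = (model, x) =>
  model.reduce((a, b) => (Math.abs(a.x - x) < Math.abs(b.x - x) ? a : b)).label;

const accuracy = (model, points) =>
  points.filter((p) => predict(model, p.x) === p.label).length / points.length;

// Tiny deterministic pseudo-random generator so the run is reproducible.
let seed = 1;
const rand = () => (seed = (seed * 48271) % 2147483647) / 2147483647;

// True rule: label = x > 0.5, but with 30% label noise.
const data = Array.from({ length: 200 }, () => {
  const x = rand();
  const label = (x > 0.5) !== (rand() < 0.3);
  return { x, label };
});

const train = data.slice(0, 100);
const test = data.slice(100);
const model = fit(train);

// Evaluating on the training set is leakage: the memoriser looks perfect.
const trainAcc = accuracy(model, train);
// A properly held-out test set reveals the much lower true performance.
const testAcc = accuracy(model, test);
console.log(trainAcc, testAcc);
```

The memoriser scores 100% on its own training data every time; only the held-out split exposes that it learned nothing general. Papers that let test data bleed into training are doing a subtler version of exactly this.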
That OpenAI pulled this stunt isn't a mistake. It's par for the course. This is how the AI industry oversells the capabilities of its products.
I thought we'd spent the past few years digging into issues and looking for ways to use these systems on problems they can handle, like data and media conversion and modification.
But, no, we're going straight for the dystopian nightmare stuff:
* Knowledgebases that straight up lie
* Creating an underclass that's only served by mediocre AIs instead of actual lawyers or doctors
* Summarisers that hallucinate _in their summaries_
* Decision-making systems rife with shortcuts and biases