As for microwave ovens and other appliances, if updating software is not a normal part of use of the device, then it is not a computer. In that case, I think the user need not take cognizance of whether the device contains a processor and software, or is built some other way. However, if it has an "update firmware" button, that means installing different software is a normal part of use, so it is a computer.
Imagine if we had a protocol that required you to open up your SQL server to the public internet (with access control on writes, or on reads of protected data, of course), and just have remote servers/clients query straight off your database, regardless of query complexity.
So how is Solid any different from that, with SPARQL, N3, Shape Trees, etc.?
Every time I look at the stack of protocols behind Solid ( https://solidproject.org/TR/ ), it feels like the engineering mess that was the OSI protocols: overcomplicating a [relatively] simple problem.
I'm not saying SPARQL, semantic data, and the like don't have utility; I'm sure they're used in various massively-scaled production environments. But I don't see how you expect to have something publicly internet-facing where any entity on the internet can incur a heavy query on the server, or more generally why you'd offload so much compute responsibility onto the server instead of making a dumber server (just like how you can achieve an ActivityPub implementation with just static files, since it doesn't have a query language).
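To illustrate the "dumber server" point: a minimal ActivityPub actor can be a plain JSON document sitting on any web host, with no query engine behind it. A rough sketch (the domain, username, and file path here are just placeholders, not anything from a real deployment):

```typescript
import { writeFileSync } from "node:fs";

// A minimal ActivityPub actor as plain data; serialize it once and serve it as
// a static file (Content-Type: application/activity+json). No query engine,
// no per-request compute. All names/URLs are hypothetical placeholders.
const actor = {
  "@context": "https://www.w3.org/ns/activitystreams",
  id: "https://example.org/users/alice",
  type: "Person",
  preferredUsername: "alice",
  inbox: "https://example.org/users/alice/inbox",   // the one endpoint that needs to accept POSTs
  outbox: "https://example.org/users/alice/outbox", // can itself be a pre-rendered static collection
};

writeFileSync("actor.json", JSON.stringify(actor, null, 2));
```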
I could be uninformed (I only periodically peek at it and skim some of the specs; I don't know it in depth), but I still don't see how this is going to be anything that could be operated cost-effectively and not be prone to trivial Denial of Service abuse.
I have been reading parts of the DID Resolution spec, yes. There are some inconsistencies I noticed when trying to sorta-implement it. For example, the example in "8. DID URL Dereferencing Result" uses didUrlDereferencingMetadata, while the current JSON-LD context ( https://w3id.org/did-resolution/v1 , which redirects to the broken URL https://w3c-ccg.github.io/did-resolution/contexts/did-resolution-v1.json when I think it's instead meant to go to https://w3c.github.io/did-resolution/contexts/did-resolution-v1.json ) defines the property name dereferencingMetadata instead; some of the diagrams also use relative-ref instead of relativeRef.
There have been light hints about using DID URLs for binary content, but it's difficult to see how to apply that when most of the spec comes down to returning a JSON resolution/dereferencing metadata document as an envelope. There's no mention of anything like content negotiation, e.g. a mechanism where a DID-aware application could ask for the JSON resolution info, while a non-DID-aware application (one that doesn't list the DID resolution media type in its 'Accept' header) would just be redirected to the dereferenced binary file instead.
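A rough sketch of the negotiation I'm imagining (this is not in the spec; the media type string, URLs, and the simplistic Accept check are all placeholders/assumptions on my part):

```typescript
// Hypothetical content negotiation for DID URL dereferencing.
// The media type constant is an assumption, not a normative value.
const DID_RESOLUTION_MEDIA_TYPE =
  'application/ld+json;profile="https://w3id.org/did-resolution"';

function handleDereference(
  acceptHeader: string,
  dereferencedUrl: string, // absolute URL of the binary resource
  resultEnvelope: unknown,  // the JSON dereferencing result with its metadata
): Response {
  // Simplistic check, just for illustration.
  if (acceptHeader.includes("did-resolution")) {
    // DID-aware client: hand back the full dereferencing result envelope.
    return new Response(JSON.stringify(resultEnvelope), {
      headers: { "Content-Type": DID_RESOLUTION_MEDIA_TYPE },
    });
  }
  // Non-DID-aware client: just redirect to the dereferenced resource itself.
  return Response.redirect(dereferencedUrl, 302);
}
```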
There also don't seem to be many options for simply pointing to the location of the resource, rather than embedding the resulting document directly.
I've generally tried just 'making up' some makeshift extensions to fill the gaps in my use-case, and might have some results within a week-ish (I have a resolver implemented with DID URL dereferencing; I just need to make further client-facing changes). There's also a chance I might have skipped over something important that would address my complaints, since I'm usually skimming through fragments of all the miscellaneous specs at a time.
I stole a few ideas from did:plc and did:tdw, yes. It's just an experiment so far; I'm using it as a stand-in for other methods, as something I can adjust to my needs while I toy with DIDs in a way that stays backwards-compatible with standard non-DID ActivityPub.
As it currently stands, there don't seem to be many methods that clarify whether DID URLs are permitted with the method or not.
There were a few adjustments I was going to add, such as declaring what other 'authoritative' servers a did:fedi can be discovered from, perhaps within the method-specific protocol.
Either way, I haven't been public about it yet. I just finished a basic key wrapping and serialization format to go along with it, and I'll probably push out a newer version of the generator demo (which presently lacks a polyfill for browsers that don't have native Ed25519 in WebCrypto) in a day or two. I'll probably be more vocal when I have results.
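For anyone curious, the gap the missing polyfill needs to cover is roughly this kind of feature detection (a sketch, not the demo's actual code):

```typescript
// Sketch: detect native Ed25519 support in WebCrypto so a pure-JS fallback
// can be loaded only when needed. Not the demo's actual code.
async function hasNativeEd25519(): Promise<boolean> {
  try {
    await crypto.subtle.generateKey({ name: "Ed25519" }, false, ["sign", "verify"]);
    return true;
  } catch {
    return false; // older browsers reject the unrecognized algorithm
  }
}
```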
As for the primer, that was probably over a year ago, and the mentioned FEPs even a year before that (with all those FEPs devised by @silverpill ).
I wonder if there's utility in having some sort of "degrees of association" ranking system for dealing with spam accounts. Something like the 'Web of Trust' model originally envisioned for PGP, but in this case not about asserting the legitimacy of a real identity, just a "not a bot" rating.
Mainly, you'd endorse someone else's account as 'not a bot' on some sort of scale: recent online acquaintance, long-time online friend, or met in person plenty of times. The risk, of course, is that such a thing could be abused for datamining, although some of that could already be heuristically inferred (follower status + frequency of interaction/replies).
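Just to make the musing concrete, here's a purely hypothetical shape such an endorsement could take, with a toy scoring function (none of this exists in ActivityPub today; the type names and weights are made up):

```typescript
// Hypothetical "not a bot" endorsement; type names and weights are invented
// purely for illustration of the degrees-of-association idea.
type Familiarity = "recent-acquaintance" | "long-time-friend" | "met-in-person";

interface NotABotEndorsement {
  endorser: string;        // actor URI of whoever is vouching
  subject: string;         // actor URI being vouched for
  familiarity: Familiarity;
  issued: string;          // ISO 8601 timestamp
}

// Toy aggregation: weight endorsements by familiarity and sum them up.
const weights: Record<Familiarity, number> = {
  "recent-acquaintance": 1,
  "long-time-friend": 3,
  "met-in-person": 5,
};

function notABotScore(endorsements: NotABotEndorsement[], subject: string): number {
  return endorsements
    .filter((e) => e.subject === subject)
    .reduce((sum, e) => sum + weights[e.familiarity], 0);
}
```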
This is just a musing from distantly watching the happenings of Nostr in dealing with spam, and seeing if there are ideas that could be implemented in ActivityPub-land first, before implementations of key-based portable identities are more widespread.
Last I remember, it only federates blog posts and comments. If the purpose is a blog, then yes, I'm sure that's fully within the use-case. WordPress is fine and pretty mainstream; just don't install a lot of plugins, as that increases security risk, since really anyone can publish a plugin (including a lot of web designers who think they know how to code).
If you need any custom theme (or turning an existing design into a WordPress theme), guidance, or auditing, I can provide that.
The point with a self-hosted WordPress install is you have full autonomy over your own website/blog, and nobody can interfere with that.
If you use the SaaS WordPress.com offering, then I'm sure the situation isn't much different from Substack, other than far more customization.
Edit: additionally, self-hosted WordPress is probably the easiest web application to install on conventional web hosting (a LAMP stack); usually a layperson can figure it out and maintain it themself. It's also meant to be widely supported across a broad range of environments and PHP versions, unlike most 'modern' software that's so brittle (and often only deployable via Docker or similar).
I mean, hell, I could just start a managed hosting service (under my pseudonym; I already do such work professionally) if there's a market for it on fedi. Just a decade ago it wasn't perceived as that much of a hurdle, as people would push through it.
It seems like there are inbound federation issues with chat.wizard.casa. The server-to-server port (tcp/5269) is not open, and there's no SRV record at _xmpp-server._tcp.chat.wizard.casa that indicates another host/port otherwise.
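A quick way to check that from any machine with Node (just a diagnostic sketch; the record shown in the comment is only what I'd expect to see if one were published):

```typescript
// Quick diagnostic: look up the XMPP server-to-server SRV record.
// An empty result (or a lookup error) means other servers will fall back
// to connecting to chat.wizard.casa on port 5269 directly.
import { resolveSrv } from "node:dns/promises";

try {
  const records = await resolveSrv("_xmpp-server._tcp.chat.wizard.casa");
  console.log(records); // e.g. [{ name: "xmpp.example.net", port: 5269, priority: 0, weight: 5 }]
} catch (err) {
  console.log("no SRV record published (or lookup failed):", err);
}
```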
It seems to be federating inward properly now. The predicament that made this easy to overlook is that when S2S connections are initiated outward, that same connection also gets used for inbound traffic from that server (BiDi).
Meanwhile, if a server wants to talk to chat.wizard.casa, it tries connecting to the S2S port, but can't if the port isn't open. It's only when someone on chat.wizard.casa initiates activity outward that things "start working" momentarily. But that's resolved now.
Nonetheless, it's probably a usability bug in Dino worth reporting, if it's burying a more critical error (no S2S connectivity) under a lesser error (can't discover OMEMO keys, because it can't even talk to the server).
Right now, a Prosody server I have, with the daemon process running for 127 days straight, 5 local users, plus 8 remote users and 9 connected servers, is using 161MB of memory.
An ejabberd server, running for over 3 weeks (after restarting for Matrix-bridging changes), with 6 local users (I don't have the other metrics readily accessible), is using 115MB of memory.
I do host were.chat, which is registration-by-invite (solely to reduce automated registrations and bots). I have the domain registered until at least the year 2029, have active uptime monitoring and notification, and I tend to keep a deathgrip on keeping things online perpetually, even past their usefulness (such as keeping a forum online for 2 decades now, when its activity died out like a decade ago; at this point it's shifting into being hosted for preservation/archival sake). So it's exceptionally unlikely that anything I run is going to just disappear.
It may be practical to just set up a server for yourself, as running a Prosody or ejabberd server tends to be low-maintenance, with minor updates once every 6+ months or so (and typically nothing critical, like severe security issues). I should probably get around to updating/finishing my guide ( https://arcanican.is/guides/prosody.php ), since pulling in an external repo shouldn't be as necessary now (on Debian 12, soon Ubuntu 24.04, Fedora 39, etc.).
So far, I don't know of many fedi-adjacent XMPP servers (other than a handful of neighboring servers with a predominantly 'furry' tilt, which I assume might not be an adequate fit).
For clients, it's generally 'Conversations' for Android users, Monal IM for iOS users, Gajim for desktop (with Dino as a newer option), and Movim as a Progressive Web App (though I recommend self-hosting that one, since it holds your login info on the server and most of the logic is server-side).
Several times a day I notice your chat on conversations.im choking up, delaying messages, and so on; so it's not just them, I'm sure it's the server (probably dealing with varying denial of service). That's why I strongly advocate starting chatrooms on smaller servers, dispersed out.
I warned you against registering on a mega-server, meanwhile how many interruptions have you had on any of my servers? You're just using the same normie logic as registering on mastodon.social, and then concluding everything else is crap.
It's not that the RFC by itself 'magically fixes everything'. What I suggested is a broader effort of working on a FEP, targeting the RFC rather than the earlier draft and folding in whatever people want changed about HTTP Signatures, as an initiative that should happen. But that also can't be done yet if we don't have feature/support discovery.
Instead of just shoving a FEP on everyone of "this is how HTTP Signatures needs to be done, everybody needs to do it my way", I'd like to see discussion first (such as on SocialHub) of other potential pitfalls or needs to be considered (e.g. if there's to be a 'server-wide key', what about users on polyglot platforms that don't have a "server" concept, where each user is a sovereign identity?).
I don't know if everyone's just afraid of voicing their opinions, if SocialHub is measurably 'dead', if there's just general unseen friction between projects, or if most folks just aren't actionable personality types. If it's legitimately down to someone pushing in a direction, writing up a stack of specifications, and pestering people, I can do that. But I'm sure others likely have more constructive input/insight than what I could do solo.
Cleaner syntax (@method and @path are separate components, versus rolling them together as (request-target)); it clearly lays out which implementation decisions must be made when rolling it into a larger protocol/system (section 1.4); a more standardized set of signature algorithms (including ed25519); and it makes it possible to build on well-tested reusable libraries rather than needing some niche implementation that only applies to ActivityPub.
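To make the syntax difference concrete, here's a rough sketch of an RFC 9421 signature base for a typical inbox POST. The header values are illustrative placeholders, and the covered components are just one plausible selection, not a proposed ActivityPub profile:

```typescript
// Rough sketch of building an RFC 9421 signature base for an inbox delivery.
// Values are illustrative placeholders; the component list is one plausible
// choice, not a settled profile.
const covered = ['"@method"', '"@authority"', '"@path"', '"content-digest"'];
const params = `(${covered.join(" ")});created=1700000000;keyid="https://example.org/actor#main-key";alg="ed25519"`;

const signatureBase = [
  `"@method": POST`,
  `"@authority": remote.example`,
  `"@path": /users/bob/inbox`,
  `"content-digest": sha-256=:X48E9qOokqqrvdts8nOJRJN3OWDUoyWxBf7kbu9DBPE=:`,
  `"@signature-params": ${params}`,
].join("\n");

// The older cavage-draft style crammed method + path into one pseudo-header:
//   (request-target): post /users/bob/inbox
// with the parameters inline in a single `Signature` header.
//
// With RFC 9421, two headers end up on the request (signature value elided):
//   Signature-Input: sig1=(...the params string above...)
//   Signature: sig1=:...base64 ed25519 signature over signatureBase...:
```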
At a cursory glance, it's just a few syntax changes to 'upgrade' existing implementations. But part of it isn't just RFC 9421 itself; it's an opportunity to fix the state of HTTP Signatures in ActivityPub within the same effort.
I think it's worth just replacing/upgrading the present state of HTTP Signatures, such as working towards a FEP that utilizes RFC 9421 (instead of its earlier incompatible drafts) and enables having a server-wide key (especially so it can be locked down to an HSM or other secured storage), rather than this present joke of private keys generated for each user, typically stored unwrapped in a database, that the user can't export for risk of other users on the same instance.
Yes, it won't do anything to resolve your implementation struggles in the present; however, momentum needs to be started on fixing this, and support garnered for building a 'better HTTP Signatures', so that hopefully people don't have to fight with this absurdity in the future.
Would it be worth anything for me to spend my day writing a light dissertation comparing ATProto, ActivityPub, Nostr (very lightly, I still need to dig more), DIDs, etc.? (It would mean having to re-read much of the specs again.)
And just to declare: I've only written a full ActivityPub server implementation; I have not written an ATProto server nor anything Nostr yet. I believe I have a fairly complete understanding of how ATProto comes together in its components, compared to a lot of other people who seem to be commenting on it.