It seems like there are inbound federation issues with chat.wizard.casa. The server-to-server port (tcp/5269) is not open, and there's no SRV record at _xmpp-server._tcp.chat.wizard.casa indicating another host/port to use instead.
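For reference, a DNS SRV record advertising the S2S service would look something like the following zone fragment (the TTL, priority, and weight values here are just illustrative defaults):

```
; BIND-style zone fragment: advertise the XMPP server-to-server service.
; Fields after IN SRV are: priority, weight, port, target host.
_xmpp-server._tcp.chat.wizard.casa. 3600 IN SRV 0 5 5269 chat.wizard.casa.
```

Without this record, peers fall back to connecting directly to tcp/5269 on the domain itself, which is exactly where this broke down.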
It seems to be federating inward properly now. The predicament that made this easy to overlook: when S2S connections are initiated outward, that same connection also gets used for inbound traffic from that server (bidirectional S2S).
Meanwhile, if a server wants to talk to chat.wizard.casa, it tries connecting to the S2S port, but can't if the port isn't open. It's only when someone on chat.wizard.casa initiates activity outward that things would "start working" momentarily. But that's resolved now.
Nonetheless, it's probably a usability bug in Dino worth reporting, since it buries a more critical error (no S2S connectivity) under a lesser one (can't discover OMEMO keys, because it can't even talk to the server).
Right now, on a Prosody server I run, the daemon has been up for 127 days straight with 5 local users, 8 remote users, and 9 connected servers, and is using 161MB of memory.
For comparison, an ejabberd server running for over 3 weeks (after restarting for Matrix bridging changes), with 6 local users (I don't have the other metrics readily accessible), is using 115MB of memory.
I do host were.chat, which is registration-by-invite (solely to reduce automated registrations and bots), with the domain registered until at least 2029 and active uptime monitoring and notifications. I tend to keep a deathgrip on keeping things online perpetually, even past their usefulness (such as a forum I've kept online for two decades now, even though its activity died off a decade ago; at this point it's shifting toward being hosted for preservation/archival sake), so it's exceptionally unlikely that anything I run is going to just disappear.
It may be practical to just set up a server for yourself, as running a Prosody or ejabberd server tends to be low-maintenance: minor updates once every 6+ months or so, and typically nothing critical like severe security issues. I should probably get around to updating/finishing my guide ( https://arcanican.is/guides/prosody.php ), since pulling in an external repo shouldn't be as necessary now (in Debian 12, soon Ubuntu 24.04, Fedora 39, etc).
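To give a feel for how small a setup this is, a minimal Prosody sketch might look like the following (the domain is a placeholder, and this is nowhere near a complete config; see the guide for the rest):

```lua
-- Minimal prosody.cfg.lua sketch; "example.org" is a placeholder domain.

VirtualHost "example.org"

-- Require valid certificates from federating peers over S2S.
s2s_secure_auth = true

-- Closed registration, in the same spirit as invite-only setups.
allow_registration = false
```

From there it's mostly certificates (e.g. via Let's Encrypt) and opening tcp/5222 and tcp/5269.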
So far, I don't know of many fedi-adjacent XMPP servers (other than a handful of neighboring servers with a predominantly 'furry' tilt, which I assume might not be an adequate fit).
For clients, it's generally Conversations for Android users, Monal IM for iOS users, Gajim for desktop (with Dino as a newer option), and Movim as a Progressive Web App (though I recommend self-hosting Movim, since it holds your login info on the server, and most of the logic is server-side).
Several times a day, I notice your chat on conversations.im choking up, delaying messages, and so on; so it's not just them, I'm sure it's the server (probably dealing with varying denial of service). That's why I strongly advocate starting chatrooms on smaller servers, dispersed out.
I warned you against registering on a mega-server; meanwhile, how many interruptions have you had on any of my servers? You're just using the same normie logic as registering on mastodon.social and then concluding everything else is crap.
It's not that the RFC alone 'magically fixes everything'. What I proposed is a broader effort of working on a FEP that targets the RFC rather than the earlier draft, incorporating whatever people want changed about HTTP Signatures. But even that can't be done yet if we don't have feature/support discovery.
Instead of just shoving a FEP on everyone saying "this is how HTTP Signatures needs to be done, everybody needs to do it my way", I'd like to see discussion first (such as on SocialHub) of other potential pitfalls or needs to be considered (e.g. if there's to be a 'server-wide key', what about users on polyglot platforms that don't have a "server" concept, where each user is a sovereign identity?).
I don't know if everyone's just afraid of voicing their opinions, if SocialHub is measurably 'dead', if there's just general unseen friction between projects, or if most folks just aren't actionable personality types. If it's legitimately down to someone pushing in a direction, writing up a stack of specifications, and pestering people, I can do that. But I'm sure others likely have more constructive input/insight than what I could do solo.
Cleaner syntax (@method and @path are separate components, versus rolling them together as (request-target)); it clearly lays out which implementation decisions must be made when rolling it into a larger protocol/system (section 1.4); more standardized signature methods (including ed25519); and it makes it possible to build and use well-tested reusable libraries, rather than needing some niche implementation that only applies to ActivityPub.
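To illustrate the syntax difference, here's a rough sketch of assembling an RFC 9421 signature base from covered components (the component names "@method", "@path", and "@signature-params" come from the RFC; the function and variable names are my own, not from any library):

```python
# Sketch: build the RFC 9421 signature base string for an HTTP request.
# This is the string that actually gets signed; note @method and @path
# are separate components, versus the old (request-target) mashup.

def signature_base(method: str, path: str, headers: dict, covered: list,
                   created: int, keyid: str) -> str:
    """Assemble the signature base for the listed covered components."""
    lines = []
    for name in covered:
        if name == "@method":
            value = method.upper()      # derived component: request method
        elif name == "@path":
            value = path                # derived component: target path
        else:
            value = headers[name].strip()  # ordinary header field
        lines.append(f'"{name}": {value}')
    # The signature parameters line closes out the base.
    params = " ".join(f'"{c}"' for c in covered)
    lines.append(f'"@signature-params": ({params});created={created};keyid="{keyid}"')
    return "\n".join(lines)
```

A real implementation would feed this base string to the signing algorithm (e.g. ed25519) and carry the parameters in the Signature-Input header; this only shows the canonicalization step.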
At a cursory glance, it's just a few syntax changes to 'upgrade' existing implementations. But part of this isn't just RFC 9421 itself; it's an opportunity to fix the state of HTTP Signatures in ActivityPub within the same effort.
I think it's worth just replacing/upgrading the present state of HTTP Signatures, such as working towards a FEP that utilizes RFC 9421 (instead of its earlier incompatible drafts), enabling a server-wide key (especially to lock it down to an HSM or other secured storage) rather than the present joke: private keys generated for each user, typically stored unwrapped in a database, which the user can't export without risk to other users on the same instance.
Yes, it won't solve your implementation struggles in the present; however, there needs to be momentum behind fixing this, and support garnered for building a 'better HTTP Signatures', so that hopefully people won't have to fight with this absurdity in the future.
Would it be worth anything for me to spend my day writing a light dissertation comparing ATProto, ActivityPub, Nostr (very lightly, I still need to dig more), DIDs, etc? (and having to re-read much of the specs again)
And just to declare: I've only written a full ActivityPub server implementation; I have not written an ATProto server nor anything Nostr yet. Still, I believe I have a fairly complete understanding of how ATProto comes together in its components, compared to a lot of other people who seem to be commenting on it.
Interesting: there are two different behaviors available for bridging account identifiers:
matrix_id_as_jid: false: the Matrix user ID is treated as a JID (e.g. example@matrix.org); if it can't be delivered to an XMPP recipient (if no XMPP service exists at matrix.org, in this example), delivery is then attempted via the Matrix protocol (as @example:matrix.org)
matrix_id_as_jid: true: a separate XMPP component is used as the identifier for all bridged Matrix users (e.g. Matrix users appear as example%matrix.org@matrix.xmpp-server.example).
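The two mappings above can be sketched like this (the component domain matrix.xmpp-server.example is just the placeholder from the example, and the function name is my own):

```python
# Sketch of the two identifier-bridging behaviors described above.

def matrix_id_to_jid(matrix_id: str, as_jid: bool,
                     component: str = "matrix.xmpp-server.example") -> str:
    """Map a Matrix user ID like '@example:matrix.org' to an XMPP address."""
    localpart, _, domain = matrix_id.lstrip("@").partition(":")
    if not as_jid:
        # matrix_id_as_jid: false -- try the ID as a plain JID first;
        # if undeliverable, the bridge falls back to the Matrix protocol.
        return f"{localpart}@{domain}"
    # matrix_id_as_jid: true -- escape the ID onto a dedicated bridge component.
    return f"{localpart}%{domain}@{component}"
```

So the same Matrix user either looks like a native XMPP address or an obviously-bridged one, depending on the setting.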
One thing I assume will be a sizable annoyance: E2EE won't be possible across the bridge unless both sides adopt a mutual E2EE protocol. Thus it might be the first case to see a push for an experimental implementation of MLS (from MIMI), or to otherwise bring some Olm/Megolm compatibility to XMPP, or OMEMO compatibility to Matrix, as I don't know if those can be translated 1:1 back and forth. Or just less-sophisticated things like PGP over instant messaging (like PGP over XMPP, transparently facilitated by plugins).
They're both key ratcheting systems, I just don't know how much Olm/Megolm differs from OMEMO.
But at least it bridges the communication gap to be able to start reaching people on another sizable federated instant messaging network.
I think part of it just comes down to developers of each project having direct communication channels with each other, whether it’s poking each other over email, instant messaging, or direct messages; meanwhile, kicking around topics in microblogging format (where things get buried in the timeline with everything else) sometimes makes it difficult. I do agree that SocialHub, for whatever reason, feels difficult to keep up with.
Essentially with FEPs, it feels almost like something that should be treated like trying to get a bill through Congress. “Hey, I’ve got this new proposal, I’ve talked to X and Y project, and they seem onboard, can I count on your support too? Is there any feedback you have on this idea?”
As for Mastodon: fuck it. Everyone else can continue advancing together, and probably craft things in a “progressive enhancement” manner to augment new things, while Mastodon acts like the “Internet Explorer of the fedi” in its own little aimless corner. Meanwhile the rest of us get to have: custom emote reactions, animation markup, search, post quoting, (now recently) post tipping, and whatever else comes next.
Or with locking down fedi: have some opt-in “strict mode” (that would otherwise ‘break’ federation, if it wasn’t opt-in) that could be advertised in nodeinfo, like in similar nature to HSTS with web browsers regarding strict HTTPS use; or if an actor has keys listed for Object Integrity Proofs, to trust that mechanism only for proving something authentic as originating from that user, and skipping whatever insanity of HTTP Signatures, same-origin, or other mechanisms, etc.
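As a purely hypothetical sketch of how such an opt-in flag might be advertised (strictFederation is an invented field, not part of any existing nodeinfo schema; the software block is a placeholder):

```json
{
  "version": "2.1",
  "software": { "name": "examplefedi", "version": "1.0.0" },
  "protocols": ["activitypub"],
  "metadata": {
    "strictFederation": true
  }
}
```

Peers seeing the flag could then refuse the legacy verification paths for that instance, much as HSTS pins a browser to HTTPS.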
So here’s an example of one of the maliciously-crafted payloads that resulted in a 9.8 severity CVE (CVE-2024-23832) against Mastodon:
{
  "@context": ["https://www.w3.org/ns/activitystreams"],
  "id": "https://mastodon.social/users/Gargron/posts/123456",
  "type": "Note",
  "actor": "https://mastodon.social/users/Gargron",
  "attributedTo": "https://mastodon.social/users/Gargron",
  "content": "Well, this is an extremely concerning vulnerability I should have accounted for.",
  "to": [ "https://www.w3.org/ns/activitystreams#Public" ],
  "cc": [ "https://mastodon.social/users/Gargron/followers" ],
  "published": "2024-01-28T22:00:00Z"
}
I have previously double-checked with one of the Mastodon developers (while CC’ing the Mastodon Security email) to confirm that I’m free to release the details at this scheduled time (Feb 15th 15:00 UTC). According to the current observed metrics on FediDB, >73.6% of Mastodon instances are patched against CVE-2024-23832, as manually tabulated.
For the best case, you can just copy the whole Follow activity into the ‘object’ field; merely referencing the Follow’s activity ID alone would be insufficient.
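For example, an Accept that embeds the full Follow rather than just its ID might look like this (the IDs and actor URLs here are illustrative):

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://example.org/activities/accept/1",
  "type": "Accept",
  "actor": "https://example.org/users/alice",
  "object": {
    "id": "https://remote.example/activities/follow/99",
    "type": "Follow",
    "actor": "https://remote.example/users/bob",
    "object": "https://example.org/users/alice"
  }
}
```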
The Block activity is used to indicate that the posting actor does not want another actor (defined in the object property) to be able to interact with objects posted by the actor posting the Block activity.
In the ActivityPub spec itself, it bears no meaning of access control; it’s purely just to ignore notifications and objects (such as replies) from that actor, as there is no rational way to accomplish limiting access to public posts from specific actors.
Anything sensitive that requires access control should not be posted publicly on social media to begin with. This isn’t a software design issue, it’s a human behavioral issue.
I routinely [privately] warn people about oversharing, such as when I stumble across someone posting a photo that gives away the exact location of where they live, or where they work, and most of the time it’s shrugged off as a non-issue, because they assume they have no tangible threats in the present, but never consider the future.
Then of course, they could always end up in some controversy much later on, over something completely innocuous, and face some tangible threats/risk, but yet put the blame on everyone else for their reckless posting behaviors (“omg doxxing!”). Blocking people they perceive as a threat solves nothing.
The reason I think ‘federated’ has much more practicality is that it’s far easier to conceptualize, to establish responsibility for who pays the bills for running the servers, to locate a resource (if it uses some conventional identifier, like a URL), etc.
Whereas with “truly decentralized mesh, everything is a node, no distinction of client/server”: usually some entity still has to pick up the slack and host high-bandwidth/high-uptime nodes, or seed a sizable portion of the network (if storage focused), or centrally run some ‘jumpstart’ servers (to be a new node’s first peer, to discover the rest of the network to peer with) for the network, entirely as some cash-furnace charity.
The only model in which I think anything ‘truly decentralized’ would be self-sustaining is one involving some autonomous cryptocurrency-based concept, but that adds more cost and overhead (blockchain, consensus, etc), and I assume it’s also difficult to design a system that provably measures resource costs (such as rewarding someone for hosting a resource, providing bandwidth, etc).
It feels like everyone always tiers the concepts strictly (from worst to inherently best): centralized, federated, decentralized mesh; always striving for decentralized mesh as ‘the Holy Grail’, better above all. It’s seldom viewed instead as a set of tradeoffs between federated and decentralized.
Also instead of having to combine decentralization all into one application protocol, sometimes it’s better just being left as an external responsibility of an underlying network; in other words, just take what we already have, and combine them together: host a single-user fedi instance on Tor, I2P, or some other overlay encrypted meshnet, and you get some of the bonuses without having to invent a whole new protocol and whole new suite of cross-platform client/node software (which can take YEARS to iron out).
This is with an implementation of HTTP Signatures in fedi. Just as I was looking into someone asking for help implementing HTTP Signatures, I noticed the library they pull in doesn’t even validate the digest; it only checks whether the signature is valid and nothing else.
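The missing check is small; a minimal sketch of verifying a draft-style Digest header against the request body (the function name is mine, and this only handles the SHA-256 form commonly seen in fedi):

```python
# Sketch: the check that library skipped. A signed request whose body
# doesn't match its Digest header should be rejected, because the
# signature typically covers the Digest header, not the body itself.
import base64
import hashlib
import hmac

def digest_matches(body: bytes, digest_header: str) -> bool:
    """Verify a draft-style 'Digest: SHA-256=<base64>' header against the body."""
    algo, _, value = digest_header.partition("=")
    if algo.strip().lower() != "sha-256":
        return False  # only handling SHA-256 in this sketch
    expected = base64.b64encode(hashlib.sha256(body).digest()).decode()
    # Constant-time comparison, out of habit more than necessity here.
    return hmac.compare_digest(expected, value)
```

Skipping this means an attacker who captures a signed request can replay it with a swapped body, which defeats the point of signing at all.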
This is also why I hate the mentality of “well, surely other people out there are more responsible and educated than me on this domain-specific knowledge, so I’ll just import this random library that seems popular enough”.
I mean, HTTP Signatures wasn’t very hard to implement, get working, and make interoperable with other fedi software, and I’ve read portions of the [draft] spec; it’s just not anything usable as a format of portable data that could be relayed between servers. You also have to check with implementations about which headers they expect to be signed, which is part of the unwritten rules of fedi that you’re not going to find in the HTTP Signatures spec itself.
But if you want to find endless rabbit holes of practically “protocol mills” (if that’s an appropriate moniker?), just dig into some of the distant depths of the Verifiable Credentials suite of standards, or for insane extremes, go through the labyrinth of specs for Solid: https://solidproject.org/TR/
But outside of the topic of Solid, and as mentioned earlier: at least some parts of Verifiable Credentials can be borrowed into fedi, and narrowly implemented for a specific opinionated use, such as object signatures, as I’ve described in: https://arcanican.is/primer/ap-decentralization.php
But yes, there’s just insane degrees and extents in which people just keep dreaming up new standards, and making things unfathomably more complex than needed, likely just to sell consultancy and to pitch more VC startups.