Fediverse servers popping up on #I2P @i2p or #Tor @torproject corporate-free overlay networks when? Are there any technical barriers to this? (Friction in getting people to start adopting "dark nets" for regular use notwithstanding) #Mastodon #Fediverse And if you want to dig into it further, how immune to regulation would Fedi servers be on the open plains of the free web?
@lispi314 @roboneko @i2p @torproject @z3r0fox Relays are just special accounts that silently repost all notes, and since repost activities only reference the URI of the reposted object, they're useless unless the receiving instance can access the overlay network directly. So relays aren't the issue; the only thing that matters is the engine being able to connect to said network by any means, e.g. through a proxy for outgoing requests.
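to make that concrete, here's a toy sketch (hand-written JSON, made-up domains; `referenced_object_uri` is my own hypothetical helper, not from any real implementation) of why a relayed Announce is useless without reachability:

```python
import json

# Hypothetical repost ("Announce") activity as a relay would emit it.
# Note that "object" is just a URI string, not the embedded note:
# whoever receives this still has to fetch the note from its origin,
# which only works if they can reach that origin's network.
announce = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Announce",
    "actor": "https://relay.example/actor",
    "object": "https://origin.example/notes/123",
}

def referenced_object_uri(activity: dict) -> str:
    """Return the URI the receiver must dereference to get the note."""
    obj = activity["object"]
    # some activities embed the object; reposts typically just link it
    return obj if isinstance(obj, str) else obj["id"]

print(referenced_object_uri(announce))  # the origin URI, not the note body
```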
@lispi314 @z3r0fox @i2p @torproject depending on what you want, "it's complicated". @mint did some stuff with it. quite a few clearnet instances are available for the end user to access via mixnet, some are able to actually federate via multiple different overlay networks in addition to operating on the clearnet, and a (very) few are available only via overlay network
> how immune to regulation would Fedi servers be on the open plains of the free web?
how immune to regulation are darknet markets? depends on how much of a target they make themselves and how good the operator's opsec is :puniko_shrug:
... I don't even know if either of you will receive this post :nep_xd:
@mint @roboneko @i2p @torproject @z3r0fox Ah, I'd been under the impression some relays directly shared Activities in a manner that required active support from the instance software, with the goal of full eventual consistency between relay peers.
@lispi314 @roboneko @i2p @torproject @z3r0fox All current ActivityPub implementations are domain-bound. Outgoing requests use HTTP signatures that are verified by fetching the actor info, which includes the public key used to check the signature. With some code modification it might be possible for instances to send the full activity, signature included, to a relay, and for that relay to pass the signed activity on unmodified to the receiving instance; but then again, you still need direct access to the sending instance to fetch the keys of newly federated accounts.
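roughly, the verification flow looks like this (a schematic stdlib-only sketch: `fetch_json` stands in for an HTTP client, the RSA-SHA256 check is reduced to a digest comparison, and all names/domains are made up):

```python
import hashlib

# Pretend key store: what you'd get by dereferencing the actor URL.
# A relay can forward the signed activity unmodified, but the receiver
# still has to reach sender.example to fetch this document.
ACTOR_DOCS = {
    "https://sender.example/actor": {
        "id": "https://sender.example/actor",
        "publicKey": {
            "id": "https://sender.example/actor#main-key",
            # stand-in for PEM-encoded RSA key material
            "keyDigest": hashlib.sha256(b"sender-key").hexdigest(),
        },
    }
}

def fetch_json(url: str) -> dict:
    """Stand-in for an HTTP GET against the sending instance."""
    return ACTOR_DOCS[url.split("#")[0]]

def verify(activity: dict) -> bool:
    # The signature names a keyId on the *sending* instance, so
    # verification requires direct access to that instance; that's
    # exactly the constraint being discussed.
    key_doc = fetch_json(activity["keyId"])
    return key_doc["publicKey"]["keyDigest"] == activity["sigDigest"]

activity = {
    "keyId": "https://sender.example/actor#main-key",
    "sigDigest": hashlib.sha256(b"sender-key").hexdigest(),
}
print(verify(activity))  # True: key fetched from origin matches
```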
That starts to be quite a few. It's weird to make a federation protocol in this day & age and not consider peering difficulties and the inability to communicate directly (fuck, #Usenet does it better via #UUCP & #NNCP).
China isn't the only regime with a hard-on for censorship and utter disregard for the sanctity of networks after all. That should have been part of the design considerations.
> but then again, you have to have direct access to sending instance to fetch the keys of newly federated accounts
only if you accept domains as the fundamental underlying identity primitive :smug10: (and yes I realize that the way current UX presents things leads to impersonation attacks if domains can be spoofed)
@lispi314 @mint @i2p @torproject @z3r0fox I think it's not actually an AP deficiency so much as a "current implementations" deficiency. AP-the-protocol just uses these opaque URLs, which are perfectly compatible with any arbitrary DNS resolution scheme (as long as it's context-free) and any overlay network scheme (just configure routing and proxies at the system level as appropriate). afaik AP itself just cares about exchanging JSON via HTTP (is the HTTP part even mandatory?)
current implementations assume the domain as the identity primitive and use the key as a verification mechanism. if you assume the key as the identity primitive and the domain as ephemeral then yeah you have to change a few implementation details but the protocol itself is fine as is
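e.g. something like this (a purely hypothetical data model, nothing implements it today):

```python
import hashlib

def key_identity(public_key_pem: bytes) -> str:
    """Stable identity: a fingerprint of the actor's public key."""
    return hashlib.sha256(public_key_pem).hexdigest()[:16]

# Stand-in key material; a real actor would have a PEM-encoded RSA key.
key = b"-----BEGIN PUBLIC KEY----- ...alice..."

# The same actor can then move between domains (or sit on several at
# once) while keeping one identity; urls become ephemeral locations:
locations = {
    key_identity(key): [
        "https://old.example/users/alice",  # defunct domain
        "https://new.example/users/alice",  # current domain
    ]
}

# and a spoofed domain serving a *different* key is detectable, since
# the fingerprint won't match the known identity:
assert key_identity(b"attacker key") != key_identity(key)
```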
@lispi314 @i2p @torproject @z3r0fox @mint is it tho? doesn't AP effectively just say "the object can currently (for the time being) be retrieved from this url and is encoded in the following manner"? where does the standard specify that a given object can only reside at a single url, must not move between urls, that domains are identity, or anything like that? (I might be wrong about this tho so if I am I hope someone will point it out)
afaict current implementations just make a bunch of assumptions about this stuff
@lispi314 @i2p @torproject @z3r0fox @mint huh. I initially read that as saying the identifier could only point to a single object, but I guess in light of point 2, is it actually intended to mean that a given object is only ever permitted to have a single identifier (and thus a single url)?
regardless, both interpretations run counter to reality. "forever" isn't a viable timeframe. for example bae.st used to be located at neckbeard.xyz before the domain got yanked. in theory someone could set up a fedi instance there in the future and intentionally reuse old urls
similarly, all objects that used to be located at neckbeard.xyz are now located at bae.st so they all have (at least) 2 urls that were valid *at some point*
so I'd argue this is both an ambiguity and a clear defect in the current spec, one that generally isn't possible to satisfy and is being actively violated every time an instance goes offline
> It pretty heavily relies on the synchronous HTTP interaction for most things.
I'm not saying you could switch up the domain every few milliseconds. obviously you need to be able to retrieve objects from their urls within some reasonable timeframe of becoming aware of them. I'm just saying that hopping between domains once per week, or simultaneously residing at multiple domains, etc., doesn't generally seem to violate the AP spec (outside of the specific wording around object identifiers described above. thanks for pointing that out btw)
> nothing exists to signal such moves in the standard
true. seems like a deficiency. although if the representation presented by the server is stable (no such requirement afaik) then content hashing is an easy workaround
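i.e. something like this (a sketch only; real use would want proper JSON canonicalization such as JCS, which the AP spec doesn't mandate):

```python
import hashlib
import json

def content_hash(obj: dict) -> str:
    # sort_keys + compact separators give a canonical-ish serialization,
    # so the hash doesn't depend on key order or whitespace.
    canonical = json.dumps(obj, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# The "same" note served from two urls (key order differs) still
# hashes identically, so it can be recognized across domain moves:
a = {"type": "Note", "content": "hello"}
b = {"content": "hello", "type": "Note"}
assert content_hash(a) == content_hash(b)
```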