I’d point more to its reliance on MDB2 for all the database interactions. MDB2 doesn’t work in recent versions of PHP, which broke backwards compatibility in places (supposedly for the sake of better performance?), and that library hasn’t been updated in over a decade: https://pear.php.net/package/MDB2
The issue is that moving off of MDB2 may entail reworking anything DB-related, which could mean the whole project, or else fixing MDB2 itself to run on ‘modern’ PHP. I assume everyone would rather just write something entirely new, but pretty much all of those efforts miss the point by skipping things like: easy moddability via plugins, being able to run on constrained hosting environments, being able to just extract an archive and use a web installer rather than expecting everyone to be a Linux neckbeard with build tools, etc.
Only if the switch is configured to accept tagged packets of that VLAN ID on that specific port, otherwise it gets dropped.
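To make that tag check concrete, here’s a minimal stdlib-only sketch of the same decision a switch port makes: read the TPID at byte offset 12 of the Ethernet frame, and if it’s 0x8100 (802.1Q), extract the VLAN ID from the low 12 bits of the TCI field. A real switch then consults its per-port allowed-VLAN list and drops mismatches; that policy table is omitted here.

```python
import struct

def parse_vlan_id(frame: bytes):
    """Extract the 802.1Q VLAN ID from a raw Ethernet frame, if tagged.

    Returns None for untagged frames. A trunk port effectively does the
    same check: TPID at offset 12 must be 0x8100, and the VLAN ID is the
    low 12 bits of the following TCI field.
    """
    if len(frame) < 18:
        return None
    (tpid,) = struct.unpack_from("!H", frame, 12)
    if tpid != 0x8100:          # not an 802.1Q-tagged frame
        return None
    (tci,) = struct.unpack_from("!H", frame, 14)
    return tci & 0x0FFF         # VLAN ID lives in the low 12 bits of the TCI

# 6B dst MAC + 6B src MAC + TPID 0x8100 + TCI (VLAN 42) + ethertype + payload
tagged = b"\xff" * 6 + b"\xaa" * 6 + b"\x81\x00" + b"\x00\x2a" + b"\x08\x00" + b"\x00" * 4
untagged = b"\xff" * 6 + b"\xaa" * 6 + b"\x08\x00" + b"\x00" * 6

print(parse_vlan_id(tagged))    # 42
print(parse_vlan_id(untagged))  # None
```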
Nonetheless, I'm really really curious if there's anyone in recent years bothering to pentest network switch firmware, because I wouldn't be surprised if it was a total blindspot, as many things are.
There’s the promise that Bluesky will federate eventually, but I haven’t come across much indication of anything beyond whitelist-only federation yet. They were also boasting about having an ‘open protocol’, yet there were significant disparities between their specification and what they had in production (down to a variety of slight variations or typos in attribute names, etc.).
Are you sure you’re not mistaking the DNS aliases for federation? Because that’s no different from someone thinking they “run a server” because they created a Discord guild (per Discord’s misleading marketing).
There’s also plenty of excuse-making and possible ‘misinfo’ on their side regarding ActivityPub, used to justify making a whole separate protocol where they initially hold federation exclusively to themselves for a period, so that they’ll always have a stranglehold on ‘their’ ATProto network (by user count and site age), and where development and direction on that network will be entirely dependent on whether the flagship instance ever bothers to support any third-party extensions to the protocol. It’s just as “decentralized” as the LBRY network behind Odysee.
Also, when you devise a protocol, you don’t just have one group make it and then leave it to everyone else to adopt; you have two or more separate groups write their own implementations of it, to test whether the standard is even sane, rather than figuring out and testing interoperability later. That’s a prerequisite at most standards bodies for a reason.
Regarding their remarks on ActivityPub: it’s operationally a meritocratic living standard; it’s not solely about what’s enshrined by the W3C, nor about how one piece of software (Mastodon) implements it, as their FAQ seems to imply. Also, the majority of implementations just treat it as plain JSON rather than providing full, true JSON-LD support. And there’s no standard nor FEP that mandates the double-at representation; that’s primarily a Mastodonism (more specifically, the aspect of mandatory WebFinger resolution).
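For context on what that WebFinger resolution entails, here’s a minimal sketch of the lookup behind a `user@host` handle: the part after the (second) ‘@’ names the host to query, and the whole handle becomes an `acct:` resource per RFC 7033. The handle and host here are made up for illustration.

```python
from urllib.parse import urlencode

def webfinger_url(handle: str) -> str:
    """Build the WebFinger query URL for a fediverse-style 'user@host' handle.

    Per RFC 7033, the resource is the acct: URI and the endpoint is the
    host's /.well-known/webfinger path. A server resolving a mention would
    GET this URL and read the ActivityPub actor URL out of the response.
    """
    user, _, host = handle.lstrip("@").partition("@")
    query = urlencode({"resource": f"acct:{user}@{host}"})
    return f"https://{host}/.well-known/webfinger?{query}"

print(webfinger_url("@alice@example.social"))
# https://example.social/.well-known/webfinger?resource=acct%3Aalice%40example.social
```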
There’s also the remark that identity and data portability aren’t retrofittable to ActivityPub, yet there are discussions and efforts, with proposed FEPs, to establish exactly that. I reasonably believe we’ll have ID/data portability in some ActivityPub implementations (built on much more mature codebases) before the day Bluesky has fully open federation. Many of the complaints about ActivityPub are being resolved as a larger meritocratic group effort (proven with code and implementations), but users evidently want to throw everything out entirely, just to gravitate to the newest shiny venture-capital-funded start-up, learning absolutely nothing from their exodus from Twitter.
That reads a lot like the “Crypto/Web3 is going great” channels that are always circlejerking on sometimes incorrect information, just to have something to mock and reaffirm biases.
Software can’t fix people, and software can’t fix emotionally unstable admins; trying to consolidate everyone on one service (not implying that Bluesky is ‘forever a silo’ or anything) isn’t a solution either. The problems that inhibit federated protocols and networks aren’t technical; the decay of moral standards and decorum is the greater issue, because without that addressed you can’t have reliable federated networks. Even the internet itself has been fracturing and becoming unreliable over social/political antics in recent years, like people pulling stunts at the continental-backbone level because they don’t like people being able to access content on a particular website. If you have protocol-level suggestions, I’d be glad to hear your ideas.
You can’t have a mega-platform while simultaneously gatekeeping to keep only “our guys” on it, especially with people ditching fedi just to escape “the Nazis”, when they’re only going to keep platform-hopping at the next trigger they find.
There’s been a handful of dissertations I’ve seen from others positioning themselves as some authority on the subject (regarding ActivityPub) whose technical remarks I strongly disagree with, and which I want to get around to writing a response to at some point on a personal website.
As for things that are being actively worked on and developed:
Seeing all the responses to a post, versus the present ‘split-brain’ post discoverability: there’s an architectural reason that, even if you have the server of the parent post track all the responses, a remote server couldn’t just pull all of them. Even if the parent server lists all the known responses to a post (local and remote), there’s no proof of authenticity for the remote posts, so every single remote reply would have to be re-queried. Think of hellthreads, and a sudden flood of 500+ queries from just one server querying a thread; that’s obviously a bad idea.
Thus, the solution is establishing a framework of ActivityPub object signing: portable inline object signatures. As long as the querying server has a cached copy of the actors in the thread, it can verify the authenticity of their posts listed in a ‘replies’ collection on another server, without having to directly query every single reply from its respective originating server, provided the whole object is embedded in the ‘replies’ collection (not just the object IDs). There are already extensions being experimented with for object signing: https://codeberg.org/fediverse/fep/src/branch/main/fep/8b32/fep-8b32.md
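A rough sketch of that verification flow, with loud caveats: FEP-8b32 integrity proofs sign a canonicalized form of the object with the `proof` member removed, but the canonicalization here is plain sorted-key JSON (a stand-in for the real JCS/RDF canonicalization), and the actual signature check is delegated to a caller-supplied callback, since the real thing involves eddsa cryptography against the cached actor’s key. The property names follow the FEP; everything else is simplified.

```python
import hashlib
import json

def verify_inline_object(obj: dict, verify_sig) -> bool:
    """Verify an embedded reply without refetching it from its origin server.

    Simplified FEP-8b32-style flow: strip 'proof', canonicalize (here:
    naive sorted-key JSON, NOT the spec's canonicalization), hash, then
    hand off to verify_sig(actor_id, digest, signature), which is expected
    to check the digest against the cached actor's public key.
    """
    proof = obj.get("proof")
    if not proof:
        return False  # no inline proof: must fall back to refetching
    unsigned = {k: v for k, v in obj.items() if k != "proof"}
    canonical = json.dumps(unsigned, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode()).digest()
    return verify_sig(obj.get("attributedTo"), digest, proof.get("proofValue"))
```

With signatures inline like this, a server walking a ‘replies’ collection calls `verify_inline_object()` on each embedded object instead of issuing one fetch per reply back to its origin server, which is exactly what defuses the hellthread query flood.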
Then with object signing, plus further extensions to cryptographically sign actors and authenticate a key as representing an identity (e.g. FEP-c390), you can start to build a framework for portable objects and identities, such as the recently proposed experiments of: https://codeberg.org/silverpill/feps/src/branch/main/ef61/fep-ef61.md
It’s a patient process of formulating ideas and solutions to make something that works, rather than dumping all of it as a lost cause and swapping over to some replacement whose deep implementation technicals most people don’t even know yet (the opposite of “better the devil you know than the devil you don’t”, or perhaps more directly “the grass is always greener on the other side”). Some of it takes work and effort, but it’s absurd to just drop everything once the shortcomings are apparent: if there are shortcomings, you FIX them; you don’t just shrug them off and shuffle over to the next marketed gimmick.
Now I feel the curious itch to find out if someone’s made a bash-based UEFI shell as a standalone UEFI binary, rather than the standard DOS-imitative shell. Or hell, whether it’s possible to make a basic shim atop the standard UEFI ABI to ship glibc and other POSIX stuff and whatever else is needed for basic shell applications to run (for things that don’t expect the Linux kernel to be present and running).
Just download everything worthwhile you find, because YouTube (and the internet at large) is now more amnesiac than some dementia-crippled geriatric. Even a few years ago I found that at least a quarter of my bookmarks would become dead links in just 2-3 years’ time.
Additionally, if we’re to use the did:key scheme (or whichever), but end up treating it like a sort-of URL, it might actually be worth having a different scheme ID for referencing objects ‘grouped under’ that DID, because otherwise it’s stretching that scheme outside its original use (an identifier meant for representing a public key, now also acting as a sort-of URL too).
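For illustration, the FEP-ef61 draft sketches roughly this kind of separate scheme, with the DID as the authority component and a path addressing the object grouped under it. A rough parse of such an identifier might look like the following; the `ap://` scheme name follows that draft, and the DID value is a made-up placeholder.

```python
def parse_ap_url(url: str):
    """Split a draft-style 'ap://<did>/<path>' object URL into its parts.

    Shape per the FEP-ef61 draft: a DID anchors the identity, the path
    addresses an object 'grouped under' it. The exact scheme name should
    be treated as an assumption, not settled spec.
    """
    if not url.startswith("ap://"):
        raise ValueError("not an ap:// URL")
    rest = url[len("ap://"):]
    did, _, path = rest.partition("/")
    if not did.startswith("did:"):
        raise ValueError("authority component must be a DID")
    return did, "/" + path

# Hypothetical did:key value and object path:
did, path = parse_ap_url("ap://did:key:z6MkexampleKey/objects/123")
print(did)   # did:key:z6MkexampleKey
print(path)  # /objects/123
```

This keeps did:key itself doing only what it was designed for (naming a public key), while the wrapper scheme carries the URL-ish addressing duties.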
Perhaps a property that contains an associative array serving as a substitution table for any referenced object IDs or their children (if I’m not making a grossly bloated proposition)?
Or sorry, I should have read first: you did explain server-independent IDs in FEP-ae97 (I may have overlooked it in the past). I guess I’m cautious about using DIDs directly as JSON-LD IDs, out of concern for breaking compatibility. I think a separate property could be used instead, despite adding more cruft (although I don’t know a comparable solution for endpoints in an actor object). Alternatively, there’d have to be dual representations of objects (DID and legacy), which would inherently become its own separate fediverse entirely.
For server-independent IDs, would those be stored under another property name besides id, or would something fully replace the id, or be appended to it and parsed?
If a server is receiving an activity, then it should inherently have some implied discoverability of where the activity is stored, if it wasn’t sent through a relay. I don’t know if there’s some supplemental identifier that could be associated with an instance while being decoupled from DNS. Maybe a public-key-based identifier for an ‘activity/actor storage server’?
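One way such a DNS-decoupled identifier could be derived, purely as a hypothetical construction: hash the storage server’s public key and base32-encode the result, much like Tor v3 onion addresses are derived from a service’s ed25519 public key. The `apsrv-` prefix, the truncation length, and the input being raw key bytes are all made up for illustration.

```python
import base64
import hashlib

def server_id_from_pubkey(pubkey_bytes: bytes) -> str:
    """Derive a stable, DNS-decoupled ID for an 'activity/actor storage server'.

    Hypothetical scheme: SHA-256 the server's public key, truncate to 20
    bytes, base32-encode. Anyone holding the key can prove ownership of
    the ID; no registry or DNS involvement required.
    """
    digest = hashlib.sha256(pubkey_bytes).digest()[:20]
    return "apsrv-" + base64.b32encode(digest).decode().lower()

print(server_id_from_pubkey(b"\x00" * 32))  # deterministic for a given key
```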
For encrypting private data: perhaps start with a simple PGP-ish model, where the payload is encrypted directly to the actor’s keypair. People may whine that it doesn’t tick every checkbox of their “demands” for privacy, but it would be trivial to implement, and some later “true E2EE with full forward secrecy” solution can come as an optional upgrade. There’d perhaps need to be a new object type (or something borrowed from the vocabulary of other JSON-LD-based crypto specs) such as ‘EncryptedActivity’, maybe with an optional type hint of what the payload activity/object type is (if that isn’t itself somehow sensitive).
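To make the envelope idea concrete: a sketch of wrapping an already-encrypted payload for normal AP delivery. The ‘EncryptedActivity’ type, the `encryptedPayload` and `payloadTypeHint` property names are the hypotheticals proposed above, not any existing vocabulary; the actual encryption to the recipient actor’s keypair would happen elsewhere, and this just carries the opaque result.

```python
import base64
import os

def make_encrypted_activity(sender: str, recipient: str, ciphertext: bytes,
                            payload_type_hint=None) -> dict:
    """Wrap an opaque encrypted payload in a hypothetical 'EncryptedActivity'.

    All property names beyond the core AS ones are assumptions from the
    discussion above. The type hint is optional and should be omitted
    whenever the payload type is itself sensitive.
    """
    envelope = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "EncryptedActivity",          # hypothetical type
        "actor": sender,
        "to": [recipient],
        "encryptedPayload": base64.b64encode(ciphertext).decode(),
    }
    if payload_type_hint:
        envelope["payloadTypeHint"] = payload_type_hint
    return envelope

env = make_encrypted_activity(
    "https://a.example/users/alice",
    "https://b.example/users/bob",
    os.urandom(32),                            # stand-in for real ciphertext
    payload_type_hint="Note",
)
print(env["type"])  # EncryptedActivity
```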
Ultimately, I do strongly believe FEP-c390 and FEP-ae97 (or some light variation of them) are the inherent future for ActivityPub, and I really hope to see the current hack of HTTP Signatures gradually phased out soon, especially the current one-key-per-actor representation, where the key is just an entirely server-held, always-unencrypted private key in a database. At minimum, there should be a shift towards a ‘server key’ for HTTP Signature-based delivery (something that can actually be locked down), versus the lie of a per-actor private key that the actor doesn’t even control.
Here are some predictions I have for a trivially deployable fediverse software I’ll soon (<2 months?) be releasing, of what’ll happen later down the road after it’s released:
Instance admins will stereotype it as being “nothing more than a tool for abuse”
There’ll be instances that will try to auto-block anything that runs it
I’ll be shit on for “wrecking the fediverse” by letting people be more sovereign and independent from mentally-ill/abusive admins of mega-instances, and for rendering fediblock more useless. More instances might go whitelist-only.
Instance admins will try to dissuade users from emigrating to it, preaching that “but you need to be on my instance, you won’t ‘have wider visibility’ by being on the alternative, besides: you’ll most likely be auto-blocked” or “you’ll be ‘vulnerable’ to all the Nazis out there, you NEED my ‘protection’”.
None of this would be by design nor intentional; it’d just be the mere nature of something that’s easier to deploy on standard hosting, compared to many of the over-architected/arcane predecessors.
Federated platforms are far easier to build, develop for, divvy up moderation responsibilities across, and finance than “truly decentralized” platforms, which I believe pull in far more risk (how much fun is it running a Tor exit node?), are harder to fund (where would Tor etc. be without large universities and charities propping them up?), have more content-moderation issues (gl;hf dealing with CSAM if you build a ‘completely uncensorable, decentralized’ platform), and so on. At least utility cryptos (e.g. Namecoin, as mentioned) help solve the finance/commerce issue for some ‘truly decentralized’ ideas.
I don’t think servers are so much the problem. The problem is that we now have such an adversarial internet backbone and core internet infrastructure, actively and intentionally trying to prevent routing around censorship.
Despite my aforementioned concerns with the sustainability of the Tor network, I believe hosting services on Tor (or perhaps I2P, or whatever other overlay networks come about) instead of clearnet is the fairer option.
Speaking in context of onion services exclusively (and not about interop with clearnet):
Domain seizures on clearnet? So what, nobody can ‘seize’ your onion address on Tor, unless they legitimately have your private key.
ISP shuts you down? So what, move the server elsewhere and come back online, with no addressing or configuration changes at all.
Stuck behind CGNAT and can’t self-host? So what, connect to the Tor network, and you can start hosting services on Tor/etc regardless of what your network topology is. Now everyone can self-host and spread out more.
Afraid of rogue CAs, Cloudflare, and other TLS MitM? So what, the Tor network provides encrypted tunneling that only terminates at the holder of the private key for the respective onion address, far simpler than involving a Certificate Authority or delegated trust system.
I’m sure a response could be “well, we tried, but nobody really bothers using an onion counterpart fedi server”, and honestly that’s because a lot of fedi server software legitimately sucks on high-latency networks, especially anything that’s heavily client-side rendered. All we need is some simple fedi server implementations that lean more on server-side rendering, and then the experience is far less miserable than waiting a literal minute or so for a Misskey profile to load (for example).
Simple enough. At least Collections/OrderedCollections can have a description (such as the summary property, as shown in an example in the ActivityStreams vocab spec). Hopefully I can make use of this for separating “gallery upload” posts from microblogging posts. Although it probably doesn’t have much utility, since I don’t think any implementations let you follow a collection instead of an actor (or specify which collection(s) you want to subscribe to in a Follow request? That would be an interesting extension). I guess it’s still stuck needing a ‘virtual’ actor anyway for those sorts of ‘substreams’ (like PeerTube does, if I remember correctly).
That’s a bizarre coincidence: I was just noticing the streams property in the spec a few hours earlier while reading it over, realizing it’s something I was looking for previously. I assume it’s just down to someone making semantically proper use of it.
Not sure if it’s intended to just be a list of collection URLs, embedded Collection objects, or maybe PropertyType objects?
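For reference, the ActivityPub spec describes `streams` on an actor only as “a list of supplementary Collections which may be of interest”, so bare collection URLs seem like the minimal reading, while embedded Collection objects with a `summary` would be the more descriptive one. A sketch of an actor using both forms (the actor and collection URLs are made up):

```python
# Hypothetical actor showing two plausible 'streams' shapes: an embedded
# Collection object (with a human-readable summary) and a bare URL.
actor = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Person",
    "id": "https://example.social/users/alice",
    "streams": [
        {
            "type": "OrderedCollection",
            "id": "https://example.social/users/alice/gallery",
            "summary": "Gallery uploads, separated from microblog posts",
        },
        "https://example.social/users/alice/longform",  # bare URL form
    ],
}
print(actor["streams"][0]["summary"])
```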
Oh neat, I think this makes using alternate I2P router implementations more practical as a result as well, given there's so much bundled into the Java implementation that makes it hard to replace.
Well, there's LSB and the XDG standards just to have basic intercompatibility inside the Linux world, so that you can jump between GNOME and KDE or whatever and not have everything fall apart. Standards are fine as long as they aren't just to drive monoculture, or to overengineer something just to sell consultancy.
Ultimately I don't care about 'everyone' moving to Linux; I just don't want one company to be able to get away with sabotaging so much of an industry while people do the usual "well, it's what 'the real world' uses" lip service to double down on their mistake of defending Windows/Adobe/whatever crapware they trap their livelihoods in. But honestly, the stupider it gets, and the more Microsoft sabotages things, the more it's a fate well deserved for its users.
If someone makes a better option than Linux, even if more esoteric, I'd probably be fine with using it myself (as long as it's some form of free or open source software). I've juggled BSDs for some use-cases. I'm just tired of proprietary software that's actively fucking over its users, and that often gets defended by people unwilling to learn or adapt, or by people just willingly choosing their misery.
DRM and anticheat have been perhaps the leading cause of all these problems, even on Windows itself, the targeted platform. Plenty of the older games I have from younger years were a task just to get working on Windows again, such as Sim Theme Park with its SafeDisc DRM, where you can’t even get past the first loading screen on Windows without several patches or third-party edits just to make it launchable.
I agree that Wine/Proton shouldn’t be the end-all solution; meanwhile, as Proton came about, it seems like developers just stopped bothering to care about native builds, and too much of the community has grown complacent with Proton. But there’s also the flipside: ironically, older Windows APIs are long-term supported (partly just because they’re a clearly distinguished target, a la Win7 or so), and thus it ends up being ‘easier’ to support something Windows-only, unless you practically ship a Docker-ish container of the userspace libraries your game needs, which Steam on Linux honestly somewhat is: each of the Steam Linux Runtimes (scout, soldier, sniper, etc.) is just a specific version-frozen set of distro userspace libraries, which a developer can target with assurance that it’s going to be long-term supported.
Compare with plenty of the early Humble Indie Bundle games with DRM-free Linux native builds, where many are incredibly difficult to natively install now (including having to pull in all the 32-bit libraries, and more), or can’t be installed at all (because of dependency issues on a no-longer-provided version of a library). Thus Linux gaming is primarily down to Valve; there could be similar community efforts around an alike container environment (with the same distro targets as the Steam Linux Runtimes) becoming the norm for shipping games on Linux, but otherwise I don’t know what else there is for options. Flatpak can already do some or all of that; there’s just no specific set of options decreed as ‘the standard’, so otherwise you’ll end up with probably like 13 different Linux userspace environments installed for like 20 games or something.