@alex@realcaseyrollins yooo I had the same problem with the IWF!!! I eventually came to this conclusion:
"The IWF takes a holier-than-thou stance when it comes to assisting smaller website operators in the universal mission to prevent the spread of child abuse material, possibly so they can operate an extortion racket or waste UK public funds."
@alex@realcaseyrollins This is by design. If you're not in their special club, they can shut you down at a moment's notice by uploading CP to your server and claiming that you are "hosting" it. Even if your moderators delete it in a minute, that's still enough for an article in Vice (ironic name) or Huffpo about how your service is a "haven for CSAM".
@alex@realcaseyrollins Just forward the response to The Verge so they can follow up on their article with how NCMEC and friends are actively impeding fixing the issue.
@Zerglingman@alex@feld@idiot@realcaseyrollins You just know that if feds will pose as polio vaccine administrators to catch terrorists, and help promote the very vaccine hesitancy that is why polio is still a global killer of children...
They will upload, idk, wrongthink image hashes to CSAM databases, or just any hash, so they can map out "dangerous" communities online, like the tracers they inject people with for CAT scans.
@alex@realcaseyrollins Much like pretty much everything else, we are going to have to make our own, unless they adapt their policies because of Threads or whatever. I think your idea of training an AI on nudity and children separately is the best actionable way forward, since for these orgs to admit the fediverse is a big player now would be to insult their corpo friends.
@Shadowman311@alex@realcaseyrollins Maybe something like a distributed hash table that is shared between pleroma instances. Whenever an instance admin deletes CP, it is hashed and the hash is added to the table. New uploads are checked with the table and flagged before becoming visible on the timeline. Of course, there would also need to be some trust mechanism to prevent bad actors from abusing that for censorship purposes...
@caekislove@Shadowman311@alex@realcaseyrollins Good idea, but I really do think an actual AI filter model should be getting trained on it, though. It'd be nice to eventually be free of this bullshit, instead of swatting them like flies forever.