@mint@theorytoe@locagainstwall@lina stat.ripe.net has that (and generally a bunch of convenient tools in one place, including BGP updates), but it too doesn't have any whois data prior to 2023-07-09 here.
> few post before it was days after.
I'm talking about these portions [picrelated].
> either way why would you assume someone who hacked someone would say the truth about that person.
It is generally more effective to expose an actual dirty secret than to fabricate one and risk being caught, losing all retrospective credibility. Since credibility attribution is rather strong here (not exactly anonymous), we can at least take into account the level of bluff required when making a judgement.
@DarkMahesvara@lina@mint@Azur_Fenix@Moto_Chagatai Such instances are usually displayed in iknowwhatyoudownload as matching first and last seen dates, which isn't the case here, so there is stronger evidence of hash attribution to the IP, unless the exact same IP was reused by pollution bots within an hour.
@p@Terry@colonelj@sjw@colonelj@lanodan@mint@Moon 202 is more of "okay then, now go away". It's exactly what happens in Pleroma: we enqueue it for incoming federation, but there is no guarantee of it causing any intended side effects (it may be dropped silently by MRF, for example), and since the entity is identical, no storage changes are expected.
409, per the same spec, is more of "hey, you tried to take a nickname that's already in use, fix that and retry right after". In case of an AP ID conflict in server-to-server communication, there is nothing the server can really change, and per the RFC:
> This code is only allowed in situations where it is expected that the user might be able to resolve the conflict and resubmit the request
@p@Terry@colonelj@sjw@colonelj@alex@lanodan@mint@Moon It can also be debated whether a 409 error is applicable when an identical entity is re-posted, since nothing short of submitting a wholly different entity would avoid the error recurring. 202 sounds more fitting.
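For concreteness, a minimal sketch of what that policy could look like in an inbox handler (purely illustrative, not Pleroma's actual code; Flask and the in-memory set are stand-in assumptions):

```python
from flask import Flask, request  # assumption: Flask, standing in for a real server

app = Flask(__name__)
seen_activity_ids = set()  # stand-in for real persistence

@app.route("/inbox", methods=["POST"])
def inbox():
    activity = request.get_json(force=True)
    ap_id = activity.get("id")
    if ap_id in seen_activity_ids:
        # Identical entity re-posted: there is nothing the sender could change
        # to "resolve the conflict and resubmit" (the 409 wording), so answer
        # 202: accepted, no storage change, no guaranteed side effects.
        return "", 202
    seen_activity_ids.add(ap_id)
    # Enqueue for incoming federation; MRF may still drop it silently later.
    return "", 202
```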
@Moon@Terry@colonelj@sjw@feld@colonelj@lanodan@mint@p If you want to be compliant with JSON-LD, even if you don't use it, you must support field prefixes, so you must also parse the JSON-LD specs themselves to know which fields you may ignore. If you want to be a dick, you are technically allowed by the spec to federate something like the payload below.
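(A guessed illustration, not from the original post: with a custom prefix mapped onto the ActivityStreams namespace in @context, standard fields can legally arrive under unfamiliar names, and a compliant consumer still has to expand them.)

```python
# Hypothetical payload, made up for illustration only: "zzz" is declared as a
# prefix for the ActivityStreams namespace, so "zzz:actor" resolves to the
# same property IRI as plain "actor" after JSON-LD expansion.
hostile_but_valid = {
    "@context": [
        "https://www.w3.org/ns/activitystreams",
        {"zzz": "https://www.w3.org/ns/activitystreams#"},
    ],
    "id": "https://example.com/activities/1",
    "type": "Create",
    "zzz:actor": "https://example.com/users/alice",
    "zzz:object": {
        "type": "Note",
        "zzz:content": "resolves to plain old 'content' after expansion",
    },
}
```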
Also, the whole AP spec type dynamism makes it rather unpleasant to work with: every field may technically be an IRI, an object, a list of IRIs, a list of objects, or not be there at all, and then there are JSON-LD shenanigans with extra fields and field schema prefixes on top of that. A fully *compliant* implementation would be woefully inefficient, having to deal with all that extra bulk.
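A sketch of what coping with that dynamism tends to look like in practice (names made up for illustration, not from any particular implementation):

```python
from typing import Any

def as_iri_list(value: Any) -> list[str]:
    """Normalize an AP field that may be an IRI, an object, a list of either, or absent."""
    if value is None:                   # field not present at all
        return []
    if isinstance(value, str):          # bare IRI
        return [value]
    if isinstance(value, dict):         # embedded object, hopefully with an "id"
        return [value["id"]] if "id" in value else []
    if isinstance(value, list):         # list of IRIs and/or objects, possibly mixed
        return [iri for item in value for iri in as_iri_list(item)]
    return []

activity = {
    "to": "https://example.com/users/a",
    "cc": [{"id": "https://example.com/users/b"}, "https://example.com/users/c"],
}
print(as_iri_list(activity.get("to")))   # ['https://example.com/users/a']
print(as_iri_list(activity.get("cc")))   # ['.../users/b', '.../users/c']
print(as_iri_list(activity.get("bto")))  # [] -- absent field
```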
@mint@Terry@colonelj@sjw@feld@colonelj@lanodan@Moon@p Lists of recipients are all optional; I suppose those were an afterthought, added once they realized that pushing to a separate REST endpoint for every recipient isn't viable.
@menherahair@kaia It isn't so much a question of good or bad; ceramic/ceramogranite is always hard and therefore brittle. Tiles have nearly no elasticity, so the only way for kinetic energy to dissipate is by introducing a fracture. ~5 kg dropped from 1.5-2 meters onto a spherical point of contact may crack or chip virtually any tile of the ~1 cm thickness standard for 60x60 cm, if the underlying surface is also hard and rigid. Spares certainly do help, but if you've ever replaced an isolated tile, you know the fun of clearing out the remains of the old one and then trying to level/align fresh adhesive with the already-set adhesive under adjacent tiles. Laminated wooden tiles are much more serviceable, and come in heavy-duty certified variants too.
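Rough back-of-the-envelope for that drop (simple potential energy, ignoring how the impact is actually absorbed; 1.75 m is just the midpoint of the quoted range):

$$E = mgh \approx 5\,\mathrm{kg} \times 9.8\,\mathrm{m/s^2} \times 1.75\,\mathrm{m} \approx 86\,\mathrm{J}$$

Nearly all of that has to be dissipated at a near-point contact when both the object and the backing are rigid, which is why a fracture is the usual outcome.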
> please wear something + you are from Russia
I'll leave that to burger golems.
@menherahair@kaia Until you drop something heavy on one and it cracks; then you have to hope matching tiles are still produced and the color doesn't vary too much between series. Also, outside of office buildings, a floor heating system may be required, since tiles otherwise get too cold to use without footwear; and those are a real pain to maintain and repair.
@sjw@Terry@colonelj@p@mint Most likely the entry isn't being removed from the Oban federator_outgoing queue. Purely software errors might get recorded in the errors column, which is the easiest thing to check first; if not, you'd need to enable debug logs, trigger or wait for a delivery attempt, and then sift through many megabytes of output. Postgres logs from the same moment might also be helpful. Could be a conflicting lock, a transaction getting rolled back due to a software error, or some database error. Either way, the combined output is likely to contain some hints.
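A quick way to look at that errors column (a sketch, assuming psycopg2 and Oban's standard oban_jobs table; the DSN is a placeholder to adjust for your setup):

```python
import psycopg2  # assumption: psycopg2 is available

conn = psycopg2.connect("dbname=pleroma")  # placeholder connection string
with conn.cursor() as cur:
    cur.execute(
        """
        SELECT id, state, attempt, inserted_at, errors
        FROM oban_jobs
        WHERE queue = 'federator_outgoing'
          AND state NOT IN ('completed', 'discarded')
        ORDER BY inserted_at DESC
        LIMIT 20
        """
    )
    for row in cur.fetchall():
        print(row)
conn.close()
```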
Further speculation includes runtime cache(s) not getting evicted, or lingering Oban workers left over from restarts/reloads done without terminating the main process. If the issue stops reproducing after a clean instance restart, that could be the case, but locating such issues is bound to be an involved and interactive process.
Could also be a synchronization issue, if multiple workers are somehow getting the same task, but that's nearing improbability: a row-locking Postgres queue isn't that difficult to implement correctly, so I wouldn't doubt the Oban implementation before other options are exhausted.