@lunarised@Lunarised Until we moved to our own box, I used to just make regular backups and I'd dump/restore when it got too big, because that took us down for less time than a full vacuum. People allegedly have a good time with pg_repack, but I've only used it once and it fucks up if it gets interrupted (like if you're running low on disk space, you try to run it to free up disk space, and then it runs out of disk space), and it even fucks up the *dump*.
> I finished a db prune and my db is still 68GB for what i thought was a small instance. I did the vacuum afterwards
Well, worst-case you can dump and restore. I don't know what is being pruned, since I have never used the DB-pruning tool. You could try manually removing the activities/objects that are older than a certain date (and that are not Follow/Accept/Block), or just delete all of the Like/Announce/EmojiReact/Update/Delete activities.
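Rough sketch of the manual route, assuming Pleroma's usual schema (a `data` jsonb column with a `type` key plus an `inserted_at` timestamp); the DB name and the cutoff are just placeholders, and obviously take a backup first:

```
psql -d pleroma <<'SQL'
-- sketch: drop old throwaway activities, keep Follow/Accept/Block and friends
BEGIN;
DELETE FROM activities
WHERE data->>'type' IN ('Like', 'Announce', 'EmojiReact', 'Update', 'Delete')
  AND inserted_at < now() - interval '1 year';
COMMIT;
SQL
```

That only marks the rows dead, though; you still need a vacuum (or a dump/restore) before the files actually shrink.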
You wanna check the table sizes for all of the tables.
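Something like this, assuming the database is named `pleroma` (the query is just the standard `pg_total_relation_size` one):

```
psql -d pleroma -c "
  SELECT relname,
         pg_size_pretty(pg_total_relation_size(relid)) AS total
  FROM pg_catalog.pg_statio_user_tables
  ORDER BY pg_total_relation_size(relid) DESC;"
```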
Other than that, what's wrong with the DB being 68GB? That's how big they are.
@lunarised@Lunarised `pg_dump>file.sql` and `psql<file.sql`. You can put your favorite bytestream compressor in the pipeline if you like. (This is recommended.)
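Spelled out with a compressor in the pipe, assuming the database is called `pleroma`; swap gzip/zcat for whatever you prefer:

```
pg_dump -d pleroma | gzip > pleroma.sql.gz
# restore into a fresh database:
createdb pleroma
zcat pleroma.sql.gz | psql -d pleroma
```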
Did Gleason seriously try to use nano on a DB dump? "Editing dumps / I have no idea." The JavaScript guy doesn't think to use awk or sed, huge shock. Someone should tell him (or, since he mercifully has gone away to Nostr, someone should maybe upload some better instructions).
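For the record, trimming a plain-text dump without opening it in an editor is a one-liner; this sketch assumes the plain `pg_dump` format, where each table's rows sit between `COPY ... FROM stdin;` and a terminating `\.` line:

```
# drop the activities table's data section from a plain-text dump (sketch)
sed '/^COPY public\.activities /,/^\\\.$/d' dump.sql > trimmed.sql
```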
> with --rsyncable
I didn't know about this flag, but it appears to be a Debian extension. pbzip2 does fine.
The bigger issue with this person's approach is that you will have to copy every block every time anyway: pg_dump gives you the dump in disk order, and in a database with large tables that see any amount of churn (like this one, Pleroma's), you will get a different row order every time you dump, so rsync won't be able to transfer only the blocks that changed.
(Postgres does row-versioning and deletes rows in place by marking them as dead. To keep the data consistent during transactions, if there is an update, rows are COW'd. New row versions get put into whatever empty slots they fit, reusing the space from previously deleted rows.)
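(Toy demo of that: watch the `ctid`, the row's physical location, change when you update it. I'm using `pleroma` as the DB name, but any database works since it only touches a temp table.)

```
psql -d pleroma <<'SQL'
CREATE TEMP TABLE t (id int, val text);
INSERT INTO t VALUES (1, 'a');
SELECT ctid, * FROM t;               -- e.g. (0,1)
UPDATE t SET val = 'b' WHERE id = 1;
SELECT ctid, * FROM t;               -- e.g. (0,2): new slot, old one is dead until vacuum
SQL
```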
@lunarised@Lunarised I'm just talking about pipes. `pg_dump | gzip > file.sql.gz` or `bzip2` or `pbzip2` or `pigz` or `lz4` or whatever. The data going down a pipe is a bytestream, I meant something that is friendly to put in a pipeline.
@PurpCat@lina@Lunarised@p@Cocoa@mint@lunarised for rebased, i don't think there have even been breaking changes to the DB schema; worth trying to just run pleroma off the same DB with no rollbacks
@p@i@Lunarised@lunarised fun fact: Gleason hasn't been updating copebox OTP builds, and this month he's pushing people towards his nostr (deader than bsky) fetishism
@PurpCat@i@Lunarised@p@lunarised lol, anyway yeah, i think he'll either return eventually or he'll actually stay there and copebox will die, and with it the copebox instances, WHICH IS GOOD. THERE ISN'T A SINGLE GOOD COPEBOX INSTANCE. HOW THE FUCK IS THE SOFTWARE TIED TO COMPLETELY CRINGE INSTANCES, UNLIKE PLEMORA
@lina@i@Lunarised@p@lunarised the only person I know trying to mess with his nostr stack is a Gleason disciple with no original thoughts of his own, and oh, btw, it's also broken as fuck and has fewer people
@Cocoa@PurpCat@i@lina@Lunarised@p@lunarised I sent him a patch here on fedi for the report-rejecting MRF, making it reject reports sent to third-party users (the Gargron spam update, https://github.com/mastodon/mastodon/issues/27219). He accepted it, but later I figured out I had fucked it up and it was rejecting *all* remote reports. I made a bugfix and sent it to him twice; he acknowledged it the first time but never bothered to commit it. Couldn't make a proper MR because shitlab's cloudflare check on the login page dehumanizes me.