Notices by shitpisscum (shitpisscum@social.mrhands.horse), page 2
-
shitpisscum (shitpisscum@social.mrhands.horse)'s status on Thursday, 09-Feb-2023 02:16:21 JST shitpisscum @graf @miklol2 Ping from the pleroma server to the db server is pretty stable at around 25ms. Here are the results of the bench.sh drive test:
I/O Speed(1st run) : 646 MB/s
I/O Speed(2nd run) : 687 MB/s
I/O Speed(3rd run) : 686 MB/s
I/O Speed(average) : 673.0 MB/s -
shitpisscum (shitpisscum@social.mrhands.horse)'s status on Thursday, 09-Feb-2023 02:16:20 JST shitpisscum @graf @miklol2 Now only to figure out what lol -
shitpisscum (shitpisscum@social.mrhands.horse)'s status on Thursday, 09-Feb-2023 02:16:20 JST shitpisscum Maxes out 4 CPUs on a dedicated db server and still can't fetch the timeline. Seriously, whoever wrote pleroma's db schema and queries should be permanently blacklisted from touching anything db-related for the rest of their lives -
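(A hedged aside: to see where a timeline fetch burns CPU, one approach is to grab the actual query from PostgreSQL's slow-query log and prefix it with EXPLAIN. The query below is purely illustrative, not Pleroma's real timeline query; only the `activities` table name matches Pleroma's schema.)

```sql
-- Illustrative only: substitute the real query captured from the logs.
-- ANALYZE runs it for real; BUFFERS shows how much data it touched.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM activities
ORDER BY id DESC
LIMIT 20;
```

A plan full of sequential scans or huge sort nodes on `activities` would point at missing or unused indexes rather than hardware.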
shitpisscum (shitpisscum@social.mrhands.horse)'s status on Thursday, 09-Feb-2023 02:16:19 JST shitpisscum @graf @miklol2 Does this look ok? -
shitpisscum (shitpisscum@social.mrhands.horse)'s status on Thursday, 09-Feb-2023 02:16:19 JST shitpisscum CC @graf Looks like it "works" now. Took more than 24 hours to create indexes -
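(For a future rebuild like this: on PostgreSQL 12+ a long-running CREATE INDEX can be watched instead of guessed at, via the standard `pg_stat_progress_create_index` view.)

```sql
-- Shows each in-flight CREATE INDEX: which table, which phase,
-- and how far through the heap/index blocks it is.
SELECT relid::regclass AS table_name, phase,
       blocks_done, blocks_total
FROM pg_stat_progress_create_index;
```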
shitpisscum (shitpisscum@social.mrhands.horse)'s status on Thursday, 09-Feb-2023 02:16:18 JST shitpisscum @graf Ok just realized that a 2-day-old backup means it's effectively been offline for the past 2 days. It will come back to life eventually I guess -
shitpisscum (shitpisscum@social.mrhands.horse)'s status on Thursday, 09-Feb-2023 02:16:16 JST shitpisscum @graf It's on a VPS so I guess yes -
shitpisscum (shitpisscum@social.mrhands.horse)'s status on Thursday, 09-Feb-2023 02:16:14 JST shitpisscum @graf
>not good
Ngl it feels like it's running even worse now
>pleroma.repo
I'll check when I'm back home in maybe an hour, do you happen to know the exact path to that file? -
shitpisscum (shitpisscum@social.mrhands.horse)'s status on Thursday, 09-Feb-2023 02:16:13 JST shitpisscum @graf @miklol2 Ok just noticed something: since migrating to the new db there are barely any new activities being added to the db. Look at the times: on the old "broken" db there used to be a few new activities every minute (which makes sense), and now there are gaps of 10+ minutes without any new activities, meaning the db definitely wasn't restored properly. My question is, is it worth even attempting to fix it, or should I just drop and restore from the backup again?
cc @p @sjw -
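(The eyeballed gap check above can be done with a query. A sketch, assuming Pleroma's `activities` table with its standard Ecto `inserted_at` timestamp column:)

```sql
-- Largest gaps between consecutive activity inserts over the last day.
-- Gaps of 10+ minutes on a federated instance suggest ingestion stalls.
SELECT inserted_at,
       inserted_at - lag(inserted_at) OVER (ORDER BY inserted_at) AS gap
FROM activities
WHERE inserted_at > now() - interval '1 day'
ORDER BY gap DESC NULLS LAST
LIMIT 20;
```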
shitpisscum (shitpisscum@social.mrhands.horse)'s status on Thursday, 09-Feb-2023 02:16:12 JST shitpisscum @graf Here's pleroma.repo, as I said I'll play with pg_top when I'm back in ~1 hour
config :pleroma, Pleroma.Repo,
adapter: Ecto.Adapters.Postgres,
username: "[USERNAME]",
password: "[PASS]",
database: "pleroma",
hostname: "[host.tld]" -
shitpisscum (shitpisscum@social.mrhands.horse)'s status on Thursday, 09-Feb-2023 02:16:11 JST shitpisscum @graf Here's pgtop if it's of any help -
shitpisscum (shitpisscum@social.mrhands.horse)'s status on Thursday, 09-Feb-2023 02:16:08 JST shitpisscum @graf Also nothing under waiting and blocking queries -
shitpisscum (shitpisscum@social.mrhands.horse)'s status on Thursday, 09-Feb-2023 02:16:07 JST shitpisscum @graf It was in some kind of half offline half online state since late December (I had a cron job restarting it every hour I'm not even kidding lol). As for the db backup and restoring to a new machine, full backup Saturday about 1AM, starting the instance Sunday around 2PM so ~37 hours -
shitpisscum (shitpisscum@social.mrhands.horse)'s status on Thursday, 09-Feb-2023 02:16:05 JST shitpisscum @graf Like this?
config :pleroma, Pleroma.Repo,
adapter: Ecto.Adapters.Postgres,
username: "asd",
password: "asdf",
database: "pleroma",
hostname: "asdfgh.net",
pool_size: 10,
timeout: 15_000,
prepare: :named,
parameters: [
plan_cache_mode: "force_custom_plan"
]
config :pleroma, :dangerzone, override_repo_pool_size: true -
shitpisscum (shitpisscum@social.mrhands.horse)'s status on Thursday, 09-Feb-2023 02:16:03 JST shitpisscum @graf Ok it's running. I guess I'll just have to wait now -
shitpisscum (shitpisscum@social.mrhands.horse)'s status on Thursday, 09-Feb-2023 02:15:59 JST shitpisscum @miklol2 @graf -
shitpisscum (shitpisscum@social.mrhands.horse)'s status on Thursday, 09-Feb-2023 02:15:58 JST shitpisscum @miklol2 @graf Crashing more and more often, cron is currently restarting Pleroma every 30 minutes instead of every hour -
shitpisscum (shitpisscum@social.mrhands.horse)'s status on Thursday, 09-Feb-2023 02:15:52 JST shitpisscum @graf @miklol2 Yea definitely way too many jobs, here's this instance for reference. What's the worst that could happen if I just delete from oban_jobs; (delete all of them)? -
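(Worth noting: `delete from oban_jobs;` also throws away scheduled, retryable, and currently executing jobs. A more conservative first step is to prune only the jobs Oban already considers finished; the state names below are Oban's standard ones.)

```sql
-- Drop only finished jobs; leaves pending/executing/retryable work alone.
DELETE FROM oban_jobs
WHERE state IN ('completed', 'discarded', 'cancelled');
```

If the backlog is dominated by one misbehaving queue, an additional `AND queue = '...'` filter narrows it further.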
shitpisscum (shitpisscum@social.mrhands.horse)'s status on Thursday, 09-Feb-2023 02:15:50 JST shitpisscum @graf @miklol2 Fuck it I guess, can't get worse than it already is (I hope lol) -
shitpisscum (shitpisscum@social.mrhands.horse)'s status on Thursday, 09-Feb-2023 02:15:49 JST shitpisscum @graf @miklol2 OK it's running again, back to waiting I guess