Conversation
Notices
idea_enjoyer (idea_enjoyer@nicecrew.digital)'s status on Tuesday, 02-Apr-2024 05:25:10 JST idea_enjoyer @matty nicecrew has been acting sluggish, but other sites have not. Is there something going on that you know of? -
(mint@ryona.agency)'s status on Tuesday, 02-Apr-2024 05:25:09 JST @realman543 @idea_enjoyer @matty Matty claims "The instance isn't under a heavy load", which is the opposite of my postgres issues. -
reeeeeelman (realman543@annihilation.social)'s status on Tuesday, 02-Apr-2024 05:25:10 JST reeeeeelman @idea_enjoyer @matty
Apparently @mint is having a similar issue. Maybe they are related? Check the logs. -
(mint@ryona.agency)'s status on Tuesday, 02-Apr-2024 05:28:22 JST @matty @realman543 @idea_enjoyer I've been struggling with long CPU usage spikes from postgres lately, that's about it. -
Matty-kun :Christmas_kitty_bell: (matty@nicecrew.digital)'s status on Tuesday, 02-Apr-2024 05:28:23 JST Matty-kun :Christmas_kitty_bell: What? -
Matty-kun :Christmas_kitty_bell: (matty@nicecrew.digital)'s status on Tuesday, 02-Apr-2024 05:30:40 JST Matty-kun :Christmas_kitty_bell: And when was the last time you repacked or vacuumed your database? -
(mint@ryona.agency)'s status on Tuesday, 02-Apr-2024 05:30:40 JST @matty @realman543 @idea_enjoyer Yes and yes. Repacked it as soon as the issues started. -
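"Repacked or vacuumed" here usually means something along the following lines; the table names come from the Pleroma query quoted further down the thread, and this is only a sketch of routine maintenance, not necessarily what either admin actually ran ("repack" more likely refers to the separate pg_repack extension and its command-line tool):

-- Plain vacuum/analyze of the largest Pleroma tables.
VACUUM (VERBOSE, ANALYZE) activities;
VACUUM (VERBOSE, ANALYZE) objects;
VACUUM (VERBOSE, ANALYZE) users;
-- Online index rebuild (PostgreSQL 12+); avoids the long exclusive locks of VACUUM FULL.
REINDEX TABLE CONCURRENTLY activities;
REINDEX TABLE CONCURRENTLY objects;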
Matty-kun :Christmas_kitty_bell: (matty@nicecrew.digital)'s status on Tuesday, 02-Apr-2024 05:30:41 JST Matty-kun :Christmas_kitty_bell: Have you tuned Postgres? -
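The thread never says what "tuned" covers; as a rough sketch, the knobs usually meant are the ones below, with placeholder values that would have to be sized to the actual machine:

-- Placeholder values; real settings depend on available RAM and the disk.
ALTER SYSTEM SET shared_buffers = '2GB';        -- only takes effect after a restart
ALTER SYSTEM SET effective_cache_size = '6GB';
ALTER SYSTEM SET work_mem = '32MB';
ALTER SYSTEM SET random_page_cost = 1.1;        -- a common value for SSD-backed storage
SELECT pg_reload_conf();                        -- reloads the settings that don't need a restart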
Matty-kun :Christmas_kitty_bell: (matty@nicecrew.digital)'s status on Tuesday, 02-Apr-2024 05:34:14 JST Matty-kun :Christmas_kitty_bell: What Postgres job is causing the hang? -
(mint@ryona.agency)'s status on Tuesday, 02-Apr-2024 05:34:14 JST @matty @realman543 @idea_enjoyer No idea. Phoenix dashboard shows some long-ass query often running for up to 15 seconds, which I assume is a timeline. -
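A more direct way to answer "which query is causing the hang" than watching the Phoenix dashboard, assuming the pg_stat_statements extension is available (it has to be listed in shared_preload_libraries) and the server is PostgreSQL 13 or newer for these column names:

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
-- Slowest statements by worst-case execution time.
SELECT calls,
       round(mean_exec_time) AS mean_ms,
       round(max_exec_time)  AS max_ms,
       left(query, 100)      AS query
FROM pg_stat_statements
ORDER BY max_exec_time DESC
LIMIT 10;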
(mint@ryona.agency)'s status on Tuesday, 02-Apr-2024 05:37:12 JST @matty @realman543 @idea_enjoyer
SELECT a0."id", a0."data", a0."local", a0."actor", a0."recipients", a0."inserted_at", a0."updated_at",
       o1."id", o1."data", o1."inserted_at", o1."updated_at"
FROM "activities" AS a0
INNER JOIN "objects" AS o1 ON (o1."data"->>'id') = associated_object_id(a0."data")
INNER JOIN "users" AS u2 ON (a0."actor" = u2."ap_id") AND (u2."is_active" = TRUE)
INNER JOIN "users" AS u3 ON (a0."actor" = u3."ap_id") AND (u3."invisible" = FALSE)
WHERE ($1 && a0."recipients") AND (a0."actor" = $2) AND (a0."data"->>'type' = $3)
  AND (not(o1."data"->>'type' = 'Answer')) AND (not(o1."data"->>'type' = 'ChatMessage'))
ORDER BY a0."id" desc nulls last
LIMIT $4
I never enabled activity expiration to begin with, so it's unrelated.
-
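To see where those 15 seconds go, the query above can be wrapped in PREPARE and run under EXPLAIN with made-up parameter values; the recipients array, actor URL, activity type and limit below are only plausible guesses for a home-timeline request, not values taken from the instance:

PREPARE timeline_q (text[], text, text, int) AS
SELECT a0."id", a0."data", a0."local", a0."actor", a0."recipients", a0."inserted_at", a0."updated_at",
       o1."id", o1."data", o1."inserted_at", o1."updated_at"
FROM "activities" AS a0
INNER JOIN "objects" AS o1 ON (o1."data"->>'id') = associated_object_id(a0."data")
INNER JOIN "users" AS u2 ON (a0."actor" = u2."ap_id") AND (u2."is_active" = TRUE)
INNER JOIN "users" AS u3 ON (a0."actor" = u3."ap_id") AND (u3."invisible" = FALSE)
WHERE ($1 && a0."recipients") AND (a0."actor" = $2) AND (a0."data"->>'type' = $3)
  AND (not(o1."data"->>'type' = 'Answer')) AND (not(o1."data"->>'type' = 'ChatMessage'))
ORDER BY a0."id" DESC NULLS LAST
LIMIT $4;

EXPLAIN (ANALYZE, BUFFERS) EXECUTE timeline_q(
  ARRAY['https://www.w3.org/ns/activitystreams#Public'],  -- guessed recipients filter
  'https://ryona.agency/users/mint',                      -- guessed actor ap_id
  'Create',                                               -- guessed activity type
  20                                                      -- guessed page size
);

If the plan shows a sequential scan over "activities" or heavy buffer reads on the objects join, that points at missing or bloated indexes, or at the disk, rather than at the query shape itself.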
Matty-kun :Christmas_kitty_bell: (matty@nicecrew.digital)'s status on Tuesday, 02-Apr-2024 05:37:13 JST Matty-kun :Christmas_kitty_bell: Which query specifically? I'm asking because I had this same issue when I had activity expiration on like a year and a half ago. -
(mint@ryona.agency)'s status on Tuesday, 02-Apr-2024 05:44:20 JST @matty @realman543 @idea_enjoyer Randomly but frequently. Pretty sure it's the timeline, since more of the same queries spawn when I just spam new tabs with, say, the home timeline. -
Matty-kun :Christmas_kitty_bell: (matty@nicecrew.digital)'s status on Tuesday, 02-Apr-2024 05:44:21 JST Matty-kun :Christmas_kitty_bell: Is someone spamming chats on your instance? Does this happen periodically like on a cycle or just randomly? -
:archlinux: :nyarch: :verified: (lewdthewides@hidamari.apartments)'s status on Tuesday, 02-Apr-2024 05:46:47 JST :archlinux: :nyarch: :verified: @mint @realman543 @idea_enjoyer @matty Are you being scraped? -
(mint@ryona.agency)'s status on Tuesday, 02-Apr-2024 05:46:47 JST @lewdthewides @realman543 @idea_enjoyer @matty No unusual activity in logs. -
(mint@ryona.agency)'s status on Tuesday, 02-Apr-2024 05:56:05 JST @lewdthewides @realman543 @idea_enjoyer @matty There were two IPs that might or might not be scrapers, but dropping them in the firewall did literally nothing. Pretty much all other timeline requests are done by legitimate users. -
:archlinux: :nyarch: :verified: (lewdthewides@hidamari.apartments)'s status on Tuesday, 02-Apr-2024 05:56:06 JST :archlinux: :nyarch: :verified: @mint @realman543 @idea_enjoyer @matty disable public viewing of timelines and profiles and see what happens :yui_shrug: -
(mint@ryona.agency)'s status on Tuesday, 02-Apr-2024 05:58:34 JST @lewdthewides @idea_enjoyer @matty @realman543 On closer inspection, their since_id requests have the same delay between them; it's just that the timing of requests from both of them happens to be synchronized. Red herring. -
(mint@ryona.agency)'s status on Tuesday, 02-Apr-2024 06:09:51 JST @lewdthewides @realman543 @idea_enjoyer @matty Guess the disk is shitting itself then. Some lower numbers might be due to me setting the CPU governor to powersave and not turning postgres off in the meantime, though.
fio.txt -
:archlinux: :nyarch: :verified: (lewdthewides@hidamari.apartments)'s status on Tuesday, 02-Apr-2024 06:09:52 JST :archlinux: :nyarch: :verified: @mint @realman543 @idea_enjoyer @matty I tried. The only other thing I can think of is to fio your database drive and check if random reads are where they're supposed to be -
munir (munir@fedi.munir.tokyo)'s status on Tuesday, 02-Apr-2024 06:12:33 JST munir @mint @realman543 @idea_enjoyer @matty it's hard to say because the parameters are missing from the query. I'm skimming through the code right now, but it's hard to find the query because they're using language extensions that simply get translated into a query. grrrrr, i hate pleroma -
:archlinux: :nyarch: :verified: (lewdthewides@hidamari.apartments)'s status on Tuesday, 02-Apr-2024 06:13:21 JST :archlinux: :nyarch: :verified: @mint @realman543 @idea_enjoyer @matty IOPS aren't terrible, but what good is that when the data isn't being fed in fast enough? -
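Whether the database is actually waiting on the disk can also be read out of Postgres itself, assuming track_io_timing is turned on; a rough check of read wait time versus buffer-cache hits might look like this:

-- blk_read_time stays at zero unless track_io_timing is enabled.
ALTER SYSTEM SET track_io_timing = on;
SELECT pg_reload_conf();

SELECT datname,
       blks_hit,
       blks_read,
       round(blk_read_time) AS read_wait_ms
FROM pg_stat_database
WHERE datname = current_database();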
(mint@ryona.agency)'s status on Tuesday, 02-Apr-2024 06:15:14 JST @lewdthewides @realman543 @idea_enjoyer @matty I'll try scoring a better SSD one day, but I guess I'll have to bear with it for now.
-