Notices by practically without merit (pwm@gh0st.live)
-
practically without merit (pwm@gh0st.live)'s status on Saturday, 27-Jul-2024 03:57:58 JST practically without merit @mint mine was trivial just remove some notes in a shell script -
practically without merit (pwm@gh0st.live)'s status on Saturday, 27-Jul-2024 03:49:55 JST practically without merit LOCAL MAN WHO NEEDS A SHOWER (working on it) FILES TRIVIAL BUG REPORT ON MAJOR OPEN SOURCE PROJECT
MORE AT 11 -
practically without merit (pwm@gh0st.live)'s status on Friday, 26-Jul-2024 22:58:18 JST practically without merit @pernia > SAN disrespector
brUH -
practically without merit (pwm@gh0st.live)'s status on Friday, 26-Jul-2024 22:58:16 JST practically without merit @pernia Subject Alternative name
one certificate, many names
www
mail
pleroma
all in one cert -
practically without merit (pwm@gh0st.live)'s status on Thursday, 25-Jul-2024 02:08:53 JST practically without merit @FrailLeaf -
practically without merit (pwm@gh0st.live)'s status on Thursday, 25-Jul-2024 02:08:40 JST practically without merit @threat > "I don't think you should do helm charts with nix"
teehee -
practically without merit (pwm@gh0st.live)'s status on Wednesday, 24-Jul-2024 09:33:49 JST practically without merit @0 @a7 @FailurePersonified @RustyCrab I'd be fine with either or tbh
(cashapp in bio) -
practically without merit (pwm@gh0st.live)'s status on Tuesday, 23-Jul-2024 18:37:55 JST practically without merit @p @NonPlayableClown @fba @mint
> Yeah, that was my thinking, but that it'd be recorded locally in the DB, so that someone imports a blocklist and then you don't need to fetch a thousand instances every time.
Sorry, poor phrasing, that was also my thinking. You could just keep a table of "instance, state, last_checked" and refresh that record if last checked was past some arbitrary threshold. Hell you could even do exponential backoff on checks with a counter column (reset the counter to 0 when the state is toggled). Would still allow for revival of dead instances (does that ever even happen though?). -
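A minimal sketch of that table and the backoff check, in Python with sqlite (the column names, the one-hour base interval, and the instance name are made up for illustration):

```python
import sqlite3
import time

# Hypothetical liveness table as described above: one row per instance,
# with a fail counter that drives exponential backoff between re-checks.
BASE_INTERVAL = 3600  # seconds; the "arbitrary threshold"

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE instance_state (
        instance     TEXT PRIMARY KEY,
        state        TEXT NOT NULL,          -- 'alive' or 'dead'
        last_checked INTEGER NOT NULL,       -- unix timestamp
        fail_count   INTEGER NOT NULL DEFAULT 0
    )
""")

def needs_recheck(instance, now=None):
    """True if the cached record is missing or past its backoff window."""
    now = now or int(time.time())
    row = conn.execute(
        "SELECT last_checked, fail_count FROM instance_state WHERE instance = ?",
        (instance,),
    ).fetchone()
    if row is None:
        return True
    last_checked, fail_count = row
    # exponential backoff: 1h, 2h, 4h, ... per consecutive same-state check
    return now - last_checked > BASE_INTERVAL * (2 ** fail_count)

def record_check(instance, alive, now=None):
    """Upsert the check result; reset the counter when the state toggles."""
    now = now or int(time.time())
    state = "alive" if alive else "dead"
    row = conn.execute(
        "SELECT state, fail_count FROM instance_state WHERE instance = ?",
        (instance,),
    ).fetchone()
    if row is None or row[0] != state:
        fail_count = 0           # state toggled (or new row): reset counter
    else:
        fail_count = row[1] + 1  # same state again: back off further
    conn.execute(
        "INSERT INTO instance_state VALUES (?, ?, ?, ?) "
        "ON CONFLICT(instance) DO UPDATE SET "
        "state = excluded.state, last_checked = excluded.last_checked, "
        "fail_count = excluded.fail_count",
        (instance, state, now, fail_count),
    )
```

Resetting the counter on a state toggle is what lets dead instances come back: one successful check drops the backoff to the base interval again.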
practically without merit (pwm@gh0st.live)'s status on Tuesday, 23-Jul-2024 18:37:09 JST practically without merit @mint @p @NonPlayableClown @fba
I can definitely cook up a beating a dead horse winrar list if there's some interest. First impression tells me that we would need an api to fedilist that asks if an instance is dead, and if so, since when. Then every time you record a block you make sure you know whether the blocked instance is alive or dead at the time of the block according to fedilist, and update your database accordingly if you have no data. Probably need some contingencies for resurrection (fba-side), and a minimum time after which someone is dead (fedilist side), and some caching for live instances to make sure fedilist isn't hammered with queries about if poast is dead yet (fba-side). After that the query is simple. It's intelligently ascertaining the information and keeping it in sync that's a bit weird but not impossible. -
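Roughly how the fedilist-side caching for live instances could look, so queries don't turn into a fresh probe every time (the function names and TTL here are invented for illustration, not from fedilist):

```python
import time

# Hypothetical fedilist-side cache: remember recent answers for a while so
# repeated "is this instance dead yet" queries don't each trigger a probe.
CACHE_TTL = 600  # seconds; arbitrary

_cache = {}  # instance -> (is_alive, dead_since_or_None, cached_at)

def is_dead(instance, probe, now=None):
    """Answer (dead?, dead_since) from the cache, probing only when stale.

    `probe` is a caller-supplied function returning (is_alive, dead_since);
    it stands in for whatever fedilist actually does to check an instance.
    """
    now = now or time.time()
    hit = _cache.get(instance)
    if hit is not None and now - hit[2] < CACHE_TTL:
        is_alive, dead_since, _ = hit
    else:
        is_alive, dead_since = probe(instance)
        _cache[instance] = (is_alive, dead_since, now)
    return (not is_alive), dead_since
```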
practically without merit (pwm@gh0st.live)'s status on Monday, 22-Jul-2024 22:12:36 JST practically without merit 9:11 make a wish -
practically without merit (pwm@gh0st.live)'s status on Saturday, 20-Jul-2024 23:16:04 JST practically without merit ~ $ echo "wham bam shamalam" | ./freakify
𝔀𝓱𝓪𝓶 𝓫𝓪𝓶 𝓼𝓱𝓪𝓶𝓪𝓵𝓪𝓶 -
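The freakify script itself isn't shown; a guess at what it does, reproducing the same output by shifting ASCII letters into the Unicode "mathematical bold script" block:

```python
# Map ASCII letters onto the Unicode mathematical bold script alphabet
# (a sketch of what ./freakify might do; the actual script isn't shown).
def freakify(text: str) -> str:
    out = []
    for ch in text:
        if "a" <= ch <= "z":
            out.append(chr(0x1D4EA + ord(ch) - ord("a")))  # 𝓪..𝔃
        elif "A" <= ch <= "Z":
            out.append(chr(0x1D4D0 + ord(ch) - ord("A")))  # 𝓐..𝓩
        else:
            out.append(ch)  # spaces, digits, punctuation pass through
    return "".join(out)

print(freakify("wham bam shamalam"))  # → 𝔀𝓱𝓪𝓶 𝓫𝓪𝓶 𝓼𝓱𝓪𝓶𝓪𝓵𝓪𝓶
```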
practically without merit (pwm@gh0st.live)'s status on Saturday, 20-Jul-2024 23:16:02 JST practically without merit @prettygood -
practically without merit (pwm@gh0st.live)'s status on Saturday, 20-Jul-2024 23:16:02 JST practically without merit @prettygood neither. more to follow when I clean it up a bit -
practically without merit (pwm@gh0st.live)'s status on Saturday, 20-Jul-2024 05:54:44 JST practically without merit @0 @pernia @11112011 @DiamondMind @DigitalCheese @Doll @Forestofenchantment @Sui @Terry @Waerloga @dcc @dick @ins0mniak @j @p @syzygy @thatguyoverthere @thebitchisback @threat fuck now I'm hungry -
practically without merit (pwm@gh0st.live)'s status on Friday, 19-Jul-2024 19:29:21 JST practically without merit @pernia
okay so scenario a:
You need json from a row in a database (one of your posts) because someone wanted you to serve it so that it federates or some shit. We also suppose the posts are indexed by id and that we have that id from the request. The database has to check an index for the id of that post, which is pretty quick, BUT then it has to actually go get the json (it probably wouldn't be in the index, because that would effectively double up the size of the object data; an index that does hold all the data you need is called a covering index), so it looks at the page number and row id pointed to by the index (this is the logical location of the data in the database).
Then the database asks the page manager for the page it needs. In this scenario that page is not in memory, and the page manager must read it from disk. With this page loaded into memory, we then grab the tuple we want, BUT WAIT, the json data is bigger than the remaining space available in the page and spills over into another page, so we have to ask the page manager for that page, and any subsequent spillover pages, until we are done reading all the json we want (each page fetch necessitates a new disk read if that page is not already in memory, and, since these are pages full of nothing but json from one row, it's highly likely that they are not already in memory).
THEN after all that we can stream the json data out to whatever is handling the http response
OR
scenario b:
We receive a request for an object. We have cleverly named all our objects so that the file path maps to the url path, and have told nginx about this mapping and where to look.
nginx just serves the file (it is very fast at this), and the request never touches our backend. -
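One way the nginx side of scenario b could be wired up (the server name, paths, and backend port here are illustrative, not taken from the thread):

```nginx
# Serve pre-rendered activity json straight off disk; the backend never
# sees these requests.
server {
    listen 443 ssl;
    server_name gh0st.live;

    # /objects/<id> maps directly to /var/lib/objects/<id>.json
    location /objects/ {
        root /var/lib;
        types { }                                 # skip extension-based mime lookup
        default_type application/activity+json;   # every hit is activity json
        try_files $uri.json =404;
    }

    # everything else still hits the backend
    location / {
        proxy_pass http://127.0.0.1:4000;
    }
}
```

The whole trick is in the naming convention: because the url path is the file path, nginx resolves the request with a filesystem lookup instead of a database query.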
practically without merit (pwm@gh0st.live)'s status on Friday, 19-Jul-2024 19:29:20 JST practically without merit @pernia
> i assume by page manager u mean the mmu?
the page manager is the component of the database (it's part of the software, not the OS) responsible for reading and writing pages. It usually has a LRU cache of pages which it has recently fetched from disk so it can sometimes return them quicker. Pages can come in several types that indicate what information is stored in them (data tuples, table definitions, indexes, mappings of tables to which pages contain data for that table) but the big one here is the data page. Pages are addressed by their page number, which is literally just the order they are in (usually). A data page holds data tuples. Data tuples are the rows in a table, and they can be logically addressed by (page_number, row_id).
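A toy version of that page manager, with the page-number addressing and LRU cache described above (the page size and cache limit are arbitrary):

```python
from collections import OrderedDict

PAGE_SIZE = 8192  # bytes; a common default

# Toy page manager: pages are addressed by page number (literally their
# position in the file), and recently read pages sit in a small LRU cache.
class PageManager:
    def __init__(self, file_obj, cache_pages=128):
        self.file = file_obj
        self.cache = OrderedDict()          # page_number -> bytes
        self.cache_pages = cache_pages

    def get_page(self, page_number: int) -> bytes:
        if page_number in self.cache:       # cache hit: no disk read
            self.cache.move_to_end(page_number)
            return self.cache[page_number]
        # cache miss: page number times page size is the byte offset on disk
        self.file.seek(page_number * PAGE_SIZE)
        page = self.file.read(PAGE_SIZE)
        self.cache[page_number] = page
        if len(self.cache) > self.cache_pages:
            self.cache.popitem(last=False)  # evict the least recently used
        return page
```

A real page manager also writes dirty pages back out and knows about the different page types; this only shows the read path.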
> wouldn't moving json from disk to memory have to happen anyway? why would it be slower in a db than from disk?
It does have to happen anyway but when you get it from a database instead of ripping it straight from a file, first you have to go and find the data you want, and then call fopen. If you just know which file you want to rip json out of, then you can skip all the work of locating it, and just call fopen.
> and wouldn't reading the data from disk be faster since its a B tree, rather than reading the file sequentially?
indexes are b-trees; data tuples are just sort of chucked in there in the order they are created, unless you are doing something fancy like maintaining a physical sort order within the pages, which would be really expensive for CRUD operations since you would have to shuffle potentially your entire table around for every insert.
> then in scenario b, that would mean reading the file sequentially to load it from disk to memory,
nginx does this and it does it in fancy optimized ways that stream the file, rather than load the entire file into memory in one big buffer and then flush it out.
Scenario b is faster if you engineer the files to be laid out in such a way that you don't have to look for them. Placing them strategically means that you just know where they are based on filename. If you did have to search them with like grep and shit then yes that would be much slower.
You have some misconceptions about where exactly the b-tree comes into play. The b-tree powers indexes. To fetch indexed data you first consult the index by traversing its b-tree (fast), and then you still have to fetch the data from its data page if the index tuple wasn't indexing the field you wanted in the first place (which it wasn't in our scenario). The index IS way faster than doing a sequential scan of every datapage that has data for a given table, and checking each tuple in it for the one or however many your query wants. With an index you know the address of the data you want but you still have to fetch it off the disk (unless it's cached by the page manager but let's pretend it isn't).
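The index-then-heap-fetch dance in miniature (a dict stands in for the b-tree; the keys and json are made up):

```python
# Toy illustration of "consult the index, then fetch the data page": the
# index maps a key to a (page_number, row_id) address, and the tuple itself
# still has to be pulled out of its data page afterwards.
pages = [
    [("post-1", '{"id": 1}'), ("post-2", '{"id": 2}')],  # data page 0
    [("post-3", '{"id": 3}')],                           # data page 1
]
index = {"post-1": (0, 0), "post-2": (0, 1), "post-3": (1, 0)}

def fetch(key: str) -> str:
    page_number, row_id = index[key]  # fast: b-tree traversal in real life
    page = pages[page_number]         # the step that may hit the disk
    return page[row_id][1]            # pull the json out of the tuple

print(fetch("post-3"))  # → {"id": 3}
```

The index only ever hands back an address; the second step is exactly the page fetch from scenario a.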
The database CAN'T be faster than simply reading off a static file. It is simply more work to be done, work that is a superset of the work done by just ripping the file off the disk and out onto the network.
The limitation is that not every scenario allows you to engineer the database out of the picture. This is not a universally applicable strategy. The database offers flexibility and makes difficult things possible, but the realization here is that all you're really doing is serving a static file, and that this isn't necessarily a difficult thing (if you're clever about it). -
practically without merit (pwm@gh0st.live)'s status on Friday, 19-Jul-2024 19:29:15 JST practically without merit @pernia @vic caddy is for zoomers who are scared of config files longer than 6 lines -
practically without merit (pwm@gh0st.live)'s status on Friday, 19-Jul-2024 13:51:35 JST practically without merit @pernia don't lecture me on databases, nigger I AM databases -
practically without merit (pwm@gh0st.live)'s status on Friday, 19-Jul-2024 13:50:47 JST practically without merit @MercurialBlack @pernia does snac2 have plugins? you should patch it in -
practically without merit (pwm@gh0st.live)'s status on Friday, 19-Jul-2024 13:50:44 JST practically without merit @MercurialBlack @pernia dlopen man page would be about all you would need to read.