> you can tell them that it's highly praised by none other than Drew DeVault:
If a terrible person likes something that I like, the thing is still the thing I like. This is like a step below "a thing I like turns out to have been created by a terrible person" and everyone still likes to listen to music and watch movies and look at drawings, despite artistic types often having flexible morals, poor impulse control, and terrible politics.
@m0xEE@pyrate@MK2boogaloo@Zerglingman@sysrq The OS was mostly C; it had some stuff in Alef that eventually got ported to C when the nice parts of Alef were turned into a library, but a lot of those ideas ended up in the Limbo programming language, which was more or less the immediate predecessor to Go.
Limbo is one of the interesting bits in Inferno that didn't move to Plan 9: rather than fork()/exec(), you would just load the other program and spawn a function call. Executable files that the shell loads aren't special except that they satisfy the right interface, namely that there has to be a function called `init` with type `fn(nil: ref Draw->Context, args: list of string)`. As a result, it's trivial to do a library that doubles as an executable: https://www.rosettacode.org/wiki/Executable_library#Limbo . If you look at the code, it looks more or less like Go, with a handful of small exceptions: libraries are loaded on-demand, there's native support for linked list and tuple types but no native map type, and there are some cosmetic differences (`->` instead of `.`, regular semicolons, and none of the camelCase and explicit exports that Google insisted on because it wanted Go to be a language for "large systems").
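For comparison, here's a rough Go-shaped sketch of the same "library that doubles as an executable" idea (everything in it is a made-up example; Go has no runtime module loading the way Dis does, so the closest everyday shape is an exported entry function plus a main that does nothing but call it):

```go
package main

import (
	"fmt"
	"os"
)

// Sum is the "library" half: in a real project it would live in its own
// package so other code could import and call it directly, the way a Limbo
// module's init can be called by any other module.
func Sum(args []string) int {
	n := 0
	for _, a := range args {
		var v int
		fmt.Sscanf(a, "%d", &v) // ignore parse errors for the sketch
		n += v
	}
	return n
}

// main is the "executable" half: nothing but a call into the library,
// playing the role Limbo's init(ctx, args) plays for the shell.
func main() {
	fmt.Println(Sum(os.Args[1:]))
}
```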
Since libraries and executables are type-checked by the VM and Limbo is memory-safe, Inferno doesn't need an MMU when running on bare metal, so stuff like the Nintendo DS port runs like normal.
It is a cool language and there are pieces of it that I miss when doing Go; having tuples is really nice. `x: list of (string, int)` is more convenient than having to do something like `x := make([]some_struct_you_had_to_make_for_a_single_use_data_structure, 0)` in Go.
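To make that concrete, a small hypothetical Go example of the one-shot struct that a `list of (string, int)` replaces:

```go
package main

import "fmt"

// nameCount exists only because Go has no tuples; in Limbo this whole type
// would just be the anonymous pair (string, int).
type nameCount struct {
	name  string
	count int
}

func main() {
	x := []nameCount{
		{"apples", 3},
		{"oranges", 5},
	}
	for _, e := range x {
		fmt.Println(e.name, e.count)
	}
}
```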
Yeah, JIT can be toggled at runtime, even. `echo -n 1 >/dev/jit`.
> Most of the interesting ideas have been reimplemented/ported back into P9 though.
There is no competition for the shell. Best shell of all time.
I still don't think anyone's ported the registry, because Plan 9 was designed around a more or less static network and Inferno's design had nodes popping up and going down. They're designed to fit in different spaces, though: Inferno was designed to compete with Java and to be a reasonable environment for conventional application programming, so there's Tk support, things like that. Plan 9 was designed for research.
@Zerglingman@iska@MK2boogaloo@sysrq Conventional OS design, yes, but even in unconventional designs, you make a 510-byte boot sector OS and it's just that those 510 bytes are the "kernel".
@sysrq@MK2boogaloo@Zerglingman@iska Yeah. pipe(), read(), write(). Syscall overhead is real, but it's going to be negligible on any serious task, and on a non-serious task, it doesn't matter.
One way to back up a database live is `ssh -C $host pg_dump $dbname | pbzip2 > backup.sql.bz2`. ssh's compression is deflate (same as gzip), which is fast but leaves the data kind of bulky, so the dump gets across the network quicker without loading the CPUs on the DB server; then locally you're running parallel bzip2 to do the actual compression before writing it to disk. That's something you can't do without a pipe: the remote system *can't* share a memory space with the local machine. A TCP connection is an ordered bytestream, so this kind of thing is trivial. Trying to enforce some kind of typed stream puts more overhead on both sides and makes that kind of generic compression impossible: everything is bytestreams or has to be reduced to bytestreams for general tools like that to work, unless you can get the entire earth to agree on a wire format for objects.
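If you ever need to wire that same shape up outside the shell, the plumbing is still nothing but bytestreams. A hedged Go sketch (host, database, and filenames are placeholders) of connecting the two processes with a pipe the way the shell does:

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Same idea as: ssh -C dbhost pg_dump mydb | pbzip2 > backup.sql.bz2
	dump := exec.Command("ssh", "-C", "dbhost", "pg_dump", "mydb")
	compress := exec.Command("pbzip2")

	out, err := os.Create("backup.sql.bz2")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	// The only thing the two programs share is an ordered stream of bytes.
	pipe, err := dump.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	compress.Stdin = pipe
	compress.Stdout = out

	if err := compress.Start(); err != nil {
		log.Fatal(err)
	}
	if err := dump.Run(); err != nil {
		log.Fatal(err)
	}
	if err := compress.Wait(); err != nil {
		log.Fatal(err)
	}
}
```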
@sysrq@MK2boogaloo@Zerglingman@iska The idea is that spawning a process is "too much overhead", and writing to a pipe is a memory copy, and that this is worth additional coupling and forcing things to be in a single language. It's a really top-down view.
> Why would you need to be efficient when you can just throw more and faster cores, memory, and storage into the mix without incurring a huge overhead and while staying competitive in the consumer markets?
Yeah, there's almost no downward pressure on bloat.
> I find it both sad and humorous that I used to run Microsoft office on a 66mhz dx2. 30 years later, and though the speed, resolution, and print quality has improved, the ultimate end of the product remains the same.
It's actually gotten worse: speed is up, but latency is way up. (Try booting a FreeDOS ISO with a PS/2 keyboard plugged in if you don't believe me.)
But I know what you mean. I've spent a lot of time computing with these little ARM boards, and until you try to run a browser on them, you can't actually tell: all the usual computing tasks happen and they all work fine. On a single-core one, sometimes you notice some jittering when something hefty is starting up, but other than that, a lot of the software I use was fast enough to use in the 90s (or the 80s, even). Sitting down at a regular computer full of bloatware is painful.
> how do you actually do heartbeat monitoring (not just cron) without using some party solution like this?
There are a lot of solutions for this: Prometheus and the like, or rolling your own.
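A minimal sketch of the roll-your-own route (names, port, and thresholds are all hypothetical): jobs hit a ping URL when they finish, and anything that goes quiet for too long gets flagged.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"sync"
	"time"
)

var (
	mu       sync.Mutex
	lastSeen = map[string]time.Time{} // job name -> last check-in
)

func main() {
	// A job checks in with e.g.: curl 'http://monitor:8080/ping?job=backup'
	http.HandleFunc("/ping", func(w http.ResponseWriter, r *http.Request) {
		job := r.URL.Query().Get("job")
		if job == "" {
			http.Error(w, "missing job", http.StatusBadRequest)
			return
		}
		mu.Lock()
		lastSeen[job] = time.Now()
		mu.Unlock()
		fmt.Fprintln(w, "ok")
	})

	// Once a minute, complain about anything that hasn't checked in lately.
	// Swap log.Printf for mail/SMS/whatever you actually want to be woken by.
	go func() {
		for range time.Tick(time.Minute) {
			mu.Lock()
			for job, t := range lastSeen {
				if time.Since(t) > 10*time.Minute {
					log.Printf("ALERT: %s last seen %v ago", job, time.Since(t))
				}
			}
			mu.Unlock()
		}
	}()

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```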
> and I want to receive real time notifications (email/SMS/whatever) whenever one of them fails
SMS gateway, Twilio or whatever. You can't really get into the phone system without a third party being involved: at least the phone company will be involved. But you could always just get a 4G USB modem and an extra SIM card and send yourself SMS like that, root an Android phone, whatever.
It's much easier to send yourself an email, especially if you're talking about cron. Just set up an MTA that handles the local mail spool by forwarding it to your regular mail server (as long as your mail server accepts the box as a relay).
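And if you'd rather have the job talk to your mail server directly instead of going through the local spool, a minimal sketch with Go's standard library (addresses and hostnames are placeholders, and it assumes the server will relay for this host; otherwise pass an smtp.Auth instead of nil):

```go
package main

import (
	"log"
	"net/smtp"
)

func main() {
	msg := []byte("To: you@example.com\r\n" +
		"Subject: cron job failed\r\n" +
		"\r\n" +
		"backup.sh exited non-zero on db1\r\n")

	// Unauthenticated submission: fine on a LAN where the MTA relays for you.
	err := smtp.SendMail("mail.example.com:25", nil, "cron@db1.example.com",
		[]string{"you@example.com"}, msg)
	if err != nil {
		log.Fatal(err)
	}
}
```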
> Pete said using these tools means you "don't know how cron works" so I was like "ok enlighten me, how do I implement this in cron itself"
Basically, it's already implemented. cron mails a job's output (stdout and stderr) to your local mail spool, so as long as you have one and you haven't explicitly disabled it (or installed a distro that explicitly disables it), you get that for free. dcron just gives you the `-M` option.
> And then I guess trigger some kind of mailer in the "except" part.
You've got to calculate something before you can share a pointer to it.
Actual lazy evaluation is not possible by sharing pointers, only with a means of connecting producers, filters, and consumers. In Unix, this is done with pipes and sockets, in Haskell it's done by leaking memory, and in Common Lisp it is done by coping.
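To make the pipe version of that concrete, a small Go sketch (channels being the house CSP flavor) where the producer only computes values as the consumer pulls them; this is just an illustration, not anybody's library:

```go
package main

import "fmt"

// producer generates integers on demand: nothing is computed until
// somebody downstream reads from the channel.
func producer(done <-chan struct{}) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for i := 0; ; i++ {
			select {
			case out <- i:
			case <-done:
				return
			}
		}
	}()
	return out
}

// filter passes along only even numbers, again on demand.
func filter(done <-chan struct{}, in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for v := range in {
			if v%2 != 0 {
				continue
			}
			select {
			case out <- v:
			case <-done:
				return
			}
		}
	}()
	return out
}

func main() {
	done := make(chan struct{})
	defer close(done)

	// The consumer drives everything: only about five values are ever produced.
	evens := filter(done, producer(done))
	for i := 0; i < 5; i++ {
		fmt.Println(<-evens)
	}
}
```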
That was what came before. It was all corpoware back then, anyway.
> the difference between (sort (grep x)) and grep x | sort is that the latter involves 2 program initializations and lots of copying
Lisp's entire existence has been "Look, maybe it's not fast, but it's got better capabilities." Here is a performance-based argument against Unix. The ironing. The argument was bad to begin with, for the same reason that "Recursion is slower than a for loop" is bad: it's not accurate and it misses the point. A process is very cheap in Unix anyway, or at least it was until emacs was ported.
But I can only see that sentence because we are on a network of independently operating machines instead of all being connected by terminal to a giant mainframe in Alaska that serves all the world's computing needs: CSP won, and CSP won because it was inevitable. It's way less efficient to move data across a network than to have everyone living in the one massive supercomputer, but that doesn't matter, because you cannot build the one massive supercomputer.
CSP is not just a convenient notation: CSP decouples code and isolates processes. The internals don't have to conform, just the interface. If you've crammed everything into the same process space, everything has to conform. sort|grep doesn't require that sort and grep even be written in the same programming language; one of them just needs to be able to read what the other writes.
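To show how thin that interface is, here's a toy grep stand-in in Go; `sort` neither knows nor cares what language it was written in, because the contract is just lines of bytes on stdin and stdout (pattern handling deliberately minimal):

```go
// toygrep: print stdin lines containing the pattern.
// Usage: toygrep err <logfile | sort
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: toygrep PATTERN")
		os.Exit(2)
	}
	pat := os.Args[1]

	sc := bufio.NewScanner(os.Stdin)
	out := bufio.NewWriter(os.Stdout)

	for sc.Scan() {
		if strings.Contains(sc.Text(), pat) {
			fmt.Fprintln(out, sc.Text())
		}
	}
	out.Flush()
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "toygrep:", err)
		os.Exit(1)
	}
}
```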
This is my primary frustration with most flavors of Lisp. There are languages designed around incorporation instead of integration, and Common Lisp is one such language; Unix is all the way at the other end of the spectrum: Unix is built around independent processes communicating rather than everything you want to do getting crammed into one process. If you want a monolith, sure, have a monolith. But because it has to incorporate everything instead of having a common interface to integrate with other programs, it has to share a namespace, it won't be called (sort (grep x)), it will be called (sort-strings-by-lexicographic-order 't (select-from-strings-by-regular-expression x) 'case-insensitive-search), and if it crashes, everything crashes.
Forth is a better model for monoliths than Lisp anyway.