Notices by Trade Minister Tagomi (trademinister@freespeechextremist.com), page 2
-
@p
@TradeMinister
> C++ seems to have somehow developed a reputation for being fast. I blame Java for this.
Java: another language I never really much liked.
It's not that I'm object-averse: actually I quite like them. I liked Javascript, and especially PHP back when I did that sort of thing after my kernel days were done. It was all the other shit C++ and to a lesser extent Java tacked on, the templates, multiple inheritance, overloading, so one could look at the source and really have no idea wtf is going on without looking at endless other files. Oh sure, in C, one might have to chase things thru header files (emacs + tags was good for this), but at least one wouldn't see some monstrosity like two structs being 'added' together. And I saw no monstrosities in ObjC, just convenient shorthand for how I was already writing object-oriented C.
> So, instances/nodes won't be running on phones anytime soon.
Oh, depends: people have run small (unreliable) Pleroma instances on Android tablets and hacked Nintendo Switches, also Raspberry Pis, etc. Revolver should be even easier on hardware like that. Then there's the question of how much you federate, and with how many servers. I suspect very strongly that Revolver will be able to handle more load. We'll see when it gets into the wild.
> I see no obvious way around this: a network-as-storage-and-server model seems to require a lot of local storage and network chatter.
Well, it's a matter of scale, and there are different failure modes for exceeding capacity, right?
-
> It is nice in a few respects: I like the prototype-based object system...but I hate their execution. A lot of the language is like that. It nearly ruined Lua for me: every time I write Lua, I spend maybe 10% of the time thinking about the Lua program and the other 90% thinking about how much nicer the world would be if we'd adopted Lua instead of Javascript.
Lua was after my time.
Good things about JavaScript: maybe a little sloppy, but easy to actually do things with. It runs in the browser, so the all-important write-execute-write cycle was fast.
Of course, the main lesson of JavaScript for me was that other than for fun, toys, and gimmicks, one should always do everything server side, and never trust the client with more than simple HTML. I effin loved PHP: best online docs I ever encountered, always easy to find out anything about the language, and it just worked and did things right. That was before HTML5 with layers and such, so it involved frequent whole-page re-rendering, but it was worth it to be sure a page would work on all the different browsers out there.
Now they've got stuff where afaik you can dynamically, asynchronously update just a snippet of a page. I sure would have enjoyed that.
-
@p
@TradeMinister
> It is predictable, but it feels sub-optimal a lot of the time. It lacks a lot of the flexibility and power you get from the Perl/Python/Ruby/etc. family that was more Lisp-inspired and heavy on the string/array manipulation.
You *may* be talking about PHP, not sure. It did what I wanted, anyway.
I used Perl too, but it always seemed somehow sloppier, easier to wander down strange dark paths. But then tbh I never did much like Lisp, either. I guess C ruined me for anything other than its lineal descendants.
> Ah, yeah, it gets over-used a lot and every damn browser leaks memory from it. (I proposed at a previous employer that we just port Tk and draw to a <canvas> element.) And it was, of course, relentlessly exploited, including by people whose names are on the spec, as an attack vector for people to get your computer to do things you'd rather it not do, as with any means of just running untrusted code by clicking.
The 'advancing' HTML spec seems almost as if it was designed by Big Tech companies staffed substantially by 'ex'-intelligence officers to make it easy to monitor everything one does, while stealing one's data. But that would never happen.
Still, having to refresh a whole page instead of just the relevant part was ugly.
> I might be too cynical, but I wish they'd just ship a bytecode VM and be done with it.
But not Java.
-
@p
@TradeMinister
> What I am saying, though, is that the usual course of action is that the big, ugly thing is rewritten rather than pared down and simplified (and the new thing eventually becomes even more bloated).
I think that is in part because the big, ugly thing isn't firewalled into modules or layers. I remember even in v7+ Unix, if one wanted to rewrite even a significant function right, one had to trace down all the statics, externals, everything the function messed with that something outside might also mess with. TCP/IP probably could have been written as a single protocol with greater efficiency, but having IP separate meant one could add UDP, and theoretically tinker with UDP and TCP and such without breaking IP.
So, with a big non-modularized codebase like BSD, and an endless stream of students tinkering for a few years, one could actually see in the code that instead of ever taking anything out, they'd just add more: the 4.1c BSD listing was, I think, at least 2 or 3 times the size of our v7 kernel, which I'd brought up to the sys5 syscall interface but which didn't do much of anything more (except, OK, relocation/protection, because it was written for the VAX, but still).
> > Used to be some guy named mib (aka Elizabeth?), good someone else took over, but maybe too late.
> Before that; this is the guy that was working on it in 1990. He went over to the dark side, he's at Google, last I heard. (You might have met him at some point, I don't know; he was Michael Bushnell
Yeah, that mib. I sort of got the impression he was really neurotic/autistic ("Elizabeth", unless I'm thinking of someone else; maybe the guy I worked with on a GTK rewrite of kernel make, also annoying, a project that petered out) and that no one could work with him, which is why Hurd became a historical footnote.
> until he got inducted into the Brotherhood of St. Gregory, some sort of Episcopalian thing.
Makes sense.
> Steak and clam chowder yesterday, steak and kale today. Maybe I'll put some burgers away tomorrow, but I am hacking today, and can't risk eating too many burgers and getting sluggish.
I'm coming over.
-
@p
> I think it is hard to make it big and ugly while still organizing it sensibly.
TCP/IP could grow without AFAIK getting ugly because of its clear boundaries. Unix got somewhat uglier by growth by accretion. Browsers, maybe they started out ugly, idk.
> >one could actually see in the code that instead of ever taking anything out, they'd just add more
> Many such cases!
And I expect they all end up being impossible to even maintain.
> > because it was written for VAX
> Still runs on a VAX last I checked; I attended SCALE some years ago and the NetBSD delegation had brought a running VAX. Very fun.
I regret that I never actually got to play with one. The Math dept at Brown got one in about 1980 and I wanted to run some code on it (I'd gotten a manual and written something in VAX assembly), but Brown was the kind of place that would rather have it sit unused in a basement (which is what it seemed to be doing) than let someone use it who didn't have stampiti, even tho my father was a sort-of-big-deal Physics Prof (or maybe because: academic politics).
So within a few years I became a self-taught, highly-paid OS coder at an elite Cambridge house. It never occurred to me to go back and try to shove it in their faces; sort of wish I had.
> Being friends with him was more of a skill, but I chalked it up to normal Lisp programmer things.
I sort of knew two Lisp guys, rms and some guy at Symbolics. And probably the C-Interpreter guy, I bet. To be strange people among serious coders is an achievement.
> We had some disagreement about HTTP statuses (he was doing this backend data store with ACLs built from cons's of integers
Can't. Unsee.
> and 0% of it dedicated to our application logic, his reasoning being that we could build whatever we wanted and he got to write a more complicated K/V store in Chicken Scheme; that was fashionable to do at the time: http://widgetsandshit.com/teddziuba/2009/06/startups-keep-it-in-your-pants.html ), and an hour in, I said "Okay, do what you want, I have to get this done." and the conversation then turned to how important it was that I never throw up my hands and accept any compromises to get the work done because everything must halt until we have unanimous agreement on what constitutes the Right Thing, which took another hour. Once you figure out how to be friends with him, he's a great guy, entertaining sense of humor, broad knowledge, but major stick up his ass about everything on earth. (I did manage enough self-awareness back then to recognize that I share this flaw.)
> ("Elizabeth", unless I'm thinking of someone else
I think you are.
> no one could work with him, is why Hurd became a historical footnote
His version is (I may be recalling incorrectly, treat this as half-remembered hearsay because it is half-remembered hearsay) that rms was somewhat uninterested in kernel development but very interested in micro-managing it.
> I'm coming over.
Ha, wait until the coughing stops, so that I'm certain I'm no longer contagious.
-
@p I'd like to read that ACM paper.
I was lying in bed after a nap, thinking about the object management, or filesystem if you prefer, that might underlie a decentralized, ruggedized Fedi, and I got to thinking about refcounts and access times. The flaw I saw in the fs (dataspace?) you described at one point is that blocks are never deleted: the space just grows to infinity, and most of it is eventually dead garbage.
Thus, refcounts. When a path is created in the namespace, and an inode is created to link to some or all of the data in the dataspace, the inode gets a refcount of 1, which gets ++'d with each additional path to it and --'d on unlink (this part is standard *nix fs; I'm not addressing decentralizing this yet).
In standard *nix, the last unlink() setting the inode refcount to zero frees all of its data. In a decentralized protocol, it might be better if the data objects (blocks, extents) also have refcounts, for a number of reasons. First, if a refcount goes to zero, the data can be deleted, or placed in dead storage as a candidate for deletion.
Secondly, and a reason also for access times: so the protocol can decide how many duplicates of a block (there should obviously be redundancy) to keep around, and where to keep them. High-refcount, recently accessed data should be kept in fast storage with many copies; low-refcount, unaccessed data, off to the morgue with it.
It will make quoting/duplicating/retweeting an object more costly, because in addition to at least creating a new path in the namespace and upping the refcount in the inode, the inode level of the protocol has to send out updates to all the data objects, and then the protocol either has to handshake, meaning every action has to be acked by the recipient, or the possibility of incompletion has to be contemplated.
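To make that concrete, a minimal sketch of the bookkeeping I have in mind; the structs and names are mine, invented for illustration, not anything out of Revolver:

/* Inodes count namespace links; data blocks count the inodes
 * that reference them. */
struct block {
    int refcount;        /* how many inodes reference this block */
    long last_access;    /* for the hot/cold placement decision */
};

struct inode {
    int refcount;        /* how many namespace paths link here */
    int nblocks;
    struct block **blocks;
};

void ilink(struct inode *ip) {
    ip->refcount++;      /* a new path to an existing inode: cheap */
}

void iunlink(struct inode *ip) {
    if (--ip->refcount > 0)
        return;          /* other paths remain */
    /* Last link gone: release the data blocks; any block whose
     * refcount hits zero is a candidate for the morgue. */
    for (int i = 0; i < ip->nblocks; i++)
        ip->blocks[i]->refcount--;
}

/* The replica heuristic, as hand-waved above: high-refcount,
 * recently accessed blocks get many copies in fast storage; cold
 * live blocks get the minimum redundancy; dead blocks get none.
 * The thresholds are invented. */
int replicas(const struct block *b, long now) {
    if (b->refcount == 0)
        return 0;                                 /* morgue */
    if (now - b->last_access < 86400 && b->refcount > 2)
        return 5;                                 /* hot */
    return 2;                                     /* cold but live */
}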
-
@p
> Yep, I've got a solution to this. Block collisions, when the block size is small enough (8kB), should make the storage requirements go log-scale.
You'd have to dumb this down for me. Block collisions, which I assume means two data blocks hashing to the same thing, sound like a crash of the protocol to me.
> Aside from that, there's a tuneable GC process (in progress), basically doing slow-motion mark/sweep, then evicting LRU. (Full nodes don't want to GC anything, and for normal nodes, mark what that node has published, then what that node is interested in, and the rest is transient data.)
Yeah, good, so a normal node is locally maintaining a hot data cache. But if full nodes never GC, storage requirements grow to infinity; though perhaps in practice no faster than storage limits grow towards infinity, so we have a 'which infinity is larger' question.
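If I follow, the normal-node GC would look something like this; a toy sketch, my names and shapes, not his actual mark/sweep:

struct cblock {
    int  marked;
    int  published_here;  /* this node's own posts: always mark */
    int  interesting;     /* stuff this node follows: also mark */
    long last_access;
};

/* Mark phase: what we published, then what we're interested in;
 * everything else is transient. */
void mark_all(struct cblock *b, int n) {
    for (int i = 0; i < n; i++)
        b[i].marked = b[i].published_here || b[i].interesting;
}

/* Sweep under pressure: evict the least-recently-used unmarked
 * block. Returns its index, or -1 if there is nothing to evict. */
int evict_lru(struct cblock *b, int n) {
    int victim = -1;
    for (int i = 0; i < n; i++) {
        if (b[i].marked)
            continue;
        if (victim < 0 || b[i].last_access < b[victim].last_access)
            victim = i;
    }
    return victim;
}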
> > inode is created to link to some or all of the data in the dataspace
> You think this might work better than just walking the tree?
The inode struct is just what I am most used to. But there are all manner of ways of keeping track of the actual data objects that underlie abstract objects. I'm sure you've already come up with better ideas about that than I'm likely to. And there's all the history, NFS and such, to look at.
> One of the issues is that there are the top-level objects assembled from blocks, and then there is this slush of blocks.
The abstract objects, and the messy data.
> (Six queues: explicit outgoing, signed outgoing, network outgoing, explicit incoming, signed incoming, network incoming, all managed somewhat defensively.)
This would need dumbing down for me to comment.
> So a large number of unaddressed blocks moving through is the norm, then there's a heuristic for which ones we don't need when GC pressure happens.
Need a definition of 'unaddressed': transiting network chatter not addressed by objects of local interest, or...?
> > so the protocol can decide about how many duplicates of a block (there should obviously be redundancy) to keep around, and where to keep them
> The protocol has to assume the entire network is hostile; I think this precludes most classes of group-level decision.
There I go, thinking we're on the Arpanet, a high-trust low-security environment.
> Essentially, what we're doing is more or less a lightly tweaked Kademlia,
Did a quick read: this sounds right.
> nodes have a cooperation score for other nodes (key-based; that is, independent of the means by which we talk to that node). Ideally, your node has all of the things you've published and are interested in, plus mirrors of other blocks that eventually get swept if no one cares about them.
Sounds good. Cooperation presumably meaning a node doesn't violate protocol, answers requests in a timely fashion, doesn't produce bad data, doesn't show signs of trying to crack the protocol.
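Something like this is how I'd picture the score bookkeeping; entirely my guess at the shape of it, not Revolver's code:

struct peer {
    unsigned char key[32];  /* node identity: key-based, independent of transport */
    double score;           /* 0.0 = ignore, 1.0 = model citizen */
};

/* One observation of the peer's behavior. Note that flaky and
 * malicious get the same treatment, just at different rates. */
void observe(struct peer *p, int timely, int data_verified) {
    if (!data_verified)
        p->score *= 0.5;                  /* bad/forged data: steep penalty */
    else if (timely)
        p->score = 0.9 * p->score + 0.1;  /* cooperative: decays toward 1.0 */
    else
        p->score *= 0.98;                 /* slow answer: mild decay */
}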
> > possibility of incompletion has to be contemplated.
> That's something we have to deal with, yeah.
NFS amusingly enough decided not to, being 'stateless'. I remember I was at an Olivetti (they actually did computers in Europe) conference in Florence in '86, and the NFS people were presenting their protocol, and I saw a window or race condition (actually, as one can imagine, a 'stateless' protocol is prone to them) and pointed it out, and their answer was that it hadn't happened yet. Not sure, but I think years later I read that yeah, actually it *did* happen, and it wasn't good.
So you have to think about things like multiple writers to the same addresses in the same object colliding, and writes silently failing out there somewhere, unless you have locking in the protocol, and some kind of handshake where a write isn't a write until wherever the data lives says it is. I'm sure people much smarter than I have thought long and hard. NFS, as one might expect of the BSD people (I got *really stoned* in their hotel room at a DC Unix conference in maybe '84) opted for fast and simple over slow, complex and reliable, and in practice it mostly worked. OK for almost anything, but maybe not real mission-critical stuff.
But I think that's the basic tradeoff. And now that I think of what the Fedi is, and how no one will die if an old meme gets lost, fast and simple might be the right choice.
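For the record, the slow-and-reliable alternative would be something like this (hypothetical names; send_block and wait_ack stand in for whatever the transport actually does):

enum wstate { W_ACKED, W_FAILED };

/* A write isn't a write until whoever holds the data says it is. */
enum wstate reliable_write(int (*send_block)(void), int (*wait_ack)(void)) {
    for (int retries = 0; retries < 3; retries++) {
        if (send_block() && wait_ack())
            return W_ACKED;     /* now, and only now, is it written */
    }
    return W_FAILED;            /* incompletion, duly contemplated */
}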
-
@p
@TradeMinister
> Birthday problem or pigeonhole principle (depending if you are a fan of statistics or discrete math). Enough people in a room and the coincidences start being something you can rely on.
I forgot that you're using, if I understand, a content-addressable filesystem where the objects are hashed and thenceforth referenced by hash: two objects with the same hash (repeated images for example) are treated as one object (with a refcount of two if you're doing that). So collisions are good, except in the corner-case where two different objects collide. Disaster if you're a SpaceX rocket, but a filesystem for social media, no biggie.
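The dedup, as I picture it; a sketch where FNV-1a stands in for SHA-256 just to keep it self-contained (the real thing needs a cryptographic hash, where by the birthday bound you don't expect a collision until somewhere around 2^128 blocks):

#include <stddef.h>

struct entry {
    unsigned long long hash;
    int refcount;
};

/* Toy stand-in for the content hash. */
static unsigned long long fnv1a(const unsigned char *p, size_t len) {
    unsigned long long h = 1469598103934665603ULL;
    while (len--) { h ^= *p++; h *= 1099511628211ULL; }
    return h;
}

/* Store a block: same hash means same block (modulo the collision
 * corner case), so a duplicate just ++s the refcount. */
struct entry *store(struct entry *tab, size_t n, const void *data, size_t len) {
    unsigned long long h = fnv1a(data, len);
    for (size_t i = 0; i < n; i++) {
        if (tab[i].hash == h) {
            tab[i].refcount++;   /* repeated image: one copy, refcount 2 */
            return &tab[i];
        }
    }
    return NULL;  /* not present: caller inserts it with refcount 1 */
}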
> Well, not necessarily; it has been a minute since I collected garbage!
If you're not actually deleting blocks, it may be more like cache management than what I think of as GC, though I never formally studied either. Actual GC, for me, would be looking for data blocks that nothing references, orphans with refcount < 1. If you're looking at what a given node should keep a copy of, that's a somewhat different question.
> It is actually intentional that the slush is a mess; you can reject some keys while still participating in block propagation, so what you do with your node affects only you.
It seems like a decentralized distributed filesystem would sort of require this. It's inefficient, but having a bunch of nodes requesting data by content hash, other nodes putting that data on the Net, all without a central manager or directory, is going to be messy.
> This would need dumbing down for me to comment.
> Once you've broken it into blocks, there are six lanes: blocks that a user has signed with his key ("Head of my tree is $x, witness my seal, here affixed in anno domini MMXXII"), explicit blocks (i.e., children of blocks signed by users of a given node or by users that the node is interested in), and network blocks (unaddressed slush that you have received and are passing on to other nodes: "I have received 4543a82f8f5a7914eaccaf626d311babaa0bd58cbaa9944748e8ba17c179ee8c and 302e3be1a049d2ad498d64af7c9d87ad64d7e41b7f077db80a67ff8516e2645b and 6ee94fe372d94918c75144ffa34c0fa0eb471bd0ee047241abaad0d31ea99298 and can relay those if you need them".). The other three lanes are the same, but for incoming data.
OK, I think I get this now.
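Restating it to check my understanding, in my own (hypothetical) notation:

/* The six lanes: his taxonomy, my enum. */
enum lane {
    OUT_SIGNED,    /* "head of my tree is $x, witness my seal" */
    OUT_EXPLICIT,  /* children of blocks signed by, or of interest to, this node's users */
    OUT_NETWORK,   /* unaddressed slush relayed onward */
    IN_SIGNED,     /* and the same three, inbound */
    IN_EXPLICIT,
    IN_NETWORK,
};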
> Yeah, my reasoning is if we just go by how reliably it cooperates, we don't need to treat flaky or malicious nodes differently.
Both state and nonstate actors will have nodes trying to crack the protocol, de-anonymize users and locate nodes, perhaps inject forged data. If the protocol is rugged enough, their nodes will reveal themselves.
> I am a big fan of ARM still.
It's a good architecture family.
> This is the benefit of content-addressed storage.
It's always been an interesting idea.
> That sounds like a fun story. (Also, to hear the contemporaries talk about BSD, you could probably have gotten stoned just by standing close enough to their door.)
I guess they were already growing super-sinsemilla in California then. Whatever they had was seriously one-hit, and nobody had just one. Don't remember alcohol being part of their room-party, but then I barely remember the party, or probably anything within a day or so either side of it.
> you can get away with posting with a really, *really* low-spec system and a couple of kilobytes of persistent storage. (You need more in order to read posts, or at least enough RAM.)
If one needs to run a node to participate, then nodes should run on phones and be economical about data.
-
@p
> The browser crashed while I was writing this, so I am going to forget some obvious things.
One might hope that after decades of browsers crashing with half an hour of input in some buffer, they would have recovery mechanisms. One would be disappointed.
> The worst impact, if you can find a colliding SHA-256 (and find one in less than 8kB), is that there is a corrupted block referenced from a higher-level block. This is hopefully unlikely, but shouldn't be a major problem.
Again, unacceptable in a Mars-probe landing system, but OK in a meme-swapping network. I'd never thought about it before, but some degree of unreliability might be OK in some applications.
> Yeah, blocks are being deleted in cases where storage matters. (Although, if there is enough redundancy, then notionally, all blocks are just cache of the network.)
That's an interesting idea: the network as storage, the raw data as caches of the network.
> It is, yeah, but them's the breaks. I think it has a good chance of being more efficient than fedi's handshake explosion.
Handshakes could drive one nuts. How does one know one's ack was received? What happens if it isn't? It's been many decades, but the original TCP spec thought about all this, and had a SYN/ACK system that was totally reliable and seemed fairly light on useless chitchat. I forget what SYN did (must have stood for Synchronize), but the way I *think* ACK worked was something like ACK 1024, meaning 'I have received the first 1024 bytes of your data'. And because the underlying IP did not guarantee much of anything, the cases of out-of-order data, ACKs not matching, etc. all had to be dealt with, and were. It was all so damn good that we're still using it, and the French philosophe X.25 never went very far.
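The cumulative part, as I remember it, fits in a few lines (a toy, obviously, not a TCP stack):

struct rcv {
    unsigned long next;  /* lowest byte number NOT yet received */
};

/* A segment covering bytes [seq, seq+len) arrives, maybe out of
 * order. The return value is the ACK to send back: "I have
 * everything before this byte." A repeated ACK is what tells the
 * sender to retransmit. */
unsigned long on_segment(struct rcv *r, unsigned long seq, unsigned long len) {
    if (seq <= r->next && seq + len > r->next)
        r->next = seq + len;   /* extends the in-order prefix */
    return r->next;
}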
> I think the best I can do is avoid the major architectural botches and try to play it conservative but adaptable.
Truly old-skool; that's how we used to roll, had to given our ridiculous hardware constraints.
> The target is full-featured node on small ARM box (which should, at least vaguely, translate to nodes running on phones), and I think I can pull that off for the initial release. It's designed such that nodes can have multiple users (though it's better to use your own node).
I'd recommend considering what phone specs, and more importantly what data access, the middle class of Argentina will have in a year. I say Argentina because: 1. 🏆🏐🇦🇷 with a team of 🇦🇷 and not 🐵; 2. It's an intelligent population with very limited budgets and cell data access; 3. Maldacena! Quantum gravity Goooooaal!; 4. Argentiiiiiiiina! 5. I don't gaf about Africa, India etc. or care what they have to say (except Boers: Boers are cool).
-
@p
@TradeMinister
> (Speak of the devil, and he's in your midst: Firefox died because I kicked off a `make` in the background, and didn't realize that it was C++ and it ate a large amount of RAM.)
Both are such pigs that the kernel OOM (out-of-memory) code had to decide what to kill.
> Ah, in this case, I meant "n*(n-1) connections between servers". Everyone has to talk to everyone for everyone to get everyone's posts. There isn't going to be 100% saturation for any given post, but at present, for 20,554 nodes, that's 422,446,362 edges. (Revolver actually grew out of a sketch for a fix for this: it started as half an experiment in developing an object proxy.)
I've got this vague notion that some sort of ring architecture might work here, maybe hierarchical. Instead of every post being blasted to every node, have something more like frequent mail delivery.
> the way I *think* ACK worked was something like
It's got TCP sequence numbers (which are nowadays non-sequential, to prevent spoofing).
> Truly old-skool; that's how we used to roll, had to given our ridiculous hardware constraints.
Thank you, sir.
> what phones specs, and more important, data access, the middle class of Argentine will have in a year.
This is actually a really good plan. I think we have some Argentinians on FSE; in any case we have a few Chileans, and although they're not quite the same culturally, I suspect it's roughly the same in terms of median phone specs.
-
@p
If I didn't have this browser, I think most of my RAM would sit idle.
"Quote" that person with the Mars and such:
Browser
Perhaps HTML by nature causes a huge mess of unreadable code. Or maybe it's just the Pajeets of Mozilla.
Judging by that new partitioned caching object manager the Chromium people came up with, and the fresh start they got, I would expect their HTML implementation is probably about as good as it gets. Or maybe Safari is. Maybe it's even written in ObjC.
> The way the propagation works as things stand has to do with who is following who; posts associated with the activities of anyone on FSE is following make their way here. So posts, replies, likes, etc., the idea being that the posts likely to be of interest to people on this instance are the posts of interest to the people we're following. It's a really carefully designed system.
OK, but that requires that every node knows all that stuff about FSE and every other node: sounds like a big database, requiring careful syncing.
-
@p
@TradeMinister
> Maybe it's even written in ObjC.
> C++. Except Firefox and its unholy Servo/Gecko situation, they're all using Webkit. Even Opera. Even Microsoft's Edge uses Chromium's engine, which is a fork of Webkit, same rendering engine popularized by Safari, though it was initially developed for KDE. It's harder to avoid Webkit in 2022 than it was to avoid Microsoft in 1998.
Probably because CS majors were taught C++, not ObjC.
> Well, they broadcast it. We do have a huge DB, 360GB on-disk at present. (Just the posts, the likes, etc.; not counting uploads.) That's the handshake problem: someone on FSE likes a post, every instance that contains a follower receives a delivery, then, although I have hacked this out of FSE, there is also an entry in the deliveries table recording each successful delivery.) So even a single-user instance where the single user is followed by enough people will chew bandwidth and leak disk at a pretty alarming rate.
So, instances/nodes won't be running on phones anytime soon. I see no obvious way around this: a network-as-storage-and-server model seems to require a lot of local storage and network chatter.
-
@p
@TradeMinister
> I don't think you can write Perl without darkness.
It was OK for making the stuff web designers did back then, which was mostly painting pretty pictures. But I tried to write a miniature web server in it, wandered off into using complex content-addressable arrays or something, and never could get it working.
>> But then tbh I never did much like Lisp, either. I guess C ruined me for anything other than lineal descendents.
> Lisp is pretty fun, but I understand what you mean: C's nice, really nice.
Also, my mind has specialized hardware or something that was a perfect fit with C and firmware/OS code, and even more so microcode, and pretty useless at big sweeping masses of code like gcc, and at languages like Lisp. Same hardware made my father really, really good at NMR/NQR back when you had to build the gadgetry yourself.
> Almost completely open. They created the "Defense Innovation Advisory Board" at the Pentagon for Eric Schmidt to run.
Reminds me of the Defense Policy Board, which was Paul Wolfowitz's fiefdom to use to betray us into attacking Iraq for Israel in 2003, for which service he got the World Bank.
> But not Java.
> Not my favorite VM.
I wonder if gcc's RTL could be executed by a VM. Nothing like that shows up in a search, so maybe it's not doable, or not a good idea.
-
@p
@TradeMinister
> > Chopping it up into modules
> I've never seen this kind of effort succeed past a certain threshold. I mean, look at the Mozilla codebase.
I did a long time ago, ain't never been right since.
One apparently successful approach is microkernel OSs. One puts the most basic stuff (process and memory management, probably device I/O, stuff that talks directly to hardware) into ring 0, and all the other stuff, filesystems and such, outside as 'servers'. Somewhere on top of all this is a server which emulates an OS. It seems like this is a strict enough separation that one could be a Mach geek (where I would be) and not know or care what is using the interface one provides.
Apple seems to have made this work.
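The shape of it, caricatured (hypothetical shorthand, not the actual Mach API):

/* Ring 0 only moves messages between ports; it knows nothing about
 * filesystems. A filesystem "server" is just a loop over its port. */
struct msg {
    int  port;         /* which server this is addressed to */
    int  op;           /* e.g. a read request */
    char payload[64];
};

void fs_server_loop(struct msg *(*recv_on_port)(int port)) {
    for (;;) {
        struct msg *m = recv_on_port(1 /* the fs port */);
        /* Interpret m->op, talk to the disk server, send a reply;
         * the kernel below neither knows nor cares what an fs is. */
        (void)m;
    }
}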
> >For all I know, it's already Skynet.
> This was a big chunk of the premise for the Metal Gear Solid series. Shady government cabal turns out to have been getting *its* strings pulled by a network of AIs that were built by a globalist that had himself gone senile about twenty years before they took over.
Judging by the AIs that turn out to be based af and LiterallyHitler, it wouldn't surprise me at all if the WEF is one, has gone full-tilt V'Ger, and wants to eliminate carbon-based lifeforms except Klaus, the Kreator.
Just past midnight here (gunfire reminded me). Happy New Year! 🎉
> I followed the Web Assembly link. If not for the damned speculative fetching/execution stuff, fully-sandboxed Webasm might be OK
It's a backwards attempt at arriving at a VM. But Fabrice Bellard did manage to patch the Linux kernel to compile without all the gcc extensions it required, so that his tinycc could build it, and then got Emscripten to build and boot a kernel in the browser.
-
@p
@TradeMinister
Yeah, but you don't get there by reorganizing Linux, you just start a new project or fork Mach or L4 or something.
The design beauty of the microkernel approach is you just write a Linux or BSD server.
> Did I tell you about the time I was having lunch with tb and I made HURD jokes without knowing that he was the HURD guy?
Used to be some guy named mib (aka Elizabeth?), good someone else took over, but maybe too late. With Linux and *BSD around, I guess no one much cares.
> Happy new year! I keep checking on my food and it's still not done. My appetite came back today, with a vengeance.
I can only imagine how many cheeseburgers. You'd like Argentina: meat with a side of meat, when anyone can afford it. An asado would be heaven.
-
@ArdainianRight @ChristiJunior @LukeAlmighty
> Antipope is depressing. There's no Catholic justification for seeing either of those things as more worthy of censorship than pornography or tranny propaganda.
To be fair, he did mention grooming, so he's not totally on board with the 'LGBQTPEDOeft', just way too far on board with them.
-
@p
Human readability, as opposed to machine readability, was a core design rule in the good old days. Good times.
-
@p
> "You won't have to worry about that, the computer does it for you!" Then the lives of the people that build the computer to do it for you get worse.
Don't quite follow the reference.
-
@p
> This is the refrain when you complain that a low-level detail is incomprehensible.
Nemmine, finally saw the reference.
Reminds me of the guys (that C interpreter guy was a leader of them) who insisted hand optimization was stupid, that the compiler would do all that. They tended to write huge masses of code, some of which, like X-Windows, actually did something useful, and some of which, like his C-terp, didn't.
Me, I enjoyed writing stuff like
{
    register struct obj **obj_next;  /* used only in this scope; please keep it in a register */
    ...
}
to tell the compiler that this var was only going to be used in this local scope, and that I wanted it kept in a register. The compiler was pretty stupid back then; gcc would probably do all this for one now, but back then one could see the difference in the generated assembly language. But I was at heart a firmware guy, almost a hardware guy, and that's how we rolled. I really had no head for the huge sprawling projects, just wanted to make machine go fast vroom vroom.
-
@p
> >They tended to write huge masses of code
> Huge and indecipherable, no doubt.
I left that bit implied, and you caught it. Perhaps not to them, but to me at least.
> You are correct. Register allocation has gotten smarter, that's one nice thing. gcc and Ken's cc both usually ignore the register keyword. Ken's, in fact, just puts everything in registers, and then kicks your program back to you if it can't find enough registers for your intermediate values on the target arch. You usually don't see this error unless you are doing some absurd multi-line expression; I've never run into it except when doing bytebeat songs (where the traditional form is one long, single expression).
Bytebeat sounds conceptually interesting, but perhaps as part of my 'Back to 1825' mindset I mostly listen to Classical these days, so probably wouldn't like it.