I've just written parsers for several CSS properties! Namely those for the CSS box model & (in part) fonts! Building upon my previous work on fonts from when I wrote FontConfig language bindings.
I'll parse several more tomorrow, then normalize it all into the data model I'd previously been working on.
After that I can start hooking up inline layout & assemble something rudimentary to demo! Also it'd be time for a refactor; I'm not proud of my CatTrap code...
Not sure about grid layout's height computation, though I have made some fixes! I'll get back to that later... For now I want to move on to parsing the styletree!
Finally started watching a (long!) video about the UE Gameplay Ability System. I'd tried to get a sense for it from the written docs, but they just don't go into enough detail on how & why you might use it in practice. More like 1 paragraph on each bit & then "go read the Lyra source". Needing to watch a *3h* video has been a blocker on my bothering to dig in more before now.
I don't think it's something I'll use on this game, but I can definitely see the utility. Just a lot of moving parts!
Throughout the history of computer graphics there have been 2 main 3D graphics techniques: depth-buffering & raytracing. Until relatively recently depth-buffering, today's topic, was the more popular of the two.
Because depth-buffering's easier to parallelize in hardware!
Basically it consists of these steps:
1. Matrix-transform all points
2. Light each point
3. Cull offscreen & backfacing triangles
4. Interpolate within each triangle
5. Finalize "fragment" pixel colours
6. Select the frontmost pixel for output
Most of the magic of depth-buffered 3D graphics comes from the 16 numbers in its matrix transform! Covering the vertex's position in its object, the object's position in the world, the camera's position in the world, & the camera's lens. Placing the virtual camera at the center of the virtual universe, incorporating perspective.
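Here's a minimal sketch of that transform chain in GLSL, assuming uniform names of my own choosing (uModel, uView, & uProjection aren't standardized):

```glsl
#version 330 core

in vec3 aPosition;        // the vertex's position in its object

uniform mat4 uModel;      // the object's position in the world
uniform mat4 uView;       // the camera's position in the world, inverted
uniform mat4 uProjection; // the camera's lens, incorporating perspective

void main() {
    // Multiplied together these collapse into a single 16-number
    // (4x4) matrix placing the camera at the center of the universe.
    gl_Position = uProjection * uView * uModel * vec4(aPosition, 1.0);
}
```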
Meanwhile we approximate how much each "diffuse", "specular" (via exponentiation), & omnipresent "ambient" lightsource contributes to the vertex's lighting.
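And a sketch of that lighting approximation for a single lightsource, as a GLSL helper function (the name, parameters, & constants here are my own illustrative choices):

```glsl
// Approximate one lightsource's contribution to a vertex's lighting.
// normal, toLight, & toCamera are unit vectors at that vertex.
vec3 lightVertex(vec3 normal, vec3 toLight, vec3 toCamera, vec3 lightColour) {
    // Omnipresent ambient light, a constant floor.
    vec3 ambient = 0.1 * lightColour;
    // Diffuse: strongest where the surface faces the light head-on.
    vec3 diffuse = max(dot(normal, toLight), 0.0) * lightColour;
    // Specular: exponentiation sharpens the highlight into a shiny spot.
    vec3 halfway = normalize(toLight + toCamera);
    vec3 specular = pow(max(dot(normal, halfway), 0.0), 32.0) * lightColour;
    // Summed per-lightsource, then over all lightsources.
    return ambient + diffuse + specular;
}
```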
In OpenGL you can substitute that transform & lighting logic for your own GLSL "vertex shader", combining the two sketches above!
Now that we know where everything is we can drastically cut the workload down to approximately the number of output pixels: discarding offscreen triangles, as well as optionally backfacing ones (since they're typically obscured by frontfacing triangles).
Before interpolating the vertex shader's output within each triangle, iterating via either its bounding box or its longest side. Of course offscreen pixels would be discarded.
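That interpolation boils down to barycentric weights. GPUs do this in fixed-function hardware, but a sketch of the underlying math in GLSL syntax (all names mine) looks like:

```glsl
// Barycentric weights: how close pixel p sits to each corner of the
// triangle (a, b, c). They sum to 1 inside the triangle.
vec3 barycentric(vec2 p, vec2 a, vec2 b, vec2 c) {
    float area = (b.x - a.x)*(c.y - a.y) - (c.x - a.x)*(b.y - a.y);
    float wb = ((p.x - a.x)*(c.y - a.y) - (c.x - a.x)*(p.y - a.y)) / area;
    float wc = ((b.x - a.x)*(p.y - a.y) - (p.x - a.x)*(b.y - a.y)) / area;
    return vec3(1.0 - wb - wc, wb, wc);
}

// Interpolating a vertex-shader output (say, lighting) within the triangle.
vec3 interpolate(vec3 weights, vec3 atA, vec3 atB, vec3 atC) {
    return weights.x*atA + weights.y*atB + weights.z*atC;
}

// A pixel lies inside the triangle when all 3 weights are non-negative;
// otherwise it's discarded.
bool inside(vec3 weights) {
    return all(greaterThanEqual(weights, vec3(0.0)));
}
```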
In OpenGL you can insert a GLSL "fragment shader" at this stage to finalize output colours. Often this involves little more than looking up a pixel from a "texture" image to output. Though you can move the lighting computation here, treating the triangles as approximating a smooth surface.
Or you could stick with gradients.
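That texture lookup really can be a one-liner. A minimal sketch, again with placeholder names of my own:

```glsl
#version 330 core

in vec2 vTexCoord;          // interpolated across the triangle, as above
uniform sampler2D uTexture; // the "texture" image to look pixels up from

out vec4 fragColour;

void main() {
    // Little more than looking up a pixel from the texture image!
    fragColour = texture(uTexture, vTexCoord);
}
```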
Finally a "depth buffer" is consulted to select the frontmost fragment pixel to output, if it hasn't been outputted already. This is the trickiest step to parallelize.
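You don't write this step in GLSL (it's fixed-function hardware), but per pixel its logic amounts to something like this sketch:

```glsl
// Sketch only: depth & colour stand in for this pixel's slots in the
// depth-buffer & the output image respectively.
void depthTest(inout float depth, inout vec4 colour,
               float fragDepth, vec4 fragColour) {
    if (fragDepth < depth) { // frontmost fragment seen so far?
        depth = fragDepth;   // remember how close it came,
        colour = fragColour; // & output its colour.
    }
}
```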
Tomorrow: Solr's plugin system (in place of revisiting BinUtils)
Next: FreeType smoothing
Then: GNU Multiprecision
Later: FreeType SVG output
Not much further to go on FreeType! And none of the upcoming topics look that complex! Then I'll have another toolbox to study as I finish my threads on the text-stack...