🔗 Data Oriented Design, a.k.a. Lower Level Programming?
I’m not sure if this title is clickbaity, but it certainly summarizes some of the impressions I wanted to write about.
Yesterday I watched Andrew Kelley’s fun talk on Practical Data Oriented Design — do check it out! — and this post will contain some “spoilers” (as in, I will discuss his takeaways). I was drawn to the talk for two reasons: first, because I wanted to check if I was up-to-date on my programming TLAs, but also because he starts by talking about how he felt he had been stuck in a plateau as a programmer for the past decade — a feeling I’m sure many of us have shared at times! — and how this new knowledge got him out of it.
The bulk of the talk, and his takeaways on refactoring his Zig compiler to use Data Oriented Design, is about how to get better runtime performance by making data structures smaller, so they are easier on the cache.
DOD techniques
Lots of the examples involved understanding struct alignment, to raise awareness of how much space gets wasted if you don’t take it into account. One way to deal with it is to replace 64-bit pointers with 32-bit array indices (accepting the assumption that you can then have at most 4G items, which is often fair), with the important caveat that type safety is lost once you no longer have a `MyStruct*` but just a `u32`. This goes hand in hand with moving from arrays of structures to structures of arrays, so you can pack data more tightly.
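To make that concrete, here’s a rough C sketch of both moves (the struct and field names are mine, for illustration, not from the talk):

```c
#include <stdint.h>

/* Array-of-structs: alignment pads each element. On a typical 64-bit
   ABI this struct is 16 bytes: 8 for the pointer, 4 for hp, 1 for
   flags, and 3 bytes of padding. */
struct MonsterAoS {
    struct MonsterAoS *target; /* 8-byte pointer */
    uint32_t hp;
    uint8_t flags;
};

/* Struct-of-arrays: each field packs tightly in its own array, and a
   32-bit index replaces the 64-bit pointer (assuming at most 4G
   monsters). Note the cost: `target` is now just a plain integer,
   so the compiler can no longer check what it refers to. */
struct MonstersSoA {
    uint32_t *target; /* index into these same arrays, not a pointer */
    uint32_t *hp;
    uint8_t  *flags;
    uint32_t  len;
};
```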
Another method is to apply “encodings” of data to avoid additional booleans in structs. Instead of an `enum Creature { Elf, Orc }` and a boolean `isAlive`, you do an `enum Creature { AliveElf, DeadElf, AliveOrc, DeadOrc }`, effectively moving that bit of data into the byte used by the enum. This is no different from packing structures using bitfields. Combining this with the switch to arrays, you can possibly even avoid using that bit altogether, by keeping two arrays, `dead_creatures` and `living_creatures`.
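In C terms, the encoding trick might look like this (a minimal sketch, with names mirroring the talk’s example):

```c
#include <stdint.h>

/* Before: one byte for the kind plus one byte for the flag. */
enum Creature { Elf, Orc };
struct EntityBefore {
    uint8_t kind;     /* holds an enum Creature */
    uint8_t is_alive; /* boolean */
};

/* After: liveness is folded into the enum, so a single byte holds
   both facts. */
enum CreatureState { AliveElf, DeadElf, AliveOrc, DeadOrc };
struct EntityAfter {
    uint8_t state; /* holds an enum CreatureState */
};

/* Decoding is a cheap bit test, much like a bitfield: with this
   ordering, even values are alive and odd values are dead. */
static inline int is_alive(uint8_t state) {
    return (state & 1) == 0;
}
```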
As he went through the various examples of refactors to reach this goal, one by one I kept getting this sense of déjà vu: “hey, this is how we used to program in the olden days!”
8-bit coding
If you look at assembly for the 6502, the 8-bit processor used in the NES (my first game console) and the Apple II (my first computer!), you’ll see some of those tricks embedded in the processor design itself.
The 6502 is an 8-bit processor with a 16-bit address space: each instruction features a 1-byte opcode, optionally followed by up to two bytes. Since the address space is 16 bits, addresses go from 0 ($0000) to 65535 ($FFFF). So, to load a byte from memory position $1234 into the A register, you do `LDA $1234`, which takes three bytes: `AD 34 12` (yes, the 6502 is little-endian!). However, to allow for more compact code, the first 256 bytes of memory get special processor support: addresses $0000 to $00FF, the “Zero Page”. So, just like in the enum trick for `AliveElf` and `DeadElf`, the “enum of opcodes” in the 6502 uses a separate number for loading from the Zero Page, so `LDA $0012` encodes into only two bytes: `A5 12`. This also reminds me of switching from pointers to integers, since that one-byte offset into the Zero Page is also a half-sized index that works given a set of assumptions.
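Hand-assembling those two instructions makes the saving visible (the opcode bytes below are the standard 6502 encodings quoted above):

```c
/* LDA absolute: opcode $AD followed by a little-endian 16-bit address. */
unsigned char lda_absolute[3] = { 0xAD, 0x34, 0x12 }; /* LDA $1234 */

/* LDA zero-page: a different opcode, $A5, followed by a single byte.
   The "which addressing mode" information lives in the opcode itself. */
unsigned char lda_zeropage[2] = { 0xA5, 0x12 };       /* LDA $0012 */
```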
Going from arrays of structs to structs of arrays is also a very old trick. In fact, I recall my earliest days of BASIC programming, where we didn’t have structs, only arrays, so storing each “attribute” in its own array was essentially the only way: if I wanted to store x/y coordinates and a name for a bunch of characters, I’d have three arrays, `XS`, `YS` and `NS$`. I also remember how, over time, using parallel arrays like this started to get frowned upon as “poor technique”, since arguably code using arrays of structs is easier to read and maintain than code using structs of arrays, where you need to manually keep more things in sync.
Refactoring for performance
And this is a common theme: all those old-school techniques being reframed in the talk as Data Oriented Design were in fact once the norm, and they started to be phased out in the name of ease of development and maintenance. Yes, they do result in faster code — sometimes much faster code! — if you restructure your code to count each byte and optimize for cache usage. But a key word there is restructure. Writing code this way makes sense when you know how the data will be used, and how it will continue to be used. I was happy to see Andrew doing real-world measurements in his talk, and he correctly points out the assumptions involved, with comments such as “if we assume that most monsters are alive”, etc.
It’s very difficult to do this from the get-go, while you’re still iterating around your problem space. But once you know the typical behavior of the program, you can rework the data to match it. And yes, that will most likely give you a performance boost, but most often not without a cost in maintainability: how does that change in the structure affect the client code that uses it?
Further, how hard would it be to change it over again if the underlying assumptions change — for example, if the usage patterns change, if we port it and the architecture changes, or if we need to add another bit of data into that structure? Sometimes those are important concerns, for example in a codebase of projects that change often and fast (think of a startup evolving its product as market targets move), but sometimes projects reach a stage of maturity where you can step back, look at it and say: “Well, I think I have a good understanding of how this behaves now. What is the most memory-efficient representation for the data?”
Andrew’s case looks like a prime example of that. Once you get the tokenizer for a compiler done, you don’t really expect big seismic changes to its codebase (in fact, I think I could benefit from making some similar changes to my own Teal compiler!). A compiler is a perfect project for this kind of technique: it’s fairly low-level, performance-critical code. If I recall correctly, Andrew used to work for a web company before Zig, so it makes sense that the style of code he gravitated towards before was higher-level than the one he’s excited about now.
What about maintenance?
Optimizing code for performance always feels like a fun puzzle, but the maintenance cost is never far from my mind. Even in something like a compiler, making the code “as tight as possible” can backfire, if your implementation language does not allow for proper abstractions. The difficulties in adapting LuaJIT’s C codebase to the changes in newer versions of the Lua language come to mind. One low-level trick in that codebase hinged on the fact that 32-bit address spaces are limited to 4GB, which allowed for some neat packing of data; that assumption, perfectly fair in the early 2000s, became central to the implementation. Of course, 64-bit systems arrived and the assumption broke. Getting rid of that limitation in a codebase full of smart data packing turned out to be a multi-year process.
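LuaJIT’s actual representation is more involved, but the flavor of that kind of trick is NaN-tagging along these lines (a simplified sketch of mine, not LuaJIT’s real layout):

```c
#include <stdint.h>

/* A 64-bit slot is either a double, or a NaN bit pattern whose unused
   payload carries a type tag plus a 32-bit pointer. This only works
   while every pointer fits in 32 bits. */
typedef union {
    double   num;
    uint64_t bits;
} Value;

#define TAG_STRING 0xFFF8000100000000ull /* quiet-NaN pattern + a tag */

static Value box_string(void *p) {
    Value v;
    v.bits = TAG_STRING | (uint64_t)(uintptr_t)p; /* assumes p < 4GB! */
    return v;
}

static void *unbox_string(Value v) {
    return (void *)(uintptr_t)(uint32_t)v.bits; /* low 32 bits */
}
```

Once pointers can live anywhere in a 64-bit address space, that packing stops being safe, and every place that relies on the layout has to change at once.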
Of course, if you can get a memory-efficient representation without hitting a maintenance cost, that’s the ideal situation. Some languages are better at this than others. I was impressed that Zig implements structs-of-arrays as MultiArrayList using apparently the same client interface as a regular ArrayList, such that changing from one to the other seems to be a “5-character change”. In languages that offer no such abstraction, that’s a much more invasive change throughout a codebase (think of all the places where you’d have to change a `monsters[i]->health` into `monster_healths[i]`, and how the memory management of those arrays and their contents changes). I’ve also seen Edward Kmett pull some very cool tricks in Haskell, combining super-efficient internal representations with very clean high-level abstractions.
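In a language without that abstraction, the best you can do is hide the layout behind accessors up front, so the churn is at least contained (a sketch of the idea, with made-up names):

```c
#include <stdint.h>

/* Struct-of-arrays storage hidden behind a tiny accessor. Client code
   calls monster_health(m, i) regardless of layout, so swapping the
   representation later doesn't ripple through every call site. */
struct Monsters {
    uint32_t *healths;
    uint8_t  *states;
    uint32_t  len;
};

static inline uint32_t monster_health(const struct Monsters *m, uint32_t i) {
    return m->healths[i];
}
```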
In conclusion…
Still, I think it’s nice that some “old-school” techniques are getting a fresh coat of paint and are being revisited. We all benefit from being more performance-conscious, and thinking about how to do it also means thinking about when to do it.
There’s something to be said about bringing back “old-school” techniques for programming, though, especially for those of us old enough to remember them: the trade-offs for modern architectures are definitely different. Andrew raises a good point about memoization vs. recomputation: the kinds of things you should choose to memoize when coding for the 6502 processor on an NES are very different from those for a modern x86-64. So it’s actually good that those things are being rethought rather than just rehashed — there’s too much outdated advice out there, especially regarding performance.
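A concrete instance of that shift (my example, not Andrew’s): on a 6502, a precomputed sine table is the obvious win, one indexed load versus many cycles of arithmetic; on a modern CPU, if the table isn’t hot in cache, the miss can cost more than simply recomputing:

```c
#include <math.h>
#include <stdint.h>

static int8_t sine_table[256]; /* memoization: filled in at startup
                                  (initialization omitted) */

static double sine_lookup(uint8_t angle) {
    return sine_table[angle] / 127.0; /* one load; great on a 6502 */
}

static double sine_compute(uint8_t angle) {
    /* recomputation: a handful of FP ops, which a modern x86-64 may
       finish before a cache-missing table load would return */
    return sin(angle * (6.283185307179586 / 256.0));
}
```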
The one piece of advice regarding performance that never gets old is: measure. And keep measuring, to see if the tricks you’re keen on using still make sense as the years go by! Another conclusion we get from this is that optimization and abstraction are not at odds with each other; in fact, combining them, across language and application levels, is the right way to do it, so that we can keep the performance and the high-level code — but that’s probably a subject for another time!