🔗 A love letter to bands, in music and code
This starts with the Beatles, but it’s not about them. It’s about us.
So, a friend tweeted earlier today the question “what is the final Beatles song?”, and as usual, these topics lead to fun conversations. You see, the “end” of the Beatles is a fuzzy matter, because they recorded their last two albums in 1969 and 1970, but released them in reverse order — Abbey Road was recorded last and released first, and Let It Be was the other way around. And then, do the Anthology sessions from 1994 with Paul, George and Ringo playing over an old recording of John count? (I feel they do!)
As the conversation went on, she shared with me this little story (on everything2! they’re still around!!), which only drives its point home really deep if you’re a hardcore Beatles fan, but one element in it is that it imagines a Beatles album made from songs that the individual members released immediately post-breakup. That in itself is not very remarkable: building those imaginary what-if-they-stayed-together album setlists is a favorite pastime for Beatles fans. Invariably, the semi-obvious conclusion is that if you took the best tracks from each of the four solo albums from a given year, you’d make an album that’s better than the individual works. And as you can imagine, those playlists are widely available. Still, those always sound like compilations to my ears (and, without giving away too much, the story linked above addresses that beautifully).
To me, the fact that such fan-fictional albums sound like compilations and not like true albums has to do with the reason why I think collective works — be it in music or not — tend to be better in general than solo efforts. Because work done in a collective yields results of a different nature.
“People change like colors bleed,
As I sense a shade of you in me”
— “Color Bleed pt. 2”, from the album Color Bleed (2011)
Work done by two or more people always reflects that multitude: it’s always unlike any single person’s work. When I’m in a collective environment — and by that I mean any setting where my work is presented to and discussed by others as it is developed — even when I’m doing work completely on my own, even before I’ve had my first piece of feedback, I feel a sort of mind game playing in my head where I “play the part” of my peers and imagine what their feedback would be, be it consciously or subconsciously. I’m doing the work not only for myself, but for others too, whose opinions I care about. That affects in subtle and not-so-subtle ways the results of what I do. Even the work I do alone, when in such an environment, is different and better than the work I do alone-alone.
When I recorded Color Bleed, I invited my friends who had played in a Pink Floyd cover band with me to join in. Even though I wrote all the songs — some written years prior, some written as the project took shape — all of them ended up influenced by the fact that I was playing the songs with them, and even though I tried to venture away from the cover stuff we were doing, there are clearly some Floydian moments here and there, which maybe don’t sound so much like Pink Floyd to my ears, but certainly alluded to our favorite moments on stage together (and let me tell you, playing keyboards along with another keyboardist is really cool!) — by the time we were recording the original songs, the goal was never to sound like the cover band, but for it to sound like us. So even if that wasn’t truly a “band effort” in the idealized sense that we imagine “four people in a garage”, it was never a lonely project: from the very beginning it was me and Coutinho, who was the other keyboard player in our band, who also owned the studio and took the role of producer/engineer, and then the other folks joined in contributing parts as we went. The end result was much better than anything I could achieve on my own, and most importantly, even the things that I did were better than if I had done them on my own.
This lineup ended up playing only one gig! I swear that if this pandemic is ever over, I’ll get the band together for a Color Bleed 10th Anniversary concert.
That feeling of being “in a band” is not exclusive to music. I definitely felt it as a software developer as well. At my last job, I made the interesting observation that it was the third time in my career that I was part of a team called “Core Team”. The first one was back in college, and it was the most special one — maybe because it was the first, maybe because it was the experience that shaped the rest of my career: the Core Team for GoboLinux, my first successful open source project.
Looking back, it’s funny how it started very much like Color Bleed. Back in 2002, I was at the university and I had this idea for a crazy Linux distribution which would require recompiling the entire system from scratch. A friend joined in and we did it together over the course of a weekend. One by one, more friends joined in, switched over their systems, created bootable CDs, made kernel patches adding cool features, we were just having lots of fun tinkering with the OS. Then we were slashdotted (think “HN frontpage x10”), then we were on a magazine cover, photo shoots and all, then we were invited for internships in Silicon Valley, all the way from Brazil. I never took music seriously, but at least in tech I had the “indie band having its one-hit-wonder moment” experience. And yes, it was as cool as it sounds.
Cover story! And a cool band pic!
There we are, next to KDE and Gentoo! We’re “for real”!
The second Core Team was at a local startup, where I worked with Guilherme, who was also in GoboLinux. Since we already had the chemistry from that project, this really felt more like a side project than a new band — think Petrucci and Portnoy doing LTE away from Dream Theater (yes, I just blatantly made that comparison haha).
In between the first and second Core Teams I got my Master’s degree, and between the second and third, I got my PhD. I loved it at PUC-Rio (or else I wouldn’t have gone there twice!), and made great friends each time, but it never felt like a band. On both occasions we had a research group, but each person was running their own project, with little or no overlap. Opportunities for collaboration were limited, everyone was on a different schedule, and while we created a great environment which I’ve called home for many years, it just wasn’t “a band”.
The third Core Team was at my last gig, at Kong. Again, that felt like a band — a scattered group of hackers from all over the world — China, Finland, Spain, Brazil, Peru, Canada, US — brought together because of their Lua knowledge to maintain this open source project. Each one with very different skills and backgrounds, and it was complementary: it felt like each one of us played a different instrument. And doing creative work as part of that group felt like doing it in a band context: even when I did stuff on my own, I had it in my mind that it was being done for that particular team to review and maintain (even if each of us would still put our personal flavor into the code). I had a great 3½ years with that team, where I learned a lot and played different instruments—I mean, roles, and then I put in practice something that I learned from the many bands I played in since I was a teenager: leave on a high note. Looking back, the only regret I have is that… apparently we never took a picture of our team? (To be honest, I’m not sure we ever got the last lineup of the team together in the same room — it was supposed to have happened in 2020, but then the pandemic hit. Maybe a pic of some previous lineup at least?)
Not all of my coding projects were “bands”. Even though I had tons of pull requests with contributions over the years, the process of developing htop was always a solo endeavor. I liked it this way; for a good while it was my chill-on-my-own thing away from everything and everyone. But then I drifted away from it, and free/open source (FOSS) projects need maintenance. From a distance, I feel like the new team that picked it up works like a band. I’m happy for them!
Maybe it’s better if FOSS projects work more like bands than solo projects — bands often outlast their members, after all. But then sometimes you just want to pick up an acoustic guitar and do stuff on your own. There’s got to be a place for that too. Now that I think of it, I’ve never been part of a really huge FOSS project — I have a tendency of starting projects rather than joining established ones! — and I don’t know if this “band mentality” of mine has prevented any of my projects from growing (whenever I read about the structure of the Rust project, even before the Foundation, it seemed super sprawling!) but I know that a team can only feel like a team when it is about the size of a band, and I know that a team feels best when it feels like a band.
Not all teams feel like a band, and to be honest, not even all bands do. But when it happens, it’s somewhat magical. It’s something that builds memories you take with you forever, and which changes you in some way or another. Whenever I listen to the solo works of my favorite artists who left my favorite bands, I can always tell that the influence of their old bandmates is obviously there, whether they want it or not, whether they’re Paul McCartney or John Lennon, David Gilmour or Roger Waters. I’m sure the influences from all the great people in my past are there whenever I play, and whenever I code.
🔗 Compiler versus Transpiler: what is a compiler, anyway?
Teal was featured on HN today, and one of the comments was questioning the fact that the documentation states that it “compiles Teal into Lua”:
We need better and more rigorous terms in computing science. This use of the compiler word blurs the meaning of interpreted vs compiled languages.
I was under the assumption that it would generate executable machine code, not Lua source code.
I thought that was worth replying to because it allowed me to dispel two misconceptions at once.
First, if we want to be rigorous about computer science terms, calling it “interpreted vs compiled languages” is a misnomer, because being interpreted or compiled is not a property of the language, but of the implementation. There have been things such as a C interpreter and an ahead-of-time compiler for PHP which generates machine code.
But then, we get to the main course, the use of “compiler”.
The definition of a compiler has never assumed generating executable machine code. As early as the 1970s, Pascal compilers generated P-code (a form of “bytecode”, in Java parlance), which was then interpreted. In the 1980s, Turbo Pascal produced machine code directly.
I’ve seen the neologism “transpiler” being very frowned upon by the academic programming language community precisely because a compiler is a compiler, no matter the output language — my use of “compiler” there was precisely because of my academic background.
I remember people on Academic PL Twitter jokingly calling it the “t-word”, even. I just did a quick Twitter search to see if I could find it, and I found a bunch of references dating from 2014 (though I won’t go linking people’s tweets here). But that shows how out-of-date this blog post is! Academia has pretty much settled on not making the “compiler” vs. “transpiler” distinction at all by now.
I don’t mind the term “transpiler” myself if it helps non-academics understand it’s a source-to-source compiler, but then, you don’t see people calling the Nim compiler, which generates C code then compiles it into machine code, a “transpiler”, even though it is a source-to-source compiler.
In the end, “compiler” is the all-encompassing term for a program that takes code in one language and produces code in another, be it high-level or machine language — and yes, that means that pedantically an assembler is a compiler as well (but we don’t want to be pedantic, right? RIGHT?). And since we’re talking assemblers, most C compilers do not generate executable machine code either: gcc produces assembly, which is then turned into machine code by gas. So is gcc a source-to-source compiler? Is Turbo Pascal more of a compiler than gcc? I could just as well add an output step to the Teal compiler to produce an executable, using the same techniques as the Pascal compilers of the 70s. I don’t think that would make it more or less of a compiler.
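To make that broad definition concrete, here is a toy sketch in Lua (made up for this post, unrelated to Teal’s internals): a handful of lines that take code in one language and produce code in another are, by that definition, already a compiler.

-- A toy "compiler" in the broad sense above: it takes code in a made-up
-- prefix expression language and produces code in another language (Lua
-- source), with no machine code anywhere in sight.
local function compile(expr)
   if type(expr) == "number" then
      return tostring(expr)
   end
   local op, lhs, rhs = expr[1], expr[2], expr[3]
   return "(" .. compile(lhs) .. " " .. op .. " " .. compile(rhs) .. ")"
end

print(compile({"+", 1, {"*", 2, 3}}))  -- prints (1 + (2 * 3)), which is valid Lua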
As you can see, the distinction of “what is a transpiler” reduces to “what is source code” or “what is a high-level language”, the latter especially having a very fuzzy definition, so in the end my sociological observation on the uses of “transpiler” vs. “compiler” tends to boil down to people’s prejudices of “what makes it a Real, Hardcore Compiler”. But being a “transpiler” or not doesn’t say anything about the project’s “hardcoreness” either — I’m sure the TypeScript compiler which generates JavaScript is a lot more complex than a lot of compilers out there which generate machine code.
🔗 Again on 0-based vs. 1-based indexing
André Garzia made a nice blog post called “Lua, a misunderstood language” recently, and unfortunately (but perhaps unsurprisingly) the bulk of the HN comments on it were about the age-old 0-based vs. 1-based indexing debate. You see, Lua uses 1-based indexing, and lots of programmers claimed this is unnatural because “every other language out there” uses 0-based indexing.
I’ll brush aside quickly the fact that this is not true — 1-based indexing has a long history, all the way from Fortran, COBOL, Pascal, Ada, Smalltalk, etc. — and I’ll grant that the vast majority of popular languages in the industry nowadays are 0-based. So, let’s avoid the popularity contest and address the claim that 0-based indexing is “inherently better”, or worse, “more natural”.
It really shows how conditioned an entire community can be when they find the statement “given a list x, the first item in x is x[1], the second item in x is x[2]” to be unnatural. :) And in fact this is a somewhat scary thought about groupthink, even outside of programming!
I guess I shouldn’t be surprised by groupthink coming from HN, but it was also alarming how a bunch of the HN comments gave nearly identical responses, all linking to the same writing by Dijkstra defending 0-based indexing as inherently better, as an implicit Appeal to Authority. (Well, I did read Dijkstra’s note years ago and wasn’t particularly convinced by it — not the first time I’ve disagreed with Dijkstra, by the way — but if we’re giving it extra weight for coming from one of our field’s legends, then the list of 1-based languages above gives me a much longer list of legends who disagree — not to mention standard mathematical notation, which is rooted in a much longer history.)
Instead of trying to defend 1-based indexing, I think a better exercise is to answer the question “why is 0-based indexing even a thing in programming languages?” — of course, nowadays the number one reason is tradition and familiarity, given other popular languages, and I think even proponents of 0-based indexing would agree (most of them probably without noticing the irony of calling it the “number one” reason rather than the “number zero” reason). But if the main reason for something is tradition, then it’s important to know how the tradition started. It wasn’t with Dijkstra.
C is pointed to as the popularizer of this style. C’s well-known history points to BCPL by Martin Richards as its predecessor, a language designed to be simple to write a compiler for. One of the simplifications carried over to C: array indexing and pointer offsets were mashed together.
It’s telling how, whenever people go into non-Appeal-to-Authority arguments to defend 0-based indexes (including Dijkstra himself), they start talking about offsets. That’s because offsets are naturally 0-based, being a relative measurement: here + 0 = here; here + 1 meter = 1 meter away from here, and so on. Numeric indexes, on the other hand, are identifiers for elements of an ordered collection, and thus use the 1-based ordinal numbers: the first card in the deck, the second card in the deck, etc.
BCPL, back in 1967, took a shortcut and made it so that p[i] (an index) was equal to p + i (an offset). C inherited that. And nowadays, all arguments that say that indexes should be 0-based are actually arguments that offsets are 0-based, indexes are offsets, and therefore indexes should be 0-based. That’s a circular argument. Even Dijkstra’s argument starts with the calculation of differences, i.e., doing “pointer arithmetic” (offsets), not indexing.
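To put the distinction in code rather than words, here is a tiny Lua illustration (purely didactic, not from any real codebase): an index identifies which element you want, an offset measures how far from the start it sits, and the conversion between the two is exactly where the off-by-one lives.

-- Purely illustrative: mapping a 1-based index (an ordinal identifier)
-- to a 0-based offset (a distance from the start of a chunk of memory).
local function element_offset(i, element_size)
   return (i - 1) * element_size   -- the 1st element sits 0 bytes from the start
end

print(element_offset(1, 8))   --> 0   first element: zero bytes away
print(element_offset(3, 8))   --> 16  third element: two elements away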
Nowadays, people just repeat these arguments over and over, because “C won”, and now that tiny compiler-writing shortcut from the 1960s appears in Java, C#, Python, Perl, PHP, JavaScript and so on, even though none of these languages even have pointer arithmetic.
What’s funny to think about is that if instead C had not done that and used 1-based indexing, people today would certainly be claiming how C is superior for providing both 1-based indexing with p[i] and 0-based pointer offsets with p + i. I can easily visualize how people would argue that was the best design because there are always scenarios where one leads to more natural expressions than the other (similar to having both x++ and ++x), and how newcomers getting them mixed up were clearly not suited for the subtleties of low-level programming in C, and should be instead using simpler languages with garbage collection and without 0-based pointer arithmetic.
🔗 What’s faster? Lexing Teal with Lua 5.4 or LuaJIT, by hand or with lpeg
I ran a small experiment where I ported my handwritten lexer written in Teal (which translates to Lua) to lpeg, then ran the compiler on the largest Teal source I have (itself).
              lexer time / total time
LuaJIT+hw   -  36 ms / 291 ms
LuaJIT+lpeg -  40 ms / 325 ms
Lua5.4+hw   - 105 ms / 338 ms
Lua5.4+lpeg -  66 ms / 285 ms
These are average times from multiple runs of tl check tl.tl, done with hyperfine.
The “lexer time” was measured by adding an os.exit(0) call right after the lexer pass. The “total time” is the full run, which includes additional lexer passes on some tiny code snippets that are generated on the fly, so the change between the lpeg lexer and the handwritten lexer affects it a bit as well.
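For clarity, this is roughly what that early exit looks like; the module and function names below are placeholders for illustration, not the actual Teal internals.

-- A sketch of the measurement hack; "lexer" and "parser" are stand-in names.
local lexer = require("lexer")            -- assumed module name
local parser = require("parser")          -- assumed module name

local input = io.open(arg[1]):read("*a")
local tokens = lexer.lex(input)           -- run the lexer pass only

os.exit(0)                                -- bail out here so only the lexer gets timed

local ast = parser.parse(tokens)          -- with the exit above, this never runs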
Looks like on LuaJIT my handwritten lexer beats lpeg, but the rest of the compiler (lots of AST manipulation and recursive walks) seems to run faster on Lua 5.4.2 than it does on LuaJIT 2.1-git.
Then I decided to move the os.exit(0) further, to after the parser step but before the type checking and code generation steps. These are the results:
              lexer+parser time
LuaJIT+hw   - 107 ms ± 20 ms
LuaJIT+lpeg - 107 ms ± 21 ms
Lua5.4+hw   - 163 ms ± 3 ms
Lua5.4+lpeg - 120 ms ± 2 ms
The Lua 5.4 numbers I got seem consistent with the lexer-only tests: the same 40 ms gain was observed when switching to lpeg, which tells me the parsing step is taking about 55 ms with the handwritten parser. With LuaJIT, the handwritten parser seems to take about 65 ms — what was interesting though was the variance reported by hyperfine: the Lua 5.4 tests gave me a variation in the order of 2-3 ms, while LuaJIT’s was around 20 ms.
I re-ran the “total time” tests running the full tl check tl.tl to observe the variance, and it was again consistent, with LuaJIT’s performance jitter (no pun intended!) being 6-8x that of Lua 5.4:
              total time
LuaJIT+hw   - 299 ms ± 25 ms
LuaJIT+lpeg - 333 ms ± 31 ms
Lua5.4+hw   - 336 ms ± 4 ms
Lua5.4+lpeg - 285 ms ± 4 ms
My conclusion from this experiment is that converting the Teal parser from the handwritten one into lpeg would probably increase performance in Lua 5.4 and decrease the performance variance in LuaJIT (with the bulk of that processing happening in the more performance-reliable lpeg VM — in fact, the variance numbers of the “lexer tests” in lpeg mode are consistent between LuaJIT and Lua 5.4). It’s unclear to me whether an lpeg-based parser will actually run faster or slower than the handwritten one under LuaJIT — possibly the gain or loss will be below the 8-9% margin of variance we observe in LuaJIT benchmarks, so it’s hard to pay much attention to any variability below that range.
Right now, performance of the Teal compiler is not an immediate concern, but I wanted to get a feel for what’s within close reach. The Lua 5.4 + lpeg combo looks promising, and the LuaJIT pure-Lua performance is great as usual. For now I’ll keep using the handwritten lexer and parser, if only to avoid a C-based dependency in the core compiler — it’s nice to be able to take the generated tl.lua from the GitHub repo, drop it into any Lua project, call tl.loader() to register the package loader and get instant Teal support in your require() calls!
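In case that sounds abstract, the whole setup boils down to something like this (assuming tl.lua is somewhere on your package.path; “mymodule” is a placeholder for your own Teal module):

-- Assuming the generated tl.lua has been dropped somewhere on package.path:
local tl = require("tl")
tl.loader()                            -- registers the Teal package loader

-- from here on, require() can load .tl files directly
local mymodule = require("mymodule")   -- "mymodule" is a placeholder name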
Update: Turns out a much faster alternative is to use LuaJIT with the JIT disabled! Here are some rough numbers:
JIT + lpeg:            327 ms
5.4:                   310 ms
JIT:                   277 ms
5.4 + lpeg:            274 ms
JIT w/ jit.off:        173 ms
JIT w/ jit.off + lpeg: 157 ms
I have now implemented a function which disables the JIT on LuaJIT and temporarily disables garbage collection (which provides further speed ups) and merged it into Teal. The function was appropriately named:
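A rough sketch of the gist of it, with a placeholder name rather than the real one:

-- A rough sketch with a placeholder name; not the function that was merged into Teal.
local function go_fast()
   if jit then
      jit.off()               -- on LuaJIT, turn the JIT compiler off
   end
   collectgarbage("stop")     -- temporarily disable the garbage collector
end

local function back_to_normal()
   collectgarbage("restart")  -- re-enable the garbage collector when done
end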
🔗 Parsing and preserving Teal comments
The Teal compiler currently discards comments when parsing, and it constructs an abstract syntax tree (AST) which it then traverses to generate output Lua code. This is fine for running the code, but it would be useful to have the comments around for other purposes. Two things come to mind:
- JavaDoc/Doxygen/LDoc/(TealDoc?) style documentation generated from comments
- a code formatter that preserves comments
Today I spent some time playing with the lexer and parser and AST, looking into preserving comments from the input, with these two goals in mind.
The compiler does separate lexer and parser steps, both handwritten, and the comments were discarded at the lexer stage. That bit was easy to change: as I consumed each comment, I stored it as metadata attached to the following token. (What if there’s no following token? I’m currently dropping the final comment if it’s the last thing in the file, but that would be trivial to fix.)
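In rough Lua terms, the idea looks something like this (the token shapes and helper names are invented for illustration, not the compiler’s actual ones):

-- Illustrative only; the token fields and the iterate_tokens helper are made up.
local function lex(input)
   local tokens = {}
   local pending_comments = {}
   for tok in iterate_tokens(input) do          -- assumed tokenizer iterator
      if tok.kind == "comment" then
         table.insert(pending_comments, tok.text)
      else
         if #pending_comments > 0 then
            tok.comments = pending_comments     -- attach comments to the next real token
            pending_comments = {}
         end
         table.insert(tokens, tok)
      end
   end
   -- comments at the very end of the file are dropped here,
   -- matching the behavior described above
   return tokens
end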
The hard part of dealing with comments is what to do with them when parsing: the parser stage reads tokens and builds the AST. The question then becomes: to which node should each comment be attached as metadata? This is way trickier than it looks, because comments can appear everywhere:
--[[ comment! ]] if --[[ comment! ]] x --[[ comment! ]] < --[[ comment! ]] 2 --[[ comment! ]] then --[[ comment! ]]
   --[[ comment! ]]
   --[[ comment! ]]
   print --[[ comment! ]] ( --[[ comment! ]] x --[[ comment! ]] ) --[[ comment! ]]
   --[[ comment! ]]
   --[[ comment! ]]
end --[[ comment! ]]
After playing a bit with a bunch of heuristics to determine to which AST nodes I should be attaching comments, in an attempt to preserve them all, I came to the conclusion that this is not the way to go.
The point of the abstract syntax tree is to abstract away the syntax, meaning that all those purely syntactical items such as then and end are discarded. So storing information such as “there’s a --[[ comment! ]] right before “then” and then another one right after, and then another in the following line—no, in fact two of them!—before the next statement” is pretty much antithetical to the concept of an AST.
However, there are comments for which there is an easy match as to which AST node they should attach to. And fortunately, those are sufficient for the “TealDoc” documentation generation: it’s easy to catch the comment that appears immediately before a statement node and attach it to that node. This way, we get pretty much all of the relevant ground covered: type definitions, function declarations, variable declarations. Then, to build a “TealDoc” tool, it would be a matter of traversing the AST and writing a small separate parser to read the contents of the documentation comments (e.g. reading `@param` tags and the like). I chose to attach comments to statements and to fields in data structures, and to stop there. Especially for a handwritten parser, adding comment handling code can add to the noise quickly.
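A rough sketch of what that traversal could look like (the node shapes and field names below are invented for illustration, not the actual Teal AST):

-- Invented node shapes for illustration; not the actual Teal AST.
local function collect_docs(program)
   local docs = {}
   for _, node in ipairs(program) do            -- top-level statements, in order
      if node.comments then
         table.insert(docs, {
            name = node.name or "<anonymous>",         -- e.g. a function or type name
            doc  = table.concat(node.comments, "\n"),  -- the attached comment block
         })
      end
   end
   return docs
end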
As for the second use-case, a formatter/beautifier, I don’t think that going through the route of the AST is the right way to go. A formatter tool could share the compiler’s lexer, which does read and collect all tokens and comments, but then it should operate purely in the syntactic domain, without abstracting anything. I wrote a basic “token dumper” in the early days of Teal that did some of this work, but it hasn’t been maintained — as the language got more complex, it now needs some more context in order to do its work properly, so I’m guessing it needs a state machine and/or more lookahead. But with the comments preserved by the lexer, it should be possible to extend it into a proper formatter.
The code is in the preserve-comments branch. So, with these two additions (comments fully stored by the lexer, and comments partially attached to the AST by the parser), I think we have everything we need to serve as a good base for future documentation and formatting tooling. Any takers on these projects?