
🔗 The algorithm did it!

Earlier today, statistician Kareem Carr posted an interesting tweet about what people out there mean when they say “algorithm”, which I found to be a good summary:

When people say “algorithms”, they mean at least four different things:

1. the assumptions and description of the model

2. the process of fitting the model to the data

3. the software that implements fitting the model to the data

4. the output of running that software

Unsurprisingly, this elicited a lot of responses from computer scientists, raising the point that this is not what the word “algorithm” is supposed to mean (you know, a well-defined sequence of steps transforming inputs into outputs, the usual CS definition), including a response from Grady Booch, a key figure in the history of software engineering.

I could see where both of them were coming from. I responded that Carr’s original tweet was not about what programmers mean when we say “algorithms” but about what laypeople mean when they say it or read it in the media. And understanding this distinction is especially important because variations of “the algorithm did it!” are the new favorite excuse of policymakers in companies and governments alike.

Booch responded to me, clarifying that his point is that “even most laypeople don’t think any of those things”, which I agree with. People have, at best, a fuzzy definition of what an algorithm is, and I think Carr’s list encompasses rather well the various things that are responsible for the effects people credit to a vague notion of “algorithm” when they use that term.

Booch also added that “it’s appropriate to establish and socialize the correct meaning of words”, which simultaneously extends the discussion to a wider scope and focuses it on the heart of the matter: the use of “algorithm” in our current society.

You see, it’s not about holding on to the original meaning of a word. I’m sure a few responses to Carr were of the pedantic variety, the “that’s not what the dictionary says!” kind of thing. But that’s short-sighted, taking a prescriptivist rather than descriptivist view of language. Most of us who care about language are past that debate now, and those of us who adhere to the sociolinguistic view of language even celebrate the fact that language shifts, adapts, and evolves to suit the use of its speakers.

Shriram Krishnamurthi, a CS professor at Brown, joined in on the conversation, observing this shift in the language as a fait accompli:

I’ve been told by a public figure in France (who is herself a world-class computer scientist) — who is sometimes called upon by shows, government, etc. — that those people DO very much use the word this way. As an algorithms researcher it irks her, but that’s how it is.

Basically, we’ve lost control of the word “algorithm”. It has its narrow meaning but it also has a very broad meaning for which we might instead use “software”, “system”, “model”, etc.

Still, I agreed with Booch that this is a fight worth fighting: not to preserve our cherished technical meaning of the term, to the dismay of the pedants among our ranks, but because of the very circumstances that led to this linguistic shift.

The use of “algorithm” as a vague term to mean “computers deciding things” has a clear political intent: shifting blame. Social networks boosting hate speech? Sorry, the recommendation algorithm did it. Racist bias in criminal justice systems? Sorry, it was the algorithm.

When you think about it, from a linguistic point of view, it is as nonsensical as saying that “my hammer assembled the shelf in my living room”. No, I did, using the hammer. Yet, people are trained to use such constructs all the time: “the pedestrian was hit by a car”. Note the use of passive voice to shift the focus away from the active subject: “a car hit a pedestrian” has a different ring to it, and, while still giving agency to a lifeless object, is one step closer to making you realize that it was the driver who hit the pedestrian, using the car, just like it was I who built the shelf, using the hammer.

This of course leads to the “guns don’t kill people, people kill people” response. Yes, it does, and the exact same questions regarding guns also apply regarding “algorithms” — and here I use the term in the “broader” sense as put forward by Carr and observed by Krishnamurthi. Those “algorithms” — those models, systems, collections of data, programs manipulating this data — wield immense power in our society, even, like guns, resulting in violence, and, like guns, deserving scrutiny. And when those in possession of those “algorithms” come under scrutiny, they really don’t like it. One only needs to look at the fallout resulting from the work by Bender, Gebru, McMillan-Major and Mitchell about the dangers of extremely large language models in machine learning. Some people don’t like hearing the suggestion that maybe overpowered weapons are not a good idea.

By hiding all those issues behind the word “algorithm”, policymakers will always find a friendly computer scientist available to say that yes, an algorithm is a neutral thing, after all, it’s just a sequence of instructions, and they will no doubt profit from this confusion of meanings. And I must clarify that by policymakers I mean those in both the public and private spheres, since the policies put forward by private tech giants on their platforms, where we spend so much of our lives, affect our society as much as public policies do nowadays.

So what do we do? I don’t think it is productive to start well-actually-ing anyone who uses “algorithm” in the broader sense, with a pedantic “Let me interject for a moment — what you mean by algorithm is in reality a…”. But it is productive to spot when this broad term is being used to hide something else. “The algorithm is biased”? What do you mean? Are the outputs biased? Why? Is the input data biased? Did the people manipulating that data create a biased process? Who are they? Why did they choose this process and not another? These are better interjections to make.

These broad systems described by Carr above ultimately run on code. There are algorithms inside them, processing those inputs, generating those outputs. The use of “algorithm” to describe the whole may have started as a harmless metonymy (like saying “White House” to refer to the entire US government), but it has since proven very useful as a deflection tactic. By using a word that people don’t understand, the message becomes “computers doing something you don’t understand and shouldn’t worry about”, using “algorithm” handwavily to steer people’s minds away from the policy issues around computation, the same way “cloud” is used with data: “your data? don’t worry, it’s in the cloud”.

Carr is right: his list encompasses the things that people refer to as “algorithms” nowadays. Krishnamurthi is right: this broad meaning is a reality in modern language. And Booch is right when he says that “words matter; facts matter”.

Holding words to their stricter meanings merely out of love for the language as we were taught it is a fool’s errand; language changes whether we want it or not. But our duty as technologists is to identify the interplay between the language, our field, and society, and how and why both the language and the field are being used. We need to clarify to people what the pieces at play really are when they say “algorithm”. We need to constantly emphasize to the public that there’s no magic behind the curtain and, most importantly, that all policies are due to human choices.

🔗 Compiler versus Transpiler: what is a compiler, anyway?

Teal was featured on HN today, and one of the comments was questioning the fact that the documentation states that it “compiles Teal into Lua”:

We need better and more rigorous terms in computing science. This use of the compiler word blurs the meaning of interpreted vs compiled languages.

I was under the assumption that it would generate executable machine code, not Lua source code.

I thought that was worth replying to because it allowed me to dispel two misconceptions at once.

First, if we want to be rigorous about computer science terms, speaking of “interpreted vs. compiled languages” is a misnomer, because being interpreted or compiled is not a property of the language but of the implementation. There have been such things as C interpreters and ahead-of-time PHP compilers that generate machine code.

But then, we get to the main course, the use of “compiler”.

The definition of “compiler” has never assumed generating executable machine code. Already in the 1970s, Pascal compilers were generating P-code (a form of “bytecode”, in Java parlance), which was then interpreted. In the 1980s, Turbo Pascal produced machine code directly.

I’ve seen the neologism “transpiler” be heavily frowned upon by the academic programming language community, precisely because a compiler is a compiler no matter the output language; my own use of “compiler” there came straight from my academic background.

I remember people on Academic PL Twitter even jokingly calling it the “t-word”. I just did a quick Twitter search to see if I could find it, and I found a bunch of references dating from 2014 (though I won’t go linking people’s tweets here). But that shows how out-of-date this blog post is! Academia has pretty much settled on not using the term “transpiler” at all by now.

I don’t mind the term “transpiler” myself if it helps non-academics understand that it’s a source-to-source compiler. But then, you don’t see people calling the Nim compiler, which generates C code and then compiles it into machine code, a “transpiler”, even though it is a source-to-source compiler.

In the end, “compiler” is the all-encompassing term for a program that takes code in one language and produces code in another, be it a high-level or a machine language — and yes, that means that, pedantically, an assembler is a compiler as well (but we don’t want to be pedantic, right? RIGHT?). And since we’re talking assemblers, most C compilers do not generate executable machine code directly either: gcc produces assembly, which is then turned into machine code by gas. So is gcc a source-to-source compiler? Is Turbo Pascal more of a compiler than gcc? I could just as well add an output step to the Teal compiler to produce an executable, using the same techniques as the Pascal compilers of the 70s. I don’t think that would make it more or less of a compiler.
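To make it concrete with the case at hand: “compiling Teal into Lua” essentially means checking the type annotations and then erasing them. Here is roughly what that looks like (a hypothetical snippet of mine; the actual compiler output may differ in details):

-- Teal input:
--    local function add(a: number, b: number): number
--       return a + b
--    end

-- Lua output of the Teal compiler, with the type annotations erased:
local function add(a, b)
   return a + b
end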

As you can see, the distinction of “what is a transpiler” reduces to “what is source code” or “what is a high-level language”, the latter especially having a very fuzzy definition. So, in the end, my sociological observation is that the choice between “transpiler” and “compiler” tends to boil down to people’s prejudices about “what makes it a Real, Hardcore Compiler”. But being a “transpiler” or not doesn’t say anything about a project’s “hardcoreness” either — I’m sure the TypeScript compiler, which generates JavaScript, is a lot more complex than many compilers out there that generate machine code.

🔗 Again on 0-based vs. 1-based indexing

André Garzia wrote a nice blog post called “Lua, a misunderstood language” recently, and unfortunately (but perhaps unsurprisingly) the bulk of the HN comments on it were about the age-old 0-based vs. 1-based indexing debate. You see, Lua uses 1-based indexing, and lots of programmers claimed this is unnatural because “every other language out there” uses 0-based indexing.

I’ll quickly brush aside the fact that this is not true — 1-based indexing has a long history, running through Fortran, COBOL, Pascal, Ada, Smalltalk and others — and I’ll grant that the vast majority of popular languages in the industry nowadays are 0-based. So, let’s avoid the popularity contest and address the claim that 0-based indexing is “inherently better” or, worse, “more natural”.

It really shows how conditioned an entire community can be when they find the statement “given a list x, the first item in x is x[1], the second item in x is x[2]” to be unnatural. :) In fact, it’s a somewhat scary thought about what groupthink can do, even outside of programming!

I guess I shouldn’t be surprised by groupthink coming from HN, but it was also alarming how a bunch of the HN comments gave nearly identical responses, all linking to the same note by Dijkstra defending 0-based indexing as inherently better, as an implicit Appeal to Authority. (Well, I did read Dijkstra’s note years ago and wasn’t particularly convinced by it — not the first time I disagree with Dijkstra, by the way — but if we’re giving it extra weight for coming from one of our field’s legends, then the list of 1-based languages above gives me a much longer list of legends who disagree — not to mention standard mathematical notation, which is rooted in a much longer history.)

I think that, instead of trying to defend 1-based indexing, a better exercise is to answer the question “why is 0-based indexing even a thing in programming languages?” Of course, nowadays the number one reason is tradition and familiarity, given the other popular languages, and I think even proponents of 0-based indexing would agree, in spite of the fact that most of them wouldn’t even notice the irony of calling it the “number one reason” rather than the “number zero reason”. But if the main reason for something is tradition, then it’s important to know how the tradition started. It wasn’t with Dijkstra.

C is pointed to as the popularizer of this style. C’s well-known history points to BCPL, by Martin Richards, as its predecessor: a language designed to be simple to write a compiler for. One of the simplifications carried over to C was that array indexing and pointer offsets were mashed together.

It’s telling how, whenever people go into non-Appeal-to-Authority arguments to defend 0-based indexes (including Dijkstra himself), they start talking about offsets. That’s because offsets are naturally 0-based, being a relative measurement: here + 0 = here; here + 1 meter = 1 meter away from here, and so on. Indexes, in contrast, are identifiers for the elements of an ordered collection, and thus use 1-based ordinal numbers: the first card in the deck, the second card in the deck, and so on.
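To make the distinction concrete in Lua terms (a toy illustration of mine, not code from any actual implementation):

-- indexes are ordinals: deck[1] is the first card
local deck = { "ace", "two", "three" }
print(deck[1])        --> ace

-- offsets are relative distances, which are naturally 0-based;
-- they can be emulated by adding to a base position:
local base = 1
print(deck[base + 0]) --> ace   (0 cards away from the base)
print(deck[base + 2]) --> three (2 cards away)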

BCPL, back in 1967, took a shortcut and made it so that p[i] (an index) was equal to p + i (an offset). C inherited that. And nowadays, all arguments that say that indexes should be 0-based are actually arguments that offsets are 0-based, that indexes are offsets, and that therefore indexes should be 0-based. That’s a circular argument. Even Dijkstra’s note starts with the calculation of differences, i.e., doing “pointer arithmetic” (offsets), not indexing.

Nowadays, people just repeat these arguments over and over, because “C won”, and now that tiny compiler-writing shortcut from the 1960s appears in Java, C#, Python, Perl, PHP, JavaScript and so on, even though none of these languages have pointer arithmetic.

What’s funny to think about is that, had C not done that and used 1-based indexing instead, people today would certainly be claiming that C is superior for providing both 1-based indexing with p[i] and 0-based pointer offsets with p + i. I can easily visualize people arguing that this was the best design because there are always scenarios where one leads to more natural expressions than the other (similar to having both x++ and ++x), that newcomers who got them mixed up were clearly not suited for the subtleties of low-level programming in C, and that they should instead be using simpler languages with garbage collection and without 0-based pointer arithmetic.

🔗 What’s faster? Lexing Teal with Lua 5.4 or LuaJIT, by hand or with lpeg

I ran a small experiment where I ported my handwritten lexer, written in Teal (which compiles to Lua), to lpeg, and then ran the compiler on the largest Teal source I have (itself).
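(For readers unfamiliar with lpeg, this is roughly the flavor of a PEG-based lexer. It’s a toy sketch of mine that handles only names and numbers, not the actual Teal lexer:)

local lpeg = require("lpeg")
local P, R, S, C, Ct = lpeg.P, lpeg.R, lpeg.S, lpeg.C, lpeg.Ct

-- toy token classes: just names and numbers
local space  = S(" \t\r\n")^0
local alpha  = R("az", "AZ") + P("_")
local digit  = R("09")
local name   = C(alpha * (alpha + digit)^0)
local number = C(digit^1)

-- the lexer itself: a repetition of tokens, skipping whitespace
local tokens = Ct(space * ((name + number) * space)^0)

local t = lpeg.match(tokens, "if x then 42 end")
-- t is { "if", "x", "then", "42", "end" }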

lexer time / total time
LuaJIT+hw   -  36 ms / 291 ms
LuaJIT+lpeg -  40 ms / 325 ms
Lua5.4+hw   - 105 ms / 338 ms
Lua5.4+lpeg -  66 ms / 285 ms

These are average times from multiple runs of tl check tl.tl, done with hyperfine.

The “lexer time” was done by adding an os.exit(0) call right after the lexer pass. The “total time” is the full run, which includes additional lexer passes on some tiny code snippets that are generated on the fly, so the change between the lpeg lexer and the handwritten lexer affects it a bit as well.

It looks like on LuaJIT my handwritten lexer beats lpeg, but the rest of the compiler (lots of AST manipulation and recursive walks) seems to run faster on Lua 5.4.2 than it does on LuaJIT 2.1-git.

Then I decided to move the os.exit(0) further, to after the parser step but before the type checking and code generation steps. These are the results:

lexer+parser time
LuaJIT+hw   - 107 ms ± 20 ms
LuaJIT+lpeg - 107 ms ± 21 ms
Lua5.4+hw   - 163 ms ±  3 ms
Lua5.4+lpeg - 120 ms ±  2 ms

The Lua 5.4 numbers I got seem consistent with the lexer-only tests: the same 40 ms gain was observed when switching to lpeg, which tells me the parsing step is taking about 55 ms with the handwritten parser. With LuaJIT, the handwritten parser seems to take about 65 ms. What was interesting, though, was the variance reported by hyperfine: the Lua 5.4 tests gave me a variance in the order of 2-3 ms, while LuaJIT’s was around 20 ms.

I re-ran the “total time” tests running the full tl check tl.tl to observe the variance, and it was again consistent, with LuaJIT’s performance jitter (no pun intended!) being 6-8x that of Lua 5.4:

total time
LuaJIT+hw   - 299 ms ± 25 ms
LuaJIT+lpeg - 333 ms ± 31 ms
Lua5.4+hw   - 336 ms ±  4 ms
Lua5.4+lpeg - 285 ms ±  4 ms

My conclusion from this experiment is that converting the Teal parser from the handwritten one into lpeg would probably increase performance in Lua 5.4 and decrease the performance variance in LuaJIT (with the bulk of that processing happening in the more performance-reliable lpeg VM — in fact, the variance numbers of the “lexer tests” in lpeg mode are consistent between LuaJIT and Lua 5.4). It’s unclear to me whether an lpeg-based parser will actually run faster or slower than the handwritten one under LuaJIT — possibly the gain or loss will be below the 8-9% margin of variance we observe in LuaJIT benchmarks, so it’s hard to pay much attention to any variability below that range.

Right now, performance of the Teal compiler is not an immediate concern, but I wanted to get a feel for what’s within close reach. The Lua 5.4 + lpeg combo looks promising, and the LuaJIT pure-Lua performance is great as usual. For now I’ll keep using the handwritten lexer and parser, if only to avoid a C-based dependency in the core compiler — it’s nice to be able to take the generated tl.lua from the GitHub repo, drop it into any Lua project, call tl.loader() to register the package loader, and get instant Teal support in your require() calls!
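For instance, a minimal setup might look like this (tl.loader() is the only API assumed here; the module name is a hypothetical stand-in):

-- assumes the generated tl.lua is somewhere in your package.path
local tl = require("tl")
tl.loader()   -- registers a package loader that understands .tl files

-- from now on, require() transparently compiles and loads Teal sources;
-- "mymodule" below is hypothetical, resolved from mymodule.tl
local mymodule = require("mymodule")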


Update: Turns out a much faster alternative is to use LuaJIT with the JIT disabled! Here are some rough numbers:

JIT + lpeg: 327 ms
5.4: 310 ms
JIT: 277 ms
5.4 + lpeg: 274 ms
JIT w/ jit.off(): 173 ms
JIT w/ jit.off() + lpeg: 157 ms

I have now implemented a function which disables the JIT on LuaJIT and temporarily disables garbage collection (which provides further speed-ups), and merged it into Teal. The function was appropriately named:

turbo(true)
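Based on that description, here is a minimal sketch of what turbo() plausibly does; the actual implementation merged into Teal may differ in details:

local function turbo(on)
   if on then
      if jit then jit.off() end  -- disable the JIT compiler (a no-op on plain Lua)
      collectgarbage("stop")     -- suspend automatic garbage collection
   else
      if jit then jit.on() end
      collectgarbage("restart")
   end
end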

🔗 User power, not power users: htop and its design philosophy

What are the principles that underlie the software you make?

This short story is not really about htop, or about the feature request that I’ll use as an illustration, but about the core principles that drive the development of a piece of software, down to every last detail. And by “core principles” I really mean it.

When we develop software, we have to make a million decisions. We’re often driven by some unspoken general principles, ranging from our personal aesthetics on the external visuals, to our sense of what makes a good UX in the product behavior, to things such as “where does bloat cross the line” in the engineering internals. In FOSS, we’re often wearing all these hats at the same time.

We don’t always have a clear idea of what drives those principles; we often “just know”. There’s no better opportunity to assess those principles than when user feedback asks for a change in the behavior. And there’s no better way to explain to yourself why the change “feels wrong” than to put those principles in writing.

Today was one such opportunity.

I was peeking at the htop issue tracker as an end user, which is a refreshing experience, having recently retired from this FOSS project I started. I spotted a feature request asking for a change to make it hide threads by default.

The rationale was sensible:

People casually using htop usually have no idea what userland threads are for.
People who actually need to see them can easily enable them via SHIFT+H.

With them currently enabled by default, it is very inconvenient to go through the list and see what is running, taking up RAM, CPU usage and whatnot, therefore I think it’d be more user-friendly to not show them by default.

He proceeded to show two screenshots: one with the default behavior, full of threads (in green) mixed with the processes, and another with threads disabled.

When one of the current developers said that it’s easier for the user to figure out how to hide things than for them to discover that something hidden can be shown, the counter-argument was also sensible:

Htop can also show Disk IO, which can be arguably very useful, but is hidden by default.

At that point, I decided to put my “original author” hat on to explain the intent behind the existing behavior. Here’s what I wrote:


Hi @C0rn3j, I thought I’d drop by and give a bit of historical background on my motivation for showing threads by default.

People casually using htop usually have no idea what userland threads are for.

Yes! I fully sympathize with this sentiment. And the choice to enable threads and paint them green was deliberate.

You have no idea how many times I was asked “hey, why are some processes green?” over these 15+ years — and no, it wasn’t annoying: each of these times it was an opportunity to teach someone about threads!


(credit: xkcd)

htop was designed to provide a view into what’s going on in the system. If a process is spawning threads like crazy, or if all your cores are overwhelmed because multiple threads of a process are doing work, I think it’s fair to show it to the user, even if they don’t know a thing about threads — or, I would say, especially if they don’t know a thing about threads, otherwise these things would be happening and they wouldn’t even know where to look.

Htop can also show Disk IO, which can be arguably very useful, but is hidden by default.

One of my last projects when I was still active in htop development was to add tabs to the interface, so that the user would have a more discoverable path for navigating through these additional columns.

This code is in the next branch of the old repo https://github.com/hishamhm/htop/ — I think the code for tabs is actually finished (though not super tested); it’s pretty nice, you can click on them or cycle with the Tab key, but the Perf Counters support, which I was working on, never really got stable to a point of my liking, and that’s when I got busy with other things and drifted away from development — turns out I enjoy writing interface code a lot more than the systems-monitoring part!

Anyway, I think that also illustrates the pattern that went into the design: I implemented the entire tabs feature because I wanted to make IO and Perf Counters more discoverable. I wanted to put that information in front of users’ faces, especially for users who don’t know about these things! I considered the fact that I had implemented the entire functionality of iotop inside htop but people didn’t know about it to be a personal failure, and a practical learning experience: adding systems functionality is useless if the UI design doesn’t get it into users’ hands. (And even I didn’t use the IO features, because there was no convenient way of using them.)

I never wanted to implement a tool for “super advanced Linux power users who know what they’re doing”, otherwise I would never have spent a full line at the bottom showing the F-keys, and I would have spent my time implementing Vim bindings (ugh ;) ) instead of mouse support. I’ve always made a point that every setting be settable via the Setup screen, which you can access via F2-Setup (shown at the bottom line!) and control with the keyboard or mouse: no “edit this config file to use this advanced feature for the initiated only”, or even “read the man page” (in fact, it only has a man page because Debian developers contributed it!).

I wanted to make a tool that has the power but doesn’t hide it from users, and instead invites users into becoming more powerful. A tool that reaches out its hand and guides them along the way, helping users educate themselves about what’s happening in the internals of their system, and in this way have more control of it. This is how you hand power to users: not by erecting barriers of initiation or by giving them the bare minimum they need. I reject this dichotomy between “complicated tools for power users” and “stripped-down tools for mere mortals”, the latter being a design paradigm made popular by certain companies and unfortunately copied by so many OSS GUI projects without realizing what the goals of that paradigm really were, but that’s another rant.

And that’s why threads are enabled by default, and colored in green!

(PS: and to put the point to the proof, I must say that the tabs feature was a much bigger code change than Perf Counters themselves; it included some quite big internal refactors (there’s no “toolkit”, everything is drawn “by hand” by htop) which unfortunately might make it difficult to resurrect that code (or not! who knows?), and of course the tabs are user-definable and user-editable!)


The user who proposed the change in the defaults thanked me for the history tidbits and closed the feature request. But in some sense, writing this down was probably more enlightening to me than it was to them.

When I started writing htop, I did not set out to create “an instrument for user empowerment” or any such lofty thing. (All I wanted was a top that scrolled and was more forgiving with mistypes than circa-2005 top!) But as I proceeded to write the software, every small decision had to come from somewhere, even if made without much deliberate thought, even if at the time it was just “it felt right”. That’s how your principles start to percolate into the project.

Over time, the picture I described in that reply became clear to me. It helped me build practical guidelines, such as “every setting must be UI-accessible”; it helped me prioritize or even reject feature requests based on how much they aligned with those principles; and it helped me build a sense of purpose for the software I was working on.

If it isn’t clear to you what principles are foundational to the software you’re building, I recommend giving this exercise a try: explain why the things in the software are the way they are. There are always principles, even if you’re not conscious of them at the moment.

(PS: It would be awesome if some enterprising soul would dig up the tab-support code and resurrect it for htop 3! I don’t plan to do so myself any time soon, but all the necessary bits and pieces are there!)

