🔗 Parsing and preserving Teal comments
The Teal compiler currently discards comments when parsing, and it constructs an abstract syntax tree (AST) which it then traverses to generate output Lua code. This is fine for running the code, but it would be useful to have the comments around for other purposes. Two things come to mind:
- JavaDoc/Doxygen/LDoc/(TealDoc?) style documentation generated from comments
- a code formatter that preserves comments
Today I spent some time playing with the lexer and parser and AST, looking into preserving comments from the input, with these two goals in mind.
The compiler does separate lexer and parser steps, both handwritten, and the comments were being discarded at the lexer stage. That bit was easy to fix: as I consumed each comment, I stored it as a metadata item attached to the following token. (What if there’s no following token? I’m currently dropping the final comment if it’s the last thing in the file, but that would be trivial to fix.)
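Roughly, the idea looks something like this (a minimal sketch with hypothetical function, token and field names, not the compiler’s actual internals):

-- Sketch: as the lexer consumes a comment, it buffers it; the next real
-- token that gets emitted carries the buffered comments as metadata.
local buffered_comments = {}

local function on_comment(text, y, x)
   table.insert(buffered_comments, { text = text, y = y, x = x })
end

local function emit_token(tk, y, x)
   local token = { tk = tk, y = y, x = x }
   if #buffered_comments > 0 then
      token.comments = buffered_comments
      buffered_comments = {}
   end
   return token
end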
The hard part of dealing with comments is what to do with them when parsing: the parser stage reads tokens and builds the AST. The question then becomes: to which nodes should each comment be attached as metadata? This is way trickier than it looks, because comments can appear everywhere:
--[[ comment! ]] if --[[ comment! ]] x --[[ comment! ]] < --[[ comment! ]] 2 --[[ comment! ]] then --[[ comment! ]] --[[ comment! ]] --[[ comment! ]] print --[[ comment! ]] ( --[[ comment! ]] x --[[ comment! ]] ) --[[ comment! ]] --[[ comment! ]] --[[ comment! ]] end --[[ comment! ]]
After playing a bit with a bunch of heuristics to determine to which AST nodes I should be attaching comments, in an attempt to preserve them all, I came to the conclusion that this is not the way to go.
The point of the abstract syntax tree is to abstract away the syntax, meaning that all those purely syntactical items such as `then` and `end` are discarded. So storing information such as “there’s a --[[ comment! ]] right before “then” and then another one right after, and then another in the following line—no, in fact two of them!—before the next statement” is pretty much antithetical to the concept of an AST.
However, there are comments for which there is an easy match as to which AST node they should attach to. And fortunately, those are sufficient for the “TealDoc” documentation generation: it’s easy to catch the comment that appears immediately before a statement node and attach it to that node. This way, we get pretty much all of the relevant ground covered: type definitions, function declarations, variable declarations. Then, to build a “TealDoc” tool, it would be a matter of traversing the AST and writing a small separate parser to read the contents of the documentation comments (e.g. reading `@param` tags and the like). I chose to attach comments to statements and to fields in data structures, and to stop there. Especially for a handwritten parser, adding comment-handling code can add to the noise quickly.
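To give an idea of the consuming side, here is a minimal sketch of what a doc-comment pass could look like; the node shape (an attached `comments` list and a `body` list of child statements) is hypothetical and only meant to illustrate the traversal and the tag parsing:

-- Sketch: walk statement nodes, take the comment attached to each one,
-- and parse @param tags out of its text.
local function parse_doc_comment(text)
   local doc = { text = text, params = {} }
   for line in text:gmatch("[^\r\n]+") do
      local name, desc = line:match("@param%s+(%S+)%s*(.*)")
      if name then
         doc.params[name] = desc
      end
   end
   return doc
end

local function collect_docs(node, out)
   if node.comments then
      local comment = node.comments[#node.comments]   -- comment immediately preceding the statement
      table.insert(out, { node = node, doc = parse_doc_comment(comment.text) })
   end
   for _, child in ipairs(node.body or {}) do
      collect_docs(child, out)
   end
   return out
end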
As for the second use case, a formatter/beautifier, I don’t think the AST is the right route. A formatter tool could share the compiler’s lexer, which does read and collect all tokens and comments, but it should then operate purely in the syntactic domain, without abstracting anything away. I wrote a basic “token dumper” in the early days of Teal that did some of this work, but it hasn’t been maintained — as the language got more complex, it now needs more context in order to do its work properly, so I’m guessing it needs a state machine and/or more lookahead. But with the comments preserved by the lexer, it should be possible to extend it into a proper formatter.
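For illustration, a trivial token re-emitter along those lines could look like this (again with hypothetical field names); a real formatter would replace the naive spacing with context-aware rules:

-- Sketch: re-emit source from the raw token list, including the comments
-- the lexer attached to each token.
local function dump_tokens(tokens)
   local out = {}
   for _, token in ipairs(tokens) do
      for _, comment in ipairs(token.comments or {}) do
         table.insert(out, comment.text .. "\n")
      end
      table.insert(out, token.tk .. " ")   -- naive spacing, no reindentation
   end
   return table.concat(out)
end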
The code is in the preserve-comments branch. So, with these two additions (comments fully stored by the lexer, and comments partially attached to the AST by the parser), I think we have everything we need to serve as a good base for future documentation and formatter tooling. Any takers on these projects?
🔗 User power, not power users: htop and its design philosophy
What are the principles that underlie the software you make?
This short story is not really about htop, or about the feature request that I’ll use as an illustration, but about the core principles that drive the development of a piece of software, down to every last detail. And by “core principles” I really mean it.
When we develop software, we have to make a million decisions. We’re often driven by some unspoken general principles, ranging from our personal aesthetics on the external visuals, to our sense of what makes a good UX in the product behavior, to things such as “where does bloat cross the line” in the engineering internals. In FOSS, we’re often wearing all these hats at the same time.
We don’t always have a clear idea of what drives those principles; we often “just know”. There’s no better opportunity to assess those principles than when user feedback asks for a change in the behavior. And there’s no better way to explain to yourself why the change “feels wrong” than to put those principles in writing.
Today was one such opportunity.
I was peeking at the htop issue tracker as an end-user, which is a refreshing experience, having recently retired from this FOSS project I started. I spotted a feature request, asking for a change to make it hide threads by default.
The rationale was sensible:
People casually using htop usually have no idea what userland threads are for.
People who actually need to see them can easily enable them via SHIFT+H.

With them currently enabled by default, it is very inconvenient to go through the list and see what is running, taking up RAM, CPU usage and whatnot, therefore I think it’d be more user-friendly to not show them by default.
He proceeded to show two screenshots: one with the default behavior, full of threads (in green) mixed with the processes, and another with threads disabled.
When one of the current developers said that it’s easier for the user to figure out how to hide things than for them to discover that something hidden can be shown, the counter-argument was also sensible:
Htop can also show Disk IO, which can be arguably very useful, but is hidden by default.
At that point, I decided to put my “original author” hat on and explain the intent behind the existing behavior. Here’s what I wrote:
Hi @C0rn3j, I thought I’d drop by and give a bit of historical background as to what was my motivation for showing threads by default.
People casually using htop usually have no idea what userland threads are for.
Yes! I fully sympathize with this sentiment. And the choice for enabling threads and painting them green was deliberate.
You have no idea how many times I was asked “hey, why are some processes green?” over these 15+ years — and no, it wasn’t annoying: each of these times it was an opportunity to teach someone about threads!
htop was designed to provide a view to what’s going on in the system. If a process is spawning threads like crazy, or if all your cores are overwhelmed because multiple threads of a process are doing work, I think it’s fair to show it to the user, even if they don’t know a thing about threads — or I would say, especially if they don’t know a thing about threads, otherwise these things would be happening and they wouldn’t even know where to look.
Htop can also show Disk IO, which can be arguably very useful, but is hidden by default.
One of my last projects when I was still active in htop development was to add tabs to the interface, so that the user would have a more discoverable path for navigating through these additional columns:
This code is in the next branch of the old repo https://github.com/hishamhm/htop/ — I think the code for tabs is actually finished (though not super tested); it’s pretty nice, you can click on them or cycle with the Tab key. But the Perf Counters support which I was working on never really got stable to a point of my liking, and that’s when I got busy with other things and drifted away from development — turns out I enjoy writing interface code a lot more than the systems monitoring part!
Anyway, I think that also illustrates the pattern that went into the design: I implemented the entire tabs feature because I wanted to make IO and Perf Counters more discoverable. I wanted to put that information in front of users’ faces, especially for users who don’t know about these things! I considered the fact that I had implemented the entire functionality of iotop inside htop but people didn’t know about it to be a personal failure, and a practical learning experience: adding systems functionality is useless if the UI design doesn’t get it to users’ hands. (And even I didn’t use the IO features because there was no convenient way of using them.)
I never wanted to implement a tool for “super advanced Linux power users who know what they’re doing”, otherwise I would have never spent a full line at the bottom showing F-keys, and I would have spent my time implementing Vim bindings (ugh ;) ) instead of mouse support. I’ve always made a point that every setting be settable via the Setup screen, which you can access via F2-Setup (shown at the bottom line!) and which you can control with the keyboard or mouse, no “edit this config file to use this advanced feature for the initiated only” or even “read the man page” (in fact it only has a man page because Debian developers contributed it!).
I wanted to make a tool that has the power but doesn’t hide it from users, and instead invites users into becoming more powerful. A tool that reaches out its hand and guides them along the way, helping users to educate themselves about what’s happening in the internals of their system, and in this way have more control of it. This is how you hand power to users, not by erecting barriers of initiation or by giving them the bare minimum they need. I reject this dichotomy between “complicated tools for power users” and “stripped-down tools for mere mortals”, the latter being a design paradigm made popular by certain companies and unfortunately copied by so many OSS GUI projects, without realizing what the goals of that paradigm really were, but that’s another rant.
And that’s why threads are enabled by default, and colored in green!
(PS: and to put the point to the proof, I must say that the tabs feature was a much bigger code change than Perf Counters themselves; it included some quite big internal refactors (there’s no “toolkit”, everything is drawn “by hand” by htop) which unfortunately might make it difficult to resurrect that code (or not! who knows?), and of course tabs are user-definable and user-editable!)
The user who proposed the change in the defaults thanked me for the history tidbits and closed the feature request. But in some sense writing this down was probably more enlightening to me than it was for them.
When I started writing htop, I did not set out to create “an instrument for user empowerment” or any such lofty goal. (All I wanted was to have a top that scrolled and was more forgiving with mistypes than circa-2005 top!) But as I proceeded to write the software, every small decision had to come from somewhere, even if done without much deliberate thought, even if at the time it was just “it felt right”. That’s how your principles start to percolate into the project.
Over time, the picture I described in that reply above became clear to me. It helped me build practical guidelines, such as “every setting must be UI-accessible”, it helped me prioritize or even reject feature requests based on how much they aligned to those principles, and it helped me build a sense of purpose for the software I was working on.
If you don’t have it clear to yourself what principles are foundational to the software you’re building, I recommend giving this exercise a try — try to explain why the things in the software are the way they are. There are always principles, even if they are not conscious to you at the moment.
(PS: It would be awesome if some enterprising soul would dig up the tab support code and resurrect it for htop 3! I don’t plan to do so myself any time soon, but all the necessary bits and pieces are there!)
🔗 Getting Amethyst up and running
I have played a bit with Rust in the past, but I’ve never done much with it.
To get myself more acquainted with the language, I’ve decided to try to write a small game with it, always a fun way to play with programming.
For that, I’ve started digging into the options and found Are we game yet?, a great resource with pointers for all things Rust and game development.
I’ve decided to try Amethyst for a game engine, since my first impressions of its documentation were quite good, and it will be nice to get up to date with more modern game event management concepts.
I’ve followed the book’s instructions to get the latest stable Rust, and it also provides a 2D starter template repo. I managed to get it up and running, but I ran into a couple of hiccups, which I thought I’d document here.
First, I got this problem:
thread 'main' panicked at 'attempted to leave type `platform::platform::x11::util::input::PointerState` uninitialized, which is invalid', /rustc/d9a105fdd46c926ae606777a46dd90e5b838f92f/library/core/src/mem/mod.rs:659:9
Fortunately, 10 days ago someone at /r/rust_gamedev ran into the same thing, and the solution was to downgrade Rust from 1.48.0 to 1.46.0.
Doing that was simple:
rustup default 1.46.0
(Really impressed with how smooth the tooling is!)
I ran `cargo run` again, everything rebuilt, but then I ran into another problem:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Unsupported', /Users/hisham/.cargo/registry/src/github.com-1ecc6299db9ec823/gfx-backend-vulkan-0.3.3/src/lib.rs:326:9
There are a ton of people on the internet reporting different errors for “panicked at ‘called `Result::unwrap()` on an `Err` value’”, but a search for “err value unsupported gfx-backend-vulkan” put me on the right track. Another Amethyst beginner fixed the same issue by installing the necessary Vulkan library.
The recipe for this library was not yet available for GoboLinux, but thanks to our friends in Gentoo, it was easy to read their ebuilds for vulkan-loader and vulkan-headers and build the equivalents for Gobo.
With the Vulkan library built, I tried again; it progressed further, but still failed. This time, though, the output itself told me what was missing and gave me a useful hint:
vulkan: No DRI3 support detected - required for presentation
Note: you can probably enable DRI3 in your Xorg config
Fixing that was easy, thanks to a very on-point blog post by an Arch user, which told me to edit my /etc/X11/xorg.conf and add the following entries for my Intel Xorg driver:
Section "Device"
   Identifier "Intel Graphics"
   Driver "intel"
   Option "AccelMethod" "sna"
   Option "TearFree" "true"
   Option "DRI" "3"
EndSection
I restarted X, and voilà! The 2D starter repo for Amethyst showed me a window with graphics and accepted keyboard input.
Now let’s continue reading the Amethyst book to learn how to make a simple game in Rust! :)
🔗 Smart tech — smart for whom?
Earlier today I quipped on social media: “We need dumb tech and smart users, and not the other way around”.
Later, I clarified that I’m not calling users of “smart” devices dumb. People are smart. The tech should try to not “dumb them down” by acting condescendingly, cutting down on their agency and limiting their opportunities of education.
A few people shared replies to the effect that they wish for smart devices that wouldn’t be at odds with the intents of the user. After all, we all want the convenience of tech, so why settle for “dumb tech”, right?
The question here becomes a game of words: what is a “smart device”, after all?
A technically-minded person would look at smart devices like smartphones, smart TVs, etc., and say “well, they are really computers”, or “they have computers inside”. But if we want to be technically pedantic, what is a computer? Having a Turing-complete microprocessor running programs? My old trusty microwave for sure has a microprocessor, but it’s definitely not a “smart device”. So it’s not about the internals; it definitely has to do with the end-user perception of the device.
The next reasonable approximation towards a definition is that a smart device allows you to install “apps”. You can extend it with more functionality (which is really making use of the fact that it’s a “computer inside”). Smart TVs and smart phones check this box. However, other home appliances like “smart kettles” don’t: there, the “smartness” comes from being internet-connected. So, generally, it looks like smart devices are things you either run apps in, or control via apps (from another smart device, of course).
So, allowing for running apps pretty much makes something into a computer, right? After all, a computer is a machine for running software. But it’s really interesting to think what is in fact a computer — where do we draw the line. From an end-user perspective, a game console is also a machine for running software — a particular kind of software, games — but it is not a computer. Is a Smart TV a computer? You can install apps in it. But they are also generally restricted to a certain kind of software: streaming services, video and the like.
Something doesn’t feel like a computer unless you can run any kind of software in it. This universality is a key concept. It’s interesting how we’re slowly driven back to the most fundamental definition of a computer, Alan Turing’s definition of a computer as a universal machine. Back in 1936, before the first actual computer was built during World War II, Turing wrote a philosophy section within a mathematics paper where he made this thought exercise of what it means to compute, and in his example he used the idea of a person doing the computations by hand: reading data, applying rules to process data, producing new data, repeat. Everything that computes follows this model: game consoles, the autopilot in an airplane, PCs, the microcontroller in my microwave. And though Turing had a technical notion of universality in mind, the key point for us here, in our end-user understanding of what makes us call some things computers and others not, is that the program (or set of allowed programs) is not fixed. This is what we see (and Turing saw) as universal: any program that may be expressed in the computer’s language can be written and run on it, not just a finite set.
Are smart devices universal machines, then, in either sense of the word? Well, you can install new apps in them. Then, they can do new things they couldn’t do yesterday. Does that make them computers? What about game consoles? If I buy a new game (which is effectively new software!), the console can also do new things, but you won’t really look at it as a computer. The reason is that you’re restricted in the kind of new software you can make this machine run: it only takes games; it’s not universal, at least from an end-user point of view.
But game consoles are getting “smarter” nowadays! They not only play games; maybe they also have an app for showing you the weather, maybe they accept some streaming services… but not others — and here we’re hinting at a key point of what “smart” devices are really like. They are, in fact, on the inside, universal machines that satisfy Turing’s definition. But they are not universal machines for you, the owner. For me, my Nintendo Switch is just a game console. For Nintendo, it is a computer: they can install any kind of software in it in their next software update. They can decide that it can play games, and also access Youtube, but not Netflix. Or they could change their mind tomorrow. From Nintendo’s perspective, the Switch is a universal machine, but not from mine.
The same thing happens, for example, with an iPhone. For Apple, it is a computer: they can run anything on it, the possibilities are endless. From the user’s perspective, it is a phone, into which you can install apps, and in fact choose from a zillion apps in the App Store. But the possibilities, vast as they may be, are not endless. And that vastness doesn’t help much. From a user perspective, it doesn’t matter how many things you can do with something; what matters is, of the things you want to do with it, which ones you can and which ones you can’t. Apple still decrees what’s allowed and what isn’t in the App Store, and will also run their own software on the operating system, over which you have zero say. A Chromecast may also be a computer on the inside, with all the necessary networking and video capabilities, but Google has decided that it won’t let me easily play my movies with it, and so it can’t, just like that.
And such is the reality of smart devices. My Samsung TV is my TV, but it is Samsung’s computer. My house is filled, more and more, with computers over which I have no control. And they have control over what I can and what I can’t do with the devices I bought. From planned obsolescence, to collecting data on my habits and selling it, to complicating access to functionality that is there — the common thread in smart devices is that there is someone on the other side controlling the experience. And as we progress towards the “ever smarter”, with AI-based voice assistants being added to more devices, a significant part of that experience becomes the ways it “delights and surprises you”, or, in other words, your lack of control of it.
After all, if it wasn’t surprising, if it did just what you expected and nothing more, it wouldn’t be all that “smart”, right? If you take all the “smartness” away, what remains is a “dumb” device, an empty shell, waiting for you to tell it what to do. Press a button to do the thing, and the thing happens. Don’t press, it doesn’t do the thing. Install new functionality, the new functionality is installed. Schedule it to do the thing, it does when scheduled, like a boring old alarm clock. Use it today, it runs today. Put it away, pick it up to use it ten years from now, it runs ten years from now. No surprise upgrades. No surprises.
“But what about the security upgrades”, you ask? Well, maybe I just wanted to vacuum my living room. Can’t we design devices such that the “online” component isn’t an indispensable, always-on necessity? Of course we can. But then my devices wouldn’t be their computers anymore.
And why do they want our devices to be their computers? It’s not to run their software in it and free-ride on our electricity bill — all these companies have more computers of their own than we can imagine. It’s about controlling our experience. Once the user has control over which software runs, they make the choices. Once they don’t, the choice is made for them. Whenever behavior that used to be user-controlled becomes automatic in a “smart” (i.e., not explicitly user-dictated) way, a choice is taken away from you and driven by someone else. And they might hide choices behind “it was the algorithm”, which gives a semblance of impersonality and deniability, but putting the algorithm in place is a deliberate act.
Taking power away from the user is a deliberate act. Take social networks, for example. You choose who to follow to curate your timeline, but then they say “no, we want our algorithm to choose who to display in your timeline!”. Of course, this is always to “delight” you with a “better experience”; in the case of social networks, a more addictive one, in the name of user engagement. And with the lines between tech conglomerates and smart devices being more and more blurred, the effect is such that this control extends into our lives beyond the glass screens.
In the past, any kind of rant like this about the harmful aspects of any piece of tech could well be met with a “just don’t use it, then!” In the world of smart devices, the problem is that this is becoming less of an option, as the fabric of our social and professional lives increasingly depends on these networks, and soon enough alternatives will not be available. We are still “delighted” by the process, but just like, 15 years in, a smartphone is now just a phone, soon enough a smart kettle will be just a kettle, a smart vacuum will be just a vacuum and we won’t be able to clean our houses unless Amazon says it’s alright to do so. We need to build an alternative future, because I don’t want to go back to using a broom.
🔗 Talking htop at the Changelog podcast
I was interviewed on the Changelog podcast about the surprising story of the maintainership transition in htop:
The Changelog 413: How open source saved htop – Listen on Changelog.com