🔗 That time I almost added Tetris to htop
Confession time: once I *almost* added a terminal version of Tetris as an Easter egg in htop.
I managed to implement a really crude but working version of it, code-golfing to make it as short as possible, and got it pretty tiny. Then I added it to the help screen so it would activate by typing h, t, o, p (since h would take you to the help screen and the other keys would be no-ops in that screen).
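For the curious, here's a minimal sketch of the trigger idea, not the actual code (that's long gone, and htop is written in C anyway). It assumes a hypothetical help-screen input loop and made-up function names, and just illustrates how the rest of the key sequence would be watched for:

```python
# Hypothetical sketch; names are made up for illustration.
# "h" has already opened the help screen, so we only watch for "t", "o", "p".
SEQUENCE = "top"

def help_screen_loop(getch, launch_tetris):
    """Run the help screen, launching the Easter egg if the sequence is typed."""
    matched = 0
    while True:
        key = getch()
        if key in (27, ord('q')):          # Esc or q leaves the help screen
            return
        if key == ord(SEQUENCE[matched]):  # next key in the secret sequence
            matched += 1
            if matched == len(SEQUENCE):   # full "t", "o", "p" typed
                launch_tetris()            # the hidden payoff
                matched = 0
        else:
            matched = 0                    # any other key is a no-op; reset
```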
Then there’s the question of how to hide an Easter egg in a FOSS codebase… The best I could think of was to make it into a long one-liner starting at column 200 so that most people looking at the code without word-wrapping editors would miss it. But after everything was coded, I decided that trying to “sneak code in”, even in my own codebase, was a bad practice and the good intention of innocent fun wasn’t worth it.
My fascination with Tetris goes way back. I first implemented it when I was in high school, and getting it done really gave me pause: that was a real program, something that people paid real money for in Nintendo cartridges. It was the first time I felt I could call myself a programmer for real. At the same time, it was my first contact with the ethics of software. I had never heard of FOSS then, and yet I asked myself: “what if my friends ask for the source code? what should I do?”
Years later, when we did the first CD version of our GoboLinux distro, I took an existing ncurses version of Tetris and hacked it into our installer, adding a progress bar that showed the status of the files being copied from CD to disk while the user played the game (distro installers took forever back then!). Everyone loved it, except for the fact that it was supposed to auto-quit when the installation was finished, but we changed the list of packages at the last minute, so it got the count wrong.
A lot of people just kept playing for a long time without realizing the installation was done! (But it wasn’t too bad, they could just press Esc or something to quit and finish the install.)
Our early Gobo releases were full of little fun tweaks like that. In one release we included an emulator and legend has it that some hidden folder contains a ROM (not Tetris!), but not even I remember where that is, and that ISO probably isn’t even online anymore. (We really should have preserved our old stuff better!)
The memory of the Tetris installer in Gobo having a last-minute bug was another thing that put me off the idea of the Tetris Easter egg in htop: while having bugs is just normal, I couldn’t bear the thought of htop having some serious bug caused by code added for silly reasons…
htop has its fair share of “unnecessary code”, such as the “big-digit LCD” meter and the themes, which are more artsy than utilitarian and I stand by them. If anything, I think software in general should be more artsy.
But “hidden Tetris in htop causes buffer overflow” would be terrible PR for the project (and for my reputation by extension, I guess). That, along with the bad taste left by the idea of hiding code in FOSS, made me drop the Easter egg idea.
I wish I still had that code, though! If only to keep it to myself as an autobiographical side-note.
Come to think of it, after writing all of this I realize I probably _should_ have included that code… as a comment!! Maybe that’s the way to do Easter eggs in FOSS? Add a fun/silly feature but leave it commented out, so that someone tinkering with the code finds it, enables it and has fun with it for a bit. I know that *I* would have enjoyed finding something like that in a codebase.
Oh well, maybe someday I’ll pull this off in some project.
🔗 Mini book review: “Guattari/Kogawa”, organized by Anderson Santos
Writing this review in English due to the international appeal of this book, even though it was printed in Portuguese.
This book collects a series of interviews with French psychoanalyst Felix Guattari conducted by Japanese artist and researcher Tetsuo Kogawa.
“Tetsuo is a really very influential figure in underground radio art and media-art theory, with over 30 years of collaboration and connection with some of the most influential artists and thinkers of that period, worldwide (He’s published over 30 books, had a series of interviews with Felix Guattari, has known and collaborated with pioneers of experimental music in Japan from the 50’s on (big guns like Yasunao Tone and Takehisa Kosugi and so on…)).
He’s perhaps best known internationally as the founding father of the micro-FM boom in Japan in the 80’s. Inspired by the Marxist ‘Autonomia’ movement and their pirate radio stations in 1970’s Italy, Kogawa set up Radio Home Run as a resistance to the commodification of subculture; theorising, practically enabling and kick-starting a Japanese boom which saw thousands of tiny radio stations set up and run, by and for communities across the country. They became a space for polymorphous chaos, a kind of chaos found through difference and ‘order through fluctuations’.”
– Arika on Tetsuo Kogawa
This book collects those interviews and adds texts by Guattari, Kogawa, and the Brazilian organizers, who were in direct contact with Kogawa while working on this collection.
I found the discussion of the various movements of free/pirate radio in the late 70s and early 80s especially interesting: Autonomia in Italy, free radio in France and mini-FMs in Japan. The parallels with the rise of the free software movement were apparent to me, and in fact the timelines match (the GNU Manifesto dates from 1983), and in more recent writings Kogawa and the organizers do allude to free software. It was a stark realization to finally notice how the lore of the free software movement was (and to the extent that it still exists, still is) propagated entirely in a vacuum, without any real context of the surrounding free culture movements of the time. Seeing Guattari and Kogawa discuss the free radio movements, and their similarities and differences in each country, it is clear to me now how the free software movement was a product of its time — both in its genesis in the US in the early 80s and in its later boom in Latin America in the early 2000s, with the main event, FISL, happening in Porto Alegre, the same city that hosted the first editions of the World Social Forum, featuring the likes of Noam Chomsky, Lula and José Saramago.
Another interesting observation comes from Guattari discussing how the free radio movement in France was a way for the various regions (and their languages!) to break away from “Parisian imperialism” — having lived in different parts of Brazil, I have personally observed this phenomenon of domestic cultural imperialism for a long time, and how it presents itself, by design, as mostly invisible.
Kogawa’s more recent discussion of “social autism” in relation to media is also insightful: he discusses the collective catharsis of the mini-FM movement as a therapeutic way to break away from the social autism of mass media by scaling it down, and how the ultra-downscaling to the individual scale of personal smartphones has led to another kind of social autism, more lethargic and legitimized on a global scale.
Finally, it was impressive to see Guattari discuss back in 1980 how micropolitics — in the form of what we today understand as identity politics, for example — had the potential to produce large-scale political change. Time and again I marvel at how philosophers look at the world with a clarity that makes it seem like they’re reading the future decades in advance. I got the same feeling from reading Bauman.
🔗 Conway’s Law applied to the industry as a whole
Melvin Conway famously said that organizations design systems that mirror their own communication structure. But how about Conway’s Law applied to the entire industry rather than a single company?
The tech industry, and open source (OSS) in particular, are now mostly shaped around the dominant communication structure — GitHub. Nadia Eghbal’s book “Working in Public” does a great job of explaining how OSS’s centralization around a big platform mirrors what happened everywhere on the internet, with us going from personal websites to social networks.
Another huge shift in organizational and communication structure, especially in Open Source, has been the increasing coalescence of maintainership: we historically talk about “a loosely-knit group of contributors” but most OSS nowadays is written by employees of big companies.
The commit stats in big projects like the Linux kernel indicate this, as do GitHub stats and the like. There’s a long tail of small independent contributors, of course, but by volume major projects are dominated by those hired full-time to work on them.
One thing I haven’t seen discussed a lot is how much this reality changes the way projects are run and developed. Sometimes we see it coming up in particular cases, such as the relationship between Amazon and Rust, but this is a general phenomenon.
When Canonical came onto the scene back in 2004-2005, I remember distinctly noticing their impact on OSS; it wasn’t just “more getting done” (yay?) but also what and how—various projects shifted direction around that time (GNOME comes to mind); it didn’t feel like a coincidence.
I don’t mean to imply it’s all bad, just that we don’t discuss enough how the influence of Big Co development styles affects, in a “Conway’s-Law way”, the development of OSS, and even of tech in general, since open and closed development are so intertwined nowadays.
OSS has a big impact on how tech in general works (through the reliance of every company on OSS dependencies), and Big Cos have an impact on how OSS works (through their huge presence in the OSS developer community), so in this way they affect everybody. People bring in the experiences they know and the ways they’re used to working, from coding styles to architecture and deployment patterns to decision processes.
One example where this is especially evident is the “monorepo” discussion, which comes up in projects of all sizes nowadays, and where the Google and FB experiences are often brought up.
“help our codebase is too big” no, your company is too big. try sharding into microservice entities operating as a cluster in the same management substrate rather than staying as a monolith
The tweet above is such a great insight: we often see conversations about how to deal with huge codebases (using the likes of Google and FB as examples) AND we often see conversations about Big Tech monopolies — and how they’ve grown way beyond the point at which other monopolies were broken up in the past — but those two topics are hardly ever linked.
If we agree that some aspects of Big Tech as organizations are negative, how much of that do they bring into tech as technology practices via Conway’s Law? OSS seems to act as a filter that makes this relationship less evident, because contributions come from individuals, even though they work for these companies.
These individuals will often, even if unknowingly, replicate practices from those companies. This is, after all, a process of cultures spreading and influencing each other. It just seems to me that we as an industry are not aware enough of this phenomenon, and we probably should be more attuned to it.
🔗 Mini book review: “The West Divided”, Jürgen Habermas
Written in the early 2000s, “The West Divided” is a collection of interviews, newspaper articles and an essay by Habermas about post-9/11 international politics and the different approaches taken by the US, under the Bush administration, and by the EU, seen through a philosophical lens.
The structure of the book helps the reader: it starts with interview transcripts and articles, which use more direct language and establish some of the concepts Habermas explains to the interviewers, and then proceeds to a more formal academic work, the essay in the last part, which is a more difficult read. Habermas discusses the idealist and realist schools of international relations, aligning the idealist school with Kant’s project and the evolution of the EU, and opposing it to the realist school proposed by authors such as Carl Schmitt, a leading scholar from Nazi Germany. Habermas proposes a future for the Kantian project which lies in the ultimate development of cosmopolitan (i.e. worldwide) juridical institutions, such that cosmopolitan law gives rights directly to (all) people, rather than acting as a layer of inter-national law between states.
It was especially timely to read this book in the wake of the Russian invasion of Ukraine in February 2022, and to have this theoretical background at hand when watching John Mearsheimer’s talk from 2015 on “Why Ukraine is the West’s Fault”. Mearsheimer, an American professor at the University of Chicago and himself a noted realist, gives an excellent talk and makes a very important point: in international conflict one needs to understand how the other side thinks. Instead of “idealist” vs. “realist” (which are clearly loaded terms), he used the terms “19th century people” to refer to himself, Putin, and the Chinese leadership, as opposed to “21st century people” for the leaders in the EU and in Washington.
It is interesting to see Mearsheimer put the European and American governments in the same bucket, when Habermas’s book very much deals with the opposition between their views, down to the book’s very title. Habermas wrote “The West Divided” during the Bush administration, and the Mearsheimer talk was given during the Obama years, but he stated he saw no difference between Democrats and Republicans in this regard. Indeed, in what concerns international law, what we’ve seen from Obama wasn’t that different from what we saw before or afterwards — perhaps different in rhetoric, but not so much in actions, as he broke his promise to shut down Guantanamo and continued the policy of foreign interventions.
As much as Mearsheimer’s analysis is useful for understanding Putin, Habermas’s argument that there are different ways to see a future beyond endless Schmittian regional conflicts is still a valid one. And we get a feeling that all is not lost whenever we see that there are still leaders looking to build a future of cosmopolitan cooperation more in line with Kant’s ideals. Martin Kimani, the Kenyan envoy to the UN, said this regarding the Russian invasion of Ukraine:
This situation echoes our history. Kenya and almost every African country was birthed by the ending of empire. Our borders were not of our own drawing. They were drawn in the distant colonial metropoles of London, Paris, and Lisbon, with no regard for the ancient nations that they cleaved apart.
Today, across the border of every single African country, live our countrymen with whom we share deep historical, cultural, and linguistic bonds. At independence, had we chosen to pursue states on the basis of ethnic, racial, or religious homogeneity, we would still be waging bloody wars these many decades later.
Instead, we agreed that we would settle for the borders that we inherited, but we would still pursue continental political, economic, and legal integration. Rather than form nations that looked ever backwards into history with a dangerous nostalgia, we chose to look forward to a greatness none of our many nations and peoples had ever known. We chose to follow the rules of the Organisation of African Unity and the United Nations charter, not because our borders satisfied us, but because we wanted something greater, forged in peace.
We believe that all states formed from empires that have collapsed or retreated have many peoples in them yearning for integration with peoples in neighboring states. This is normal and understandable. After all, who does not want to be joined to their brethren and to make common purpose with them? However, Kenya rejects such a yearning from being pursued by force. We must complete our recovery from the embers of dead empires in a way that does not plunge us back into new forms of domination and oppression.
Words to build a new future by.
🔗 The algorithm did it!
Earlier today, statistician Kareem Carr posted this interesting tweet, about what people out there mean when they say “algorithm”, which I found to be a good summary:
When people say “algorithms”, they mean at least four different things:
1. the assumptions and description of the model
2. the process of fitting the model to the data
3. the software that implements fitting the model to the data
4. the output of running that software
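To make those four senses concrete, here is a small sketch of my own (not from Carr’s thread), using a toy least-squares line fit; each numbered meaning shows up as a separate thing one could point at:

```python
# 1. The model: an assumption about the world, e.g. "y is roughly a line in x"
#    (choosing a linear model is itself a human decision).
# 2. The fitting process: ordinary least squares, i.e. the math
#    slope = cov(x, y) / var(x); intercept = mean(y) - slope * mean(x).

# 3. The software that implements the fitting:
def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
          / sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

# 4. The output of running that software on some data -- the numbers
#    (or recommendations, or risk scores) that people then act on:
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]
print(fit_line(xs, ys))  # roughly (1.96, 0.14)
```

When something goes wrong, “the algorithm did it” could be pointing at any one of those four, and each of them involves different human choices.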
Unsurprisingly, this elicited a lot of responses from computer scientists, raising the point that this is not what the word algorithm is supposed to mean (you know, a well-defined sequence of steps transforming inputs into outputs, the usual CS definition), including a response from Grady Booch, a key figure in the history of software engineering.
I could see where both of them were coming from. I responded that Carr’s original tweet was not about what programmers mean when we say “algorithms” but about what laypeople mean when they say it or read it in the media. And understanding this distinction is especially important because variations of “the algorithm did it!” are the new favorite excuse of policymakers in companies and governments alike.
Booch responded to me, clarifying that his point is that “even most laypeople don’t think any of those things”, which I agree with. People have a fuzzy definition of what an algorithm is, at best, and I think Carr’s list encompasses rather well the various things responsible for the effects that people attribute to a vague notion of “algorithm” when they use that term.
Booch also added that “it’s appropriate to establish and socialize the correct meaning of words”, which simultaneously extends the discussion to a wider scope and focuses it on the heart of the matter regarding the use of “algorithm” in our current society.
You see, it’s not about holding on to the original meaning of a word. I’m sure a few responses to Carr were of the pedantic variety, the “that’s not what the dictionary says!” kind of thing. But that’s short-sighted, taking a prescriptivist rather than descriptivist view of language. Most of us who care about language are past that debate now, and those of us who adhere to the sociolinguistic view of language even celebrate the fact that language shifts, adapts and evolves to suit the use of its speakers.
Shriram Krishnamurthi, a CS professor at Brown, joined in on the conversation, observing this shift in the language as a fait accompli:
I’ve been told by a public figure in France (who is herself a world-class computer scientist) — who is sometimes called upon by shows, government, etc. — that those people DO very much use the word this way. As an algorithms researcher it irks her, but that’s how it is.
Basically, we’ve lost control of the word “algorithm”. It has its narrow meaning but it also has a very broad meaning for which we might instead use “software”, “system”, “model”, etc.
Still, I agreed with Booch that this is a fight worth fighting. Not to preserve our cherished technical meaning of the term, to the dismay of the pedants among our ranks, but because of the very circumstances that led to this linguistic shift.
The use of “algorithm” as a vague term to mean “computers deciding things” has a clear political intent: shifting blame. Social networks boosting hate speech? Sorry, the recommendation algorithm did it. Racist bias in criminal justice systems? Sorry, it was the algorithm.
When you think about it, from a linguistic point of view, it is as nonsensical as saying that “my hammer assembled the shelf in my living room”. No, I did, using the hammer. Yet, people are trained to use such constructs all the time: “the pedestrian was hit by a car”. Note the use of passive voice to shift the focus away from the active subject: “a car hit a pedestrian” has a different ring to it, and, while still giving agency to a lifeless object, is one step closer to making you realize that it was the driver who hit the pedestrian, using the car, just like it was I who built the shelf, using the hammer.
This of course leads to the “guns don’t kill people, people kill people” response. Yes, it does, and the exact same questions regarding guns also apply regarding “algorithms” — and here I use the term in the “broader” sense as put forward by Carr and observed by Krishnamurthi. Those “algorithms” — those models, systems, collections of data, programs manipulating this data — wield immense power in our society, even, like guns, resulting in violence, and like guns, deserving scrutiny. And when those in possession of those “algorithms” go under scrutiny, they really don’t like it. One only needs to look at the fallout resulting from the work by Bender, Gebru, McMillan-Major and Mitchell, about the dangers of extremely large language models in machine learning. Some people don’t like hearing the suggestion that maybe overpowered weapons are not a good idea.
By hiding all those issues behind the word “algorithm”, policymakers will always find a friendly computer scientist available to say that yes, an algorithm is a neutral thing, after all, it’s just a sequence of instructions, and they will no doubt profit from this confusion of meanings. And I must clarify that by policymakers I mean those in both the public and the private sphere, since the policies put forward by the private tech giants on their platforms, where we spend so much of our lives, affect our society as much as public policies do nowadays.
So what do we do? I don’t think it is productive to start well-actually-ing anyone who uses “algorithm” in the broader sense, with a pedantic “Let me interject for a moment — what you mean by algorithm is in reality a…”. But it is productive to spot when this broad term is being used to hide something else. “The algorithm is biased” — what do you mean, the outputs are biased? Why? Is the input data biased? Did the people manipulating that data create a biased process? Who are they? Why did they choose this process and not another? These are better interjections to make.
These broad systems described by Carr above ultimately run on code. There are algorithms inside them, processing those inputs, generating those outputs. The use of “algorithm” to describe the whole may have started as a harmless metonymy (like saying “White House” to refer to the entire US government), but it has since proven very useful as a deflection tactic. By using a word that people don’t understand, the message is “computers are doing something you don’t understand and shouldn’t worry about”, using “algorithm” handwavily to steer people’s minds away from the policy issues around computation, the same way “cloud” is used with data: “your data? don’t worry, it’s in the cloud”.
Carr is right that these are all things people refer to as “algorithms” nowadays. Krishnamurthi is right that this broad meaning is a reality in modern language. And Booch is right when he says that “words matter; facts matter”.
Holding words to their stricter meanings merely out of love for the language as we were taught it is a fool’s errand; language changes whether we want it or not. But our duty as technologists is to identify the interplay between the language, our field, and society: how and why they are being used (both the language and our field!). We need to clarify to people what the pieces at play really are when they say “algorithm”. We need to constantly emphasize to the public that there’s no magic behind the curtain and, most importantly, that all policies are due to human choices.