🔗 Smart tech — smart for whom?
Earlier today I quipped on social media: “We need dumb tech and smart users, and not the other way around”.
Later, I clarified that I’m not calling users of “smart” devices dumb. People are smart. The tech should try not to “dumb them down” by acting condescendingly, cutting down on their agency, and limiting their opportunities for learning.
A few people replied to the effect that they wish for smart devices that weren’t at odds with the user’s intent. After all, we all want the convenience of tech, so why settle for “dumb tech”, right?
The question here becomes a game of words: what is a “smart device”, after all?
A technically-minded person would look at smart devices like smartphones, smart TVs, etc., and say “well, they are really computers”, or “they have computers inside”. But if we want to be technically pedantic, what is a computer? Having a Turing-complete microprocessor running programs? My trusty old microwave surely has a microprocessor, but it’s definitely not a “smart device”. So it’s not about the internals; it has to do with the end-user perception of the device.
The next reasonable approximation towards a definition is that a smart device allows you to install “apps”. You can extend it with more functionality (which is really making use of the fact that it’s a “computer inside”). Smart TVs and smartphones check this box. However, other home appliances like “smart kettles” don’t: their “smartness” comes from being internet-connected. So, generally, it looks like smart devices are things you either run apps on, or control via apps (from another smart device, of course).
So, allowing for running apps pretty much makes something into a computer, right? After all, a computer is a machine for running software. But it’s really interesting to think about what is, in fact, a computer — where do we draw the line? From an end-user perspective, a game console is also a machine for running software — a particular kind of software, games — but it is not a computer. Is a smart TV a computer? You can install apps on it. But they are also generally restricted to a certain kind of software: streaming services, video and the like.
Something doesn’t feel like a computer unless you can run any kind of software on it. This universality is a key concept. It’s interesting how we’re slowly driven back to the most fundamental definition of a computer: Alan Turing’s definition of a computer as a universal machine. Back in 1936, before the first actual computer was built during World War II, Turing wrote a philosophy section within a mathematics paper where he made a thought exercise of what it means to compute, and in his example he used the idea of a person doing the computations by hand: reading data, applying rules to process data, producing new data, repeat. Everything that computes follows this model: game consoles, the autopilot in an airplane, PCs, the microcontroller in my microwave. And though Turing had a technical notion of universality in mind, the key point for us here is that, in our end-user understanding of a computer, what makes us call some things computers and others not is that the program (or set of allowed programs) is not fixed. That is what we see (and Turing saw) as universal: any program that may be expressed in the computer’s language can be written and run on it, not just a finite set.
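To make that “read, apply a rule, write, repeat” model concrete, here is a minimal sketch in Lua. The rule table (a toy bit-inverter) is made up for illustration — it’s not anything from Turing’s paper, just the shape of the idea:

```lua
-- A toy Turing-style machine: the rules table is the "program"; the loop
-- below just reads a symbol, applies the matching rule, writes, and repeats.
local rules = {
   scan = {
      ["0"] = { write = "1", move = 1, next_state = "scan" },
      ["1"] = { write = "0", move = 1, next_state = "scan" },
      [" "] = { write = " ", move = 0, next_state = "halt" },
   },
}

local tape = { "1", "0", "1", "1", " " }
local pos, state = 1, "scan"

while state ~= "halt" do
   local rule = rules[state][tape[pos]] -- read data, pick the applicable rule
   tape[pos] = rule.write               -- produce new data
   pos = pos + rule.move                -- move along the tape
   state = rule.next_state              -- repeat
end

print(table.concat(tape)) --> 0100
```

Notice that the loop itself is fixed and dumb; all the behavior lives in the rule table. Swap the rules and the same machine computes something else entirely — that is the universality.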
Are smart devices universal machines, then, in either sense of the word? Well, you can install new apps on them, and then they can do new things they couldn’t do yesterday. Does that make them computers? What about game consoles? If I buy a new game (which is effectively new software!), the console can also do new things, but you won’t really look at it as a computer. The reason is that you’re restricted in the kind of new software you can make this machine run: it only takes games. It’s not universal, at least from an end-user point of view.
But game consoles are getting “smarter” nowadays! They not only play games; maybe one will also have an app for showing you the weather, maybe it will accept some streaming services… but not others — and here we’re hinting at a key point of what “smart” devices are really like. They are, in fact, on the inside, universal machines that satisfy Turing’s definition. But they are not universal machines for you, the owner. For me, my Nintendo Switch is just a game console. For Nintendo, it is a computer: they can install any kind of software on it in their next software update. They can decide that it can play games, and also access YouTube, but not Netflix. Or they could change their mind tomorrow. From Nintendo’s perspective, the Switch is a universal machine, but not from mine.
The same thing happens, for example, with an iPhone. For Apple, it is a computer: they can run anything on it; the possibilities are endless. From the user’s perspective, it is a phone, onto which you can install apps — in fact, you can choose from a zillion apps in the App Store. But the possibilities, vast as they may be, are not endless. And that vastness doesn’t help much. From a user perspective, it doesn’t matter how many things you can do with something; what matters is which of the things you want to do with it you can do, and which you can’t. Apple still decrees what’s allowed and what isn’t in the App Store, and will also run their own software on the operating system, over which you have zero say. A Chromecast may also be a computer on the inside, with all the necessary networking and video capabilities, but Google has decided that it won’t let me easily play my movies with it, and so it can’t, just like that.
And such is the reality of smart devices. My Samsung TV is my TV, but it is Samsung’s computer. My house is filled, more and more, with computers over which I have no control. And they have control over what I can and can’t do with the devices I bought. From planned obsolescence, to collecting data on my habits and selling it, to complicating access to functionality that is there — the common thread in smart devices is that there is someone on the other side controlling the experience. And as we progress towards the “ever smarter”, with AI-based voice assistants being added to more devices, a significant part of that experience becomes the ways it “delights and surprises you”, or, in other words, your lack of control over it.
After all, if it wasn’t surprising, if it did just what you expected and nothing more, it wouldn’t be all that “smart”, right? If you take all the “smartness” away, what remains is a “dumb” device, an empty shell, waiting for you to tell it what to do. Press a button to do the thing, and the thing happens. Don’t press, it doesn’t do the thing. Install new functionality, the new functionality is installed. Schedule it to do the thing, it does when scheduled, like a boring old alarm clock. Use it today, it runs today. Put it away, pick it up to use it ten years from now, it runs ten years from now. No surprise upgrades. No surprises.
“But what about the security upgrades”, you ask? Well, maybe I just wanted to vacuum my living room. Can’t we design devices such that the “online” component isn’t an indispensable, always-on necessity? Of course we can. But then my devices wouldn’t be their computers anymore.
And why do they want our devices to be their computers? It’s not to run their software on them and free-ride on our electricity bill — all these companies have more computers of their own than we can imagine. It’s about controlling our experience. Once the user has control over which software runs, they make the choices. Once they don’t, the choice is made for them. Whenever behavior that used to be user-controlled becomes automatic in a “smart” (i.e., not explicitly user-dictated) way, a choice is taken away from you and driven by someone else. And they might hide choices behind “it was the algorithm”, which gives a semblance of impersonality and deniability, but putting the algorithm in place is a deliberate act.
Taking power away from the user is a deliberate act. Take social networks, for example. You choose who to follow to curate your timeline, but then they say “no, we want our algorithm to choose who to display in your timeline!”. Of course, this is always to “delight” you with a “better experience”; in the case of social networks, a more addictive one, in the name of user engagement. And with the lines between tech conglomerates and smart devices being more and more blurred, the effect is such that this control extends into our lives beyond the glass screens.
In the past, any kind of rant like this about the harmful aspects of any piece of tech could well be met with a “just don’t use it, then!” In the world of smart devices, the problem is that this is becoming less of an option, as the fabric of our social and professional lives increasingly depends on these networks, and soon enough alternatives will not be available. We are still “delighted” by the process, but just like, 15 years in, a smartphone is now just a phone, soon enough a smart kettle will be just a kettle, a smart vacuum will be just a vacuum, and we won’t be able to clean our houses unless Amazon says it’s alright to do so. We need to build an alternative future, because I don’t want to go back to using a broom.

🔗 Remembering Windows 3.1 themes and user empowerment
This reminiscence started when I read a tweet that said:
Unpopular opinion: dark modes are overhyped
Windows 3.1 allowed you to change all system colors to your liking. Linux has been fully themeable since the 90s. OSX came along with a draconian “all blue aqua, and maybe a hint of gray”.
People accepted it because frankly it looked better than anything else at the time (a ton of Linux themes were bad OSX replicas). But it was a very “Ford Model T is available in any color as long as it’s black” thing.

The rise of OSX (remember, when it came along Apple had a single-digit slice of the computer market) meant that people eventually got used to the idea of a life with no desktop personalization. Nowadays most people don’t even change their wallpapers anymore.
In the old days of Windows 3.1, it was common to walk into an office and see each person’s desktop colors, fonts and wallpapers tuned to their personalities, just like their physical desk, with one’s family portrait or plants.
[Screenshots: two customized Windows 3.1 desktops, each with its own color scheme]
I just showed the above screenshots to my sister, and she sighed with happy nostalgia:
— Remember changing colors on the computer?
— Oh yes! We would spend hours having fun on that!
— Everyone’s was different, right?
— Yes! I’d even change it according to my mood.
Looking back, I feel like this trend of less aesthetic configurability has diminished the sense of user ownership of the computing experience, as part of the general trend of the death of “personal computing”.
I almost wrote that a phone UI allows for more self-expression today than a Win/Mac computer. But then I realized how much I struggled to get my Android UI the way I wanted, until I installed Nova Launcher that gave me Linux-levels of tweaking. The average user does not do this.
But at least they are more likely to change the wallpaper on their phones than on their computers. Nowadays you walk into an office and all the computers look the same.

The same thing happened to the web: compare the diminishing tweakability of a MySpace page to the blue conformity of a Facebook page, for example.
Conformity and death of self-expression are the norm, all under the guise of “consistency”.
User avatars forced into circles.
App icons in phones forced into the same shape.
Years ago, a friend joked that the inconsistency of the various Linux UI toolkits was how he could feel the system’s “freedom”. We all laughed and wished for a more consistent UI, of course. But that discourse on consistency was quickly co-opted to remove users’ agency.
What begins with aesthetics and the sense of self-expression continues to a lack of ownership of the computing experience, and ends in the passive acceptance of systems we don’t control.
Changes happen, but they are independent of the users’ wishes, and it’s a lottery whether the changes are for better or for worse.
Ever notice how version changes are called “updates” and not “upgrades” anymore?
In that regard, I think Dark Mode is a welcome addition as it allows a tiny bit of control and self-expression to the user, but it’s still kinda sad to see how far we regressed overall.
The hype around it, and how excited users get when such crumbs of configurability are handed to them, just goes to show how unused users are to getting any degree of control back in their hands.
🔗 When listing repeated things, make pyramids
Often, in code, we have to write lists of repeated things. For example, attribute initialization in Java constructors:
```java
this.foo = foo;
```
or required modules in Lua:
```lua
local foo = require("foo")
```
There are a few different ways people stack these when they need to list a number of them: random, alphabetical, aligned… Working on a codebase that has all these approaches in different modules, I realized that the “pyramid” is best. Let’s compare a few examples:
Random
This is what you end up doing if you don’t really think about it:
```java
this.medium = medium;
this.aLongOne = aLongOne;
this.foo = foo;
this.veryLongOne = veryLongOne;
this.short = short;
```
⊖ ⊖ very bad to read - your eyes move back and forth horizontally and need to scan the whole thing vertically
⊕ easy to maintain - just add or remove entries arbitrarily
Alphabetical
This is what you end up doing if you get annoyed about the order when writing. I did this for a while.
```java
this.aLongOne = aLongOne;
this.foo = foo;
this.medium = medium;
this.short = short;
this.veryLongOne = veryLongOne;
```
⊖ bad to read - your eyes move back and forth horizontally, but it’s easy to scan vertically
⊕ easy to maintain - no question where a new entry should go
Aligned
This is what you end up doing if you start to get annoyed when reading. Readability is more important than writability, after all!
```java
this.aLongOne    = aLongOne;
this.foo         = foo;
this.medium      = medium;
this.short       = short;
this.veryLongOne = veryLongOne;
```
⊕ ⊕ very easy to read - easy on the eyes horizontally, and if alphabetical it’s easy vertically as well
⊖ ⊖ very bad to maintain - terrible for diffs, changes mess up `git blame` for unrelated lines
Pyramid
Finally, we get to the pyramid, which seems an ideal compromise keeping the advantages of an aligned list while avoiding its drawbacks:
```java
this.veryLongOne = veryLongOne;
this.aLongOne = aLongOne;
this.medium = medium;
this.short = short;
this.foo = foo;
```
⊕ easy to read - easy on the eyes horizontally as the eyes follow the diagonal, and easy vertically as well as you usually know if you’re looking for a long or short word
⊕ easy to maintain - no question where entries go; you can use alphabetical order as a tie breaker for entries of same length
You can of course do the pyramid “ascending” or “descending”. I don’t really have a preference (and I couldn’t find any practical advantages to either yet).
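If you’d rather not eyeball the order, pyramid order is also trivial to automate (say, in an editor macro). A minimal Lua sketch, assuming the descending variant with the alphabetical tie-breaker mentioned above (the module names are made up):

```lua
local lines = {
   'local foo = require("foo")',
   'local medium = require("medium")',
   'local aLongOne = require("aLongOne")',
   'local veryLongOne = require("veryLongOne")',
}

-- Descending pyramid: longest lines first,
-- alphabetical order as the tie-breaker.
table.sort(lines, function(a, b)
   if #a ~= #b then
      return #a > #b
   end
   return a < b
end)

print(table.concat(lines, "\n"))
```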
In conclusion, it’s a silly little thing, but one that improves the ergonomics of the code, and I’ll try to adopt it more consistently in my code from now on.
(PS: Of course, all of this applies to lists where the entries are not semantically related: when listing color components one would always do “red, green, blue”, and not “green, blue, red” :) )
🔗 What Every Programmer Needs To Know About What Every Programmer Needs To Know
I won’t deny it. I came up with the title for this post before coming up with the actual content. It came to my head and it was just too good to pass, because it entices you to think about that subject. What does every programmer need to know, after all?
- What Every Computer Scientist Should Know About Floating-Point Arithmetic
- What Every Programmer Should Know About Memory
- What every computer science major should know
- What every programmer absolutely, positively needs to know about encodings and character sets to work with text
- The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
- What Every Programmer Needs To Know About Game Networking
- What every programmer should know about time (the blog post is simply called “Time”, but it was featured in Hacker News with the long title)
- What every programmer should know about names (actually titled “Falsehoods Programmers Believe About Names” — this is about people’s names)
If you run into any other of those lists, let me know at h@ this website’s domain!
🔗 Writing release announcement emails
Mailing lists are not exactly fashionable nowadays, but some of them remain relevant for some communities. The Lua community is one such example. As of 2017, a lot of what goes on in the Lua module development world still resonates in lua-l. With over 2500 subscribers, it’s a good way to kickstart interest in your new project.
Mailing list users tend to be somewhat pedantic about etiquette guidelines for posting, especially for announcements and the like. So, I usually follow this little formula for writing release announcement emails, which has been effective for me:
- Email subject - this is important; I use a format like “[ANN] MyProject x.y”
- Summary - The first paragraph explains what is the project
- Links and installation - Then a link to the project website, and a one-liner instruction for how to install it (that is, the incantation for the appropriate package manager — in the case of Lua, `luarocks install myproject`). More detailed instructions and documentation should be available from the project website.
- Description - Finally, a more detailed description:
- If the announcement is for a new version of an existing project that was previously announced on the list, I include a summarized changelog, essentially “What’s new in version x.y:”
- If this is the first announcement of the project, then a longer description of how the project works. For Lua modules, for example, this may include a really short “hello-world”-type example for the library. This is information that should be in the README.md file for your repository, which in future announcements will be reachable via the link for the project website (often a Github repo URL) mentioned above.
- License - Users should be able to figure out the license of your project easily, so especially in new projects mentioning it can be a good idea — but watch out if you’re using a license that’s not the majority option in a given community. You may be unnecessarily flamed for your choice by people who don’t even want to use your project in the first place. If you’re not going with the “majority license” (and remember, license choice is your call as an author, not the community’s), it might be a better idea to avoid mailing list noise and mention the license only in the project website and sources. The goal is not to hide it (interested people should find it easily; do mention it in your project’s README.md and include a LICENSE file) but just to avoid licensing flamewars. Of course, using the majority license has major pros, so if it’s all the same to you, go with it; but if you’d prefer another one, don’t let yourself be bullied by a community into picking one free software license over another. It’s your freedom too!
- Be nice! - Finally, remember to sandwich all this technical info with greetings at the top, kudos to contributors, requests for help and feedback, etc. A mailing list is a social medium, after all. :)
An example of an upgrade announcement is here:
```
[ANN] LuaRocks 2.4.2

Hello, list!

I'm happy to announce LuaRocks 2.4.2. LuaRocks is the Lua package
manager. (For more information, please visit http://luarocks.org )

http://luarocks.org/releases/luarocks-2.4.2.tar.gz
http://luarocks.org/releases/luarocks-2.4.2-win32.zip

Those of you on Unix who are running LuaRocks as a rock (i.e. those
who previously installed using `make bootstrap`) can install it using:

   luarocks install luarocks

What's new since 2.4.1:

* Fixed conflict resolution on deploy/delete
* Improved dependency check messages
* Performance improvements when removing packages
* Support user-defined `platforms` array in config file
* Improvements in Lua interpreter version detection in Unix configure script
* Relaxed Lua version detection to improve support for alternative
  implementations (e.g. Ravi)
* Plus assorted bugfixes and improvements

This release contains commits by Peter Melnichenko, Robert Karasek
and myself.

As always, all kinds of feedback is greatly appreciated.

Thank you, enjoy!

-- Hisham
```
An example of a new project announcement is here:
```
[ANN] safer - Paranoid Lua programming

Hi,

Announcing yet another "strict-mode" style module: "safer".

* http://github.com/hishamhm/safer

Install with

   luarocks install safer

# Safer - Paranoid Lua programming

Taking defensive programming to the next level. Use this module to
avoid unexpected globals creeping up in your code, and stopping
sub-modules from fiddling with fields of tables as you pass them
around.

## API

#### `safer.globals([exception_globals], [exception_nils])`

No new globals after this point.

`exception_globals` is an optional set (keys are strings, values are
`true`) specifying names to be exceptionally accepted as new globals.
Use this in case you have to declare a legacy module that declares a
global, for example. A few legacy modules are already handled by
default.

`exception_nils` is an optional set (keys are strings, values are
`true`) specifying names to be exceptionally accepted to be accessed
as nonexisting globals. Use this in case code does feature-testing
based on checking the presence of globals. A few common feature-test
nils such as `jit` and `unpack` are already handled by default.

#### `t = safer.table(t)`

Block creation of new fields in this table.

#### `t = safer.readonly(t)`

Make table read-only: block creation of new fields in this table and
setting new values to existing fields.

Note that both `safer.table` and `safer.readonly` are implemented
creating a proxy table, so:

* Equality tests will fail: `safer.readonly(t) ~= t`
* If anyone still has a reference to this table prior to creating the
  safer version, they can still mess with the unsafe table and affect
  the safe one.

About
-----

Licensed under the terms of the MIT License, the same as Lua.

During its genesis, this module was called "safe", but I renamed it
to "safer" to remind us that we are never fully safe. ;)

-- Hisham
http://hisham.hm/ - @hisham_hm
```
Hope this helps!