🔗 Why I no longer say “conservative” when I mean “cautious”
As you can guess from the title, this piece is about politics and language. Still, I need to preface it with a disclaimer. I was very deliberate about my title: I am not telling you how to use language, I am only telling you how I use it. I am well aware of the implications of the previous sentences, given that telling others how to use language has become a sticking point in political discourse. The very way I am approaching this paragraph insinuates a certain position, one that is perhaps against the so-called language policing of so-called identity politics. But don’t get me wrong: while I do have reservations about identity politics, they come from finding it not misguided but insufficient. I feel that the interests of liberalism have been well served by a superficial treatment of oppression, reshaped as identity politics kept in a vacuum. In plain words, in case it’s not obvious: yes, we must continue and strengthen our defense of transgender rights; not just out of a self-serving “if we tolerate this then who might be next”, true as that may be, but because this is another instance of treating people as people. In the recent past, those in power were happy to accommodate this as an “identity issue” and add pronoun boxes to their user interfaces to keep their well-paid transgender programmers content and productive, along with other watered-down displays of Corporate Pride. But even by the late 2010s that was already a containment strategy to delay the inevitable: the growing solidarity among the struggles of the oppressed. Once the conversation progressed from pronouns and rainbows to systemic discrimination, unionization, and ultimately the concentration of economic power, there was no longer accommodation, and power came crashing down on the movement. It found the threat to be real and decided to act; this is where we are as of 2025.
But that doesn’t mean that language isn’t important. I am not saying that language, or even identity politics, is a distraction. The right tried, and to a large extent succeeded, in turning identity politics into a distraction—and here, in the grand scheme of things, it helps to perceive the American liberal left as part of the right, heartbreaking as that may be to so many well-meaning Americans. The fact that they made it into a distraction does not take away that identities are part of politics: we must not throw the baby out with the bathwater just because our adversaries succeeded in shaping the discourse for so long. And as I proceed closer to the point I actually want to make, I need to move the reader’s focus away from the transgender topic. I used it as an example of the relationship between power and the oppressed because I knew it would come to people’s minds as soon as I started talking about politics and language. So I chose not to walk around it, even though it’s not really my theme to discuss here. The same points could have been made by swapping the example for racism in the US and the Black Lives Matter movement, or the treatment of immigrants in Europe, or women’s rights anywhere in the world—ultimately the largest oppressed group of all. All of these are stories of oppression which have gone through a period of liberal containment through language and accommodation.
This containment broke apart as soon as the oppressed groups themselves became able to bring their own narrative to the forefront. Now that the hypocritical appeasement is gone, I can finally arrive at the point I want to make, which is that the shaping of language by the right has been a lot more effective than we give it credit for. It happens on three fronts: they shape the language of the right (in both its radical and mainstream varieties), they shape the language of the mainstream left, and, most invisibly, they shape the language of the public in general.
At first it may seem odd to make that third distinction, especially at a time when everyone seems to have been fitted into a “right” or “left” bucket. But first, this bucketing is in reality far from complete, even though it doesn’t seem so from within our politically engaged bubbles. And second, by the shaping of language for the public in general, I mean that which crosses those barriers and spreads into the vernacular of people both on the left and on the right. And no example of that is more striking than that of “conservative”.
In recent years, I have observed a phenomenon which, thanks to my own age, I am pretty certain has not always been the case. All the time, I see people from the left, from the right, and everything in between using the word “conservative” to mean “careful, cautious, well-measured”, especially in non-political contexts. And, by extension, “liberal” takes on the opposite meaning of “not careful, lavish, unmeasured”.
When I pointed this out to people, they were quick to disagree, but then I gave them an example: if you’re baking a cake and the recipe says “apply cinnamon liberally on top”, what does that mean? If I told you I was making a soup and said “the recipe didn’t specify how much pepper to put in, so I went conservative about it”, what does that mean?
You might now say that, well, these are just the meanings of the words, but — really? What does a cinnamon topping on a cake have to do with liberalism? Where is the liberty? Is it because you’re at liberty to put on however much you want? Not really, because that liberty would mean you’re free to put a little, or a lot. But if I tell you “add sugar liberally”, does anyone ever understand it as “add just a little”? No, any person will understand that as “add quite a bit”. People understand “being liberal” as meaning “don’t be sparing”.
Likewise, what does the spiciness of a soup have to do with conservatism? What are you conserving? When one says they were being conservative about adding pepper, everyone understands that the person didn’t want to add too much pepper. But that wasn’t about conserving pepper, even though using less pepper would save pepper in the end. It is clearly understood as being careful not to make the food too spicy for whoever will eat it. People treat “being conservative” in everyday language as “being careful about the end-result”.
To which I reply: what is the effect of introjecting that concept in people’s minds? Are conservatives, now in a political sense, really careful about the end-result? When the left pushes for environmental policies, deeply concerned about the immediate future of our planet in the face of climate effects, and the conservatives resist these initiatives, which side is being cautious and which side is being reckless?
In politics, what conservatism really fights for is conserving the status quo of their power relations. That’s where their name really comes from. They will adopt cautious positions when they serve that goal, and they will adopt reckless positions when that is the one that promotes the perpetuation of their power. That is why you hear today people talking about a “conservative left” when groups defend more egalitarian economics alongside social policies that throw minorities under the bus; it is a way to appeal to the majority’s vote by means of their own prejudices.
But the widespread use of “conservative” to mean a well-measured approach and “liberal” to mean a carefree approach makes a strong subconscious argument that the conservative approach is that of the “adults in the room”.
A second-order effect is perpetuating the false dichotomy between “conservative” and “liberal”, on which so much of the American perception of mainstream politics is founded. By framing them as opposites, it sounds like the spectrum has been covered, when in reality true leftist politics are left out of mainstream discourse.
This is so much the case that one can perceive the difference across languages. Due to the vast cultural influence of the US in the Western world, I do see the same phenomenon with the word “conservative” happen in the Portuguese language, here in Brazil. However, because here the political establishment of the left is different from that of the US, we do not have the same linguistic phenomenon happen with the word “liberal”. I could translate my conservative/soup example word-for-word into Portuguese and that would sound idiomatic, but I couldn’t do the same for my liberal/cake example. This is because here, “liberal” is an adjective that is not considered to be part of the left, but instead of the right: in Latin America, the people who label themselves liberals are those aligned with what the global left would call neo-liberals1. In this context, it is common to find people labeling themselves as “conservative liberals”, which might at first blow minds in the US, but which makes perfect sense once one thinks of those Americans who label themselves “fiscally conservative, socially liberal” — a milquetoast position that comes from a position of comfort, defending a watered-down appeasement in social politics that fails to admit that truly dismantling the systems of social oppression will inevitably require fighting the forces of the economic status quo defended by conservatives. Consider now a mirror form of “conservative liberal”, which is how the term is most used in Brazil: those who are socially conservative, defending the maintenance of the existing systems of oppression, and economically liberal, defending unregulated laissez-faire markets that preserve the powerful in power. In the US, that is just what one calls a “conservative”.
One might argue that this commonplace meaning of “cautious” is the regular meaning of the word, and that political conservatives are the ones who hijacked the word’s meaning for the sake of their ideology. I disagree, given that the word itself is relatively recent, and their ideology is not so much about being careful as it is really about conserving (their power). Etymologically, the political meaning matches the word better. And if word frequency in book corpora is anything to go by, the expression “conservative estimate” appeared a few decades after the word “conservatism” itself.
That is why I decided to stop using “conservative” in that non-political sense: it is essentially a very effective form of propaganda that has become ingrained in the language. But the reason I am not telling you to stop using it is that doing so would be a very weak form of activism: changing the language is not changing reality. This is what the liberal establishment wants you to believe: change your language and that’s sufficient, you’ve made a change. This goes back to the “political correctness” movement of the 1990s, which was a form of institutionalized hypocrisy. Saying “you shouldn’t use racist language” is very different from “you shouldn’t be a racist”. The former is a way of preserving racism by hiding it from plain sight. The latter is about changing human relations, of which a change in the language is just one consequence. Changing the language is not a way to change reality. If the reality of oppression itself doesn’t change, the change in language just accommodates the reality underneath, and over time the new term becomes loaded with the oppressive charge and people decide to change it again, in an inflationary chain of euphemisms and neologisms.
What needs to happen is not a change in the language, but a change in perception. Racism is shattered not by political correctness, but by perceiving other races as equally valid people. Changing perceptions changes reality, and that then changes language. My evolving perception of what it means to be conservative affects how I use the word.
But didn’t I say that the right is effective at shaping language? Isn’t that changing language to change reality? No. Language as propaganda is a way to change perception, and from there to change reality. And this is done in a much more subtle way than just saying “don’t call it X, call it Y”, which only leads to hypocritical euphemism. When they succeed at associating the idea of a “conservative approach” with that of the “adults in the room”, or when they use terms such as “private initiative” or “intellectual property”, they are using language as a means in their advocacy to affect the world, not using their advocacy as a means to affect language. We need to understand the power of language. We need to change language. But most importantly we need to change the world, otherwise they will keep conserving their position of power in the world, while they keep us busy changing language.
1 - It is interesting to note how much “neo-liberal” is a term strongly derided by the neo-liberals themselves, to the point that one of them once told me that “neo-liberalism doesn’t exist”. They know the power of language and they want to frame their position as being the true liberalism: they want to normalize their stance as a naturalized “love of freedom”, and not as the particular strand of reckless economics that it is.
🔗 Frustrating Software
There’s software that Just Works, and then there’s Frustrating Software.
htop Just Works. LuaRocks is Frustrating Software. I wrote them both.
As both a user and an author of Frustrating Software, I know there’s a very particular brand of frustration caused by its awkward workflows.
I recognize it as a user myself when using software by others, and unfortunately I recognize it in my users when they fail to use my software. I know the answer in both cases is “well, the workflow is awkward because reasons”. There’s always reasons, they’re always complicated.
I wonder if I would know that were I not a developer myself.
Well-intentioned awkward free software still beats slick ill-intentioned proprietary software any day of the week. Both cause frustration, but the nature of the frustration is so, so different. The latter pretends it Just Works, and the frustration is injected for nefarious reasons. The frustration in the former is an accidental emergent behavior. I feel empathy for that, but it’s no less frustrating.
I wonder if non-developer end-users feel the difference, or if the end result is just the same: “this doesn’t work”. I’ve seen people not realizing they were being manipulated by slick ill-intentioned software. I’ve seen people dismissing awkward well-intentioned software outright with “this is broken”.
If users were looking at a person performing a task in front of them (say, an office clerk) rather than a piece of code, everyone would be able to tell the difference instantly.
In the end, all we can do as authors of well-intentioned free software is to be aware of when we’ve ended up building Frustrating Software.
Don’t be mad at users when they don’t “get it” that it’s “because reasons”.
Don’t embrace the awkwardness retroactively as a design decision; just because it can be explained and “that’s how it is”, it doesn’t mean that “that’s how it should be” (and definitely don’t turn it into a “badge of honor” that tells apart the “initiated”).
Once we step back from the defensive kneejerk reaction to our work being criticized, it is not that hard to tell apart someone who is just trolling from the genuine frustration of someone who really tried and failed to use our software. Instead of trying to explain their frustration away, take it as valuable design feedback and try to improve your project into something that Just Works.
As for me? LuaRocks has a long way to go (because reasons!), but we’ll get there.
🔗 Mini book review: “The West Divided”, Jürgen Habermas
Written in the early 2000s, “The West Divided” is a collection of interviews, newspaper articles and an essay by Habermas about post-9/11 international politics and the different approaches taken by the US through the Bush administration and the EU, through a philosophical lens.
The structure of the book helps the reader by starting with the transcripts of interviews and the articles, which use more direct language and establish some concepts that Habermas explains to the interviewers, and then proceeding to a more formal academic work in the form of the essay in the last part, which is a more difficult read. Habermas discusses the idealist and realist schools of international relations, aligning the idealist school with Kant’s project and the evolution of the EU, and opposing it to the realist school proposed by authors such as Carl Schmitt, a leading scholar from Nazi Germany. Habermas proposes a future for the Kantian project which lies in the ultimate development of cosmopolitan (i.e. worldwide) juridical institutions, such that cosmopolitan law gives rights directly to (all) people, rather than acting as a layer of inter-national law between states.
It was especially timely to read this book in the wake of the Russian invasion of Ukraine in February 2022, and to have this theoretical background at hand when watching John Mearsheimer’s talk from 2015 on “Why Ukraine is the West’s Fault”. Mearsheimer, an American professor at the University of Chicago and himself a noted realist, gives an excellent talk and makes a very important point: in international conflict one needs to understand how the other side thinks. Instead of “idealist” vs. “realist” (which are clearly loaded terms), he himself used the term “19th century people” to refer to himself, Putin, and the Chinese leadership, as opposed to “21st century people” to refer to the leaders in the EU and in Washington.
It is interesting to see Mearsheimer put the European and American governments in the same bucket, when Habermas’s book very much deals with the opposition of their views, down to the book’s very title. Habermas wrote “The West Divided” during the Bush administration, and the Mearsheimer talk was given during the Obama years, but he stated he saw no difference between Democrats and Republicans in this regard. Indeed, in what concerns international law, what we’ve seen from Obama wasn’t that different from what we saw before or afterwards — perhaps different in rhetoric, but not so much in actions, as he broke his promise of shutting down Guantanamo and continued the policy of foreign interventions.
As useful as Mearsheimer’s analysis is for understanding Putin, Habermas’s argument that there are different ways to see a future beyond endless Schmittian regional conflicts is still a valid one. And we get a feeling that all is not lost whenever we see that there are still leaders looking to build a future of cosmopolitan cooperation more in line with Kant’s ideals. Martin Kimani, the Kenyan envoy to the UN, said this regarding the Russian invasion of Ukraine:
This situation echoes our history. Kenya and almost every African country was birthed by the ending of empire. Our borders were not of our own drawing. They were drawn in the distant colonial metropoles of London, Paris, and Lisbon, with no regard for the ancient nations that they cleaved apart.
Today, across the border of every single African country, live our countrymen with whom we share deep historical, cultural, and linguistic bonds. At independence, had we chosen to pursue states on the basis of ethnic, racial, or religious homogeneity, we would still be waging bloody wars these many decades later.
Instead, we agreed that we would settle for the borders that we inherited, but we would still pursue continental political, economic, and legal integration. Rather than form nations that looked ever backwards into history with a dangerous nostalgia, we chose to look forward to a greatness none of our many nations and peoples had ever known. We chose to follow the rules of the Organisation of African Unity and the United Nations charter, not because our borders satisfied us, but because we wanted something greater, forged in peace.
We believe that all states formed from empires that have collapsed or retreated have many peoples in them yearning for integration with peoples in neighboring states. This is normal and understandable. After all, who does not want to be joined to their brethren and to make common purpose with them? However, Kenya rejects such a yearning from being pursued by force. We must complete our recovery from the embers of dead empires in a way that does not plunge us back into new forms of domination and oppression.
Words to build a new future by.
🔗 The algorithm did it!
Earlier today, statistician Kareem Carr posted this interesting tweet, about what people out there mean when they say “algorithm”, which I found to be a good summary:
When people say “algorithms”, they mean at least four different things:
1. the assumptions and description of the model
2. the process of fitting the model to the data
3. the software that implements fitting the model to the data
4. The output of running that software
Unsurprisingly, this elicited a lot of responses from computer scientists, raising the point that this is not what the word algorithm is supposed to mean (you know, a well-defined sequence of steps transforming inputs into outputs, the usual CS definition), including a response from Grady Booch, a key figure in the history of software engineering.
I could see where both of them were coming from. I responded that Carr’s original tweet was not about what programmers mean when we say “algorithms” but about what laypeople mean when they say it or read it in the media. And understanding this distinction is especially important because variations of “the algorithm did it!” are the new favorite excuse of policymakers in companies and governments alike.
Booch responded to me, clarifying that his point is that “even most laypeople don’t think any of those things”, which I agree with. People have a fuzzy definition of what an algorithm is, at best, and I think Carr’s list encompasses rather well the various things that are responsible for the effects people attribute to a vague notion of “algorithm” when they use that term.
Booch also added that “it’s appropriate to establish and socialize the correct meaning of words”, which simultaneously extends the discussion to a wider scope and also focuses it to the heart of the matter about the use of “algorithm” in our current society.
You see, it’s not about holding on to the original meaning of a word. I’m sure a few responses to Carr were of the pedantic variety, the “that’s not what the dictionary says!” kind of thing. But that’s short-sighted, taking a prescriptivist rather than descriptivist view of language. Most of us who care about language are past that debate now, and those of us who adhere to the sociolinguistic view of language even celebrate the fact that language shifts, adapts and evolves to suit the use of its speakers.
Shriram Krishnamurthi, CS professor at Brown, joined in on the conversation, observing this shift in the language as a fait accompli:
I’ve been told by a public figure in France (who is herself a world-class computer scientist) — who is sometimes called upon by shows, government, etc. — that those people DO very much use the word this way. As an algorithms researcher it irks her, but that’s how it is.
Basically, we’ve lost control of the word “algorithm”. It has its narrow meaning but it also has a very broad meaning for which we might instead use “software”, “system”, “model”, etc.
Still, I agreed with Booch that this is a fight worth fighting. Not to preserve our cherished technical meaning of the term, to the dismay of the pedants among our ranks, but because of the very circumstances that led to this linguistic shift.
The use of “algorithm” as a vague term to mean “computers deciding things” has a clear political intent: shifting blame. Social networks boosting hate speech? Sorry, the recommendation algorithm did it. Racist bias in criminal systems? Sorry, it was the algorithm.
When you think about it, from a linguistic point of view, it is as nonsensical as saying that “my hammer assembled the shelf in my living room”. No, I did, using the hammer. Yet, people are trained to use such constructs all the time: “the pedestrian was hit by a car”. Note the use of passive voice to shift the focus away from the active subject: “a car hit a pedestrian” has a different ring to it, and, while still giving agency to a lifeless object, is one step closer to making you realize that it was the driver who hit the pedestrian, using the car, just like it was I who built the shelf, using the hammer.
This of course leads to the “guns don’t kill people, people kill people” response. Yes, it does, and the exact same questions regarding guns also apply regarding “algorithms” — and here I use the term in the “broader” sense as put forward by Carr and observed by Krishnamurthi. Those “algorithms” — those models, systems, collections of data, programs manipulating this data — wield immense power in our society, even, like guns, resulting in violence, and like guns, deserving scrutiny. And when those in possession of those “algorithms” go under scrutiny, they really don’t like it. One only needs to look at the fallout resulting from the work by Bender, Gebru, McMillan-Major and Mitchell, about the dangers of extremely large language models in machine learning. Some people don’t like hearing the suggestion that maybe overpowered weapons are not a good idea.
By hiding all those issues behind the word “algorithm”, policymakers will always find a friendly computer scientist available to say that yes, an algorithm is a neutral thing, after all it’s just a sequence of instructions, and they will no doubt profit from this confusion of meanings. And I must clarify that by policymakers I mean those in both the public and the private spheres, since policies put forward by the private tech giants on their platforms, where we spend so much of our lives, affect our society as much as public policies do nowadays.
So what do we do? I don’t think it is productive to start well-actually-ing anyone who uses “algorithm” in the broader sense, with a pedantic “Let me interject for a moment — what you mean by algorithm is in reality a…”. But it is productive to spot when this broad term is being used to hide something else. “The algorithm is biased” — What do you mean, the outputs are biased? Why, is the input data biased? The people manipulating that data created a biased process? Who are they? Why did they choose this process and not another? These are better interjections to make.
These broad systems described by Carr above ultimately run on code. There are algorithms inside them, processing those inputs, generating those outputs. The use of “algorithm” to describe the whole may have started as a harmless metonymy (like when saying “White House” to refer to the entire US government), but it has since been proven very useful as a deflection tactic. By using a word that people don’t understand, the message is “computers doing something you don’t understand and shouldn’t worry about”, using “algorithm” handwavily to drift people’s minds away from the policy issues around computation, the same way “cloud” is used with data: “your data? don’t worry, it’s in the cloud”.
Carr is right: these are all things that people refer to as “algorithms” nowadays. Krishnamurthi is right: this broad meaning is a reality in modern language. And Booch is right when he says that “words matter; facts matter”.
Holding words to their stricter meanings merely due to our love for the language-as-we-were-taught is a fool’s errand; language changes whether we want it or not. But our duty as technologists is to identify the interplay between the language, our field, and society, and how and why they are being used (both the language and our field!). We need to clarify to people what the pieces at play really are when they say “algorithm”. We need to constantly emphasize to the public that there’s no magic behind the curtain, and, most importantly, that all policies are due to human choices.
🔗 Smart tech — smart for whom?
Earlier today I quipped on social media: “We need dumb tech and smart users, and not the other way around”.
Later, I clarified that I’m not calling users of “smart” devices dumb. People are smart. The tech should try not to “dumb them down” by acting condescendingly, cutting down on their agency and limiting their opportunities for education.
A few people shared replies to the effect that they wish for smart devices that wouldn’t be at odds with the intents of the user. After all, we all want the convenience of tech, so why settle for “dumb tech”, right?
The question here becomes a game of words: what is a “smart device”, after all?
A technically-minded person would look at smart devices like smartphones, smart TVs, etc., and say “well, they are really computers”, or “they have computers inside”. But if we want to be technically pedantic, what is a computer? Having a Turing-complete microprocessor running programs? My trusty old microwave surely has a microprocessor, but it’s definitely not a “smart device”. So it’s not about the internals; it definitely has to do with the end-user perception of the device.
The next reasonable approximation towards a definition is that a smart device allows you to install “apps”. You can extend it with more functionality (which really means making use of the fact that it’s a “computer inside”). Smart TVs and smartphones check this box. However, other home appliances like “smart kettles” don’t; their “smartness” comes from being internet-connected. So, generally, it looks like smart devices are things you either run apps in, or control via apps (from another smart device, of course).
So, allowing apps to be installed and run pretty much makes something into a computer, right? After all, a computer is a machine for running software. But it’s really interesting to think about what a computer in fact is — where do we draw the line? From an end-user perspective, a game console is also a machine for running software — a particular kind of software, games — but it is not a computer. Is a Smart TV a computer? You can install apps in it. But they are also generally restricted to a certain kind of software: streaming services, video and the like.
Something doesn’t feel like a computer unless you can run any kind of software in it. This universality is a key concept. It’s interesting how we’re slowly driven back to the most fundamental definition of a computer, Alan Turing’s definition of a computer as a universal machine. Back in 1936, before the first actual computer was built during World War II, Turing wrote a philosophy section within a mathematics paper where he made a thought exercise of what it means to compute, and in his example he used the idea of a person doing the computations by hand: reading data, applying rules to process data, producing new data, repeat. Everything that computes follows this model: game consoles, the autopilot in an airplane, PCs, the microcontroller in my microwave. And though Turing had a technical notion of universality in mind, the key point for us here is that, in our end-user understanding of what makes us call some things computers and others not, the program (or set of allowed programs) is not fixed. This is what we see (and Turing saw) as universal: any program that can be expressed in the computer’s language can be written and run on it, not just a finite set.
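To make that “read data, apply a rule, produce new data, repeat” loop concrete, here is a minimal sketch of it in Python. It is purely illustrative (the rule table and names are my own made-up example, not anything from Turing’s paper): a tiny machine whose rules happen to increment a binary number written on a tape.

```python
# A minimal, illustrative sketch of Turing's "read, apply a rule, write, repeat"
# loop. The rule table below is a made-up example that increments a binary
# number on the tape; nothing here comes from Turing's paper itself.

def run(tape, rules, state="start"):
    pos = len(tape) - 1                                       # start at the rightmost cell
    while state != "halt":
        symbol = tape[pos] if 0 <= pos < len(tape) else "_"   # read data
        new_symbol, move, state = rules[(state, symbol)]      # apply a rule
        if 0 <= pos < len(tape):
            tape[pos] = new_symbol                            # produce new data
        elif new_symbol != "_":
            tape.insert(0, new_symbol)                        # grow the tape to the left
            pos = 0
        pos += 1 if move == "R" else -1                       # move and repeat
    return tape

# Rules for "add 1 to a binary number", scanning right to left:
rules = {
    ("start", "1"): ("0", "L", "start"),   # 1 plus carry: write 0, keep carrying
    ("start", "0"): ("1", "L", "done"),    # 0 plus carry: write 1, done carrying
    ("start", "_"): ("1", "L", "done"),    # carried past the left edge
    ("done",  "0"): ("0", "L", "done"),
    ("done",  "1"): ("1", "L", "done"),
    ("done",  "_"): ("_", "L", "halt"),
}

print(run(list("1011"), rules))   # ['1', '1', '0', '0']  (11 + 1 = 12)
```

Swap in a different rule table and the same loop computes something else entirely — and that interchangeability of the rules is exactly the universality I’m getting at.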
Are smart devices universal machines, then, in either sense of the word? Well, you can install new apps in them. Then they can do new things they couldn’t do yesterday. Does that make them computers? What about game consoles? If I buy a new game (which is effectively new software!), the console can also do new things, but you won’t really look at it as a computer. The reason is that you’re restricted in the kind of new software you can make this machine run: it only takes games; it’s not universal, at least from an end-user point of view.
But game consoles are getting “smarter” nowadays! They not only play games; maybe they will also have an app for showing you the weather, maybe they will accept some streaming services… but not others — and here we’re hinting at a key point of what “smart” devices are really like. They are, in fact, on the inside, universal machines that satisfy Turing’s definition. But they are not universal machines for you, the owner. For me, my Nintendo Switch is just a game console. For Nintendo, it is a computer: they can install any kind of software on it in their next software update. They can decide that it can play games and also access YouTube, but not Netflix. Or they could change their mind tomorrow. From Nintendo’s perspective, the Switch is a universal machine, but not from mine.
The same thing happens, for example, with an iPhone. For Apple, it is a computer: they can run anything on it; the possibilities are endless. From the user’s perspective, it is a phone, onto which you can install apps, and in fact choose from a zillion apps in the App Store. But the possibilities, vast as they may be, are not endless. And that vastness doesn’t help much. From a user perspective, it doesn’t matter how many things you can do with something; what matters is which of the things you want to do with it you can do, and which ones you can’t. Apple still decrees what’s allowed and what isn’t in the App Store, and will also run their own software on the operating system, over which you have zero say. A Chromecast may also be a computer on the inside, with all the necessary networking and video capabilities, but Google has decided that it won’t let me easily play my own movies with it, and so it can’t, just like that.
And such is the reality of smart devices. My Samsung TV is my TV, but it is Samsung’s computer. My house is filled, more and more, with computers over which I have no control. And they have control over what I can and can’t do with the devices I bought. From planned obsolescence, to collecting data on my habits and selling it, to complicating access to functionality that is there — the common thread in smart devices is that there is someone on the other side controlling the experience. And as we progress towards the “ever smarter”, with AI-based voice assistants being added to more devices, a significant part of that experience becomes the ways it “delights and surprises you”, or, in other words, your lack of control over it.
After all, if it wasn’t surprising, if it did just what you expected and nothing more, it wouldn’t be all that “smart”, right? If you take all the “smartness” away, what remains is a “dumb” device, an empty shell, waiting for you to tell it what to do. Press a button to do the thing, and the thing happens. Don’t press, it doesn’t do the thing. Install new functionality, the new functionality is installed. Schedule it to do the thing, it does when scheduled, like a boring old alarm clock. Use it today, it runs today. Put it away, pick it up to use it ten years from now, it runs ten years from now. No surprise upgrades. No surprises.
“But what about the security upgrades”, you ask? Well, maybe I just wanted to vacuum my living room. Can’t we design devices such that the “online” component isn’t an indispensable, always-on necessity? Of course we can. But then my devices wouldn’t be their computers anymore.
And why do they want our devices to be their computers? It’s not to run their software on them and free-ride on our electricity bill — all these companies have more computers of their own than we can imagine. It’s about controlling our experience. Once the user has control over which software runs, they make the choices. Once they don’t, the choice is made for them. Whenever behavior that used to be user-controlled becomes automatic in a “smart” (i.e., not explicitly user-dictated) way, that is a case where a choice is taken away from you and driven by someone else. And they might hide choices behind “it was the algorithm”, which gives a semblance of impersonality and deniability, but putting the algorithm in place is a deliberate act.
Taking power away from the user is a deliberate act. Take social networks, for example. You choose who to follow to curate your timeline, but then they say “no, we want our algorithm to choose who to display in your timeline!”. Of course, this is always to “delight” you with a “better experience”; in the case of social networks, a more addictive one, in the name of user engagement. And with the lines between tech conglomerates and smart devices being more and more blurred, the effect is such that this control extends into our lives beyond the glass screens.
In the past, any kind of rant like this about the harmful aspects of some piece of tech could well be answered with “just don’t use it, then!” In the world of smart devices, the problem is that this is becoming less of an option, as the fabric of our social and professional lives increasingly depends on these networks, and soon enough alternatives will not be available. We are still “delighted” by the process, but just like, 15 years in, a smartphone is now just a phone, soon enough a smart kettle will be just a kettle, a smart vacuum will be just a vacuum, and we won’t be able to clean our houses unless Amazon says it’s alright to do so. We need to build an alternative future, because I don’t want to go back to using a broom.
