🔗 Mini book review: “The West Divided”, Jürgen Habermas
Written in the early 2000s, “The West Divided” is a collection of interviews, newspaper articles and an essay in which Habermas examines post-9/11 international politics through a philosophical lens, contrasting the approach taken by the US under the Bush administration with that of the EU.
The structure of the book helps the reader: it starts with transcripts of interviews and newspaper articles, which use more direct language and establish some of the concepts Habermas explains to the interviewers, and then proceeds to a more formal academic work, the essay that makes up the last part, which is a more difficult read. Habermas discusses the idealist and realist schools of international relations, aligning the idealist school with Kant’s project and the evolution of the EU, and opposing it to the realist school advanced by authors such as Carl Schmitt, a leading scholar from Nazi Germany. Habermas proposes that the future of the Kantian project lies in the ultimate development of cosmopolitan (i.e. worldwide) juridical institutions, such that cosmopolitan law grants rights directly to (all) people, rather than acting as a layer of inter-national law between states.
It was especially timely to read this book in the wake of the Russian invasion of Ukraine in February 2022, and to have this theoretical background at hand when watching John Mearsheimer’s 2015 talk “Why Ukraine is the West’s Fault”. Mearsheimer, an American professor at the University of Chicago and himself a noted realist, gives an excellent talk and makes a very important point: in international conflict one needs to understand how the other side thinks. Instead of “idealist” vs. “realist” (which are clearly loaded terms), he used the term “19th century people” to refer to himself, Putin, and the Chinese leadership, as opposed to “21st century people” for the leaders in the EU and in Washington.
It is interesting to see Mearsheimer put the European and American governments in the same bucket, when Habermas’s book very much deals with the opposition between their views, down to the book’s very title. Habermas wrote “The West Divided” during the Bush administration, and Mearsheimer’s talk was given during the Obama years, but he stated that he saw no difference between Democrats and Republicans in this regard. Indeed, where international law is concerned, what we saw from Obama wasn’t that different from what we saw before or afterwards — perhaps different in rhetoric, but not so much in actions, as he broke his promise of shutting down Guantanamo and continued the policy of foreign interventions.
As much as Mearsheimer’s analysis is useful for understanding Putin, Habermas’s argument that there are different ways to see a future beyond endless Schmittian regional conflicts is still a valid one. And we get a feeling that all is not lost whenever we see that there are still leaders looking to build a future of cosmopolitan cooperation more in line with Kant’s ideals. Martin Kimani, the Kenyan envoy to the UN, said this regarding the Russian invasion of Ukraine:
This situation echoes our history. Kenya and almost every African country was birthed by the ending of empire. Our borders were not of our own drawing. They were drawn in the distant colonial metropoles of London, Paris, and Lisbon, with no regard for the ancient nations that they cleaved apart.
Today, across the border of every single African country, live our countrymen with whom we share deep historical, cultural, and linguistic bonds. At independence, had we chosen to pursue states on the basis of ethnic, racial, or religious homogeneity, we would still be waging bloody wars these many decades later.
Instead, we agreed that we would settle for the borders that we inherited, but we would still pursue continental political, economic, and legal integration. Rather than form nations that looked ever backwards into history with a dangerous nostalgia, we chose to look forward to a greatness none of our many nations and peoples had ever known. We chose to follow the rules of the Organisation of African Unity and the United Nations charter, not because our borders satisfied us, but because we wanted something greater, forged in peace.
We believe that all states formed from empires that have collapsed or retreated have many peoples in them yearning for integration with peoples in neighboring states. This is normal and understandable. After all, who does not want to be joined to their brethren and to make common purpose with them? However, Kenya rejects such a yearning from being pursued by force. We must complete our recovery from the embers of dead empires in a way that does not plunge us back into new forms of domination and oppression.
Words to build a new future by.
🔗 The algorithm did it!
Earlier today, statistician Kareem Carr posted this interesting tweet, about what people out there mean when they say “algorithm”, which I found to be a good summary:
When people say “algorithms”, they mean at least four different things:
1. the assumptions and description of the model
2. the process of fitting the model to the data
3. the software that implements fitting the model to the data
4. the output of running that software
Unsurprisingly, this elicited a lot of responses from computer scientists, raising the point that this is not what the word algorithm is supposed to mean (you know, a well-defined sequence of steps transforming inputs into outputs, the usual CS definition), including a response from Grady Booch, a key figure in the history of software engineering.
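As an aside, here is the kind of thing that narrow CS definition refers to. This is a minimal sketch of my own (not an example from the thread): Euclid’s algorithm, a well-defined sequence of steps that turns two numbers into their greatest common divisor.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeat (a, b) -> (b, a mod b) until b is zero."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # prints 6
```

That is the narrow sense: a fixed recipe turning inputs into outputs. The broad sense in Carr’s list bundles this together with models, data, software and their outputs.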
I could see where both of them were coming from. I responded that Carr’s original tweet was not about what programmers mean when we say “algorithms” but about what laypeople mean when they say it or read it in the media. And understanding this distinction is especially important because variations of “the algorithm did it!” are the new favorite excuse of policymakers in companies and governments alike.
Booch responded to me, clarifying that his point is that “even most laypeople don’t think any of those things”, which I agree with. People have a fuzzy definition of what an algorithm is, at best, and I think Carr’s list captures rather well the various things that are actually responsible for the effects people credit to a vague notion of “algorithm” when they use that term.
Booch also added that “it’s appropriate to establish and socialize the correct meaning of words”, which simultaneously extends the discussion to a wider scope and focuses it on the heart of the matter: the use of “algorithm” in our current society.
You see, it’s not about holding on to the original meaning of a word. I’m sure a few responses to Carr were of the pedantic variety, the “that’s not what the dictionary says!” kind of thing. But that’s short-sighted, taking a prescriptivist rather than descriptivist view of language. Most of us who care about language are past that debate by now, and those of us who adhere to the sociolinguistic view of language even celebrate the fact that language shifts, adapts and evolves to suit the use of its speakers.
Shriram Krishnamurthi, a CS professor at Brown, joined in on the conversation, observing this shift in the language as a fait accompli:
I’ve been told by a public figure in France (who is herself a world-class computer scientist) — who is sometimes called upon by shows, government, etc. — that those people DO very much use the word this way. As an algorithms researcher it irks her, but that’s how it is.
Basically, we’ve lost control of the word “algorithm”. It has its narrow meaning but it also has a very broad meaning for which we might instead use “software”, “system”, “model”, etc.
Still, I agreed with Booch that this is a fight worth fighting. Not to preserve our cherished technical meaning of the term, to the dismay of the pedants among our ranks, but because of the very circumstances that led to this linguistic shift.
The use of “algorithm” as a vague term to mean “computers deciding things” has a clear political intent: shifting blame. Social networks boosting hate speech? Sorry, the recommendation algorithm did it. Racist bias in criminal justice systems? Sorry, it was the algorithm.
When you think about it, from a linguistic point of view, it is as nonsensical as saying that “my hammer assembled the shelf in my living room”. No, I did, using the hammer. Yet, people are trained to use such constructs all the time: “the pedestrian was hit by a car”. Note the use of passive voice to shift the focus away from the active subject: “a car hit a pedestrian” has a different ring to it, and, while still giving agency to a lifeless object, is one step closer to making you realize that it was the driver who hit the pedestrian, using the car, just like it was I who built the shelf, using the hammer.
This of course leads to the “guns don’t kill people, people kill people” response. Yes, it does, and the exact same questions regarding guns also apply regarding “algorithms” — and here I use the term in the “broader” sense as put forward by Carr and observed by Krishnamurthi. Those “algorithms” — those models, systems, collections of data, programs manipulating this data — wield immense power in our society, even, like guns, resulting in violence, and, like guns, deserving scrutiny. And when those in possession of those “algorithms” come under scrutiny, they really don’t like it. One only needs to look at the fallout resulting from the work by Bender, Gebru, McMillan-Major and Mitchell about the dangers of extremely large language models in machine learning. Some people don’t like hearing the suggestion that maybe overpowered weapons are not a good idea.
By hiding all those issues behind the word “algorithm”, policymakers will always find a friendly computer scientist available to say that yes, an algorithm is a neutral thing, after all it’s just a sequence of instructions, and they will no doubt profit from this confusion of meanings. And I must clarify that by policymakers I mean those in both the public and private spheres, since policies put forward by the private tech giants on their platforms, where we spend so much of our lives, affect our society as much as public policies nowadays.
So what do we do? I don’t think it is productive to start well-actually-ing anyone who uses “algorithm” in the broader sense, with a pedantic “Let me interject for a moment — what you mean by algorithm is in reality a…”. But it is productive to spot when this broad term is being used to hide something else. “The algorithm is biased” — What do you mean, the outputs are biased? Why, is the input data biased? The people manipulating that data created a biased process? Who are they? Why did they choose this process and not another? These are better interjections to make.
These broad systems described by Carr above ultimately run on code. There are algorithms inside them, processing those inputs, generating those outputs. The use of “algorithm” to describe the whole may have started as a harmless metonymy (like saying “White House” to refer to the entire US government), but it has since proven very useful as a deflection tactic. By using a word people don’t understand, the message becomes “computers are doing something you don’t understand and shouldn’t worry about”, steering people’s minds away from the policy issues around computation, the same way “cloud” is used with data: “your data? don’t worry, it’s in the cloud”.
Carr is right: his list encompasses the things people refer to as “algorithms” nowadays. Krishnamurthi is right: this broad meaning is a reality in modern language. And Booch is right when he says that “words matter; facts matter”.
Holding words to their stricter meanings merely out of love for the language-as-we-were-taught is a fool’s errand; language changes whether we want it or not. But our duty as technologists is to identify the interplay between the language, our field, and society: how and why both the language and our field are being used. We need to clarify to people what the pieces at play really are when they say “algorithm”. We need to constantly emphasize to the public that there’s no magic behind the curtain and, most importantly, that all policies are due to human choices.
🔗 Smart tech — smart for whom?
Earlier today I quipped on social media: “We need dumb tech and smart users, and not the other way around”.
Later, I clarified that I’m not calling users of “smart” devices dumb. People are smart. The tech should try not to “dumb them down” by acting condescendingly, cutting down on their agency and limiting their opportunities for education.
A few people shared replies to the effect that they wish for smart devices that wouldn’t be at odds with the intents of the user. After all, we all want the convenience of tech, so why settle for “dumb tech”, right?
The question here becomes a game of words: what is a “smart device”, after all?
A technically-minded person would look at smart devices like smartphones, smart TVs, etc., and say “well, they are really computers”, or “they have computers inside”. But if we want to be technically pedantic, what is a computer? Having a Turing-complete microprocessor running programs? My old trusty microwave surely has a microprocessor, but it’s definitely not a “smart device”. So it’s not about the internals; it has to do with the end-user perception of the device.
The next reasonable approximation towards a definition is that a smart device allows you to install “apps”. You can extend it with more functionality (which is really making use of the fact that it’s a “computer inside”). Smart TVs and smartphones check this box. However, other home appliances like “smart kettles” don’t; their “smartness” comes from being internet-connected. So, generally, it looks like smart devices are things you either run apps on, or control via apps (from another smart device, of course).
So, allowing apps to run pretty much makes something into a computer, right? After all, a computer is a machine for running software. But it’s really interesting to think about what is in fact a computer: where do we draw the line? From an end-user perspective, a game console is also a machine for running software — a particular kind of software, games — but it is not a computer. Is a smart TV a computer? You can install apps on it. But they are also generally restricted to a certain kind of software: streaming services, video and the like.
Something doesn’t feel like a computer unless you can run any kind of software on it. This universality is a key concept. It’s interesting how we’re slowly driven back to the most fundamental definition of a computer: Alan Turing’s definition of a computer as a universal machine. Back in 1936, before the first actual computer was built during World War II, Turing wrote a philosophy section within a mathematics paper where he made a thought exercise of what it means to compute, and in his example he used the idea of a person doing the computations by hand: reading data, applying rules to process data, producing new data, repeat. Everything that computes follows this model: game consoles, the autopilot in an airplane, PCs, the microcontroller in my microwave. And though Turing had a technical notion of universality in mind, the key point for us here is that in our end-user understanding of what makes us call some things computers and others not, the program (or set of allowed programs) is not fixed. This is what we see (and Turing saw) as universal: any program that may be expressed in the computer’s language can be written and run on it, not just a finite set.
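To make that “read data, apply rules, produce new data, repeat” loop concrete, here is a minimal sketch of Turing’s model in Python. This is my own toy illustration; the rule table (a binary incrementer) and all the names are made up for the example, not anything lifted from Turing’s paper.

```python
# Turing's "person computing by hand", reduced to a loop:
# read a symbol, apply a rule, write a symbol, move, repeat.

def run(rules, tape, state="start", pos=0, blank="_"):
    """Run a rule table until it reaches the 'halt' state."""
    cells = dict(enumerate(tape))                    # unbounded tape
    while state != "halt":
        symbol = cells.get(pos, blank)               # read data
        write, move, state = rules[(state, symbol)]  # apply a rule
        cells[pos] = write                           # produce new data
        pos += 1 if move == "R" else -1              # ...and repeat
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# An illustrative rule table: add 1 to a binary number.
increment = {
    ("start", "0"): ("0", "R", "start"),  # scan right to the end...
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),  # ...then walk back, carrying
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run(increment, "1011"))  # prints "1100" (11 + 1 = 12)
```

The interesting part is that the rules are just data: feed the same loop a different table and it computes something else entirely, which is the universality being discussed here.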
Are smart devices universal machines, then, in either sense of the word? Well, you can install new apps on them. Then they can do new things they couldn’t do yesterday. Does that make them computers? What about game consoles? If I buy a new game (which is effectively new software!), the console can also do new things, but you won’t really look at it as a computer. The reason is that you’re restricted in the kind of new software you can make this machine run: it only takes games; it’s not universal, at least from an end-user point of view.
But game consoles are getting “smarter” nowadays! They not only play games; maybe they also have an app for showing you the weather, maybe they accept some streaming services… but not others — and here we’re hinting at a key point of what “smart” devices are really like. They are, in fact, on the inside, universal machines that satisfy Turing’s definition. But they are not universal machines for you, the owner. For me, my Nintendo Switch is just a game console. For Nintendo, it is a computer: they can install any kind of software on it in their next software update. They can decide that it can play games, and also access YouTube, but not Netflix. Or they could change their mind tomorrow. From Nintendo’s perspective, the Switch is a universal machine, but not from mine.
The same thing happens, for example, with an iPhone. For Apple, it is a computer: they can run anything on it, the possibilities are endless. From the user’s perspective, it is a phone, onto which you can install apps, and in fact choose from a zillion apps in the App Store. But the possibilities, vast as they may be, are not endless. And that vastness doesn’t help much. From a user perspective, it doesn’t matter how many things you can do with something; what matters is which things you want to do with it, which of those you can do and which ones you can’t. Apple still decrees what’s allowed and what isn’t in the App Store, and will also run their own software on the operating system, over which you have zero say. A Chromecast may also be a computer on the inside, with all the necessary networking and video capabilities, but Google has decided that it won’t let me easily play my movies with it, and so it can’t, just like that.
And such is the reality of smart devices. My Samsung TV is my TV, but it is Samsung’s computer. My house is filled, more and more, with computers over which I have no control. And they have control over what I can and can’t do with the devices I bought. From planned obsolescence, to collecting data on my habits and selling it, to complicating access to functionality that is right there — the common thread in smart devices is that there is someone on the other side controlling the experience. And as we progress towards the “ever smarter”, with AI-based voice assistants being added to more devices, a significant part of that experience becomes the ways it “delights and surprises you”, or, in other words, your lack of control over it.
After all, if it wasn’t surprising, if it did just what you expected and nothing more, it wouldn’t be all that “smart”, right? If you take all the “smartness” away, what remains is a “dumb” device, an empty shell, waiting for you to tell it what to do. Press a button to do the thing, and the thing happens. Don’t press, it doesn’t do the thing. Install new functionality, the new functionality is installed. Schedule it to do the thing, it does when scheduled, like a boring old alarm clock. Use it today, it runs today. Put it away, pick it up to use it ten years from now, it runs ten years from now. No surprise upgrades. No surprises.
“But what about the security upgrades”, you ask? Well, maybe I just wanted to vacuum my living room. Can’t we design devices such that the “online” component isn’t an indispensable, always-on necessity? Of course we can. But then my devices wouldn’t be their computers anymore.
And why do they want our devices to be their computers? It’s not to run their software on them and free-ride on our electricity bill — all these companies have more computers of their own than we can imagine. It’s about controlling our experience. Once the user has control over which software runs, they make the choices. Once they don’t, the choices are made for them. Whenever behavior that used to be user-controlled becomes automatic in a “smart” (i.e., not explicitly user-dictated) way, a choice is taken away from you and driven by someone else. And they might hide choices behind “it was the algorithm”, which gives a semblance of impersonality and deniability, but putting the algorithm in place is a deliberate act.
Taking power away from the user is a deliberate act. Take social networks, for example. You choose who to follow to curate your timeline, but then they say “no, we want our algorithm to choose who to display in your timeline!”. Of course, this is always to “delight” you with a “better experience”; in the case of social networks, a more addictive one, in the name of user engagement. And with the lines between tech conglomerates and smart devices being more and more blurred, the effect is such that this control extends into our lives beyond the glass screens.
In the past, any kind of rant like this about the harmful aspects of any piece of tech could well be met with a “just don’t use it, then!” In the world of smart devices, the problem is that this is becoming less of an option, as the fabric of our social and professional lives increasingly depends on these networks, and soon enough alternatives will not be available. We are still “delighted” by the process, but just like, 15 years in, a smartphone is now just a phone, soon enough a smart kettle will be just a kettle, a smart vacuum will be just a vacuum and we won’t be able to clean our houses unless Amazon says it’s alright to do so. We need to build an alternative future, because I don’t want to go back to using a broom.
🔗 Protests and the space launch
I am not going to talk directly about the US protests. Instead, I will briefly note the role of the State in them, both as cause—promoting institutional racism—and as a continuing instigator. But aren’t the protests for justice, which supposedly needs to come from the State? The government, though, is clearly not interested in justice. So we have the people on one side and the State on the other. But it didn’t have to be this way by definition: it’s _this_ particular state that’s the problem. And this makes me think of Peter Thiel.
Peter Thiel has said, in no uncertain terms, that he does not believe in democracy and ultimately wants the destruction of any form of State. That’s what he took the stage for during the elections. That’s what the alt-right stands for. This is not a crisis, this is an ongoing plan. Historians will look at this as a multi-decade process, with the early 1980s under Reagan and Thatcher as the first inflection point, and the last few years with the alt-right, Trump and Brexit as the second one.
The first stage, neoliberalism, was about crippling the state in order to declare it inefficient and privatize it. In the 90s in particular this was sped up in practice and played down in discourse, but early on this was the stated goal.
Now at the second stage, it is no longer about crippling government institutions. It is about crippling the concept of government itself: get the incompetent to power, so that the supposed flaws of the democratic model become evident. Then offer something else.
Of course, the trickery there is that invariably the incompetent were led to power in various places by apparently legal but effectively criminal means: mass propaganda done in breach of campaign funding laws, voter suppression, buying Congress.
So now we’re at the stage where there’s a useful minority of radicalized fascists ensuring we get the worst possible government, and a mass of average people whose heroes are billionaires, ranging from your monopolist-turned-philanthropist to your tweeting-techbro bigot. It may seem contradictory that the forces that are ultimately destroying the notion of nation-state ostensibly employ the discourse of nationalism. But if you pay attention, they are priming their base to adapt to the upcoming state of things.
This pandemic is the first time in history we have seen a national crisis addressed by the government with a press release naming _companies_ that are going to be doing this and that to deal with the issue. This was very, very startling to see.
The notion that it is up to companies and not the state to organize society is being normalized. America is at the forefront of this process, and has been for a long time. Other places still have things like functioning health and educational systems, but the pressure exists. In that sense, the landmark space launch — once a matter of national pride, now privatized into another triumph of business over country — led by Thiel’s associate Musk, is not at all disconnected from the ongoing events. They are deeply, closely connected.
Thiel’s company Palantir — named after the seeing-stones in LOTR, another techbro favorite — provides machinery for mass surveillance in the US. The surveillance people worldwide are under is already managed by a private company.
When you think of Russia and its oligarchs, or the growing number of billionaires in China’s parliament — why lobby the middleman when you can buy a Congress seat for yourself — we see that the increasingly direct control of superpowers by businessmen is not an American-only phenomenon. It might be that the façade of a state will stick around for a long time, as a useful device as people cling to their flags and anthems, but I feel the foundations of the modern nation-state slowly crumbling into dust under my feet, and I do not like what’s taking its place.
Zygmunt Bauman saw it back in the 1990s: in a world of companies, we’re no longer citizens, only consumers. And the rippling effects of this are much worse than they initially sound, but this is a conversation for another time.
🔗 Remembering Windows 3.1 themes and user empowerment
This reminiscence started when I read a tweet that said:
Unpopular opinion: dark modes are overhyped
Windows 3.1 allowed you to change all system colors to your liking. Linux has been fully themeable since the 90s. OSX came along with a draconian “all blue aqua, and maybe a hint of gray”.
People accepted it because frankly it looked better than anything else at the time (a ton of Linux themes were bad OSX replicas). But it was a very “Ford Model T is available in any color as long as it’s black” thing.
The rise of OSX (remember, when it came along Apple had a single-digit slice of the computer market) meant that people eventually got used to the idea of a life with no desktop personalization. Nowadays most people don’t even change their wallpapers anymore.
In the old days of Windows 3.1, it was common to walk into an office and see each person’s desktop colors, fonts and wallpapers tuned to their personalities, just like their physical desk, with one’s family portrait or plants.
I just showed the above screenshots to my sister, and she sighed with a happy nostalgia:
— Remember changing colors on the computer?
— Oh yes! We would spend hours having fun on that!
— Everyone’s was different, right?
— Yes! I’d even change it according to my mood.
Looking back, I feel like this trend of less aesthetic configurability has diminished the sense of user ownership of the computer experience, as part of the general trend of the death of “personal computing”.
I almost wrote that a phone UI allows for more self-expression today than a Win/Mac computer. But then I realized how much I struggled to get my Android UI the way I wanted, until I installed Nova Launcher, which gave me Linux-levels of tweaking. The average user does not do this.
But at least they are more likely to change the wallpaper on their phones than on their computers. Nowadays you walk into an office and all computers look the same.
The same thing happened to the web, as we compare the diminishing tweakability of a MySpace page to the blue conformity of a Facebook page, for example.
Conformity and death of self-expression are the norm, all under the guise of “consistency”.
User avatars forced into circles.
App icons in phones forced into the same shape.
Years ago, a friend joked that the inconsistency of the various Linux UI toolkits was how he felt the system’s “freedom”. We all laughed and wished for a more consistent UI, of course. But that discourse on consistency was quickly coopted to remove users’ agency.
What begins with aesthetics and the sense of self-expression, continues to a lack of ownership of the computing experience and ends in the passive acceptance of systems we don’t control.
Changes happen, but they are independent of the users’ wishes, and it’s a lottery whether the changes are for better or for worse.
Ever notice how version changes are called “updates” and not “upgrades” anymore?
In that regard, I think Dark Mode is a welcome addition as it allows a tiny bit of control and self-expression to the user, but it’s still kinda sad to see how far we regressed overall.
The hype around it, and how excited users get when such crumbs of configurability are handed to them, just goes to show how unused users are to getting any degree of control back in their hands.