hisham hm

🔗 Fun hack to redirect stdout and stderr in order

Prologue

This is an anecdote about roundabout ways to get stuff done. Pierre mentioned in the comments below that a proper way to solve this is to use unbuffer (though it does _not_ produce the exact same order as the terminal!). But if you want to read the improper way to do this, read on! :)

The story

Due to buffering, the relative order of a program’s stdout and stderr gets messed up when redirecting to a file or to another program. Both descriptors are printed in the correct order relative to each other when going straight to the terminal:

] ./my_program
stdout line 1
stdout line 2
stderr line 1
stdout line 3
stderr line 2
stderr line 3

This doesn’t change the order:

] ./my_program 2>&1
stdout line 1
stdout line 2
stderr line 1
stdout line 3
stderr line 2
stderr line 3

but the order changes when saving to a file or piping to another program:

] ./my_program 2>&1 | cat
stderr line 1
stderr line 2
stderr line 3
stdout line 1
stdout line 2
stdout line 3

This behavior is the same in three shells I tested (bash, zsh, dash).
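
For reference, a minimal stand-in for my_program might look like the sketch below (the real program isn’t shown in this post). The reordering happens because stdout is line-buffered when attached to a terminal but fully buffered when redirected, while stderr is always unbuffered, so the stderr lines reach the file or pipe immediately and the buffered stdout lines only show up later, when they are flushed.

#include <stdio.h>

int main(void) {
   /* stdout: line-buffered on a terminal, fully buffered on a pipe or file */
   printf("stdout line 1\n");
   printf("stdout line 2\n");
   /* stderr: always unbuffered, so these escape immediately */
   fprintf(stderr, "stderr line 1\n");
   printf("stdout line 3\n");
   fprintf(stderr, "stderr line 2\n");
   fprintf(stderr, "stderr line 3\n");
   return 0;
}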

A weird “solution”

I wanted to save the log while preserving the order of events. So I ended up with this evil hack:

] strace -ewrite -o trace.txt -s 2048 ./my_program; sed 's,^[^"]*"\(.*\)"[^"]*$,\1,g;s,\\n,,g;' trace.txt > mytrace.txt
] cat mytrace.txt
stdout line 1
stdout line 2
stderr line 1
stdout line 3
stderr line 2
stderr line 3
+++ exited with 0 +++

It turns out that strace does log each write in the correct order, so I’m catching the write syscalls and then using sed to strip each logged line down to just the written string.
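
To give an idea of what the sed expression is dealing with, the raw trace.txt looks roughly like this (illustrative; the exact spacing and byte counts depend on your strace version):

write(1, "stdout line 1\n", 14)  = 14
write(1, "stdout line 2\n", 14)  = 14
write(2, "stderr line 1\n", 14)  = 14
write(1, "stdout line 3\n", 14)  = 14
write(2, "stderr line 2\n", 14)  = 14
write(2, "stderr line 3\n", 14)  = 14
+++ exited with 0 +++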

Note the limitations: it truncates lines to 2048 characters (good enough for my logs), and I was simply cutting off the trailing \n without cleaning up any other escape sequences. But it worked well enough that I could read my ordered logs in a text editor!

🔗 On the word “latino”

One of my least-favorite American English words is “latino”, for two reasons:

First, a linguistic reason: because it’s not inflected when used. When you’re used to the fact that in Spanish and Portuguese “latino” refers only to men and “latina” only to women, hearing “latino woman” sounds really weird (weirder than, say, “handsome woman”). Even weirder is “latino women”, which mixes a Spanish/Portuguese word with English grammar. “Bonito girls”? :)

Second, a sociological reason: because using a foreign loanword reinforces the otherness. Nobody calls the Italian community in America “italiano”, although that’s their name in Italian. The alternative “Hispanic” is not ideal because it doesn’t really make sense when including Brazil, which was never a Spanish colony (plus, the colonial past is something most countries want to leave behind).

I can’t change the language by myself, so I just avoid the term and use more specific ones whenever possible (Colombians, Argentines, Brazilians, South Americans, Latin Americans when referring to people from the area in general, etc.)

After writing the above, I checked Wikipedia and it seems the communities in the US agree with me:

« In a recent study, most Spanish-speakers of Spanish or Hispanic American descent do not prefer the term “Hispanic” or “Latino” when it comes to describing their identity. Instead, they prefer to be identified by their country of origin. When asked if they have a preference for either being identified as “Hispanic” or “Latino,” the Pew study finds that “half (51%) say they have no preference for either term.”[43] A majority (51%) say they most often identify themselves by their family’s country of origin, while 24% say they prefer a pan-ethnic label such as Hispanic or Latino. Among those 24% who have a preference for a pan-ethnic label, “‘Hispanic’ is preferred over ‘Latino’ by more than a two-to-one margin—33% versus 14%.” Twenty-one percent prefer to be referred to simply as “Americans.” »

I think the awkwardness in the grammar from point one actually reinforces point two, because it strikes me as something that no native speaker of Spanish or Portuguese would come up with by themselves. So it sounds tacked on.

Don’t get me wrong, I fully identify as a Brazilian, a South American and a Latin American — travelling abroad helps a lot to widen your cultural identity! — and I have no problem when people wear the term “latino” proudly, but I always pay close attention to the power of language and how it represents and propagates ideas.

🔗 Fixing permission errors for scanning on Linux: running XSane as a regular user

I decided to stop running XSane as root. Here’s what I had to do:

* Restored the ownership of ~/.sane back to my user: sudo chown -R hisham ~/.sane
* Made /var/lock/sane writable by a regular user: sudo chmod a+rwxt /var/lock/sane
* As a quick-and-dirty fix, as suggested here, I gave permissions to my USB devices: sudo chmod a+rw -R /dev/bus/usb/*
* For a more permanent (and still untested) fix, I added a 40-libsane.rules file (found here) in /etc/udev/rules.d. This file already includes the USB id for my scanner. You may have to add yours.
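
For the curious, the entries in that rules file are one line per scanner, roughly in the form below (the vendor/product IDs here are made up for illustration; find your scanner’s real ones with lsusb):

# hypothetical entry for a scanner with USB IDs 04a9:1234
ATTRS{idVendor}=="04a9", ATTRS{idProduct}=="1234", MODE="0664", GROUP="scanner", ENV{libsane_matched}="yes"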

🔗 You can’t automate SemVer, or: There is no way around Rice’s Theorem

Rice’s Theorem, proved in 1951, states that it is impossible to write a program that precisely performs any non-trivial analysis of the behavior of other programs. More precisely, it is impossible to code an analyzer for some non-trivial property that is able to decide whether any given program has that property or not. And by “trivial property” we mean a property that either _all_ programs in the world have or _none_ has. So, yeah, a “non-trivial property” is basically any property you can think of: “does it ever calculate 5 + 2?”, “does it always use less than 10MB of memory?”, “does it ever print something to the screen?”, “does it ever access the network?”

At this point you might say “wait! I can write a program that checks if programs access the network or not! We can parse the code and if there are no calls whatsoever in it to any networking code such as connect(), then it doesn’t access the network!”. Sure, you can do that: but if the code has calls to connect(), you can’t decide for sure that it will access the network when it’s executed.
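
As a contrived example, consider a program where connect() plainly appears in the code, yet deciding whether it is ever reached means deciding whether the loop before it finishes:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void) {
   unsigned long long n;
   if (scanf("%llu", &n) != 1) return 1;

   /* Collatz-style iteration: whether this loop ever ends for a given
      input is exactly the kind of semantic question a general analyzer
      cannot answer. */
   while (n > 1) {
      n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
   }

   /* A syntactic scan sees this connect() call and has to assume the
      program may access the network; whether execution can actually
      reach it is another matter. */
   int sock = socket(AF_INET, SOCK_STREAM, 0);
   struct sockaddr_in addr;
   memset(&addr, 0, sizeof(addr));
   addr.sin_family = AF_INET;
   addr.sin_port = htons(80);
   connect(sock, (struct sockaddr *)&addr, sizeof(addr));
   return 0;
}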

In 1936 Alan Turing proved that it is impossible to write a program that solves the Halting Problem, that is, an analyzer that checks programs and tells whether a program always terminates (“halts”) or might enter an endless loop given some specific input. Okay, that’s a classic result, but that’s just one property; how can Rice’s Theorem say we can’t make an analyzer for any property at all, even the silliest ones?

The proof of this amazingly powerful theorem is surprisingly simple. It turns out that if we had an analyzer for any silly property, we could use it to build a Halting Problem analyzer (which Turing proved to be impossible). Like this:

bool my_halting_problem_analyzer(Code analyzedProgram) {
   Code modifiedProgram = analyzedProgram + "; someCodeWithSillyProperty();";
   return my_silly_property_analyzer(modifiedProgram);
}

If the code in analyzedProgram always terminates, then the code in modifiedProgram will always reach the part that has the silly property, so my_silly_property_analyzer will return true, and my_halting_problem_analyzer returns true as well. If there is some input that makes analyzedProgram hang in a loop, then for that input the appended code with the silly property is never reached, so my_silly_property_analyzer returns false, and so does my_halting_problem_analyzer. Yay, we solved the Halting Problem using the silly property analyzer! Not.

Of course, this explanation is quite simplified, so head to Wikipedia and your favorite formal languages book for the precise details. But the point stands that general semantic analysis of programs is impossible.

In particular, you can’t write a program that takes versions 1.0 and 1.1 of any program X and answers the question: “do they behave the same?”. In other words, it’s impossible to write an analyzer that looks at your master branch before you make a release and answers the question “should your new release tag be a major, minor or tiny release?” according to the rules of SemVer (or any other API-compatibility-bound set of rules, for that matter).

This is because API compatibility is not based only on syntactically-expressible issues (that is, type signatures for functions and data structures). Semantic changes to the code can also break compatibility. A function may change its behavior but not its type signature (it still returns a string, but it used to be lower-case and is now upper-case), or a struct can change the way it is used while its fields remain the same (field foo used to return numbers from 0 to 10, plus -1 when executed on Sundays; now it returns -1 on Saturdays as well). An automated tool won’t catch all of this.
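
As a made-up example with a hypothetical function: both versions below have exactly the same type signature, so a tool that only diffs declarations sees no change at all, yet any caller that compares against the old strings breaks:

/* version 1.0 */
const char *status_label(int code) {
   return code == 0 ? "ok" : "error";
}

/* version 1.1: identical type signature, different behavior --
   a semantic break that no signature-based tool can detect. */
const char *status_label(int code) {
   return code == 0 ? "OK" : "ERROR";
}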

So, it is only possible to write a “pessimistic” tool, one that detects lots of situations syntactically and gives the bad news: “hey, you must increment the major version here!”. But you can’t write a tool that is always able to look at the code semantically and give the good news: “I assure you that no API behaviors have changed, you can safely name this a tiny version increase.”

Yes, you can use test suites as an approximation for detecting semantic changes in API behaviors beyond type signatures and data structures. That would certainly improve your pessimistic analyzer — you’d be able to detect more situations where “you must increment major”. But even then it can only go so far, because in practice one can’t test every possible input/output combination, so you still can’t be 100% sure. Fuzz testing has uncovered bugs and unexpected behaviors even in programs with extensive test suites; as Dijkstra famously said, “Testing shows the presence, not the absence of bugs.” — likewise, test suites can show inconsistencies with the API specification, but not adherence to it. So they can’t be taken to represent the semantics of a program entirely.

Anyway, at the end of the day, Rice’s Theorem shows us that general bullet-proof analysis of program behavior is not attainable, so no tool will ever be able to compare codebases and always tell us precisely that a new release is really “tiny-safe”. Semantic versioning just can’t be automated.

🔗 First time playing with OpenBSD

A list of notes on my first experience with OpenBSD:

* It wasn’t obvious at first which file to download, navigating their FTP site. I ended up with cd58.iso, which was the right choice. A tiny 7.8MB ISO!
* As soon as I typed “OpenBSD” as the VM name in VirtualBox, it gave me OpenBSD defaults. It defaulted to 64MB of RAM (only!), but I chose 256MB instead and kept the default 2GB disk image.
* The first boot gave me a few options, including “Install” and “Autoinstall”. I chose “Autoinstall” since I thought that would install with the defaults, but it looked for an installation script in the local network (and obviously didn’t find it). Reboot, “Install” and off we go.
* No Dvorak options in the list of keyboards in the installer.
* I went with the default options and installed all offered packages (it offered me a list of coarse-grained bundles). It asked me about creating users, partitions, configuring network and if I wanted to run sshd by default.
* To enable Dvorak, the internet told me to do kbd us.dvorak. Nice.
* To make it permanent, I just grepped /etc for kbd and found out that echo us.dvorak > /etc/kbdtype would suffice.
* To install things, become root and then use pkg_add. For example: pkg_add wget
* By default it includes df but not free.
* To get color in the terminal, set TERM=wsvt25. htop showed up in its usual colors!
* The default PATH includes the current (.) directory! Take that, Linux status quo!
* iconv is installed under /usr/local, and its library exports symbols with names such as libiconv_open instead of the usual iconv_open, which fools the typical AC_CHECK_LIB test in Autoconf. (In the source code, iconv_open works, so I guess iconv.h uses #define to translate the name.) I added a hack to Dit to make it build cleanly out-of-the-box.
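
If that guess is right, the relevant part of the header would be something along these lines (illustrative, not copied from the actual iconv.h):

/* Code that includes iconv.h calls iconv_open and transparently ends up
   using the libiconv_ symbol; Autoconf's AC_CHECK_LIB links its own test
   program without the header, looks for a literal iconv_open symbol in
   the library, finds none, and reports failure. */
#define iconv_open  libiconv_open
#define iconv       libiconv
#define iconv_close libiconv_close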

